PART 4. Government Analytics Using Public Servant Surveys
Chapter Summaries
Chapter 18. Surveys of Public Servants: The Global Landscape
Ayesha Khurshid and Christian Schuster
Governments around the world increasingly implement surveys of public servants to better understand, and provide evidence to improve, public administration. As context for the subsequent chapters on surveys of public servants in this book, this chapter reviews the existing landscape of government-wide surveys of public servants. What concepts are measured in these surveys? How are these concepts measured? And what survey methodologies are used? Our review finds that, while governments measure similar concepts across surveys, the precise questions asked to measure these concepts vary across government surveys, as do survey methodologies, for instance in terms of sampling approaches, survey weights, and survey modes. The chapter concludes, first, that discrepancies in survey questions for the same concepts put a premium on cross-country questionnaire harmonization, and introduces the Global Survey of Public Servants as a tool to achieve such harmonization. It concludes, second, that methodological differences across surveys, despite similar survey objectives, underscore the need for stronger evidence to inform methodological choices in surveys of public servants. The remaining chapters in this volume focus on providing such evidence.
Chapter 19. Determining Survey Modes and Response Rates: Do Public Officials Respond Differently to Online and In-Person Surveys?
Xu Han, Camille Parker, Daniel Rogger, and Christian Schuster
Measuring important aspects of public administration, such as the motivation of public servants and the quality of management they work under, requires the use of surveys. The choice of survey mode is a key design feature in such exercises and therefore a key factor in our understanding of the state. This chapter presents evidence on the impact of survey mode from an experiment undertaken in Romania that varied whether officials were administered the same survey face-to-face or online. The experiment shows that at the national level, survey mode does not substantially affect mean estimates. However, mode effects have a detectable impact at the organization level, as well as across matched individual respondents. Basic organizational and demographic characteristics explain little of the variation in these effects. The results imply that survey design in the public service should pay attention to survey mode, in particular when making fine-grained comparisons across lower-level units of observation.
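As a rough illustration of the comparison behind these findings (not the chapter's actual analysis), the sketch below contrasts mean responses by survey mode at the national level and then organization by organization, where mode gaps are more likely to surface. The data file and column names (survey_modes.csv, mode, org, motivation) are hypothetical.

```python
# Minimal sketch of a survey mode comparison, assuming a tidy respondent-level
# data set with a randomized mode assignment. All names are illustrative.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_modes.csv")  # columns: mode, org, motivation (1-5)

# National-level comparison: difference in mean responses across modes.
f2f = df.loc[df["mode"] == "face_to_face", "motivation"]
web = df.loc[df["mode"] == "online", "motivation"]
t, p = stats.ttest_ind(f2f, web, equal_var=False)
print(f"national gap = {f2f.mean() - web.mean():.3f} (p = {p:.3f})")

# Organization-level gaps: the same contrast within each organization.
org_gaps = (df.pivot_table(index="org", columns="mode", values="motivation")
              .assign(gap=lambda d: d["face_to_face"] - d["online"]))
print(org_gaps["gap"].describe())
```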
Chapter 20. Determining Sample Sizes: How Many Public Officials Should Be Surveyed?
Robert Lipinski, Daniel Rogger, Christian Schuster, and Annabelle Wittels
Determining the sample size of a public administration survey is often a trade-off between increasing the precision of survey estimates and the high cost of surveying a larger number of civil servants. Survey administrators ultimately have to decide on the sample size based on the type of inferences they want the survey to yield. This chapter aims to quantify the sample sizes necessary to make a range of inferences commonly drawn from public administration surveys. It does so by employing Monte Carlo simulations and past survey results from Chile, Guatemala, Romania, and the United States. The analyses show that civil-service-wide estimates can be reliably derived using sample sizes considerably below current ones. On the other hand, comparisons across demographic groups, such as by gender or managerial status, as well as rankings of individual public administration organizations, require large sample sizes, often substantially larger than those available to survey administrators. The results suggest that not all types of inferences and comparisons can be drawn from surveys of civil servants, which might instead need to be complemented by other research tools, such as interviews or anthropological research. This chapter is also linked to an online toolkit that allows one to interactively estimate the optimal sample size given the types of inferences that are expected to be drawn from a survey. Together, they allow practitioners involved in survey design for the civil service to understand the trade-offs involved in sampling and what types of comparisons can reliably be drawn from the data.
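A minimal sketch of the Monte Carlo logic follows. It is not the chapter's simulation: the population size, response distribution, and precision target are illustrative assumptions, chosen only to show how repeated sampling translates a sample size into an expected confidence interval width.

```python
# Monte Carlo sketch: how wide is the 95% CI for the mean of a five-point
# survey item at different sample sizes? All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical civil service of 50,000 officials answering a 5-point item.
population = rng.choice([1, 2, 3, 4, 5], size=50_000,
                        p=[0.05, 0.10, 0.25, 0.35, 0.25])

def mean_ci_width(n, sims=2_000):
    """Average 95% CI width of the sample mean for samples of size n."""
    widths = []
    for _ in range(sims):
        sample = rng.choice(population, size=n, replace=False)
        se = sample.std(ddof=1) / np.sqrt(n)  # ignores finite-population correction
        widths.append(2 * 1.96 * se)
    return np.mean(widths)

# Report expected precision at candidate sample sizes, e.g. against a
# +/-0.05 target for civil-service-wide estimates.
for n in [100, 250, 500, 1_000, 2_500, 5_000]:
    print(f"n={n:>5}: mean 95% CI width = {mean_ci_width(n):.3f}")
```

The same loop generalizes to the chapter's harder questions, such as the probability of correctly ranking two organizations, by simulating two subpopulations and counting how often their sample means fall in the right order.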
Chapter 21. Designing Survey Questionnaires: Which Survey Measures Vary, and for Whom?
Robert Lipinski, Daniel Rogger, Christian Schuster, and Annabelle Wittels
Many aspects of public administration, such as employee satisfaction and engagement, are best measured using surveys of public servants. However, evaluating the extent to which survey measures effectively capture underlying variation in these attributes can be challenging given the lack of objective benchmarks. At a minimum, such measures should provide a degree of discriminating variation across respondents to be useful. This chapter assesses variation in a set of typical indicators derived from data sets of public service surveys from administrations in Africa, Asia, Europe, North America, and South America. It provides an overview of the most commonly used measures in public servant surveys and presents the variances and distributions of these measures. As such, the chapter provides benchmarks against which analysts can compare their own surveys, as well as an investigation of the determinants of variation in this field. Standard deviations of the measures we study range between 0.91 and 1.23 on a five-point scale. The determinants of variation are mediated by the focus of the variable, with country fixed effects the largest predictors for motivation and job satisfaction, and institutional structure key for leadership and goal clarity.
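Using the reported range as a benchmark is straightforward, as in the sketch below; the data file and item names (my_survey.csv, job_satisfaction, and so on) are hypothetical stand-ins for an analyst's own five-point items.

```python
# Benchmark one's own survey items against the dispersion range the chapter
# reports: standard deviations of 0.91-1.23 on a five-point scale.
import pandas as pd

df = pd.read_csv("my_survey.csv")
items = ["job_satisfaction", "motivation", "goal_clarity", "leadership"]

sds = df[items].std(ddof=1).round(2)
print(sds)
print("within benchmark range:", sds.between(0.91, 1.23).to_dict())
```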
Chapter 22. Designing Survey Questionnaires: To What Types of Survey Questions Do Public Servants Not Respond?
Robert Lipinski, Daniel Rogger, and Christian Schuster
Surveys of public servants differ sharply in the extent of item non-response: respondents skipping or refusing to respond to questions. Item non-response can affect the legitimacy and quality of public servant survey data. Survey results may be biased, for instance, where those least satisfied with their jobs are also most prone to skipping survey questions. Understanding why public servants respond to some survey questions but not others is thus important. This chapter offers a conceptual framework and empirical evidence to further this understanding. Drawing on the existing literature on survey non-response, we theorize that public servants are less likely to respond to questions that are complex (because they are unable to) or sensitive (because they are unwilling to). We assess this argument using a newly developed coding framework of survey question complexity and sensitivity, which we apply to public service surveys in Guatemala, Romania, and the US. We find one indicator of complexity, the unfamiliarity of respondents with the subject of a question, to be the most robust predictor of item non-response across countries. By contrast, other indicators in our framework, and machine-coded measures of textual complexity, do not predict item non-response. Our findings point to the importance of avoiding questions that require public servants to speculate about topics with which they are less familiar.
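The general shape of such an analysis can be sketched as a logistic regression of a skip indicator on hand-coded question attributes. This is a schematic stand-in, not the chapter's specification; the data file and column names (item_responses.csv, skipped, unfamiliar_subject, sensitive, question_length) are hypothetical.

```python
# Illustrative item non-response model: one row per respondent-question pair,
# with a 0/1 skip indicator and hand-coded question attributes as predictors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("item_responses.csv")

model = smf.logit(
    "skipped ~ unfamiliar_subject + sensitive + question_length",
    data=df,
).fit()
print(model.summary())
# In practice, standard errors should account for clustering of observations
# within respondents and within questions.
```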
Chapter 23. Designing Survey Questionnaires: Should Surveys Ask about Public Servants’ Perceptions of Their Organization or Their Individual Experience?
Kim Sass Mikkelsen and Camille Mercedes Parker
Many civil service surveys are centrally interested in organizational aggregates. This raises the issue of whether civil service surveys should use organizational referents in question design. That is, should respondents be asked about their organization? Or, using individual referents, about themselves? We examine this question using survey experiments in Guatemala and Romania, which enable us to causally estimate the difference that the use of organizational versus individual referents makes. We find that, while there are no strong conceptual grounds to prefer either organizational or individual referents, the choice matters to responses. Organizational questions are particularly useful when questions are very sensitive, whereas individual questions are particularly useful when the measured attitudes or practices are neither very common nor very rare. Overall, there is no general answer to which referent is better: it depends on characteristics of the question and of the organization it seeks to measure.
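Because referent wording is randomized in such experiments, the analysis reduces to comparing the two arms, including across the full response distribution rather than only the mean. A minimal sketch, with a hypothetical data file and column names (referent_experiment.csv, referent, response):

```python
# Sketch of a referent-wording experiment readout: contrast responses to the
# same item asked with an organizational vs. an individual referent.
import pandas as pd

df = pd.read_csv("referent_experiment.csv")  # columns: referent, response (1-5)

print(df.groupby("referent")["response"].mean())        # headline gap
shares = df.groupby("referent")["response"].value_counts(normalize=True)
print(shares.unstack().round(2))  # full distributions, including endpoints,
                                  # where rare/common attitudes diverge
```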
Chapter 24. Interpreting Survey Findings: Can Survey Results Be Compared across Organizations and Countries?
Robert Lipinski, Jan-Hinrik Meyer-Sahling, Kim Sass Mikkelsen, and Christian Schuster
With the rise of worldwide efforts to understand public administration through surveys of civil servants, issues of comparability become paramount. Surveys can rarely be understood in a vacuum; rather, they require benchmarks and points of reference. It is, however, not clear whether survey questions, even when phrased and structured in the same manner, measure the same concept in different contexts. For multiple reasons, including work environment, adaptive expectations, and cultural factors, different people might understand the same question in different ways and adjust their answers accordingly. This might make survey results incomparable not only across countries but also across different groups within a national public administration. This chapter uses results from seven public service surveys from across Europe, Latin America, and South Asia to investigate the extent to which the same survey questions measure the same concept, using as an example questions related to 'transformational leadership'. To ascertain measurement invariance, models of transformational leadership are compared across countries, as well as along gender, educational, and organizational lines. Weak evidence of either metric or scalar invariance is found in cross-country and organizational settings. On the other hand, factor loadings can be judged equal across genders and educational levels in most countries (metric invariance), as can, to a lesser extent, latent factor means (scalar invariance).
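A proper invariance test is a multi-group confirmatory factor analysis with nested model comparisons; the sketch below is only a rough diagnostic in the same spirit, eyeballing whether one-factor loadings on a leadership scale look similar across groups. The data file and item names (survey.csv, tl1 through tl4, country) are hypothetical.

```python
# Rough loading-similarity diagnostic per group, via the first principal
# factor of the item correlation matrix. Not a formal invariance test.
import numpy as np
import pandas as pd

df = pd.read_csv("survey.csv")  # items tl1..tl4 plus a 'country' column
items = ["tl1", "tl2", "tl3", "tl4"]

for country, grp in df.groupby("country"):
    z = (grp[items] - grp[items].mean()) / grp[items].std(ddof=1)
    corr = z.corr().to_numpy()
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
    loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # leading-factor loadings
    loadings *= np.sign(loadings.sum())               # fix sign for comparison
    print(country, np.round(loadings, 2))
# Metric invariance would require these loading patterns to match across
# groups; large discrepancies caution against comparing scale means.
```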
Chapter 25. Making the Most of Public Servant Survey Results: Lessons from Six Governments
Christian Schuster, Annabelle Wittels, Nathan Borgelt, Horacio Coral, Matt Kerlogue, Conall Mac Michael, Alejandro Ramos, Nicole Steele, and David Widlake
Governments around the world increasingly implement government-wide surveys of public servants. How can they make the most of them to improve civil service management? This chapter first develops a self-assessment tool for governments that lays out the range of potential uses and benefits of public servant survey findings. It argues that public servant survey results can improve civil service management by (a) providing tailored survey results to four key types of users (the government as a whole, individual public sector organizations, units within organizations, and the public, including public sector unions); (b) holding government organizations accountable for taking action in response to survey results; and (c) complementing descriptive survey results with actionable recommendations and technical assistance, tailored to each user type, for how to address the survey findings. To substantiate the tool, the chapter then assesses the extent to which six governments (the US, the UK, Australia, Canada, Colombia, and Ireland) make use of the range of potential uses of public servant survey findings. It finds that five out of six governments provide tailored survey results at both the national and agency levels, yet no government fully exploits all potential uses and benefits of public servant surveys. For instance, not all governments provide units inside government organizations with their survey results, or complement survey results with accountability mechanisms or recommendations for improvement. Many governments could thus, at low cost, significantly enhance the benefits they derive from public servant surveys for improved civil service management.
Chapter 26. Using Survey Findings for Public Action: The Experience of the US Federal Government
Camille Hoover, Robin Klevins, Rosemary Miller, Maria Raviele, Daniel Rogger, Robert Seidner, and Kimberly Wells
Generating coherent public employee survey data is only the first step toward using staff surveys to stimulate public service reform. The experience of agencies of the United States Federal Government in their use of the Office of Personnel Management Federal Employee Viewpoint Survey (OPM FEVS) provides lessons in the translation of survey results into improvements in specific public agencies and in public administration in general. An architecture at the agency level that supports that translation process is critical. It has typically included a technical expert capable of interpreting survey data, a strong relationship between that expert and a senior manager, and the development of a culture of, or reputation for, survey-informed agency change and development initiatives. This chapter outlines the way the OPM FEVS, its enabling institutional environment, and corresponding cultural practices have been developed to act as a basis for public sector action.
SPONSORS
A collaboration between the Development Impact Evaluation Department and the Office of the Chief Economist of Equitable Growth, Finance and Institutions.
Chapters
Part 1: Overview
Chapter 1: The Power of Government Analytics to Improve Public Administration
Chapter 2: How to Do Government Analytics: Lessons from the Book
Chapter 3: Government Analytics of the Future
Part 2: Foundational Themes in Government Analytics
Chapter 5: Practical Tools for Effective Measurement and Analytics
Chapter 6: The Ethics of Measurement of Public Administration
Chapter 7: Measuring and Encouraging Performance Information Use in Government
Chapter 8: Understanding Corruption Through Government Analytics
Part 3: Government Analytics Using Administrative Data
Chapter 9: Creating Data Infrastructures for Government Analytics
Chapter 10: Government Analytics Using Human Resource and Payroll Data
Chapter 11: Government Analytics Using Expenditure Data
Chapter 12: Government Analytics Using Procurement Data
Chapter 13: Government Analytics Using Data on the Quality of Processes
Chapter 14: Government Analytics Using Customs Data
Chapter 15: Government Analytics Using Administrative Case Data
Chapter 16: Government Analytics Using Machine Learning
Chapter 17: Government Analytics Using Data on Task and Project Completion
Part 4: Government Analytics Using Public Servant Surveys
Chapter 18: Surveys of Public Servants: The Global Landscape
Chapter 19: Determining Survey Modes and Response Rates: Do Public Officials Respond Differently to Online and In-Person Surveys?
Chapter 20: Determining Sample Sizes: How Many Public Officials Should Be Surveyed?
Chapter 21: Designing Survey Questionnaires: Which Survey Measures Vary, and for Whom?
Chapter 22: Designing Survey Questionnaires: To What Types of Survey Questions Do Public Servants Not Respond?
Chapter 23: Designing Survey Questionnaires: Should Surveys Ask about Public Servants’ Perceptions of Their Organization or Their Individual Experience?
Chapter 24: Interpreting Survey Findings: Can Survey Results Be Compared across Organizations and Countries?
Chapter 25: Making the Most of Public Servant Survey Results: Lessons from Six Governments
Chapter 26: Using Survey Findings for Public Action: The Experience of the US Federal Government
Part 5: Government Analytics Using External Assessments
Chapter 27: Government Analytics Using Household Surveys
Chapter 28: Government Analytics Using Citizen Surveys: Lessons from the OECD Trust Survey
Chapter 29: Government Analytics Using Measures of Service Delivery
Chapter 30: Government Analytics Using Anthropological Methods