7. Challenges and strategies for effecting paradigm change

Constance F. Citro


I have argued for a paradigm change in which statistical agencies design and update their flagship programs by determining the best combination of data sources and methods to serve user needs in a topic area of ongoing importance. I use U.S. household surveys as an example, where the evidence is strong that relying on survey responses alone will not suffice to serve critical needs for high-quality information on income, expenditures, and related subjects. I expect it is likewise true that the use of administrative records alone, as in some countries with detailed population registers, may not provide sufficiently complete and high-quality information in the absence of regular efforts to review the quality of the register data and to augment and correct them with information from other sources, such as surveys. As a case in point, Axelson, Holmberg, Jansson, Werner and Westling (2012) describe the utility of surveys for evaluating the quality of housing and household data from a new dwelling register that was constructed for the 2011 census in Sweden.

I close by listing factors that make paradigm change difficult, countered by ways to effect the change I recommend and ingrain it in statistical agency culture. The U.S. and other statistical systems have admirable records of innovation in many aspects of their programs, but changing paradigms is always difficult, as was evident in the battle to introduce probability sampling into official U.S. statistics in the 1930s. It is particularly hard to rethink long-lived, ongoing statistical programs with which both the producer agency and the user base are comfortable.

Factors that can impede change include: (1) inertia, particularly when a program was originally innovative and very well designed, so that it can coast on its earlier success; (2) becoming out of touch with stakeholders’ changing needs, which can be exacerbated when an agency views itself as the sole source of needed data and not in competition with others; (3) fear of undercutting existing programs, combined with “not-invented-here” resistance; (4) inadequate ongoing evaluation of data quality in all of its dimensions; and (5) constrained staff and budget resources, coupled with an understandable reluctance of agency staff or their established user base to cut back on one or another long-standing statistical series in order to make important advances in other series.

Yet there are many outstanding examples of important innovation in U.S. and other nations’ statistical agencies, so clearly there are ways to overcome the constraints listed above to effect paradigm change. The essential ingredient for paradigm change, I believe, is leadership buy-in and continued support at the top of a statistical agency, proactively deployed to garner buy-in at all levels of the agency. For an outstanding example of such leadership, see the discussion in National Research Council (2010a) of the role of Morris Hansen and his colleagues in reengineering what had been an enumerator-based census into a mailout/mailback census. The reengineering effort was initiated and sustained on the basis of evidence of substantial interviewer bias and variance for important data items, together with concern that it could become more difficult to recruit enumerators as women moved into the workforce.

Specific steps for agency leadership to champion, in order to inculcate the use of multiple data sources in ongoing official statistical programs, include (see Prell, Bradsher-Fredrick, Comisarow, Cornman, Cox, Denbaly, Martinez, Sabol and Vile (2009), who reached similar conclusions from case studies of successful statistical uses of administrative records in the United States): (1) setting clear expectations and goals for staff, such as the expectation that statistical programs will, as a matter of course, combine such sources as surveys and administrative records in the interests of relevant, accurate and timely data produced cost-effectively and with minimal respondent burden; (2) according a prominent role to subject-matter specialists to interface with outside users and inside data producers; (3) staffing operational programs with expertise in all relevant data sources, which includes putting specialists in survey design and specialists in administrative records or other data sources on an equal footing; (4) providing for rotation of assignments, including rotations within the agency, among statistical agencies, with data user organizations and with providers of alternative data sources; (5) carving out resources for continued evaluation; and (6) treating organizations whose alternative data sources play important roles in statistical programs as partners. On this last point, see, e.g., Hendriks (2012, p. 1473), who, in discussing the experience of Statistics Norway with its first register-based census in 2011, stresses that “The three C’s of register based statistics (in order to achieve data quality) are Co-operation, Communication and Coordination.”

Statistical agencies have shown the ability to make far-reaching changes in response to threats to established ways of doing business. The second half of the 20th century gave us the probability survey paradigm in response to the increasing costs and burden of conducting full enumerations and the flaws of non-probability designs. The 21st century can surely give us the paradigm of using the best source(s), including surveys, administrative records and other sources, to respond to policy and public needs for relevant, accurate, timely and cost-effective official statistics.

Acknowledgements

This paper is based on the author’s years of experience at the Committee on National Statistics, but the views expressed are her own and should not be assumed to represent the views of CNSTAT or the National Academy of Sciences. The author thanks John Czajka, David Johnson and Rochelle Martinez for helpful comments on an earlier draft. A longer version of this paper is available from the author on request.

References

Anderson, M.J. (1988). The American Census: A Social History. New Haven, CT: Yale University Press.

Axelson, M., Holmberg, A., Jansson, I., Werner, P. and Westling, S. (2012). Doing a register-based census for the first time: The Swedish experience. Paper presented at the Joint Statistical Meetings, San Diego, CA (August). Statistics Sweden, Stockholm.

Bee, A., Meyer, B.D. and Sullivan, J.X. (2012). The validity of consumption data: Are the consumer expenditure interview and diary surveys informative? NBER Working Paper No. 18308. Cambridge, MA: National Bureau of Economic Research.

Biemer, P., Trewin, D., Bergdahl, H. and Japec, L. (2014). A system for managing the quality of official statistics, with discussion. Journal of Official Statistics, 30(3, September), 381-442.

Brackstone, G. (1999). Managing data quality in a statistical agency. Survey Methodology, 25(2), 139-149.

Bradburn, N.M. (1992). A response to the nonresponse problem. 1992 AAPOR Presidential Address. Public Opinion Quarterly, 56(3), 391-397.

Citro, C.F. (2012). Editing, Imputation and Weighting. Encyclopedia of the U.S. Census: From the Constitution to the American Community Survey, Second Edition, M.J. Anderson, C.F. Citro and J.J. Salvo, eds., 201-204. Washington, DC: CQ Press.

Couper, M.P. (2013). Is the sky falling? New technology, changing media, and the future of surveys. Keynote presentation at the 5th European Survey Research Association Conference. Ljubljana, Slovenia. http://www.europeansurveyresearch.org/sites/default/files/files/Couper%20keynote.pdf [July 2014].

Czajka, J.L. (2009). SIPP data quality. Appendix A in Reengineering the Survey of Income and Program Participation. National Research Council. Washington, DC: The National Academies Press.

Czajka, J.L. and Denmead, G. (2012). Income measurement for the 21st century: Updating the Current Population Survey. Washington, DC: Mathematica Policy Research. Available: http://www.mathematica-mpr.com/~/media/publications/PDFs/family_support/income_measurement_21century.pdf [July 2014].

Czajka, J.L., Jacobson, J.E. and Cody, S. (2004). Survey estimates of wealth: A comparative analysis and review of the Survey of Income and Program Participation. Social Security Bulletin, 65(1). Available: http://www.ssa.gov/policy/docs/ssb/v65n1/v65n1p63.html [July 2014].

Daas, P.J.H., Ossen, S.J.L., Tennekes, M. and Nordholt, E.S. (2012). Evaluation of the quality of administrative data used in the Dutch virtual census. Paper presented at the Joint Statistical Meetings, San Diego, CA (August). Methodology Sector and Division of Social and Spatial Statistics, Statistics Netherlands, The Hague.

De Leeuw, E.D. and De Heer, W. (2002). Trends in household survey nonresponse: A longitudinal and international comparison. Survey Nonresponse, R.M. Groves, D.A. Dillman, J.L. Eltinge and R.J.A. Little, eds., 41-54. New York: Wiley.

Duncan, J.W. and Shelton, W.C. (1978). Revolution in United States Government Statistics 1926-1976. Office of Federal Statistical Policy and Standards, U.S. Department of Commerce. Washington, DC: Government Printing Office.

Eurostat. (2000). Assessment of the quality in statistics. Doc. Eurostat/A4/Quality/00/General/Standard report. Luxembourg (April 4-5). Available: http://www.unece.org/fileadmin/DAM/stats/documents/2000/11/metis/crp.3.e.pdf [July 2014].

Fixler, D. and Johnson, D.S. (2012). Accounting for the distribution of income in the U.S. National Accounts. Paper prepared for the NBER Conference on Research in Income and Wealth, September 30. Available: http://www.bea.gov/about/pdf/Fixler_Johnson.pdf.

Fricker, S. and Tourangeau, R. (2010). Examining the relationship between nonresponse propensity and data quality in two national household surveys. Public Opinion Quarterly, 74(5), 935-955.

Griffin, D. (2011). Cost and workload implications of a voluntary American Community Survey. U.S. Census Bureau, Washington, DC (June 23).

Groves, R.M. (2011). Three eras of survey research. Public Opinion Quarterly, 75(5), 861-871. Special 75th Anniversary Issue.

Groves, R.M. and Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189.

Harris-Kojetin, B. (2012). Federal Household Surveys. Encyclopedia of the U.S. Census: From the Constitution to the American Community Survey, Second Edition, M.J. Anderson, C.F. Citro and J.J. Salvo, eds., 226-234. Washington, DC: CQ Press.

Heckman, J.J. and LaFontaine, P.A. (2010). The American high school graduation rate: Trends and levels. NBER Working Paper No. 13670. Cambridge, MA: National Bureau of Economic Research. Available: http://www.nber.org/papers/w13670 [July 2014].

Hendriks, C. (2012). Input data quality in register-based statistics - The Norwegian experience. Proceedings of the International Association of Survey Statisticians - JSM 2012, 1473-1480. Paper presented at the Joint Statistical Meetings, San Diego, CA (August). Statistics Norway, Kongsvinger, Norway.

Hicks, W. and Kerwin, J. (2011). Cognitive testing of potential changes to the Annual Social and Economic Supplement of the Current Population Survey. Report to the U.S. Census Bureau, Westat, Rockville, MD (July 25).

Holt, D.T. (2007). The official statistics Olympic challenge: Wider, deeper, quicker, better, cheaper. The American Statistician, 61(1, February), 1-8. With commentary by G. Brackstone and J.L. Norwood.

Horrigan, M.W. (2013). Big data: A BLS perspective. Amstat News, 427(January), 25-27.

Hokayem, C., Bollinger, C. and Ziliak, J. (2014). The role of CPS nonresponse on the level and trend in poverty. UKCPR Discussion Paper Series, DP 2014-05. Lexington, KY: University of Kentucky Center for Poverty Research.

Iwig, W., Berning, M., Marck, P. and Prell, M. (2013). Data quality assessment tool for administrative data. Prepared for a subcommittee of the Federal Committee on Statistical Methodology, Washington, DC (February).

Johnson, B. and Moore, K. [no date]. Consider the source: Differences in estimates of income and wealth from survey and tax data. Available: http://www.irs.gov/pub/irs-soi/johnsmoore.pdf [July 2014].

Keller, S.A., Koonin, S.E. and Shipp, S. (2012). Big data and city living - what can it do for us? Significance, 9(4, August), 4-7.

Kennickell, A. (2011). Look again: Editing and imputation of SCF panel data. Paper prepared for the Joint Statistical Meetings, Miami, FL (August).

Laney, D. (2001). 3-D data management: Controlling data volume, velocity and variety. META Group [now Gartner] Research Note, February 6. See: http://goo.gl/Bo3GS [July 2014].

Manski, C.F. (2014). Communicating uncertainty in official economic statistics. NBER Working Paper No. 20098. Cambridge, MA: National Bureau of Economic Research.

McGuire, T., Manyika, J. and Chui, M. (2012). Why big data is the new competitive advantage. Ivey Business Journal (July-August).

Meyer, B.D. and Goerge, R.M. (2011). Errors in survey reporting and imputation and their effects on estimates of Food Stamp Program participation. Working Paper. Chicago Harris School of Public Policy, University of Chicago.

Meyer, B.D., Mok, W.K.C. and Sullivan, J.X. (2009). The under-reporting of transfers in household surveys: Its nature and consequences. NBER Working Paper No. 15181. Cambridge, MA: National Bureau of Economic Research.

Moore, J.C., Marquis, K.H. and Bogen, K. (1996). The SIPP cognitive research evaluation experiment: Basic results and documentation. SIPP Working Paper No. 212. U.S. Census Bureau, Washington, DC (January). Available: http://www.census.gov/sipp/workpapr/wp9601.pdf [July 2014].

Morganstein, D. and Marker, D. (2000). A conversation with Joseph Waksberg. Statistical Science, 15(3), 299-312.

Mule, T. and Konicki, S. (2012). 2010 Census Coverage Measurement Estimation Report: Summary of Estimates of Coverage for Housing Units in the United States. U.S. Census Bureau, Washington, DC.

National Research Council (1995). Measuring Poverty: A New Approach. Washington, DC: National Academy Press.

National Research Council (2004). The 2000 Census: Counting Under Adversity. Washington, DC: The National Academies Press.

National Research Council (2010a). Envisioning the 2010 Census. Washington, DC: The National Academies Press.

National Research Council (2010b). The Prevention and Treatment of Missing Data in Clinical Trials. Washington, DC: The National Academies Press.

National Research Council (2013a). Measuring What We Spend: Toward a New Consumer Expenditure Survey. Washington, DC: The National Academies Press.

National Research Council (2013b). Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: The National Academies Press.

National Research Council (2013c). Principles and Practices for a Federal Statistical Agency. Washington, DC: The National Academies Press.

Nelson, N. and West, K. (2014). Interview with Lars Thygesen. Statistical Journal of the IAOS, 30, 67-73.

Passero, B. (2009). The impact of income imputation in the Consumer Expenditure Survey. Monthly Labor Review (August), 25-42.

Prell, M., Bradsher-Fredrick, H., Comisarow, C., Cornman, S., Cox, C., Denbaly, M., Martinez, R.W., Sabol, W. and Vile, M. (2009). Profiles in success of statistical uses of administrative records. Report of a subcommittee of the Federal Committee on Statistical Methodology, U.S. Office of Management and Budget, Washington, DC.

Shapiro, G.M. and Kostanich, D. (1988). High response error and poor coverage are severely hurting the value of household survey data. Proceedings of the Section on Survey Research Methods, 443-448, American Statistical Association, Alexandria, VA. Available: http://www.amstat.org/sections/srms/Proceedings/papers/1988_081.pdf [July 2014].

Steeh, C.G. (1981). Trends in nonresponse rates, 1952-1979. Public Opinion Quarterly, 45, 40-57.

Tourangeau, R. (2004). Survey research and societal change. Annual Review of Psychology, 55, 775-801.

U.S. Office of Management and Budget. (2014). Guidance for Providing and Using Administrative Data for Statistical Purposes. Memorandum M-14-06. Washington, DC.

Waksberg, J. (1978). Sampling methods for random digit dialing. Journal of the American Statistical Association, 73, 40-46.

Woodward, J., Wilson, E. and Chesnut, J. (2007). Evaluation Report Covering Facilities - Final Report. 2006 American Community Survey Content Test Report H.3. U.S. Census Bureau. Washington, DC: U.S. Department of Commerce (January).

Zelenak, M.F. and David, M.C. (2013). Impact of Multiple Contacts by Computer-Assisted Telephone Interview and Computer-Assisted Personal Interview on Final Interview Outcome in the American Community Survey. U.S. Census Bureau, Washington, DC.
