Collection and questionnaires

Results

All (19) (10 of 19 results shown)

  • Articles and reports: 12-001-X202200100005
    Description:

    Methodological studies of the effects that human interviewers have on the quality of survey data have long been limited by a critical assumption: that interviewers in a given survey are assigned random subsets of the larger overall sample (also known as interpenetrated assignment). Absent this type of study design, estimates of interviewer effects on survey measures of interest may reflect differences between interviewers in the characteristics of their assigned sample members, rather than recruitment or measurement effects specifically introduced by the interviewers. Previous attempts to approximate interpenetrated assignment have typically used regression models to condition on factors that might be related to interviewer assignment. We introduce a new approach for overcoming this lack of interpenetrated assignment when estimating interviewer effects. This approach, which we refer to as the “anchoring” method, leverages correlations between observed variables that are unlikely to be affected by interviewers (“anchors”) and variables that may be prone to interviewer effects to remove components of within-interviewer correlations that lack of interpenetrated assignment may introduce. We consider both frequentist and Bayesian approaches, where the latter can make use of information about interviewer effect variances in previous waves of a study, if available. We evaluate this new methodology empirically using a simulation study, and then illustrate its application using real survey data from the Behavioral Risk Factor Surveillance System (BRFSS), where interviewer IDs are provided on public-use data files. While our proposed method shares some of the limitations of the traditional approach – namely the need for variables associated with the outcome of interest that are also free of measurement error – it avoids the need for conditional inference and thus has improved inferential qualities when the focus is on marginal estimates, and it shows evidence of further reducing overestimation of larger interviewer effects relative to the traditional approach.

    Release date: 2022-06-21
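
    The anchoring idea lends itself to a small numerical illustration. The sketch below is a plain covariate adjustment in the spirit of an anchor variable, not the authors' estimator (which operates on within-interviewer correlations): a variable immune to interviewer influence is used to strip the assignment-driven component out of the between-interviewer variance. All names and parameter values are invented.

    ```python
    # Numerical sketch of why an anchor variable helps under non-random assignment.
    import numpy as np

    rng = np.random.default_rng(42)
    n_interviewers, n_per = 50, 40

    # Non-interpenetrated assignment: each interviewer works one "area", and the
    # area effect shifts the characteristics of the respondents they receive.
    u = rng.normal(0.0, 1.0, n_interviewers)    # area (assignment) effect
    b = rng.normal(0.0, 0.3, n_interviewers)    # true interviewer effect on y

    area = np.repeat(u, n_per)
    intv = np.repeat(np.arange(n_interviewers), n_per)

    x = area + rng.normal(0.0, 1.0, area.size)  # anchor: unaffected by interviewers
    y = area + np.repeat(b, n_per) + rng.normal(0.0, 1.0, area.size)

    def between_var(values, groups):
        """Variance of interviewer means, a crude stand-in for the interviewer variance component."""
        means = np.array([values[groups == g].mean() for g in np.unique(groups)])
        return means.var(ddof=1)

    # Naive estimate confounds assignment differences with interviewer effects.
    naive = between_var(y, intv)

    # Anchor-based adjustment: remove the part of y predicted by the anchor,
    # then re-estimate the between-interviewer variance on the residuals.
    beta = np.cov(x, y)[0, 1] / x.var(ddof=1)
    resid = y - beta * x
    anchored = between_var(resid, intv)

    print(f"true interviewer variance:  {0.3 ** 2:.3f}")
    print(f"naive between-interviewer variance:    {naive:.3f}")
    print(f"anchored between-interviewer variance: {anchored:.3f}")
    ```

    Because the anchor is itself noisy, this crude adjustment removes only part of the assignment component, so the "anchored" estimate typically lands between the naive estimate and the true value.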

  • Articles and reports: 82-003-X201500514169
    Description:

    The Cancer Risk Management Model combines the risk of developing cancer, disease screening, and clinical management with cost and labour data to assess health outcomes and economic impact. A screening module added to the lung cancer module enables a variety of scenarios to be evaluated for different target populations with varying rates of participation, compliance, and frequency of low-dose computed tomography screening.

    Release date: 2015-05-20

  • Articles and reports: 12-001-X201400214092
    Description:

    Survey methodologists have long studied the effects of interviewers on the variance of survey estimates. Statistical models including random interviewer effects are often fitted in such investigations, and research interest lies in the magnitude of the interviewer variance component. One question that might arise in a methodological investigation is whether or not different groups of interviewers (e.g., those with prior experience on a given survey vs. new hires, or CAPI interviewers vs. CATI interviewers) have significantly different variance components in these models. Significant differences may indicate a need for additional training in particular subgroups, or sub-optimal properties of different modes or interviewing styles for particular survey items (in terms of the overall mean squared error of survey estimates). Survey researchers seeking answers to these types of questions have different statistical tools available to them. This paper aims to provide an overview of alternative frequentist and Bayesian approaches to the comparison of variance components in different groups of survey interviewers, using a hierarchical generalized linear modeling framework that accommodates a variety of different types of survey variables. We first consider the benefits and limitations of each approach, contrasting the methods used for estimation and inference. We next present a simulation study, empirically evaluating the ability of each approach to efficiently estimate differences in variance components. We then apply the two approaches to an analysis of real survey data collected in the U.S. National Survey of Family Growth (NSFG). We conclude that the two approaches tend to result in very similar inferences, and we provide suggestions for practice given some of the subtle differences observed.

    Release date: 2014-12-19
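
    For a continuous outcome, the frequentist side of this comparison can be sketched with an off-the-shelf random-intercept model fit separately to each interviewer group, comparing the estimated interviewer variance components. The paper's hierarchical-GLM and Bayesian machinery is richer; the groups, effect sizes, and data below are simulated for illustration.

    ```python
    # Minimal sketch: compare interviewer variance components across two groups.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)

    def simulate(group, n_interviewers, sd_interviewer, n_per=30):
        """Simulate one group of interviewers with a given interviewer-effect SD."""
        frames = []
        for i, b in enumerate(rng.normal(0.0, sd_interviewer, n_interviewers)):
            y = 5.0 + b + rng.normal(0.0, 1.0, n_per)
            frames.append(pd.DataFrame({"y": y, "interviewer": f"{group}-{i}", "group": group}))
        return pd.concat(frames, ignore_index=True)

    df = pd.concat([simulate("experienced", 40, 0.2),
                    simulate("new_hire", 40, 0.6)], ignore_index=True)

    for g, sub in df.groupby("group"):
        fit = smf.mixedlm("y ~ 1", sub, groups=sub["interviewer"]).fit(reml=True)
        var_b = float(fit.cov_re.iloc[0, 0])   # interviewer variance component
        print(f"{g:12s} interviewer variance: {var_b:.3f}  residual: {fit.scale:.3f}")
    ```

    A formal comparison would add an interval or test for the difference between the two components, which is where the frequentist and Bayesian routes discussed in the paper diverge.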

  • Articles and reports: 11-522-X201300014282
    Description:

    The IAB-Establishment Panel is the most comprehensive establishment survey in Germany, with almost 16,000 firms participating every year. Face-to-face interviews using paper and pencil (PAPI) have been conducted since 1993. An ongoing project is examining the possible effects of switching the survey to computer-assisted personal interviewing (CAPI) combined with a web-based version of the questionnaire (CAWI). As a first step, questions about internet access, willingness to complete the questionnaire online, and reasons for refusal were included in the 2012 wave. First results indicate widespread refusal to take part in a web survey. A closer look reveals that smaller establishments, long-time participants, and older respondents are reluctant to use the internet.

    Release date: 2014-10-31

  • Articles and reports: 11-522-X200800010956
    Description:

    The use of Computer Audio-Recorded Interviewing (CARI) as a tool to identify interview falsification is quickly growing in survey research (Biemer, 2000, 2003; Thissen, 2007). Similarly, survey researchers are starting to expand the usefulness of CARI by combining recordings with coding to address data quality (Herget, 2001; Hansen, 2005; McGee, 2007). This paper presents results from a study, included as part of the establishment-based National Center for Health Statistics' National Home and Hospice Care Survey (NHHCS), that used CARI behavior coding and CARI-specific paradata to: 1) identify and correct problematic interviewer behavior or question issues early in the data collection period, before either could negatively impact data quality; and 2) identify ways to diminish measurement error in future implementations of the NHHCS. During the first 9 weeks of the 30-week field period, CARI recorded a subset of questions from the NHHCS application for all interviewers. Recordings were linked with the interview application and output, and then coded in one of two modes: Code by Interviewer or Code by Question. The Code by Interviewer method provided visibility into problems specific to an interviewer as well as more generalized problems potentially applicable to all interviewers. The Code by Question method yielded data on the understandability of the questions and other response problems; in this mode, coders coded multiple implementations of the same question across multiple interviewers. Using the Code by Question approach, researchers identified issues with three key survey questions in the first few weeks of data collection and provided guidance to interviewers on how to handle those questions as data collection continued. Results from coding the audio recordings (which were linked with the survey application and output) will inform question wording and interviewer training in the next implementation of the NHHCS, and guide future enhancement of CARI and the coding system.

    Release date: 2009-12-03
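
    The two coding modes amount to two aggregations of the same coded recordings, roughly as below. The codes and data are invented; the actual NHHCS coding scheme is not reproduced here.

    ```python
    # Toy illustration of the Code by Interviewer and Code by Question views.
    import pandas as pd

    codes = pd.DataFrame({
        "interviewer": ["A", "A", "B", "B", "B", "C", "C"],
        "question":    ["Q1", "Q2", "Q1", "Q1", "Q3", "Q2", "Q3"],
        "problem":     [0, 1, 1, 1, 0, 1, 0],   # 1 = coder flagged a problem
    })

    # Code by Interviewer: which interviewers show elevated problem rates?
    print(codes.groupby("interviewer")["problem"].mean())

    # Code by Question: which questions cause trouble across interviewers?
    print(codes.groupby("question")["problem"].mean())
    ```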

  • Articles and reports: 11-522-X200800010987
    Description:

    Over the last few years, there has been substantial progress in web data collection. Today, many statistical offices offer a web alternative in many different types of surveys. It is widely believed that web data collection may raise data quality while lowering data collection costs. Experience has shown that, when the web is offered as a second alternative to paper questionnaires, enterprises are slow to embrace it. On the other hand, experiments have shown that promoting web over paper can raise web take-up rates. However, there are still few studies of what happens when the contact strategy is changed radically and the web is the only option given in a complex enterprise survey. In 2008, Statistics Sweden adopted a more or less web-only strategy for the survey of industrial production (PRODCOM). The web questionnaire was developed in the generalised tool for web surveys used by Statistics Sweden. The paper presents the web solution and some experiences from the 2008 PRODCOM survey, including process data on response rates and error ratios, as well as the results of a cognitive follow-up of the survey. Some important lessons learned are also presented.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010991
    Description:

    In the evaluation of prospective survey designs, statistical agencies generally must consider a large number of design factors that may have a substantial impact on both survey costs and data quality. Assessments of trade-offs between cost and quality are often complicated by limitations on the amount of information available regarding fixed and marginal costs related to: instrument redesign and field testing; the number of primary sample units and sample elements included in the sample; assignment of instrument sections and collection modes to specific sample elements; and (for longitudinal surveys) the number and periodicity of interviews. Similarly, designers often have limited information on the impact of these design factors on data quality.

    This paper extends standard design-optimization approaches to account for uncertainty in the abovementioned components of cost and quality. Special attention is directed toward the level of precision required for cost and quality information to provide useful input into the design process; sensitivity of cost-quality trade-offs to changes in assumptions regarding functional forms; and implications for preliminary work focused on collection of cost and quality information. In addition, the paper considers distinctions between cost and quality components encountered in field testing and production work, respectively; incorporation of production-level cost and quality information into adaptive design work; as well as costs and operational risks arising from the collection of detailed cost and quality data during production work. The proposed methods are motivated by, and applied to, work with partitioned redesign of the interview and diary components of the U.S. Consumer Expenditure Survey.

    Release date: 2009-12-03
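
    One schematic way to fold cost uncertainty into such a design comparison is to evaluate candidate designs against simulated cost draws, as sketched below. The budget, cost distribution, and one-over-n variance stand-in are invented; the paper's optimization is considerably more detailed.

    ```python
    # Monte Carlo sketch of a cost-quality trade-off under uncertain marginal costs.
    import numpy as np

    rng = np.random.default_rng(1)
    budget, fixed_cost = 100_000.0, 20_000.0
    candidate_n = [400, 800, 1200, 1600]

    # Uncertain marginal cost per completed interview (a lognormal guess).
    cost_draws = rng.lognormal(mean=np.log(55.0), sigma=0.25, size=10_000)

    for n in candidate_n:
        total = fixed_cost + n * cost_draws
        p_ok = (total <= budget).mean()          # chance the design stays in budget
        variance = 1.0 / n                       # stand-in: variance shrinks as 1/n
        print(f"n={n:5d}  var~{variance:.5f}  P(within budget)={p_ok:.2f}")
    ```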

  • Articles and reports: 82-003-X200710113309
    Geography: Canada
    Description:

    This article summarizes the design, methods and results emerging from the Canadian Health Measures Survey pre-test, which took place from October through December 2004 in Calgary, Alberta.

    Release date: 2007-12-05

  • Articles and reports: 11-522-X20050019445
    Description:

    This paper describes an innovative use of data mining on response data and metadata to identify, characterize and prevent falsification by field interviewers on the National Survey on Drug Use and Health (NSDUH). Interviewer falsification is the deliberate creation of survey responses by the interviewer without input from the respondent.

    Release date: 2007-03-02
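
    One simple screen in the spirit of this work is to flag interviewers whose paradata look anomalous, for example implausibly short interviews. The NSDUH models were far more elaborate; the column names, data, and threshold below are invented.

    ```python
    # Flag interviewers whose median interview duration is an outlier.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    para = pd.DataFrame({
        "interviewer": np.repeat([f"I{i}" for i in range(20)], 25),
        "duration_min": rng.normal(45, 8, 500).clip(min=5),
    })
    # Make one interviewer look suspicious for the demonstration.
    para.loc[para["interviewer"] == "I7", "duration_min"] *= 0.4

    med = para.groupby("interviewer")["duration_min"].median()
    z = (med - med.mean()) / med.std(ddof=1)
    print("flag for review:", list(z[z < -2.5].index))
    ```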

  • Articles and reports: 11-522-X20040018747
    Description:

    This document describes the development and pilot of the first American Indian and Alaska Native Adult Tobacco Survey. Meetings with expert panels and tribal representatives helped to adapt methods.

    Release date: 2005-10-27
Data (0)

No content available at this time.

Reference (4)

  • Surveys and statistical programs – Documentation: 11-522-X20010016308
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Census Bureau uses response error analysis to evaluate the effectiveness of survey questions. For a given survey, questions that are deemed critical to the survey or considered problematic from past examination are selected for analysis. New or revised questions are prime candidates for re-interview. A re-interview is a new interview in which a subset of questions from the original interview is re-asked of a sample of the survey respondents. For each re-interview question, the proportion of respondents who give inconsistent responses is evaluated. The "Index of Inconsistency" is used as the measure of response variance, and each question is labelled low, moderate, or high in response variance. In high response variance cases, the questions are put through cognitive testing, and modifications to the questions are recommended.

    The Schools and Staffing Survey (SASS), sponsored by the National Center for Education Statistics (NCES), is also examined for response error and for possible relationships between inconsistent responses and characteristics of the schools and teachers in that survey. Results of this analysis can be used to change survey procedures and improve data quality.

    Release date: 2002-09-12
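
    For a binary item, the re-interview measures described here can be computed directly. The sketch below uses the gross difference rate and one common form of the index of inconsistency, with rough cut-offs for the low/moderate/high labels; treat it as an illustration rather than the Census Bureau's full specification.

    ```python
    # Gross difference rate and index of inconsistency for a binary item.
    import numpy as np

    original    = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # first interview
    reinterview = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])  # same respondents, re-asked

    gdr = np.mean(original != reinterview)           # gross difference rate
    p1, p2 = original.mean(), reinterview.mean()
    ioi = gdr / (p1 * (1 - p2) + p2 * (1 - p1))      # index of inconsistency

    label = "low" if ioi < 0.2 else "moderate" if ioi < 0.5 else "high"
    print(f"GDR={gdr:.2f}  IoI={ioi:.2f}  response variance: {label}")
    ```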

  • Surveys and statistical programs – Documentation: 75F0002M1993005
    Description:

    This paper presents general observations from the members of the Survey of Labour and Income Dynamics head office project team; a summary of responses from a subset of interviewers in the test, who were asked to complete a debriefing questionnaire after the test; and detailed comments from the Head Office observers.

    Release date: 1995-12-30

  • Surveys and statistical programs – Documentation: 75F0002M1995003
    Description:

    This paper presents the structure and questions of the January 1995 labour interview. It also discusses changes made to the labour interview between 1994 and 1995.

    Release date: 1995-12-30

  • Surveys and statistical programs – Documentation: 75F0002M1995005
    Description:

    This paper presents the questions, responses and question flow for the 1995 Survey of Labour and Income Dynamics (SLID) preliminary interview.

    Release date: 1995-12-30