Response and nonresponse

Results

All (16) (0 to 10 of 16 results)

  • Articles and reports: 12-001-X202200100006
    Description:

    Over the last two decades, survey response rates have been steadily falling. In that context, it has become increasingly important for statistical agencies to develop and use methods that reduce the adverse effects of non-response on the accuracy of survey estimates. Follow-up of non-respondents may be an effective, albeit time- and resource-intensive, remedy for non-response bias. We conducted a simulation study using real business survey data to shed some light on several questions about non-response follow-up. For instance, assuming a fixed follow-up budget, what is the best way to select the non-responding units to follow up? How much effort should be spent repeatedly following up non-respondents until a response is received? Should all of them be followed up, or only a sample? If a sample, how should it be selected? We compared Monte Carlo relative biases and relative root mean square errors under different follow-up sampling designs, sample sizes and non-response scenarios. We also derived an expression for the minimum follow-up sample size required to expend the budget, on average, and showed that it maximizes the expected response rate. A main conclusion of our simulation experiment is that this sample size also appears to approximately minimize the bias and mean square error of the estimates.
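
    To make the follow-up questions concrete, here is a minimal Monte Carlo sketch in the spirit of the study. The population, response mechanism, and follow-up sizes are all invented, and the estimator is a standard two-phase expansion rather than the authors' exact design; it simply assumes that every followed-up unit eventually responds.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented population: a skewed "business" variable, as in many business surveys.
N, n = 5000, 500
y = rng.lognormal(mean=5.0, sigma=1.0, size=N)

def estimate_total(m):
    """One replicate: initial SRS, non-response, then follow up m non-respondents."""
    sample = rng.choice(N, size=n, replace=False)
    # Non-ignorable non-response: larger units respond more often (invented).
    p_resp = 0.25 + 0.5 / (1 + np.exp(-(np.log(y[sample]) - 5.0)))
    responded = rng.random(n) < p_resp
    resp, nonresp = sample[responded], sample[~responded]
    fu = rng.choice(nonresp, size=min(m, len(nonresp)), replace=False)
    # Two-phase expansion estimator, assuming follow-up achieves full response.
    return (N / n) * (y[resp].sum() + (len(nonresp) / len(fu)) * y[fu].sum())

true_total = y.sum()
for m in (25, 50, 100, 200):
    est = np.array([estimate_total(m) for _ in range(1000)])
    rb = (est.mean() - true_total) / true_total
    rrmse = np.sqrt(np.mean((est - true_total) ** 2)) / true_total
    print(f"follow-up size {m:3d}: rel. bias {rb:+.3f}, rel. RMSE {rrmse:.3f}")
```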

    Release date: 2022-06-21

  • Articles and reports: 12-001-X201300211885
    Description:

    Web surveys are generally associated with low response rates. Textbooks on Web survey research commonly stress the importance of the welcome screen in encouraging respondents to take part, and research has confirmed this empirically: most respondents who break off do so at the welcome screen. However, there has been little research on how the design of this screen affects the breakoff rate. In a study conducted at the University of Konstanz, three experimental treatments were added to a survey of the first-year student population (2,629 students) to assess the impact of different design features of this screen on breakoff rates. The experiments varied the background color of the welcome screen, the task duration promised on it, and the length of the information it provided about respondents' privacy rights. The analyses show that the longer the stated duration and the more space devoted to explaining privacy rights on the welcome screen, the fewer respondents started and completed the survey. Varying the background color, however, did not produce the expected significant difference.
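
    As a concrete illustration of how such a treatment effect on breakoff can be tested, here is a minimal sketch with invented counts (not the Konstanz data); a chi-square test compares breakoff at the welcome screen between two announced-duration conditions.

```python
from scipy.stats import chi2_contingency

# Invented counts: rows are announced durations, columns are outcomes.
#                 continued  broke off
table = [[1040, 210],   # "15 minutes" announced
         [ 960, 320]]   # "25 minutes" announced
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```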

    Release date: 2014-01-15

  • Articles and reports: 12-001-X201000211384
    Description:

    The current economic downturn in the US could challenge costly strategies in survey operations. In the Behavioral Risk Factor Surveillance System (BRFSS), ending the monthly data collection at 31 days could be a less costly alternative. However, doing so would exclude interviews completed after 31 days (late responders), whose characteristics may differ in many respects from those of respondents who completed the survey within 31 days (early responders). We examined whether early and late responders differ in demographics, health-care coverage, general health status, health risk behaviors, and chronic disease conditions or illnesses. We used 2007 BRFSS data, in which a representative sample of the noninstitutionalized adult U.S. population was selected by random digit dialing. Late responders were significantly more likely to be male, to report their race/ethnicity as Hispanic, to have an annual income above $50,000, to be younger than 45 years of age, to have less than a high school education, to have health-care coverage, and to report good health; they were significantly less likely to report hypertension, diabetes, or obesity. The observed differences between early and late responders are unlikely to influence national and state-level estimates appreciably. But as the proportion of late responders may increase in the future, their impact on surveillance estimates should be examined before they are excluded from analysis. Analyses of late responders alone should combine several years of data to produce reliable estimates.
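
    A minimal sketch of this kind of early/late comparison, on invented data (the column names are assumptions; the 31-day cut-off is from the abstract); a two-proportion z-test compares one characteristic across the two groups.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "days_to_complete": rng.integers(1, 61, n),   # field time, invented
    "male": rng.integers(0, 2, n),                # characteristic to compare
})
df["late"] = df["days_to_complete"] > 31          # the 31-day rule

counts = df.groupby("late")["male"].agg(["sum", "count"])
z, p = proportions_ztest(counts["sum"].to_numpy(), counts["count"].to_numpy())
print(f"z = {z:.2f}, p = {p:.3f}")
```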

    Release date: 2010-12-21

  • Articles and reports: 11-522-X200800010952
    Description:

    In a survey whose results were estimated by simple averages, we compare the effect of a follow-up among non-respondents with that of weighting based on the last ten percent of respondents. The data come from the Survey of Living Conditions among Immigrants in Norway, carried out in 2006.
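
    A minimal sketch of the weighting idea, on invented data: when the survey variable is correlated with how hard a unit is to reach, letting the last ten percent of respondents also represent the non-respondents moves the estimate toward the full-population mean. The numbers and the response mechanism are assumptions, not the Norwegian survey's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented data: the outcome rises with "reluctance", so late respondents
# resemble non-respondents more than early respondents do.
n = 2000
reluctance = rng.uniform(0, 1, n)
y = 10 + 4 * reluctance + rng.normal(0, 1, n)

order = np.argsort(reluctance)        # easiest-to-reach respond first
resp = y[order[: int(0.7 * n)]]       # 70% eventually respond, in that order
n_nonresp = n - len(resp)

# Weighting alternative: the last ten percent of respondents also stand in
# for the non-respondents.
late = resp[-len(resp) // 10:]
weighted = (resp.sum() + n_nonresp * late.mean()) / n

print(f"respondent mean: {resp.mean():.2f}")   # biased low
print(f"weighted mean  : {weighted:.2f}")      # closer to the truth
print(f"true mean      : {y.mean():.2f}")
```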

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010957
    Description:

    Business surveys differ from surveys of populations of individual persons or households in many respects. Two of the most important differences are (a) that respondents in business surveys do not answer questions about characteristics of themselves (such as their experiences, behaviours, attitudes and feelings) but about characteristics of organizations (such as their size, revenues, policies, and strategies), and (b) that they answer these questions as informants for those organizations. Academic business surveys also differ from other business surveys, such as those of national statistical agencies, in many respects. The most important difference is that academic business surveys usually aim not at generating descriptive statistics but at testing hypotheses, i.e., relations between variables. Response rates in academic business surveys are very low, which implies a huge risk of non-response bias. Usually no attempt is made to assess the extent of non-response bias, so published survey results may not correctly reflect actual relations within the population, which in turn increases the likelihood that reported test results are incorrect.

    This paper analyses how (the risk of) non-response bias is discussed in research papers published in top management journals. It demonstrates that non-response bias is not assessed to a sufficient degree and that, where correction is attempted at all, it is difficult or very costly in practice. Three approaches to dealing with this problem are presented and discussed: (a) obtaining data by means other than questionnaires; (b) conducting surveys of very small populations; and (c) conducting surveys of very small samples.

    We discuss why these approaches are appropriate means of testing hypotheses in populations, and the trade-offs involved in selecting among them.
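
    A minimal sketch of the underlying risk, with invented numbers: a true positive relation between two firm-level variables, a response mechanism that depends on both, and the resulting distortion of the correlation estimated from respondents alone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented population with a genuine positive relation between two
# firm-level variables.
N = 10_000
size = rng.normal(0, 1, N)
innovation = 0.3 * size + rng.normal(0, 1, N)

# Low, non-ignorable response: large AND innovative firms respond least.
p_resp = 1 / (1 + np.exp(2 + 1.5 * size + 1.5 * innovation))
resp = rng.random(N) < p_resp

print(f"response rate          : {resp.mean():.1%}")
print(f"population correlation : {np.corrcoef(size, innovation)[0, 1]:+.2f}")
print(f"respondent correlation : {np.corrcoef(size[resp], innovation[resp])[0, 1]:+.2f}")
```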

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010984
    Description:

    The Enterprise Portfolio Manager (EPM) Program at Statistics Canada has demonstrated the value of a "holistic" approach to managing the relationships we have with our largest and most complex business respondents.

    Understanding that different types of respondents should receive different levels of intervention, and having learned the value of an "enterprise-centric" approach to managing relationships with important, complex data providers, STC has adopted a response management strategy that divides its business population into four tiers based on size, complexity and importance to survey estimates. For each segment, a response management approach has been developed that reflects its relative contribution, allowing STC to target resources where they yield the greatest return on investment. Tiers I and II are defined as critical to survey estimates.

    Tier I represents the largest, most complex businesses in Canada and is managed through the Enterprise Portfolio Management Program.

    Tier II represents businesses that are smaller or less complex than those in Tier I but still significant for developing accurate measures of the activities of individual industries.

    Tier III includes medium-sized businesses, which form the bulk of survey samples.

    Tier IV comprises the smallest businesses, which are excluded from collection; for these, STC relies entirely on tax information.

    The presentation will outline:

      • It works! Results and metrics from the programs that have operationalized the Holistic Response Management strategy.
      • The development of a less subjective, methodological approach to segmenting the business survey population for HRM.
      • The project team's work to capture the complexity factors intrinsically used by experienced staff to rank respondents.
      • What our so-called "problem" respondents have told us about the issues underlying non-response.
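
    A purely hypothetical sketch of what tier-segmentation logic can look like; the variables and thresholds below are invented for illustration and are not STC's actual criteria, which combine size, complexity and importance to survey estimates.

```python
def assign_tier(revenue_musd: float, complexity: float) -> int:
    """Toy segmentation rule; all thresholds are invented."""
    if revenue_musd > 1000 or complexity > 0.9:
        return 1   # largest, most complex: enterprise portfolio management
    if revenue_musd > 100 or complexity > 0.6:
        return 2   # still critical to industry-level estimates
    if revenue_musd > 1:
        return 3   # medium-sized: the bulk of survey samples
    return 4       # smallest: excluded from collection, tax data used

print(assign_tier(2500.0, 0.5))   # -> 1
```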

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010999
    Description:

    The choice of the number of call attempts in a telephone survey is an important decision: a large number of attempts makes data collection costly and time-consuming, while a small number shrinks the response set from which conclusions are drawn and increases the variance. The decision can also affect the nonresponse bias. In this paper we study the effects of the number of call attempts on the nonresponse rate and the nonresponse bias in two surveys conducted by Statistics Sweden: the Labour Force Survey (LFS) and the Household Finances (HF) survey.

    Using paradata, we calculate the response rate as a function of the number of call attempts. To estimate the nonresponse bias, we use estimates of some register variables for which observations are available for both respondents and nonrespondents. We also calculate estimates of some real survey parameters as functions of the number of call attempts. The results indicate that the current number of call attempts can be reduced without increasing the nonresponse bias.
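
    A minimal sketch of the paradata calculation, with an invented response mechanism and an invented register variable; it shows the cumulative response rate and the bias of a respondent mean as functions of the attempt cut-off.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented paradata: the attempt on which each case would respond, and a
# register variable (known for everyone) correlated with reachability.
n = 5000
raw = rng.geometric(0.15, n)                     # latent attempt of response
income = 40 + 1.5 * raw + rng.normal(0, 5, n)    # harder-to-reach earn more
attempt = np.where(raw <= 12, raw, 0)            # field work stops at 12; 0 = never

print(" k  response rate  bias of respondent mean")
for k in (1, 3, 6, 12):
    resp = (attempt >= 1) & (attempt <= k)
    bias = income[resp].mean() - income.mean()
    print(f"{k:2d}  {resp.mean():12.1%}  {bias:+22.2f}")
```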

    Release date: 2009-12-03

  • Articles and reports: 12-001-X20060019257
    Description:

    In the presence of item nonresponse, two approaches have traditionally been used to make inferences about parameters of interest. The first assumes uniform response within imputation cells, whereas the second assumes ignorable response but uses a model for the variable of interest as the basis for inference. In this paper, we propose a third approach that assumes a specified ignorable response mechanism without requiring a model for the variable of interest. We show how to obtain imputed values that lead to estimators of a total that are approximately unbiased under the proposed approach as well as the second approach. Approximately unbiased variance estimators for the imputed estimators are also obtained using an approach of Fay (1991) in which the order of sampling and response is reversed. Finally, simulation studies are conducted to investigate the finite-sample performance of the methods in terms of bias and mean square error.
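
    Here is one simple instance of the idea, sketched on invented data: the response probabilities p_i are taken as specified (no model for y), and every missing value is imputed with a respondent mean weighted by (1 - p_i)/p_i, which makes the simple imputed total approximately unbiased under that mechanism. This illustrates the flavour of the approach, not the paper's exact estimator or its variance machinery.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented data: y depends on an auxiliary x, and so does response.
n = 2000
x = rng.uniform(0, 2, n)
y = 3 * x + rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(x - 1)))        # specified response probabilities
r = rng.random(n) < p                 # response indicators

# Common imputed value: respondent mean weighted by (1 - p) / p, chosen so
# that the expected imputed total matches the true total under the mechanism.
w = (1 - p[r]) / p[r]
y_star = np.sum(w * y[r]) / np.sum(w)
y_imp = np.where(r, y, y_star)

print(f"true total           : {y.sum():9.1f}")
print(f"respondent-mean fill : {np.where(r, y, y[r].mean()).sum():9.1f}")  # biased
print(f"weighted-mean fill   : {y_imp.sum():9.1f}")                        # ~unbiased
```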

    Release date: 2006-07-20

  • Articles and reports: 11-522-X20040018735
    Description:

    This paper describes analyses of nonresponse among ethnic minorities in the Netherlands. The analyses show that the response effect is mediated by the degree of urbanisation, with a negative impact observed among ethnic minorities.
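
    A minimal sketch of a mediation check of this kind, on invented data (variable names are assumptions): if urbanisation carries the effect, the minority coefficient in a response model should shrink once urbanisation is added.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 4000

# Invented data: minorities live in more urbanised areas, and urbanisation
# (not minority status itself) lowers response.
minority = rng.integers(0, 2, n)
urban = np.clip(0.3 + 0.4 * minority + rng.normal(0, 0.2, n), 0, 1)
resp = (rng.random(n) < 1 / (1 + np.exp(-(1.0 - 2.0 * urban)))).astype(int)
df = pd.DataFrame({"resp": resp, "minority": minority, "urban": urban})

m1 = smf.logit("resp ~ minority", df).fit(disp=0)
m2 = smf.logit("resp ~ minority + urban", df).fit(disp=0)
print(f"minority effect, total  : {m1.params['minority']:+.2f}")
print(f"minority effect, direct : {m2.params['minority']:+.2f}")  # near zero
```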

    Release date: 2005-10-27

  • Articles and reports: 12-001-X20020026428
    Description:

    The analysis of survey data from different geographical areas, where the data from each area are polychotomous, can easily be performed using hierarchical Bayesian models, even if some areas have small cell counts. Difficulties arise, however, when the survey data are missing because of non-response, especially when the characteristics of the respondents differ from those of the non-respondents. We use the selection approach for estimation when there are non-respondents because it permits inference for all the parameters. Specifically, we describe a hierarchical Bayesian model to analyse multinomial non-ignorable non-response data from different geographical areas, some of which may be small. For the model, we use a Dirichlet prior density for the multinomial probabilities and a beta prior density for the response probabilities. This permits a 'borrowing of strength' from the data of larger areas to improve the reliability of the estimates of the model parameters for the smaller areas. Because the joint posterior density of all the parameters is complex, inference is sampling-based, using Markov chain Monte Carlo methods. We apply our method to an analysis of body mass index (BMI) data from the third National Health and Nutrition Examination Survey (NHANES III). For simplicity, BMI is categorized into three natural levels for each of 8 age-race-sex domains and 34 counties. We assess the performance of our model using the NHANES III data and simulated examples, which show that it works reasonably well.
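
    The 'borrowing of strength' has a simple conjugate core, sketched below with invented counts: each area's multinomial probabilities get a shared Dirichlet prior centred on the pooled distribution, which pulls small-area estimates toward it. The paper's full model additionally handles non-ignorable non-response with beta priors and fits everything by MCMC; none of that is reproduced here.

```python
import numpy as np

# Invented counts: BMI in three categories for one large and one small area.
counts = np.array([[120, 300, 180],   # large area
                   [  2,   5,   1]])  # small area

pooled = counts.sum(axis=0) / counts.sum()
alpha = 20 * pooled                   # shared Dirichlet prior (strength invented)

# Conjugate posterior means: small areas are pulled toward the pooled rates.
post = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha.sum())
print(np.round(post, 3))
```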

    Release date: 2003-01-29