Response and nonresponse


Results

All (6 results)

  • Articles and reports: 12-001-X201900300006
    Description:

    High nonresponse is a very common problem in sample surveys today. In statistical terms we are worried about increased bias and variance of estimators for population quantities such as totals or means. Different methods have been suggested in order to compensate for this phenomenon. We can roughly divide them into imputation and calibration and it is the latter approach we will focus on here. A wide spectrum of possibilities is included in the class of calibration estimators. We explore linear calibration, where we suggest using a nonresponse version of the design-based optimal regression estimator. Comparisons are made between this estimator and a GREG type estimator. Distance measures play a very important part in the construction of calibration estimators. We show that an estimator of the average response propensity (probability) can be included in the “optimal” distance measure under nonresponse, which will help to reduce the bias of the resulting estimator. To illustrate empirically the theoretically derived results for the suggested estimators, a simulation study has been carried out. The population is called KYBOK and consists of clerical municipalities in Sweden, where the variables include financial as well as size measurements. The results are encouraging for the “optimal” estimator in combination with the estimated average response propensity, where the bias was reduced for most of the Poisson sampling cases in the study.

    Release date: 2019-12-17

  • Articles and reports: 12-001-X201900200005
    Description:

    We present an approach for imputation of missing items in multivariate categorical data nested within households. The approach relies on a latent class model that (i) allows for household-level and individual-level variables, (ii) ensures that impossible household configurations have zero probability in the model, and (iii) can preserve multivariate distributions both within households and across households. We present a Gibbs sampler for estimating the model and generating imputations. We also describe strategies for improving the computational efficiency of the model estimation. We illustrate the performance of the approach with data that mimic the variables collected in typical population censuses.

    Release date: 2019-06-27

  • Articles and reports: 12-001-X201000111252
    Description:

    Nonresponse bias has been a long-standing issue in survey research (Brehm 1993; Dillman, Eltinge, Groves and Little 2002), with numerous studies seeking to identify factors that affect both item and unit response. To contribute to the broader goal of minimizing survey nonresponse, this study considers several factors that can impact survey nonresponse, using a 2007 Animal Welfare Survey conducted in Ohio, USA. In particular, the paper examines the extent to which topic salience and incentives affect survey participation and item nonresponse, drawing on the leverage-saliency theory (Groves, Singer and Corning 2000). We find that participation in a survey is affected by its subject context (as this exerts either positive or negative leverage on sampled units) and prepaid incentives, which is consistent with the leverage-saliency theory. Our expectations are also confirmed by the finding that item nonresponse, our proxy for response quality, does vary by proximity to agriculture and the environment (residential location, knowledge about how food is grown, and views about the importance of animal welfare). However, the data suggest that item nonresponse does not vary according to whether or not a respondent received incentives.

    Release date: 2010-06-29

  • Articles and reports: 11-522-X200800010957
    Description:

    Business surveys differ from surveys of populations of individual persons or households in many respects. Two of the most important differences are (a) that respondents in business surveys do not answer questions about characteristics of themselves (such as their experiences, behaviours, attitudes and feelings) but about characteristics of organizations (such as their size, revenues, policies, and strategies) and (b) that they answer these questions as an informant for that organization. Academic business surveys differ from other business surveys, such as those of national statistical agencies, in many respects as well. The most important difference is that academic business surveys usually do not aim at generating descriptive statistics but at testing hypotheses, i.e., relations between variables. Response rates in academic business surveys are very low, which implies a huge risk of non-response bias. Usually no attempt is made to assess the extent of non-response bias, and published survey results might, therefore, not be a correct reflection of actual relations within the population, which in turn increases the likelihood that the reported test result is not correct.

    This paper provides an analysis of how (the risk of) non-response bias is discussed in research papers published in top management journals. It demonstrates that non-response bias is not assessed to a sufficient degree and that, if attempted at all, correction of non-response bias is difficult or very costly in practice. Three approaches to dealing with this problem are presented and discussed: (a) obtaining data by means other than questionnaires; (b) conducting surveys of very small populations; and (c) conducting surveys of very small samples.

    The paper discusses why these approaches are appropriate means of testing hypotheses in populations, as well as trade-offs regarding the selection of an approach.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X20040018738
    Description:

    This paper describes the efforts made during the 2001 UK Census to both maximise and measure the response in the hardest to count sectors of the population. It also discusses the research that will be undertaken for the 2011 UK Census.

    Release date: 2005-10-27

  • Articles and reports: 11-522-X20030017598
    Description:

    This paper looks at descriptive statistics to evaluate non-response in the Labour Force Survey (LFS) and also at ways of improving the current methodology of making non-response adjustments.

    Release date: 2005-01-26
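
A common thread in the articles above is adjusting estimators for nonresponse using estimated response propensities (as in the first abstract's calibration estimator). None of the papers publish code; the following is a minimal, hypothetical Python sketch of the basic idea only: each respondent's design weight is divided by an estimated response propensity so that respondents stand in for similar nonrespondents. All function names, variable names, and toy numbers are assumptions for illustration, not taken from any of the papers.

```python
def propensity_adjusted_total(y, design_weights, responded, propensities):
    """Estimate a population total from respondents only.

    y               : study-variable values for the full sample
    design_weights  : inverse inclusion probabilities, d_k = 1 / pi_k
    responded       : booleans, True if unit k answered
    propensities    : estimated response probabilities phi_k (0 < phi_k <= 1)
    """
    total = 0.0
    for y_k, d_k, r_k, phi_k in zip(y, design_weights, responded, propensities):
        if r_k:
            # Inflate each respondent's weight by 1/phi_k to compensate
            # for the nonrespondents it represents.
            total += d_k * y_k / phi_k
    return total


# Toy example: 4 sampled units, each carrying design weight 10;
# units 1 and 3 respond, with an assumed response propensity of 0.5 each.
y = [2.0, 4.0, 6.0, 8.0]
d = [10.0, 10.0, 10.0, 10.0]
r = [True, False, True, False]
phi = [0.5, 0.5, 0.5, 0.5]
print(propensity_adjusted_total(y, d, r, phi))  # prints 160.0
```

The bias-reduction results discussed in the first abstract concern a more refined version of this idea, in which an estimated *average* response propensity enters the distance measure of a calibration estimator rather than each weight directly.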
