Response and nonresponse

Results

All (21) (0 to 10 of 21 results)

  • Articles and reports: 12-001-X200900211037
    Description:

    Randomized response strategies, originally developed as statistical methods to reduce nonresponse and untruthful answering, can also be applied to statistical disclosure control for public-use microdata files. This paper presents a standardization of randomized response techniques for estimating the proportions of identifying or sensitive attributes, and derives the statistical properties of the standardized estimator under general probability sampling. Analysing how different choices of the method's implicit "design parameters" affect the estimator's performance requires measures of privacy protection; these yield variance-optimal design parameters for a given level of privacy protection. To this end, the variables have to be classified into different categories of sensitivity. A real-data example applies the technique in a survey on academic cheating behaviour. (A minimal sketch of the classic Warner special case appears after this results list.)

    Release date: 2009-12-23

  • Articles and reports: 12-001-X200900211038
    Description:

    We examine how to overcome the overestimation caused by link nonresponse when the generalized weight share method (GWSM) is used in indirect sampling. Several adjustment methods that incorporate link nonresponse into the GWSM have been constructed for situations with and without auxiliary variables. A simulation study based on a longitudinal survey applies some of the recommended adjustment methods. The results show that the adjusted GWSMs perform well in reducing both estimation bias and variance, with a substantial improvement in bias reduction.

    Release date: 2009-12-23

  • Articles and reports: 12-001-X200900211039
    Description:

    Propensity weighting is a procedure for adjusting for unit nonresponse in surveys. One way to implement it is to divide the sampling weights by estimates of the probabilities that the sampled units respond to the survey. Typically, these estimates are obtained by fitting parametric models such as logistic regression, but the resulting adjusted estimators may be biased when the specified parametric models are incorrect. To avoid misspecifying such a model, we consider nonparametric estimation of the response probabilities by local polynomial regression. We study the asymptotic properties of the resulting estimator under quasi-randomization. The practical behaviour of the proposed nonresponse adjustment is evaluated on NHANES data. (A minimal sketch of this weight adjustment with a local polynomial smoother appears after this results list.)

    Release date: 2009-12-23

  • Articles and reports: 12-001-X200900211043
    Description:

    Business surveys often use a one-stage stratified simple random sampling without replacement design with some certainty strata. Although weight adjustment is typically applied for unit nonresponse, the variability due to nonresponse is often omitted in practice when variances are estimated. This is especially problematic when there are certainty strata. Using jackknife, linearization, and modified jackknife methods, we derive variance estimators that are consistent when the number of sampled units in each weighting cell is large. The derived variance estimators are first applied to empirical data from the Annual Capital Expenditures Survey conducted by the U.S. Census Bureau and are then examined in a simulation study. (A minimal sketch of a jackknife that redoes the nonresponse adjustment on each replicate appears after this results list.)

    Release date: 2009-12-23

  • Articles and reports: 11-522-X200800010951
    Description:

    Missing values caused by item nonresponse are one type of non-sampling error in surveys. When cases with missing values are discarded in statistical analyses, estimates may be biased because of differences between respondents with missing values and respondents without them. Moreover, when variables have different patterns of missingness among sampled cases, discarding cases with missing values may yield inconsistent results, because different analyses are then based on different, possibly non-comparable subsets of the sampled cases. Analyses that discard cases with missing values may nevertheless be valid provided those values are missing completely at random (MCAR). Are those missing values MCAR? (An informal check of the MCAR assumption is sketched after this results list.)

    To compensate, missing values are often imputed or survey weights are adjusted using weighting class methods. Subsequent analyses based on those compensations may be valid provided that missing values are missing at random (MAR) within each of the categorizations of the data implied by the independent variables of the models that underlie those adjustment approaches. Are those missing values MAR?

    Because missing values are not observed, MCAR and MAR assumptions made by statistical analyses are infrequently examined. This paper describes a selection model from which statistical significance tests for the MCAR and MAR assumptions can be examined although the missing values are not observed. Data from the National Immunization Survey conducted by the U.S. Department of Health and Human Services are used to illustrate the methods.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010952
    Description:

    For a survey whose results are estimated by simple averages, we compare the effect of a follow-up among non-respondents with that of weighting based on the last ten percent of respondents. The data come from the Survey of Living Conditions among Immigrants in Norway, carried out in 2006.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010953
    Description:

    As survey researchers attempt to maintain traditionally high response rates, reluctant respondents have driven up data collection costs. This reluctance may be related to the time it takes to complete an interview in large-scale, multi-purpose surveys such as the National Survey of Recent College Graduates (NSRCG). Recognizing that respondent burden or questionnaire length may contribute to lower response rates, in 2003, following several months of data collection under the standard protocol, the NSRCG offered its nonrespondents monetary incentives about two months before the end of data collection. In conjunction with the incentive offer, the NSRCG also offered persistent nonrespondents an opportunity to complete a much-abbreviated interview consisting of a few critical items. The late respondents who completed interviews as a result of the incentive and critical-items-only offers may provide some insight into nonresponse bias and into whether such interviewees would have remained nonrespondents had these refusal conversion efforts not been made.

    In this paper, we define "reluctant respondents" as those who responded to the survey only after extra efforts were made beyond the ones initially planned in the standard data collection protocol. Specifically, reluctant respondents in the 2003 NSRCG are those who responded to the regular or shortened questionnaire following the incentive offer. Our conjecture was that the behavior of the reluctant respondents would be more like that of nonrespondents than of respondents to the surveys. This paper describes an investigation of reluctant respondents and the extent to which they are different from regular respondents. We compare different response groups on several key survey estimates. This comparison will expand our understanding of nonresponse bias in the NSRCG, and of the characteristics of nonrespondents themselves, thus providing a basis for changes in the NSRCG weighting system or estimation procedures in the future.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010957
    Description:

    Business surveys differ in many respects from surveys of populations of individual persons or households. Two of the most important differences are (a) that respondents in business surveys do not answer questions about themselves (such as their experiences, behaviours, attitudes and feelings) but about characteristics of organizations (such as their size, revenues, policies, and strategies), and (b) that they answer these questions as informants for those organizations. Academic business surveys, in turn, differ in many respects from other business surveys, such as those of national statistical agencies. The most important difference is that academic business surveys usually aim not at generating descriptive statistics but at testing hypotheses, i.e. relations between variables. Response rates in academic business surveys are very low, which implies a large risk of non-response bias. Usually no attempt is made to assess the extent of non-response bias, so published survey results may not correctly reflect actual relations within the population, which in turn increases the likelihood that the reported test result is incorrect.

    This paper analyses how (the risk of) non-response bias is discussed in research papers published in top management journals. It demonstrates that non-response bias is not assessed to a sufficient degree and that, where it is attempted at all, correcting for non-response bias is difficult or very costly in practice. Three approaches to dealing with this problem are presented and discussed: (a) obtaining data by means other than questionnaires; (b) conducting surveys of very small populations; and (c) conducting surveys of very small samples.

    The paper discusses why these approaches are appropriate means of testing hypotheses in populations, as well as the trade-offs involved in selecting an approach.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010960
    Description:

    Non-response is inevitable in any survey, despite all the effort put into reducing it at the various stages of data collection, and it can bias the estimates. Non-response is an especially serious problem in longitudinal studies because the sample shrinks over time. France's ELFE (Étude Longitudinale Française depuis l'Enfance) is a project that aims to track 20,000 children from birth to adulthood using a multidisciplinary approach. This paper is based on the results of the initial pilot studies conducted in 2007 to test the survey's feasibility and acceptance. Participation rates (response rates, non-response factors) are presented, along with a preliminary description of the non-response treatment methods being considered.

    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010975
    Description:

    A major issue in official statistics is the availability of objective measures to support fact-based decision making. Istat has developed an Information System to assess survey quality. Among other standard quality indicators, nonresponse rates are systematically computed and stored for all surveys. Such a rich information base permits analysis over time and comparisons among surveys. The paper focuses on how data collection mode interacts with other survey characteristics to affect total nonresponse, with particular attention to the extent to which multi-mode data collection improves response rates.

    Release date: 2009-12-03
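
Illustrative sketches

The short Python sketches below illustrate, under stated simplifying assumptions, some of the techniques described in the abstracts above. Names, parameters, and simulated data are the sketches' own, not taken from the papers.

For article 12-001-X200900211037: a minimal sketch of the classic Warner (1965) randomized response estimator of a sensitive proportion under simple random sampling. The paper standardizes a broader family of such techniques for general probability sampling; restricting to this special case is an assumption of the sketch.

    import numpy as np

    rng = np.random.default_rng(42)

    def warner_estimate(answers, p):
        # Unbiased Warner (1965) estimator of a sensitive proportion pi.
        # Each respondent answers the sensitive question with probability p
        # and its negation with probability 1 - p; `answers` holds the
        # resulting 0/1 "yes" indicators. Requires p != 0.5.
        lam = np.mean(answers)                         # observed "yes" rate
        pi_hat = (lam - (1.0 - p)) / (2.0 * p - 1.0)
        var_hat = lam * (1.0 - lam) / ((2.0 * p - 1.0) ** 2 * len(answers))
        return pi_hat, var_hat

    # Simulate: true sensitive proportion 0.30, design parameter p = 0.7.
    n, pi_true, p = 10_000, 0.30, 0.7
    sensitive = rng.random(n) < pi_true                # true, never observed
    asked_direct = rng.random(n) < p                   # randomizing device
    answers = np.where(asked_direct, sensitive, ~sensitive).astype(int)

    pi_hat, var_hat = warner_estimate(answers, p)
    print(f"estimate {pi_hat:.3f} (true {pi_true}), s.e. {var_hat ** 0.5:.3f}")

Note the trade-off the abstract alludes to: moving the design parameter p toward 0.5 protects privacy more, but inflates the variance through the (2p - 1)^-2 factor.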
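
For article 12-001-X200900211039: a sketch of the weight-adjustment step the abstract describes, with response probabilities estimated nonparametrically by a local linear (degree-1 local polynomial) smoother on a single auxiliary variable. The Gaussian kernel, the bandwidth, and all variable names are assumptions of the sketch, not the paper's specification.

    import numpy as np

    def local_linear_propensity(x, r, x0, h):
        # Local linear estimate of P(respond | x = x0): kernel-weighted
        # least squares of the response indicator r on (1, x - x0);
        # the fitted intercept is the estimate at x0.
        k = np.exp(-0.5 * ((x - x0) / h) ** 2)         # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        beta = np.linalg.solve(X.T @ (k[:, None] * X), X.T @ (k * r))
        return np.clip(beta[0], 0.01, 1.0)             # guard against bad fits

    rng = np.random.default_rng(7)
    n = 2_000
    x = rng.normal(size=n)                             # auxiliary variable
    w = rng.uniform(1.0, 5.0, size=n)                  # base sampling weights
    p_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))          # true response propensity
    r = (rng.random(n) < p_true).astype(float)         # response indicator

    h = 0.4                                            # bandwidth (assumed)
    p_hat = np.array([local_linear_propensity(x, r, xi, h) for xi in x])
    w_adj = np.where(r == 1, w / p_hat, 0.0)           # divide weights by p-hat
    print(f"base weight total {w.sum():.0f}, "
          f"adjusted respondent total {w_adj.sum():.0f}")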
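
For article 12-001-X200900211043: a sketch of the general point that nonresponse-adjusted weights must be recomputed on every jackknife replicate, so the replicate spread captures the adjustment's variability. This is a delete-one-unit jackknife for a weighting-cell estimator of a mean on simulated data; the paper's estimators for stratified designs with certainty strata are more involved.

    import numpy as np

    def cell_adjusted_mean(y, r, cell):
        # Weighting-cell estimator of the mean: within each cell, the
        # respondent mean stands in for all sampled units in that cell.
        est, n = 0.0, len(y)
        for c in np.unique(cell):
            in_c = cell == c
            resp_c = in_c & (r == 1)
            if resp_c.any():
                est += in_c.sum() / n * y[resp_c].mean()
        return est

    def jackknife_var(y, r, cell):
        # Delete-one-unit jackknife that redoes the nonresponse adjustment
        # on every replicate, so the variance reflects its variability.
        n = len(y)
        reps = np.empty(n)
        for j in range(n):
            keep = np.arange(n) != j
            reps[j] = cell_adjusted_mean(y[keep], r[keep], cell[keep])
        return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

    rng = np.random.default_rng(3)
    n = 500
    cell = rng.integers(0, 2, size=n)                  # two weighting cells
    y = 10.0 + 3.0 * cell + rng.normal(size=n)
    r = (rng.random(n) < np.where(cell == 1, 0.8, 0.5)).astype(int)

    print(f"adjusted mean {cell_adjusted_mean(y, r, cell):.3f}, "
          f"jackknife s.e. {jackknife_var(y, r, cell) ** 0.5:.3f}")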
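
For article 11-522-X200800010951: the paper's selection-model significance tests are beyond a short sketch, but the fragment below shows the informal idea behind checking MCAR. Under MCAR, missingness is unrelated to everything, observed or not, so the distribution of an observed covariate should be the same whether or not another variable is missing. The variables and the simulated missingness mechanism are assumptions of the sketch.

    import numpy as np

    rng = np.random.default_rng(11)

    # Simulated data in which income is missing more often for older cases,
    # i.e. the mechanism is MAR given age, not MCAR.
    n = 3_000
    age = rng.normal(45.0, 12.0, size=n)
    income = 30_000.0 + 800.0 * age + rng.normal(0.0, 5_000.0, size=n)
    missing = rng.random(n) < 1.0 / (1.0 + np.exp(-(age - 45.0) / 10.0))
    income_obs = np.where(missing, np.nan, income)     # what the analyst sees

    # Informal MCAR check: compare mean age between cases with and without
    # missing income. A large standardized difference is evidence against MCAR.
    a, b = age[missing], age[~missing]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    print(f"mean age | income missing: {a.mean():.1f}; "
          f"observed: {b.mean():.1f}; z = {z:.1f}")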