Editing and imputation

Results

All (6)

  • Articles and reports: 11-522-X20010016253
    Description:

The U.S. Census Bureau developed software called the Standard Economic Processing System (StEPS) to replace 16 separate systems used to process the data from over 100 current economic surveys. This paper describes the methodology and design of the StEPS modules for editing and imputation and summarizes users' reactions to processing their surveys with these modules.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016275
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

Hot deck imputation, in which missing items are replaced with values from respondents, is often used in survey sampling. A model supporting such procedures is one in which response probabilities are assumed equal within imputation cells. This paper describes an efficient version of hot deck imputation, derives its variance under the cell response model, and presents an approximation to the fully efficient procedure in which a small number of values are imputed for each non-respondent. Variance estimation procedures are presented and illustrated in a Monte Carlo study.

    Release date: 2002-09-12
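
    As a concrete illustration of the cell-based hot deck idea in this abstract (the general technique only, not the paper's efficient variant), here is a minimal Python sketch: each missing item receives the value of a randomly chosen respondent from the same imputation cell. The record layout, column names and random-donor rule are illustrative assumptions.

    ```python
    import random

    def hot_deck_impute(records, cell_key="cell", value_key="income", rng=None):
        """Fill missing items by drawing a donor value from the same imputation cell."""
        rng = rng or random.Random(0)
        # Collect respondent (donor) values by imputation cell.
        donors = {}
        for rec in records:
            if rec[value_key] is not None:
                donors.setdefault(rec[cell_key], []).append(rec[value_key])
        # Replace each missing value with a randomly chosen donor from the same cell
        # (response probabilities are assumed equal within a cell).
        for rec in records:
            if rec[value_key] is None and donors.get(rec[cell_key]):
                rec[value_key] = rng.choice(donors[rec[cell_key]])
        return records

    sample = [
        {"cell": "A", "income": 40000},
        {"cell": "A", "income": None},
        {"cell": "B", "income": 55000},
        {"cell": "B", "income": None},
    ]
    print(hot_deck_impute(sample))
    ```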

  • Articles and reports: 11-522-X20010016303
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

In large-scale surveys, it is almost guaranteed that some level of non-response will occur. Generally, statistical agencies use imputation to treat item non-response. A common preliminary step to imputation is the formation of imputation cells. In this article, the formation of these cells is studied using two methods. The first method is similar to that of Eltinge and Yansaneh (1997) in the case of weighting cells, and the second is the method currently used in the Canadian Labour Force Survey. Simulation studies using Labour Force data are performed to test the impact of the response rate, the response mechanism, and constraints on the quality of the point estimator under both methods.

    Release date: 2002-09-12
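
    As a rough illustration of what forming imputation cells can look like, the sketch below groups units into cells of roughly equal size ordered by an estimated response propensity. It is only similar in spirit to propensity-based cell formation; it is not the Eltinge and Yansaneh method or the Labour Force Survey's current procedure, and the propensity values are made up.

    ```python
    def propensity_cells(propensities, n_cells=5):
        """Assign each unit to one of n_cells groups of roughly equal size,
        ordered by its estimated response propensity."""
        order = sorted(range(len(propensities)), key=lambda i: propensities[i])
        cells = [None] * len(propensities)
        for rank, unit in enumerate(order):
            cells[unit] = rank * n_cells // len(propensities)
        return cells

    # Made-up estimated response propensities for ten units.
    phat = [0.91, 0.40, 0.75, 0.55, 0.83, 0.62, 0.97, 0.34, 0.70, 0.88]
    print(propensity_cells(phat, n_cells=2))  # [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
    ```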

  • Articles and reports: 11-522-X20010016304
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

This paper describes a test of two alternative sets of ratio edit and imputation procedures, both using the U.S. Census Bureau's generalized editing and imputation subsystem ("Plain Vanilla") on 1997 Economic Census data. The quality of the edited and imputed data from the two sets of procedures was compared at both the micro and macro levels. The paper then discusses how these quantitative comparisons gave rise to the recommended changes to the current editing and imputation procedures.

    Release date: 2002-09-12
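
    The following sketch illustrates a generic ratio edit and ratio imputation: a record is flagged when the ratio of two related items falls outside preset bounds, and the failing item is re-derived from a reference ratio. The items, bounds and reference ratio are illustrative assumptions and do not reflect the Plain Vanilla subsystem's actual procedures.

    ```python
    def ratio_edit(payroll, employees, lower=20.0, upper=120.0):
        """Return True if payroll per employee (in $1000s) lies within the edit bounds."""
        if payroll is None or employees <= 0:
            return False
        return lower <= payroll / employees <= upper

    def ratio_impute(employees, reference_ratio=45.0):
        """Impute payroll from employment using a reference ratio
        (e.g. a typical value from reported, edit-passing records)."""
        return employees * reference_ratio

    record = {"payroll": 9000.0, "employees": 12}            # 750 per employee: fails the edit
    if not ratio_edit(record["payroll"], record["employees"]):
        record["payroll"] = ratio_impute(record["employees"])  # imputed value: 540.0
    print(record)
    ```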

  • Articles and reports: 11-522-X20010016305
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

A review of the Office for National Statistics (ONS) identified the need for new methods that would improve the efficiency of the data validation and editing processes in business surveys without adversely affecting data quality. Methods for automating the correction of systematic errors, and for applying selective editing, were developed. However, the ways in which the organization and procedures of ONS business surveys have evolved presented a number of challenges in implementing these methods. This paper describes these challenges, how they were addressed, and their relevance to other organizations. Approaches to evaluating the impact of the new methods on both quality and efficiency are also discussed.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016306
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

The paper deals with the problem of automatically detecting and correcting inconsistent or out-of-range data in a general statistical data collection process. The proposed approach is capable of handling both qualitative and quantitative values. Its purpose is to overcome the computational limits of the Fellegi-Holt method while maintaining its positive features. As is customary, data records must respect a set of rules in order to be declared correct. By encoding the rules as linear inequalities, we develop mathematical models for the problems of interest. First, by solving a sequence of feasibility problems, the set of rules itself is checked for inconsistency or redundancy. Second, imputation is performed by solving a sequence of set-covering problems.

    Release date: 2002-09-12
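
    The sketch below illustrates, in toy form, the two ingredients described in this abstract: edit rules encoded as linear inequalities and a set-covering choice of which fields to impute. It uses a greedy cover rather than exact optimization, ignores implied rules, and its rules and data are invented for illustration only.

    ```python
    # A rule is (coefficients by field, right-hand side): sum(a_f * x_f) <= b.
    rules = [
        ({"expenses": 1.0, "revenue": -1.0}, 0.0),   # expenses <= revenue
        ({"wages": 1.0, "expenses": -1.0}, 0.0),     # wages <= expenses
        ({"revenue": -1.0}, 0.0),                    # revenue >= 0
    ]

    record = {"revenue": 100.0, "expenses": 130.0, "wages": 150.0}

    def violated(rules, x):
        """Indices of rules the record fails."""
        return [i for i, (a, b) in enumerate(rules)
                if sum(coef * x[f] for f, coef in a.items()) > b]

    def greedy_cover(rules, failed):
        """Greedily pick fields until every failed rule involves a picked field.
        Note: a full Fellegi-Holt treatment also generates implied rules so that
        the chosen fields can be re-imputed consistently; this pass only covers
        the explicitly violated rules."""
        uncovered, chosen = set(failed), []
        while uncovered:
            # Pick the field appearing in the most uncovered rules.
            best = max({f for i in uncovered for f in rules[i][0]},
                       key=lambda f: sum(f in rules[i][0] for i in uncovered))
            chosen.append(best)
            uncovered = {i for i in uncovered if best not in rules[i][0]}
        return chosen

    failed = violated(rules, record)            # rules 0 and 1 fail
    print(failed, greedy_cover(rules, failed))  # [0, 1] ['expenses']
    ```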