Statistics by subject – Simulations

All (35) (25 of 35 results)

  • The Daily
    Description: Release published in The Daily – Statistics Canada’s official release bulletin
    Release date: 2018-01-08

  • Public use microdata: 89F0002X
    Description:

    The SPSD/M is a static microsimulation model designed to analyse financial interactions between governments and individuals in Canada. It can compute taxes paid to and cash transfers received from government. It comprises a database, a series of tax/transfer algorithms and models, analytical software and user documentation.

    Release date: 2018-01-08
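
The kind of static tax/transfer calculation described above can be sketched in a few lines. Every rate, bracket, and benefit amount below is invented for illustration; these are not SPSD/M parameters.

```python
# Toy static tax/transfer calculator. All rates, brackets and benefit
# amounts are hypothetical, chosen only to illustrate the idea.

def net_transfer(income, children):
    """Return (tax_paid, cash_transfer) for one simulated individual."""
    # Hypothetical two-bracket income tax.
    tax = 0.15 * min(income, 50_000) + 0.29 * max(income - 50_000, 0)
    # Hypothetical income-tested child benefit: $2,000 per child,
    # reduced by 5 cents per dollar of income above $30,000.
    benefit = max(2_000 * children - 0.05 * max(income - 30_000, 0), 0)
    return tax, benefit

# A static model applies the rules to each record of a survey database
# and aggregates; no behavioural response is simulated.
records = [(20_000, 2), (60_000, 1), (95_000, 0)]
total_tax = sum(net_transfer(inc, kids)[0] for inc, kids in records)
total_benefit = sum(net_transfer(inc, kids)[1] for inc, kids in records)
```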

  • The Daily
    Description: Release published in The Daily – Statistics Canada’s official release bulletin
    Release date: 2017-07-28

  • Articles and reports: 11-633-X2017008
    Description:

    The DYSEM microsimulation modelling platform provides a demographic and socioeconomic core that can be readily built upon to develop custom dynamic microsimulation models or applications. This paper describes DYSEM and provides an overview of its intended uses, as well as the methods and data used in its development.

    Release date: 2017-07-28

  • Articles and reports: 82-003-X201700614829
    Description:

    POHEM-BMI is a microsimulation tool that includes a model of adult body mass index (BMI) and a model of childhood BMI history. This overview describes the development of BMI prediction models for adults and of childhood BMI history, and compares projected BMI estimates with those from nationally representative survey data to establish validity.

    Release date: 2017-06-21

  • Technical products: 91-621-X2017001
    Release date: 2017-01-25

  • The Daily
    Description: Release published in The Daily – Statistics Canada’s official release bulletin
    Release date: 2016-12-05

  • Articles and reports: 82-003-X201600314338
    Description:

    This paper describes the methods and data used in the development and implementation of the POHEM-Neurological meta-model.

    Release date: 2016-03-16

  • Technical products: 91-621-X2015001
    Release date: 2015-09-17

  • Technical products: 11-522-X201300014290
    Description:

    This paper describes a new module that will project families and households by Aboriginal status using the Demosim microsimulation model. The methodology being considered would assign a household/family headship status annually to each individual and would use the headship rate method to calculate the number of annual families and households by various characteristics and geographies associated with Aboriginal populations.

    Release date: 2014-10-31
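
The headship-rate method mentioned in this description can be sketched briefly: projected households in a group equal the group's projected population times its headship rate. The rates and population counts below are invented for illustration.

```python
# Headship-rate method: households for a group = projected population of
# the group * its headship rate (the share of individuals assigned
# household-head status). All numbers are invented.

headship_rates = {("25-44", "group A"): 0.48,
                  ("45-64", "group A"): 0.55}
projected_population = {("25-44", "group A"): 100_000,
                        ("45-64", "group A"): 60_000}

households = {g: projected_population[g] * headship_rates[g]
              for g in headship_rates}
total_households = sum(households.values())
```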

  • Technical products: 11-522-X201300014279
    Description:

    As part of the European SustainCity project, a microsimulation model of individuals and households was created to simulate the population of various European cities. The aim of the project was to combine several transportation and land-use microsimulation models (land-use modelling), add on a dynamic population module and apply these microsimulation approaches to three geographic areas of Europe (the Île-de-France region and the Brussels and Zurich agglomerations).

    Release date: 2014-10-31

  • Technical products: 11-522-X201300014289
    Description:

    This paper provides an overview of the main new features that will be added to the forthcoming version of the Demosim microsimulation projection model based on microdata from the 2011 National Household Survey. The paper first describes the additions to the base population, namely new variables, some of which are added to the National Household Survey data by means of data linkage. This is followed by a brief description of the methods being considered for the projection of language variables, citizenship and religion as examples of the new features for events simulated by the model.

    Release date: 2014-10-31

  • Articles and reports: 82-003-X201301011873
    Description:

    A computer simulation model of physical activity was developed for the Canadian adult population using longitudinal data from the National Population Health Survey and cross-sectional data from the Canadian Community Health Survey. The model is based on the Population Health Model (POHEM) platform developed by Statistics Canada. This article presents an overview of POHEM and describes the additions that were made to create the physical activity module (POHEM-PA). These additions include changes in physical activity over time, and the relationship between physical activity levels and health-adjusted life expectancy, life expectancy and the onset of selected chronic conditions. Estimates from simulation projections are compared with nationally representative survey data to provide an indication of the validity of POHEM-PA.

    Release date: 2013-10-16

  • Articles and reports: 82-003-X201300611796
    Description:

    The study assesses the feasibility of using statistical modelling techniques to fill information gaps related to risk factors, specifically, smoking status, in linked long-form census data.

    Release date: 2013-06-19

  • Articles and reports: 12-001-X201200111687
    Description:

    To create public use files from large scale surveys, statistical agencies sometimes release random subsamples of the original records. Random subsampling reduces file sizes for secondary data analysts and reduces risks of unintended disclosures of survey participants' confidential information. However, subsampling does not eliminate risks, so that alteration of the data is needed before dissemination. We propose to create disclosure-protected subsamples from large scale surveys based on multiple imputation. The idea is to replace identifying or sensitive values in the original sample with draws from statistical models, and release subsamples of the disclosure-protected data. We present methods for making inferences with the multiple synthetic subsamples.

    Release date: 2012-06-27
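
The procedure this abstract outlines (replace sensitive values with model draws, then release subsamples) can be sketched as follows. The lognormal synthesis model and all sizes are illustrative assumptions, not the authors' specification.

```python
import math
import random
import statistics

random.seed(42)
# Illustrative "original sample" with one sensitive variable.
original = [random.lognormvariate(10, 0.5) for _ in range(5_000)]

# 1. Fit a simple synthesis model to the sensitive values (lognormal here).
logs = [math.log(y) for y in original]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

# 2. Build m disclosure-protected subsamples: replace every sensitive
#    value with a model draw, then release only a random subsample.
m, frac = 5, 0.2
k = int(frac * len(original))
estimates = []
for _ in range(m):
    synthetic = [random.lognormvariate(mu, sigma) for _ in original]
    subsample = random.sample(synthetic, k)
    estimates.append(statistics.mean(subsample))

# 3. Combine: the point estimate is the average across the m subsamples.
q_bar = statistics.mean(estimates)
```

The paper's combining rules also yield a variance estimate that accounts for both synthesis and subsampling; only the point estimate is sketched here.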

  • Technical products: 11-522-X20040018652
    Description:

    ISQ's Grandir en qualité survey involved the on-site observation of child care providers. The success of the survey is due to an information-based collection strategy.

    Release date: 2005-10-27

  • Technical products: 11-522-X20040018747
    Description:

    This document describes the development and pilot of the first American Indian and Alaska Native Adult Tobacco Survey. Meetings with expert panels and tribal representatives helped to adapt methods.

    Release date: 2005-10-27

  • Technical products: 11-522-X20040018746
    Description:

    This document discusses the qualitative testing of translated questionnaires, the problems typically identified, and the challenges in finding solutions that preserve the intent of the original instrument, while addressing dialect.

    Release date: 2005-10-27

  • Technical products: 11-522-X20020016430
    Description:

    Linearization (or Taylor series) methods are widely used to estimate standard errors for the coefficients of linear regression models fit to multi-stage samples. When the number of primary sampling units (PSUs) is large, linearization can produce accurate standard errors under quite general conditions. However, when the number of PSUs is small or a coefficient depends primarily on data from a small number of PSUs, linearization estimators can have large negative bias.

    In this paper, we characterize features of the design matrix that produce large bias in linearization standard errors for linear regression coefficients. We then propose a new method, bias-reduced linearization (BRL), based on residuals adjusted to better approximate the covariance of the true errors. When the errors are independent and identically distributed (i.i.d.), the BRL estimator is unbiased for the variance. Furthermore, a simulation study shows that BRL can greatly reduce the bias, even if the errors are not i.i.d. We also propose using a Satterthwaite approximation to determine the degrees of freedom of the reference distribution for tests and confidence intervals about linear combinations of coefficients based on the BRL estimator. We demonstrate that the jackknife estimator also tends to be biased in situations where linearization is biased. However, the jackknife's bias tends to be positive. Our bias-reduced linearization estimator can be viewed as a compromise between the traditional linearization and jackknife estimators.

    Release date: 2004-09-13
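
The core of the BRL adjustment (rescale each PSU's residuals by the inverse symmetric square root of I minus that PSU's leverage block before forming the sandwich) can be sketched for OLS with clustered data. This is a generic illustration on simulated data, not the authors' code.

```python
import numpy as np

def brl_variance(X, y, clusters):
    """OLS coefficients with a BRL-style (CR2-type) cluster variance."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    r = y - X @ beta
    meat = np.zeros_like(XtX_inv)
    for g in np.unique(clusters):
        idx = clusters == g
        Xg, rg = X[idx], r[idx]
        # Leverage block for this PSU and its inverse symmetric square root.
        Hgg = Xg @ XtX_inv @ Xg.T
        w, Q = np.linalg.eigh(np.eye(idx.sum()) - Hgg)
        Ag = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T   # (I - Hgg)^(-1/2)
        u = Xg.T @ (Ag @ rg)                        # adjusted residuals
        meat += np.outer(u, u)
    return beta, XtX_inv @ meat @ XtX_inv

# Simulated example: 8 PSUs of 5 units each, true slope 2.
rng = np.random.default_rng(0)
clusters = np.repeat(np.arange(8), 5)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=40)
beta, V = brl_variance(X, y, clusters)
```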

  • Technical products: 11-522-X20020016722
    Description:

    Colorectal cancer (CRC) is the second leading cause of cancer deaths in Canada. Randomized controlled trials (RCT) have shown the efficacy of screening using faecal occult blood tests (FOBT). A comprehensive evaluation of the costs and consequences of CRC screening for the Canadian population is required before implementing such a program. This paper evaluates whether CRC screening is cost-effective. The results of these simulations will be provided to the Canadian National Committee on Colorectal Cancer Screening to help formulate national policy recommendations for CRC screening.

    Statistics Canada's Population Health Microsimulation Model was updated to incorporate a comprehensive CRC screening module based on Canadian data and RCT efficacy results. The module incorporated sensitivity and specificity of FOBT and colonoscopy, participation rates, incidence, staging, diagnostic and therapeutic options, disease progression, mortality and direct health care costs for different screening scenarios. Reproducing the mortality reduction observed in the Funen screening trial validated the model.

    Release date: 2004-09-13

  • Articles and reports: 12-001-X20030016610
    Description:

    In the presence of item nonresponse, unweighted imputation methods are often used in practice, but they generally lead to biased estimators under uniform response within imputation classes. Following Skinner and Rao (2002), we propose a bias-adjusted estimator of a population mean under unweighted ratio imputation and random hot-deck imputation and derive linearization variance estimators. A small simulation study is conducted to study the performance of the methods in terms of bias and mean square error. Relative bias and relative stability of the variance estimators are also studied.

    Release date: 2003-07-31
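
Unweighted ratio imputation, as studied here, can be sketched in a few lines: a missing y is imputed as the respondents' ratio of means times the unit's x. The data below are invented.

```python
# Ratio imputation: impute a missing y_i as (ybar_r / xbar_r) * x_i,
# where the means are taken over respondents. Illustrative data only.

x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [4.1, 8.3, None, 16.2, None]   # None marks item nonresponse

resp = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
ratio = sum(yi for _, yi in resp) / sum(xi for xi, _ in resp)

y_completed = [yi if yi is not None else ratio * xi
               for xi, yi in zip(x, y)]
mean_est = sum(y_completed) / len(y_completed)
```

The paper's point is that this uncorrected estimator is generally biased under uniform response within imputation classes; only the basic imputation step is sketched here, not the proposed bias adjustment.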

  • Articles and reports: 81-595-M2003005
    Description:

    This paper develops technical procedures that may enable ministries of education to link provincial tests with national and international tests in order to compare standards and report results on a common scale.

    Release date: 2003-05-29

  • Articles and reports: 82-005-X20020016479
    Description:

    The Population Health Model (POHEM) is a policy analysis tool that helps answer "what-if" questions about the health and economic burden of specific diseases and the cost-effectiveness of administering new diagnostic and therapeutic interventions. This simulation model is particularly pertinent in an era of fiscal restraint, when new therapies are generally expensive and difficult policy decisions are being made. More important, it provides a base for a broader framework to inform policy decisions using comprehensive disease data and risk factors. Our "base case" models comprehensively estimate the lifetime costs of treating breast, lung and colorectal cancer in Canada. Our cancer models have shown the large financial burden of diagnostic work-up and initial therapy, as well as the high costs of hospitalizing those dying of cancer. Our core cancer models (lung, breast and colorectal cancer) have been used to evaluate the impact of new practice patterns. We have used these models to evaluate new chemotherapy regimens as therapeutic options for advanced lung cancer; the health and financial impact of reducing the hospital length of stay for initial breast cancer surgery; and the potential impact of population-based screening for colorectal cancer. To date, the most interesting intervention we have studied has been the use of tamoxifen to prevent breast cancer among high risk women.

    Release date: 2002-10-08

  • Technical products: 11-522-X20010016289
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Increasing demand for electronic reporting in establishment surveys has placed additional emphasis on incorporating usability into electronic forms. We are just beginning to understand the implications surrounding electronic forms design. Cognitive interviewing and usability testing are analogous in that both types of testing have similar goals: to build an end instrument (paper or electronic) that reduces both respondent burden and measurement error. Cognitive testing has greatly influenced paper forms design and can also be applied towards the development of electronic forms. Usability testing expands on existing cognitive testing methodology to include examination of the interaction between the respondent and the electronic form.

    The upcoming U.S. 2002 Economic Census will offer businesses the ability to report information using electronic forms. The U.S. Census Bureau is creating an electronic forms style guide outlining the design standards to be used in electronic form creation. The style guide's design standards are based on usability principles, usability and cognitive test results, and Graphical User Interface standards. This paper highlights the major electronic forms design issues raised during the preparation of the style guide and describes how usability testing and cognitive interviewing resolved these issues.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016288
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The upcoming 2002 U.S. Economic Census will give businesses the option of submitting their data on paper or by electronic media. If reporting electronically, they may report via Windows-based Computerized Self-Administered Questionnaires (CSAQs). The U.S. Census Bureau will offer electronic reporting for over 650 different forms to all respondents. The U.S. Census Bureau has assembled a cross-divisional team to develop an electronic forms style guide, outlining the design standards to use in electronic form creation and ensuring that the quality of the form designs will be consistent throughout.

    The purpose of a style guide is to foster consistency among the various analysts who may be working on different pieces of a software development project (in this case, a CSAQ). The team determined that the style guide should include standards for layout and screen design, navigation, graphics, edit capabilities, additional help, feedback, audit trails, and accessibility for disabled users.

    Members of the team signed up to develop various sections of the style guide. The team met weekly to discuss and review the sections. Members of the team also conducted usability tests on edits, and subject-matter employees provided recommendations to upper management. Team members conducted usability testing on prototype forms with actual respondents. The team called in subject-matter experts as necessary to assist in making decisions about particular forms where the constraints of the electronic medium required changes to the paper form.

    The style guide will become the standard for all CSAQs for the 2002 Economic Census, which will ensure consistency across the survey programs.

    Release date: 2002-09-12

Analysis (22)

  • Articles and reports: 12-001-X20020016424
    Description:

    A variety of estimators for the variance of the General Regression (GREG) estimator of a mean have been proposed in the sampling literature, mainly with the goal of estimating the design-based variance. Under certain conditions, estimators can be easily constructed that are approximately unbiased for both the design-variance and the model-variance. Several dual-purpose estimators are studied here in single-stage sampling. These choices are robust estimators of a model-variance even if the model that motivates the GREG has an incorrect variance parameter.

    A key feature of the robust estimators is the adjustment of squared residuals by factors analogous to the leverages used in standard regression analysis. We also show that the delete-one jackknife estimator implicitly includes the leverage adjustments and is a good choice from either the design-based or model-based perspective. In a set of simulations, these variance estimators have small bias and produce confidence intervals with near-nominal coverage rates for several sampling methods, sample sizes and populations in single-stage sampling.

    We also present simulation results for a skewed population where all variance estimators perform poorly. Samples that do not adequately represent the units with large values lead to estimated means that are too small, variance estimates that are too small and confidence intervals that cover at far less than the nominal rate. These defects can be avoided at the design stage by selecting samples that cover the extreme units well. However, in populations with inadequate design information this will not be feasible.

    Release date: 2002-07-05
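
The leverage adjustment this abstract describes, dividing each squared residual by a factor such as (1 - h_ii) before forming the variance, can be sketched for single-stage OLS. This is a generic illustration on simulated data, not the paper's exact GREG estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)  # leverages h_ii
adj = e**2 / (1.0 - h)                       # leverage-adjusted squared residuals
V = XtX_inv @ (X.T * adj) @ X @ XtX_inv      # sandwich variance estimate
```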

  • Articles and reports: 88-003-X20020026371
    Description:

    When constructing questions for questionnaires, one of the rules of thumb has always been "keep it short and simple." This article is the third in a series of lessons learned during cognitive testing of the pilot Knowledge Management Practices Survey. It studies the responses given to long questions, thick questionnaires and too many response boxes.

    Release date: 2002-06-14

  • Articles and reports: 88-003-X20020026369
    Description:

    Eliminating the "neutral" response in an opinion question not only encourages the respondent to choose a side, it gently persuades respondents to read the question. Learn how we used this technique to our advantage in the Knowledge Management Practices Survey, 2001.

    Release date: 2002-06-14

  • Articles and reports: 11F0019M1997099
    Description:

    Context: Lung cancer has been the leading cause of cancer deaths in Canadian males for many years, and since 1994, this has been the case for Canadian females as well. It is therefore important to evaluate the resources required for its diagnosis and treatment. This article presents an estimate of the direct medical costs associated with the diagnosis and treatment of lung cancer calculated through the use of a microsimulation model. For disease incidence, 1992 was chosen as the reference year, whereas costs are evaluated according to the rates that prevailed in 1993.

    Methods: A model for lung cancer has been incorporated into the Population Health Model (POHEM). The parameters of the model were drawn in part from Statistics Canada's Canadian Cancer Registry (CCR), which provides information on the incidence and histological classification of lung cancer cases in Canada. The distribution of cancer stage at diagnosis was estimated by using information from two provincial cancer registries. A team of oncologists derived "typical" treatment approaches reflective of current practice, and the associated direct costs were calculated for these approaches. Once this information and the appropriate survival curves were incorporated into the POHEM model, overall costs of treatment were estimated by means of a Monte Carlo simulation.

    Results: It is estimated that, overall, the direct medical costs of lung cancer diagnosis and treatment were just over $528 million. The cost per year of life gained as a result of treatment of the disease was approximately $19,450. For the first time in Canada, it was possible to estimate the five-year costs following diagnosis, by stage of the disease at the time of diagnosis, as well as the cost per year of additional life gained for three alternative treatments of non-small-cell lung cancer (NSCLC). Sensitivity analyses showed that these costs varied between $1,870 and $6,860 per year of additional life gained, which compares favourably with the costs that the treatment of other diseases may involve.

    Conclusions: Contrary to widespread perceptions, it appears that the treatment of lung cancer is effective from an economic standpoint. In addition, the use of a microsimulation model such as POHEM not only makes it possible to incorporate information from various sources in a coherent manner, but also offers the possibility of estimating the effect of alternative medical procedures from the standpoint of financial pressures on the health care system.

    Release date: 1997-04-22

  • Articles and reports: 11F0019M1995081
    Description:

    Users of socio-economic statistics typically want more and better information. Often, these needs can be met simply by more extensive data collections, subject to usual concerns over financial costs and survey respondent burdens. Users, particularly for public policy purposes, have also expressed a continuing, and as yet unfilled, demand for an integrated and coherent system of socio-economic statistics. In this case, additional data will not be sufficient; the more important constraint is the absence of an agreed conceptual approach.

    In this paper, we briefly review the state of frameworks for social and economic statistics, including the kinds of socio-economic indicators users may want. These indicators are motivated first in general terms from basic principles and intuitive concepts, leaving aside for the moment the practicalities of their construction. We then show how a coherent structure of such indicators might be assembled.

    A key implication is that this structure requires a coordinated network of surveys and data collection processes, and higher data quality standards. This in turn implies a breaking down of the "stovepipe" systems that typify much of the survey work in national statistical agencies (i.e. parallel but generally unrelated data "production lines"). Moreover, the data flowing from the network of surveys must be integrated. Since the data of interest are dynamic, the proposed method goes beyond statistical matching to microsimulation modelling. Finally, these ideas are illustrated with preliminary results from the LifePaths model currently under development in Statistics Canada.

    Release date: 1995-07-30

  • Articles and reports: 11F0019M1995067
    Description:

    The role of technical innovation in economic growth is both a current matter of keen public policy interest, and active exploration in economic theory. However, formal economic theorizing is often constrained by considerations of mathematical tractability. Evolutionary economic theories which are realized as computerized microsimulation models offer significant promise both for transcending mathematical constraints and addressing fundamental questions in a more realistic and flexible manner. This paper sketches XEcon, a microsimulation model of economic growth in the evolutionary tradition.

    Release date: 1995-06-30

  • Articles and reports: 12-001-X199400214419
    Description:

    The study was undertaken to evaluate some alternative small area estimators to produce level estimates for unplanned domains from the Italian Labour Force Sample Survey. In our study, the small areas are the Health Service Areas, which are unplanned sub-regional territorial domains and were not isolated at the time of sample design and thus cut across boundaries of the design strata. We consider the following estimators: post-stratified ratio, synthetic, composite expressed as linear combination of synthetic and of post-stratified ratio, and sample size dependent. For all the estimators considered in this study, the average percent relative biases and the average relative mean square errors were obtained in a Monte Carlo study in which the sample design was simulated using data from the 1981 Italian Census.

    Release date: 1994-12-15
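
A composite estimator of the kind compared in this study is a weighted combination of a direct and a synthetic estimate, with a sample-size-dependent weight. The weight function and all numbers below are illustrative assumptions, not the study's exact specification.

```python
def composite(direct, synthetic, n_area, n_threshold):
    """Sample-size-dependent composite estimate: weight the direct
    estimate more heavily as the area's sample size grows."""
    w = min(n_area / n_threshold, 1.0)
    return w * direct + (1 - w) * synthetic

# Small area with little sample: leans mostly on the synthetic estimate.
small_area = composite(direct=0.52, synthetic=0.57, n_area=30, n_threshold=100)
# Well-sampled area: the direct estimate is used as-is.
large_area = composite(direct=0.52, synthetic=0.57, n_area=200, n_threshold=100)
```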

  • Articles and reports: 12-001-X199400214423
    Description:

    Most surveys suffer from the problem of missing data caused by nonresponse. To deal with this problem, imputation is often used to create a “completed data set”, that is, a data set composed of actual observations (for the respondents) and imputations (for the nonrespondents). Usually, imputation is carried out under the assumption of unconfounded response mechanism. When this assumption does not hold, a bias is introduced in the standard estimator of the population mean calculated from the completed data set. In this paper, we pursue the idea of using simple correction factors for the bias problem in the case that ratio imputation is used. The effectiveness of the correction factors is studied by Monte Carlo simulation using artificially generated data sets representing various super-populations, nonresponse rates, nonresponse mechanisms, and correlations between the variable of interest and the auxiliary variable. These correction factors are found to be effective especially when the population follows the model underlying ratio imputation. An option for estimating the variance of the corrected point estimates is also discussed.

    Release date: 1994-12-15

  • Articles and reports: 12-001-X199300114475
    Description:

    In the creation of microsimulation databases, which are frequently used by policy analysts and planners, several data files are combined by statistical matching techniques to enrich the host data file. This process requires the conditional independence assumption (CIA), which could lead to serious bias in the resulting joint relationships among variables. Appropriate auxiliary information could be used to avoid the CIA. In this report, methods of statistical matching corresponding to three methods of imputation, namely, regression, hot deck, and log linear, with and without auxiliary information are considered. The log linear methods consist of adding categorical constraints to either the regression or hot deck methods. Based on an extensive simulation study with synthetic data, sensitivity analyses for departures from the CIA are performed and gains from using auxiliary information are discussed. Different scenarios for the underlying distribution and relationships, such as symmetric versus skewed data and proxy versus nonproxy auxiliary data, are created using synthetic data. Some recommendations on the use of statistical matching methods are also made. Specifically, it was confirmed that the CIA could be a serious limitation which could be overcome by the use of appropriate auxiliary information. Hot deck methods were found to be generally preferable to regression methods. Also, when auxiliary information is available, log linear categorical constraints can improve performance of hot deck methods. This study was motivated by concerns about the use of the CIA in the construction of the Social Policy Simulation Database at Statistics Canada.
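    A minimal sketch of a categorically constrained hot-deck match of the kind compared in the report (the record layout and variable names are invented for illustration):

```python
def hot_deck_match(hosts, donors, key, common, target):
    """Constrained hot-deck statistical match: for each host record,
    restrict the donor pool to records sharing the same category of
    `key` (the log-linear-style categorical constraint), then copy the
    `target` value from the donor nearest on the common variable.
    Falls back to the full donor pool if no donor shares the category."""
    out = []
    for h in hosts:
        pool = [d for d in donors if d[key] == h[key]] or donors
        donor = min(pool, key=lambda d: abs(d[common] - h[common]))
        out.append({**h, target: donor[target]})
    return out

hosts = [{"region": "A", "income": 42.0}]
donors = [{"region": "A", "income": 40.0, "wealth": 100.0},
          {"region": "A", "income": 60.0, "wealth": 300.0},
          {"region": "B", "income": 41.0, "wealth": 999.0}]
matched = hot_deck_match(hosts, donors, "region", "income", "wealth")
# nearest same-region donor has income 40.0, so wealth 100.0 is copied
```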

    Release date: 1993-06-15

  • Articles and reports: 12-001-X199100214501
    Description:

    Although farm surveys carried out by the USDA are used to estimate crop production at the state and national levels, small area estimates at the county level are more useful for local economic decision making. County estimates are also in demand by companies selling fertilizers, pesticides, crop insurance, and farm equipment. Individual states often conduct their own surveys to provide data for county estimates of farm production. Typically, these state surveys are not carried out using probability sampling methods. An additional complication is that states impose the constraint that the sum of county estimates of crop production for all counties in a state be equal to the USDA estimate for that state. Thus, standard small area estimation procedures are not directly applicable to this problem. In this paper, we consider using regression models for obtaining county estimates of wheat production in Kansas. We describe a simulation study comparing the resulting estimates to those obtained using two standard small area estimators: the synthetic and direct estimators. We also compare several strategies for scaling the initial estimates so that they agree with the USDA estimate of the state production total.
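    The simplest of the scaling strategies mentioned, proportional scaling of initial county estimates to the state benchmark, can be sketched as follows (the figures are hypothetical):

```python
def scale_to_state_total(county_estimates, state_total):
    """Proportionally rescale initial county estimates so that they
    sum to the (USDA-style) state benchmark total."""
    s = sum(county_estimates)
    return [e * state_total / s for e in county_estimates]

scaled = scale_to_state_total([100.0, 300.0, 600.0], 1100.0)
# → [110.0, 330.0, 660.0], which sums to the state total 1100.0
```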

    Release date: 1991-12-16

Reference (12)

Reference (12) (12 of 12 results)

  • Technical products: 91-621-X2017001
    Release date: 2017-01-25

  • Technical products: 91-621-X2015001
    Release date: 2015-09-17

  • Technical products: 11-522-X201300014290
    Description:

    This paper describes a new module that will project families and households by Aboriginal status using the Demosim microsimulation model. The methodology being considered would assign a household/family headship status annually to each individual and would use the headship rate method to calculate the annual number of families and households by various characteristics and geographies associated with Aboriginal populations.
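    The headship rate method reduces to a weighted sum over population groups; a minimal sketch with invented age groups and rates (not Demosim's actual inputs):

```python
def project_households(population_by_group, headship_rates):
    """Headship-rate method: the projected number of households is the
    sum over groups of (projected population in the group) times
    (the probability that a member of that group heads a household)."""
    return sum(population_by_group[g] * headship_rates[g]
               for g in population_by_group)

pop = {"25-44": 1000.0, "45-64": 800.0}     # hypothetical projections
rates = {"25-44": 0.45, "45-64": 0.55}      # hypothetical headship rates
households = project_households(pop, rates)  # roughly 890 households
```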

    Release date: 2014-10-31

  • Technical products: 11-522-X201300014279
    Description:

    As part of the European SustainCity project, a microsimulation model of individuals and households was created to simulate the population of various European cities. The aim of the project was to combine several transportation and land-use microsimulation models (land-use modelling), add on a dynamic population module and apply these microsimulation approaches to three geographic areas of Europe (the Île-de-France region and the Brussels and Zurich agglomerations).

    Release date: 2014-10-31

  • Technical products: 11-522-X201300014289
    Description:

    This paper provides an overview of the main new features that will be added to the forthcoming version of the Demosim microsimulation projection model based on microdata from the 2011 National Household Survey. The paper first describes the additions to the base population, namely new variables, some of which are added to the National Household Survey data by means of data linkage. This is followed by a brief description of the methods being considered for the projection of language variables, citizenship and religion as examples of the new features for events simulated by the model.

    Release date: 2014-10-31

  • Technical products: 11-522-X20040018652
    Description:

    ISQ's Grandir en qualité survey involved the on-site observation of child care providers. The success of the survey is due to an information-based collection strategy.

    Release date: 2005-10-27

  • Technical products: 11-522-X20040018747
    Description:

    This document describes the development and pilot of the first American Indian and Alaska Native Adult Tobacco Survey. Meetings with expert panels and tribal representatives helped to adapt methods.

    Release date: 2005-10-27

  • Technical products: 11-522-X20040018746
    Description:

    This document discusses the qualitative testing of translated questionnaires, the problems typically identified, and the challenges in finding solutions that preserve the intent of the original instrument, while addressing dialect.

    Release date: 2005-10-27

  • Technical products: 11-522-X20020016430
    Description:

    Linearization (or Taylor series) methods are widely used to estimate standard errors for the coefficients of linear regression models fit to multi-stage samples. When the number of primary sampling units (PSUs) is large, linearization can produce accurate standard errors under quite general conditions. However, when the number of PSUs is small or a coefficient depends primarily on data from a small number of PSUs, linearization estimators can have large negative bias.

    In this paper, we characterize features of the design matrix that produce large bias in linearization standard errors for linear regression coefficients. We then propose a new method, bias reduced linearization (BRL), based on residuals adjusted to better approximate the covariance of the true errors. When the errors are independent and identically distributed (i.i.d.), the BRL estimator is unbiased for the variance. Furthermore, a simulation study shows that BRL can greatly reduce the bias, even if the errors are not i.i.d. We also propose using a Satterthwaite approximation to determine the degrees of freedom of the reference distribution for tests and confidence intervals about linear combinations of coefficients based on the BRL estimator. We demonstrate that the jackknife estimator also tends to be biased in situations where linearization is biased. However, the jackknife's bias tends to be positive. Our bias-reduced linearization estimator can be viewed as a compromise between the traditional linearization and jackknife estimators.
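    The baseline linearization estimator whose small-PSU bias motivates BRL can be sketched for a single-regressor model (this is the uncorrected cluster "sandwich" form, not the BRL adjustment itself; the data are invented):

```python
def cluster_robust_slope_var(x, y, cluster):
    """Uncorrected linearization (sandwich) variance of the OLS slope
    in y = a + b*x: residual contributions (x_i - xbar) * e_i are
    totalled within each PSU, and the squared totals are summed.
    With few PSUs this estimator tends to be biased downward."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    totals = {}
    for xi, ei, c in zip(x, resid, cluster):
        totals[c] = totals.get(c, 0.0) + (xi - xbar) * ei
    return sum(t ** 2 for t in totals.values()) / sxx ** 2

# Two PSUs ("a" and "b") with two observations each
var = cluster_robust_slope_var([0.0, 1.0, 2.0, 3.0],
                               [0.0, 1.1, 1.9, 3.2],
                               ["a", "a", "b", "b"])
```

    BRL (often called CR2 in later literature) replaces the raw residuals above with adjusted residuals before squaring, so that the estimator is exactly unbiased under i.i.d. errors.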

    Release date: 2004-09-13

  • Technical products: 11-522-X20020016722
    Description:

    Colorectal cancer (CRC) is the second leading cause of cancer death in Canada. Randomized controlled trials (RCTs) have shown the efficacy of screening using faecal occult blood tests (FOBT). A comprehensive evaluation of the costs and consequences of CRC screening for the Canadian population is required before implementing such a program. This paper evaluates whether CRC screening is cost-effective. The results of these simulations will be provided to the Canadian National Committee on Colorectal Cancer Screening to help formulate national policy recommendations for CRC screening.

    Statistics Canada's Population Health Microsimulation Model was updated to incorporate a comprehensive CRC screening module based on Canadian data and RCT efficacy results. The module incorporated sensitivity and specificity of FOBT and colonoscopy, participation rates, incidence, staging, diagnostic and therapeutic options, disease progression, mortality and direct health care costs for different screening scenarios. The model was validated by reproducing the mortality reduction observed in the Funen screening trial.

    Release date: 2004-09-13

  • Technical products: 11-522-X20010016289
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Increasing demand for electronic reporting in establishment surveys has placed additional emphasis on incorporating usability into electronic forms. We are just beginning to understand the implications surrounding electronic forms design. Cognitive interviewing and usability testing are analogous in that both types of testing have similar goals: to build an end instrument (paper or electronic) that reduces both respondent burden and measurement error. Cognitive testing has greatly influenced paper forms design and can also be applied towards the development of electronic forms. Usability testing expands on existing cognitive testing methodology to include examination of the interaction between the respondent and the electronic form.

    The upcoming U.S. 2002 Economic Census will offer businesses the ability to report information using electronic forms. The U.S. Census Bureau is creating an electronic forms style guide outlining the design standards to be used in electronic form creation. The style guide's design standards are based on usability principles, usability and cognitive test results, and Graphical User Interface standards. This paper highlights the major electronic forms design issues raised during the preparation of the style guide and describes how usability testing and cognitive interviewing resolved these issues.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016288
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The upcoming 2002 U.S. Economic Census will give businesses the option of submitting their data on paper or by electronic media. If reporting electronically, they may report via Windows-based Computerized Self-Administered Questionnaires (CSAQs). The U.S. Census Bureau will offer electronic reporting for over 650 different forms to all respondents. The U.S. Census Bureau has assembled a cross-divisional team to develop an electronic forms style guide, outlining the design standards to use in electronic form creation and ensuring that the quality of the form designs will be consistent throughout.

    The purpose of a style guide is to foster consistency among the various analysts who may be working on different pieces of a software development project (in this case, a CSAQ). The team determined that the style guide should include standards for layout and screen design, navigation, graphics, edit capabilities, additional help, feedback, audit trails, and accessibility for disabled users.

    Members of the team signed up to develop various sections of the style guide. The team met weekly to discuss and review the sections. Members of the team also conducted usability tests on edits, and subject-matter employees provided recommendations to upper management. Team members conducted usability testing on prototype forms with actual respondents. The team called in subject-matter experts as necessary to assist in making decisions about particular forms where the constraints of the electronic medium required changes to the paper form.

    The style guide will become the standard for all CSAQs for the 2002 Economic Census, which will ensure consistency across the survey programs.

    Release date: 2002-09-12
