Statistical techniques

Results

All (8 results)

  • Stats in brief: 89-20-00062022004
    Description:

    Gathering, exploring, analyzing and interpreting data are essential steps in producing information that benefits society, the economy and the environment. In this video, we will discuss the importance of considering data ethics throughout the process of producing statistical information.

    As a prerequisite to this video, make sure to watch the video titled “Data Ethics: An introduction,” which is also available in Statistics Canada’s data literacy training catalogue.

    Release date: 2022-10-17

  • Stats in brief: 89-20-00062022005
    Description:

    In this video, you will learn the answers to the following questions: What are the different types of error? Which types of error lead to statistical bias? Where in the data journey can statistical bias occur?

    Release date: 2022-10-17

  • Articles and reports: 11-522-X202100100008
    Description:

    Non-probability samples are being increasingly explored by National Statistical Offices as a complement to probability samples. We consider the scenario where the variable of interest and auxiliary variables are observed in both a probability and non-probability sample. Our objective is to use data from the non-probability sample to improve the efficiency of survey-weighted estimates obtained from the probability sample. Recently, Sakshaug, Wisniowski, Ruiz and Blom (2019) and Wisniowski, Sakshaug, Ruiz and Blom (2020) proposed a Bayesian approach to integrating data from both samples for the estimation of model parameters. In their approach, non-probability sample data are used to determine the prior distribution of model parameters, and the posterior distribution is obtained under the assumption that the probability sampling design is ignorable (or not informative). We extend this Bayesian approach to the prediction of finite population parameters under non-ignorable (or informative) sampling by conditioning on appropriate survey-weighted statistics. We illustrate the properties of our predictor through a simulation study.

    Key Words: Bayesian prediction; Gibbs sampling; Non-ignorable sampling; Statistical data integration.

    Release date: 2021-10-29
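
    The Bayesian integration idea above can be illustrated in heavily simplified form: treat a summary of the non-probability sample as the prior for a population mean, and let a survey-weighted estimate from the probability sample (with an approximate variance) act as the likelihood in a conjugate normal-normal update. This is only a sketch under those assumptions, with simulated data; the paper's actual method uses Gibbs sampling and conditions on survey-weighted statistics to handle non-ignorable designs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Non-probability sample: used only to build a prior for the mean
    # (a possibly biased but large and cheap data source).
    y_np = rng.normal(10.3, 2.0, size=2000)
    prior_mean = y_np.mean()
    prior_var = y_np.var(ddof=1) / len(y_np)

    # Probability sample with design weights: the survey-weighted (Hajek)
    # mean and a simplified variance approximation play the likelihood role.
    y_p = rng.normal(10.0, 2.0, size=200)
    w = rng.uniform(50, 150, size=200)
    y_hajek = np.sum(w * y_p) / np.sum(w)
    var_hajek = np.sum(w**2 * (y_p - y_hajek) ** 2) / np.sum(w) ** 2

    # Conjugate normal-normal update: precision-weighted combination.
    post_prec = 1 / prior_var + 1 / var_hajek
    post_mean = (prior_mean / prior_var + y_hajek / var_hajek) / post_prec
    print(f"posterior mean {post_mean:.3f}, posterior sd {post_prec**-0.5:.3f}")
    ```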

  • Articles and reports: 12-001-X202100100004
    Description:

    Multiple data sources are becoming increasingly available for statistical analyses in the era of big data. As an important example in finite-population inference, we consider an imputation approach to combining data from a probability survey and big found data. We focus on the case when the study variable is observed in the big data only, but the auxiliary variables are observed in both data sources. Unlike the usual imputation for missing data analysis, we create imputed values for all units in the probability sample. Such mass imputation is attractive in the context of survey data integration (Kim and Rao, 2012). We extend mass imputation as a tool for data integration of survey data and big non-survey data. The mass imputation methods and their statistical properties are presented. The matching estimator of Rivers (2007) is also covered as a special case. Variance estimation with mass-imputed data is discussed. The simulation results demonstrate that the proposed estimators outperform existing competitors in terms of robustness and efficiency.

    Release date: 2021-06-24
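
    The mass-imputation workflow lends itself to a compact sketch: fit an outcome model on the big data source (where y is observed), impute y-hat for every unit of the probability sample, and apply the survey weights to the imputed values. Everything below (the linear model, the data, the fixed weights) is invented for illustration and is not the paper's exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Big non-probability source B: observes study variable y and covariate x.
    x_B = rng.normal(size=10_000)
    y_B = 2.0 + 3.0 * x_B + rng.normal(size=10_000)

    # Probability sample A: observes x and design weights w, but not y.
    x_A = rng.normal(size=500)
    w_A = np.full(500, 200.0)

    # Step 1: fit an outcome model on the big data (simple OLS here).
    X_B = np.column_stack([np.ones_like(x_B), x_B])
    beta, *_ = np.linalg.lstsq(X_B, y_B, rcond=None)

    # Step 2: mass imputation -- predict y for every unit in sample A.
    X_A = np.column_stack([np.ones_like(x_A), x_A])
    y_imputed = X_A @ beta

    # Step 3: survey-weighted estimate of the population mean from imputed values.
    theta_hat = np.sum(w_A * y_imputed) / np.sum(w_A)
    print(f"mass-imputation estimate: {theta_hat:.3f}")
    ```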

  • Articles and reports: 11-633-X2020001
    Description:

    This paper reviews alternative measures of income mixing within geographic units and applies them using geographically detailed income data derived from tax records. It highlights the characteristics of these measures, particularly their ease of interpretation and their suitability to decomposition across different levels of analysis, from neighbourhoods to individual apartment buildings. The discussion focuses on three measures: the dissimilarity index, the information theory index and the divergence index (D-index). Particular emphasis is placed on the D-index because it most effectively describes how income distributions at the sub-metropolitan level (e.g., neighbourhoods) differ from distributions at the metropolitan level (i.e., how much income sorting occurs across neighbourhoods). Furthermore, the D-index can consistently measure the contributions of income sorting within neighbourhoods (e.g., across individual apartment buildings) to the degree of income mixing at the neighbourhood and metropolitan scales.

    Release date: 2020-01-21
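
    One common way to formalize a divergence index is as a population-weighted Kullback-Leibler divergence of each neighbourhood's income distribution from the metropolitan distribution; the paper's exact definition and decomposition may differ. The sketch below computes that quantity on toy income-bracket counts.

    ```python
    import numpy as np

    def divergence_index(bracket_counts, population):
        """Population-weighted KL divergence of each neighbourhood's income
        distribution from the metro-wide distribution (toy formulation).

        bracket_counts: (n_neighbourhoods, n_brackets) people per income bracket.
        population: (n_neighbourhoods,) neighbourhood populations.
        """
        counts = np.asarray(bracket_counts, dtype=float)
        pop = np.asarray(population, dtype=float)
        p = counts / counts.sum(axis=1, keepdims=True)   # neighbourhood shares
        q = counts.sum(axis=0) / counts.sum()            # metropolitan shares
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(p > 0, p * np.log(p / q), 0.0).sum(axis=1)
        return np.sum(pop * kl) / pop.sum()

    # Two neighbourhoods with mirrored income mixes diverge from the metro
    # distribution; identical neighbourhoods would give exactly zero.
    print(divergence_index([[80, 20], [20, 80]], population=[100, 100]))
    ```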

  • Articles and reports: 12-001-X201600114540
    Description:

    In this paper, we compare the EBLUP and pseudo-EBLUP estimators for small area estimation under the nested error regression model with three area level model-based estimators using the Fay-Herriot model. We conduct a design-based simulation study to compare the model-based estimators for unit level and area level models under informative and non-informative sampling. In particular, we are interested in the confidence interval coverage rates of the unit level and area level estimators. We also compare the estimators when the model is misspecified. Our simulation results show that estimators based on the unit level model perform better than those based on the area level model. The pseudo-EBLUP estimator performs best among the unit level and area level estimators considered.

    Release date: 2016-06-22
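
    For reference, the area-level EBLUP compared in this paper has the standard Fay-Herriot shrinkage form (standard notation, with psi_i the known sampling variance of the direct estimator for area i):

    ```latex
    % Fay-Herriot area-level model:
    %   sampling model: \hat\theta_i^{dir} = \theta_i + e_i,   e_i ~ N(0, \psi_i)
    %   linking model:  \theta_i = x_i^\top \beta + v_i,       v_i ~ N(0, \sigma_v^2)
    \[
      \hat{\theta}_i^{\mathrm{EBLUP}}
        = \hat{\gamma}_i \, \hat{\theta}_i^{\mathrm{dir}}
          + \bigl(1 - \hat{\gamma}_i\bigr) x_i^{\top} \hat{\beta},
      \qquad
      \hat{\gamma}_i = \frac{\hat{\sigma}_v^2}{\hat{\sigma}_v^2 + \psi_i},
    \]
    % so areas with noisy direct estimates (large \psi_i) are shrunk
    % toward the regression-synthetic estimate x_i' beta-hat.
    ```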

  • Articles and reports: 11-522-X20050019474
    Description:

    Missingness is a common feature of longitudinal studies. In recent years, there has been considerable research devoted to the development of methods for the analysis of incomplete longitudinal data. One common practice is imputation by the “last observation carried forward” (LOCF) approach, in which values for missing responses are imputed using observations from the most recently completed assessment. In this talk, I will first examine the performance of the LOCF approach when generalized estimating equations (GEE) are employed as the inferential procedure.

    Release date: 2007-03-02
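
    LOCF itself is mechanical: within each subject, the last observed response is carried forward into later missed assessments. A minimal pandas sketch on toy data (column names invented); whether this convenient imputation yields valid GEE-based inference is exactly what the talk examines.

    ```python
    import pandas as pd

    # Toy longitudinal data: one row per subject-visit; NaN marks a missed visit.
    df = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2],
        "visit":   [1, 2, 3, 1, 2, 3],
        "y":       [5.0, None, None, 7.0, 6.5, None],
    })

    # LOCF: forward-fill within each subject, never across subjects.
    df = df.sort_values(["subject", "visit"])
    df["y_locf"] = df.groupby("subject")["y"].ffill()
    print(df)
    ```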

  • Articles and reports: 12-002-X20060019253
    Description:

    Before any analytical results are released from the Research Data Centres (RDCs), RDC analysts must conduct disclosure risk analysis (or vetting). When reviewing analytical output, RDC analysts apply Statistics Canada's disclosure control guidelines to protect survey respondents' confidentiality. For some data sets, such as the Aboriginal Peoples Survey (APS), the Ethnic Diversity Survey (EDS), the Participation and Activity Limitation Survey (PALS) and the Longitudinal Survey of Immigrants to Canada (LSIC), Statistics Canada has developed an additional set of guidelines that involve rounding analytical results to ensure further confidentiality protection. This article discusses the rationale for the additional rounding procedures used for these data sets and describes the specifics of the rounding guidelines. More importantly, it suggests several approaches to help researchers follow these protocols more effectively and efficiently.

    Release date: 2006-07-18
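
    The rounding rules themselves are internal to Statistics Canada and vary by data set, but the general idea can be illustrated with unbiased random rounding to a base, a standard disclosure-control technique. The function below is a generic sketch of that technique, not the RDC guidelines.

    ```python
    import numpy as np

    def random_round(counts, base=5, rng=None):
        """Unbiased random rounding of counts to a multiple of `base`.

        Each count is rounded up with probability (count mod base) / base
        and down otherwise, so the expected value equals the original count.
        """
        rng = rng if rng is not None else np.random.default_rng()
        counts = np.asarray(counts, dtype=float)
        lower = base * np.floor(counts / base)
        remainder = counts - lower
        round_up = rng.random(counts.shape) < remainder / base
        return (lower + base * round_up).astype(int)

    # Each output cell is a multiple of 5, e.g. 12 -> 10 or 15.
    print(random_round([12, 40, 3, 97], base=5))
    ```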