History and context

Results (10)

  • Articles and reports: 12-001-X202000100001
    Description:

    For several decades, national statistical agencies around the world have been using probability surveys as their preferred tool to meet information needs about a population of interest. In the last few years, there has been a wind of change and other data sources are increasingly being explored. Five key factors are behind this trend: the decline in response rates in probability surveys, the high cost of data collection, the increased burden on respondents, the desire for access to “real-time” statistics, and the proliferation of non-probability data sources. Some people have even come to believe that probability surveys could gradually disappear. In this article, we review some approaches that can reduce, or even eliminate, the use of probability surveys, all the while preserving a valid statistical inference framework. All the approaches we consider use data from a non-probability source; data from a probability survey are also used in most cases. Some of these approaches rely on the validity of model assumptions, which contrasts with approaches based on the probability sampling design. These design-based approaches are generally not as efficient; yet, they are not subject to the risk of bias due to model misspecification. (A toy numerical sketch of this trade-off follows this results list.)

    Release date: 2020-06-30

  • Articles and reports: 12-001-X201300211883
    Description:

    The history of survey sampling, dating from the writings of A.N. Kiaer, has been remarkably controversial. First, Kiaer himself had to struggle to convince his contemporaries that survey sampling was a legitimate procedure. He spent several decades in the attempt, and was an old man before survey sampling became a reputable activity. The first person to provide both a theoretical justification of survey sampling (in 1906) and a practical demonstration of its feasibility (in a survey conducted in Reading, published in 1912) was A.L. Bowley. In 1925, the ISI meeting in Rome adopted a resolution accepting the use of both randomization and purposive sampling. Bowley used both. However, the next two decades saw a steady tendency for randomization to become mandatory. In 1934, Jerzy Neyman used the relatively recent failure of a large purposive survey to ensure that subsequent sample surveys would need to employ random sampling only. He found apt pupils in M.H. Hansen, W.N. Hurwitz and W.G. Madow, who together published a definitive sampling textbook in 1953. This went effectively unchallenged for nearly two decades. In the 1970s, however, R.M. Royall and his coauthors did challenge the use of random sampling inference, advocating model-based inference instead. That in turn gave rise to the third major controversy within little more than a century. The present author, however, with several others, believes that both design-based and model-based inference have a useful part to play.

    Release date: 2014-01-15

  • Stats in brief: 13-604-M2007056
    Description:

    This paper highlights the newly constructed Research and Development Satellite Account (RDSA) developed by Statistics Canada. The RDSA provides an analysis of the capitalization of research and development (R&D) as proposed by international guidelines for the System of National Accounts. The account applies several methods to measure the impact of R&D expenditures on Gross Domestic Product. This paper presents the results of the RDSA for the years 1997 to 2004.

    Release date: 2008-05-30

  • Articles and reports: 11-522-X20040018744
    Description:

    I will try to look at the future of survey research in the light of the incredible developments in information technology. I will speculate on what new technologies might contribute to doing surveys differently.

    Release date: 2005-10-27

  • Articles and reports: 12-001-X20000015174
    Description:

    Computation is an integral part of statistical analysis in general and survey sampling in particular. The kinds of analyses that can be carried out depend on the computational power available. The general development of sampling theory is traced in connection with technological developments in computation.

    Release date: 2000-08-30

  • Articles and reports: 12-001-X20000015175
    Description:

    Mahalanobis provided an example of how to use statistics to enlighten and inform government policy makers. His pioneering work was used by the US Bureau of the Census to learn more about measurement errors in censuses and surveys. People have many misconceptions about censuses, among them who is to be counted and where. Errors in the census do occur, among them errors in coverage. Over the years, the US Bureau of the Census has developed statistical techniques, including sampling in the census, to increase accuracy and reduce response burden.

    Release date: 2000-08-30

  • Articles and reports: 12-001-X19980024347
    Description:

    We review the current status of various aspects of the design and analysis of studies where the same units are investigated at several points in time. These studies include longitudinal surveys, and longitudinal analyses of retrospective studies and of administrative or census data. The major focus is on the special problems posed by the longitudinal nature of the study. We discuss four of the major components of longitudinal studies in general, namely Design, Implementation, Evaluation and Analysis. Each of these components requires special considerations when planning a longitudinal study. Some issues relating to the longitudinal nature of the studies are: concepts and definitions, frames, sampling, data collection, nonresponse treatment, imputation, estimation, data validation, data analysis and dissemination. Assuming familiarity with the basic requirements for conducting a cross-sectional survey, we highlight the issues and problems that become apparent for many longitudinal studies.

    Release date: 1999-01-14

  • Articles and reports: 12-001-X199000114558
    Description:

    Drawing upon experiences from developments at the U.S. Bureau of the Census, the paper briefly traces some contributions made by practitioners to the theory and application of censuses and surveys. Some guesses about future developments are also given.

    Release date: 1990-06-15

  • Articles and reports: 12-001-X198500214373
    Description:

    The first part of the paper reviews the historical literature concerning microfilmed manuscript census records. Several types of sampling designs have been used, ranging in complexity from cluster and stratified random sampling to stratified two-stage cluster sampling. In the second part, a method is given for creating a public use sample tape of the 1881 Census of Canada. This work was part of a pilot project for the Public Archives of Canada and was carried out by the Social Science Computing Laboratory of the University of Western Ontario. The pilot project was designed to determine the merit, and the technical and economic feasibility, of developing machine-readable products from microfilm copies of the 1881 Census of Canada. (A toy sketch of stratified two-stage cluster sampling follows this results list.)

    Release date: 1985-12-16

  • Articles and reports: 12-001-X198000254939
    Description:

    This document describes the process of commissioning market research at Imperial Oil Limited. It outlines the management processes that precede commissioning and defines the expectations of a typical buyer of research work. It also examines the need to have a satisfactory business relationship between the buyer and the seller, and it provides a list of the attributes most often considered by a company seeking a supplier for a particular research project.

    Release date: 1980-12-15
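
The 2020 article above (12-001-X202000100001) contrasts design-based inference from a small probability sample with model-based inference from a large non-probability source. The following Python sketch is purely illustrative and is not taken from that article: the population, sample sizes, and self-selection mechanism are all invented assumptions, chosen only to make the efficiency-versus-bias trade-off visible.

    # Toy illustration: design-based estimate from a small probability
    # sample versus a model-adjusted estimate from a large, self-selected
    # non-probability sample. All numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)

    # Finite population: auxiliary variable x, study variable y.
    N = 100_000
    x = rng.normal(0.0, 1.0, N)
    y = 10.0 + 2.0 * x + rng.normal(0.0, 2.0, N)
    true_mean = y.mean()

    # Design-based: the mean of a simple random sample of n = 500 is
    # design-unbiased, whatever the model relating y to x.
    srs = rng.choice(N, size=500, replace=False)
    design_estimate = y[srs].mean()

    # Non-probability source: units "volunteer" with a probability that
    # increases with y itself, giving a large but self-selected sample.
    p_volunteer = 0.2 / (1.0 + np.exp(-(y - true_mean)))
    volunteers = rng.random(N) < p_volunteer
    naive_estimate = y[volunteers].mean()

    # Model-based adjustment: regress y on x in the volunteer sample and
    # project onto the known population mean of x. Because selection also
    # depends on y given x, the model is misspecified and bias remains.
    b1, b0 = np.polyfit(x[volunteers], y[volunteers], 1)
    model_estimate = b0 + b1 * x.mean()

    print(f"true mean                {true_mean:7.3f}")
    print(f"design-based, n = 500    {design_estimate:7.3f}")
    print(f"naive non-probability    {naive_estimate:7.3f}")
    print(f"model-adjusted           {model_estimate:7.3f}")

Run repeatedly with different seeds, the design-based estimate scatters around the true mean while the model-adjusted estimate stays systematically high: the abstract's point that design-based approaches trade some efficiency for protection against model misspecification.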
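
The 1985 article above (12-001-X198500214373) names stratified two-stage cluster sampling among the designs used for microfilmed census records. The sketch below is a hypothetical reconstruction of that general design, not the actual 1985 procedure; the district names, page counts, and sample sizes are invented for illustration.

    # Hypothetical stratified two-stage cluster sample of census records:
    # strata = districts, first-stage clusters = microfilm pages,
    # second stage = individual records within each sampled page.
    import random

    random.seed(1881)

    # Toy frame: 3 districts, 40 pages per district, 25 records per page.
    frame = {
        district: {
            page: [f"{district}-p{page}-r{rec}" for rec in range(25)]
            for page in range(40)
        }
        for district in ("Ontario", "Quebec", "Nova Scotia")
    }

    def stratified_two_stage(frame, pages_per_stratum=4, records_per_page=5):
        """Stage 1: sample pages within each stratum; stage 2: sample records."""
        sample = []
        for district, pages in frame.items():
            for page in random.sample(sorted(pages), pages_per_stratum):
                sample.extend(random.sample(pages[page], records_per_page))
        return sample

    records = stratified_two_stage(frame)
    print(len(records), "records, e.g.", records[:3])
    # With simple random sampling at both stages, each record's inclusion
    # probability within its stratum is (4 / 40) * (5 / 25) = 0.02.

Sampling whole pages first reflects the economics of microfilm: locating a page is the expensive operation, after which transcribing several adjacent records is cheap.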