Quality assurance


Results

All (26)

All (26) (results 1 to 10 of 26)

  • Articles and reports: 11F0019M2000144
    Geography: Canada
    Description:

    In this paper, we revisit trends in low income among Canadian children by taking advantage of recent developments in the measurement of low-income intensity. We focus in particular on the Sen-Shorrocks-Thon (SST) index and its elaboration by Osberg and Xu. Low-income intensity declined in the 1980s but rose in the 1990s. Declining earnings put upward pressure on low-income levels over much of the period. Higher transfers more than offset this pressure in the 1980s and continued to absorb a substantial share of the increase through 1993. In contrast, the rise in low-income intensity after 1993 reflected reductions in UI and social assistance benefits that were not offset by increased employment earnings, at least to 1996, the latest year used in this paper.

    A major aim of the paper is methodological. We contrast results using the SST index with results produced by the more familiar low-income rate, the usual measure for indexing low-income trends. The low-income rate is embedded in the SST index but, unlike the index, the rate incorporates only partial information on the distribution of low income. Consequently, the low-income rate is generally unable to detect the changes we describe, and this is true irrespective of the choice of low-income cut-off. Compared to the low-income intensity measure, the rate is also relatively insensitive to changes in transfer payments and employment earnings.

    Release date: 2000-03-30
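The SST index discussed above is often computed, following Shorrocks, as a rank-weighted average of normalized low-income gaps, which is equivalent to the decomposition rate x average gap x (1 + Gini of the gaps) used by Osberg and Xu. A minimal sketch (the function name and data are illustrative, not the paper's code):

```python
def sst_index(incomes, cutoff):
    """Sen-Shorrocks-Thon low-income intensity for one population."""
    # Normalized low-income gaps; zero for anyone at or above the cut-off.
    gaps = sorted((max(0.0, (cutoff - y) / cutoff) for y in incomes),
                  reverse=True)
    n = len(gaps)
    # Shorrocks' rank-weighted form: the poorest person (rank 1)
    # gets the largest weight, (2n - 1) / n^2.
    return sum((2 * n - 2 * i - 1) * g for i, g in enumerate(gaps)) / n ** 2

# sst_index([5, 15, 25, 40], 20) -> 0.40625: two of four persons are
# below the cut-off, with gaps 0.75 and 0.25.
```

Because every individual gap enters the sum, the index moves when the depth of low income changes even while the headcount rate stays fixed, which is exactly the extra sensitivity contrasted with the low-income rate above.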

  • Surveys and statistical programs – Documentation: 11-522-X19990015638
    Description:

    The focus of Symposium '99 is on techniques and methods for combining data from different sources and on analysis of the resulting data sets. In this talk we illustrate the usefulness of taking such an "integrating" approach when tackling a complex statistical problem. The problem itself is easily described: it is how to approximate, as closely as possible, a "perfect census", and in particular, how to obtain census counts that are "free" of underenumeration. Typically, underenumeration is estimated by carrying out a post-enumeration survey (PES) following the census. In the UK in 1991 the PES failed to identify the full size of the underenumeration, and so demographic methods were used to estimate the extent of the undercount. The problems with the "traditional" PES approach in 1991 resulted in a joint research project between the Office for National Statistics and the Department of Social Statistics at the University of Southampton aimed at developing a methodology that will allow a "One Number Census" in the UK in 2001. That is, underenumeration will be accounted for not just at high levels of aggregation, but right down to the lowest levels at which census tabulations are produced. In this way all census outputs will be internally consistent, adding to the national population estimates. The basis of this methodology is the integration of information from a number of data sources in order to achieve this "One Number".

    Release date: 2000-03-02
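A PES of the kind described above is typically combined with the census through a dual-system (capture-recapture) estimator; a minimal sketch under the textbook independence assumption (illustrative only, not the ONS's actual One Number Census methodology):

```python
def dual_system_estimate(census_count, pes_count, matched):
    """Chandrasekar-Deming dual-system population estimate.

    Assumes the census and the PES enumerate people independently,
    so P(in census) is estimated by matched / pes_count.
    """
    if matched == 0:
        raise ValueError("no matched records: estimate undefined")
    return census_count * pes_count / matched

# Example with invented counts: 950 people in the census, 900 in the
# PES, 880 found in both -> estimated true population 950*900/880.
```

The census undercount is then the estimate minus the census count; the One Number Census project extends this idea so that adjusted counts are produced down to the lowest tabulation levels.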

  • Surveys and statistical programs – Documentation: 11-522-X19990015640
    Description:

    This paper describes how SN is preparing for a new era in the production of statistics, triggered by technological and methodological developments. An essential feature of the shift to this new era is the farewell to the stovepipe way of data processing. The paper discusses how new technological and methodological tools will affect processes and their organization. Special emphasis is put on one of the major opportunities and challenges the new tools offer: establishing coherence in the content of statistics and in its presentation to users.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015644
    Description:

    One method of enriching survey data is to supplement information collected directly from the respondent with that obtained from administrative systems. The aims of such a practice include collecting data that might not otherwise be obtainable, providing better-quality information for data items that respondents may not be able to report accurately (or at all), reducing respondent load, and maximising the utility of information held in administrative systems. Given the direct link with administrative information, the data set resulting from such techniques is potentially a powerful basis for policy-relevant analysis and evaluation. However, the processes involved in effectively combining data from different sources raise a number of challenges that need to be addressed by the parties involved. These include issues associated with privacy, data linking, data quality, estimation, and dissemination.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015648
    Description:

    We estimate the parameters of a stochastic model for labour force careers involving distributions of correlated durations of being employed, unemployed (with and without job search), and not in the labour force. If the model is to account for sub-annual labour force patterns as well as advancement towards retirement, then no single data source is adequate to inform it. However, it is possible to build up an approximation from a number of different sources.

    Release date: 2000-03-02
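A first-order Markov chain over labour force states is the simplest special case of a career model like the one sketched above (it cannot capture the correlated durations the abstract describes, but it shows the basic machinery); a toy sketch with invented monthly transition probabilities:

```python
import random

# States: E = employed, U = unemployed, N = not in the labour force.
STATES = ("E", "U", "N")

# Monthly transition probabilities (illustrative numbers only).
P = {
    "E": {"E": 0.95, "U": 0.03, "N": 0.02},
    "U": {"E": 0.25, "U": 0.65, "N": 0.10},
    "N": {"E": 0.05, "U": 0.05, "N": 0.90},
}

def simulate_career(start, months, seed=0):
    """Simulate one labour force career; returns the monthly state path."""
    rng = random.Random(seed)
    path, state = [start], start
    for _ in range(months):
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path
```

Duration distributions fall out of such a simulation as run lengths of identical states; richer models replace the memoryless transitions with explicit, correlated duration distributions, which is where multiple data sources become necessary.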

  • Surveys and statistical programs – Documentation: 11-522-X19990015652
    Description:

    Objective: To create an occupational surveillance system by collecting, linking, evaluating and disseminating data relating to occupation and mortality with the ultimate aim of reducing or preventing excess risk among workers and the general population.

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015656
    Description:

    Time series studies have shown associations between air pollution concentrations and morbidity and mortality. These studies have largely been conducted within single cities, and with varying methods. Critics of these studies have questioned the validity of the data sets used and the statistical techniques applied to them; the critics have noted inconsistencies in findings among studies and even in independent re-analyses of data from the same city. In this paper we review some of the statistical methods used to analyze a subset of a national database of air pollution, mortality and weather assembled during the National Morbidity and Mortality Air Pollution Study (NMMAPS).

    Release date: 2000-03-02

  • Surveys and statistical programs – Documentation: 11-522-X19990015658
    Description:

    Radon, a naturally occurring gas found at some level in most homes, is an established risk factor for human lung cancer. The U.S. National Research Council (1999) has recently completed a comprehensive evaluation of the health risks of residential exposure to radon, and developed models for projecting radon lung cancer risks in the general population. This analysis suggests that radon may play a role in the etiology of 10-15% of all lung cancer cases in the United States, although these estimates are subject to considerable uncertainty. In this article, we present a partial analysis of uncertainty and variability in estimates of lung cancer risk due to residential exposure to radon in the United States using a general framework for the analysis of uncertainty and variability that we have developed previously. Specifically, we focus on estimates of the age-specific excess relative risk (ERR) and lifetime relative risk (LRR), both of which vary substantially among individuals.

    Release date: 2000-03-02
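The age-specific ERR mentioned above is commonly expressed through a linear relative-risk model, RR(w) = 1 + beta*w, with the ERR being RR - 1; a minimal sketch (beta and the exposure values are illustrative, not the NRC estimates):

```python
def relative_risk(exposure, beta):
    """Linear relative-risk model: RR = 1 + beta * exposure."""
    return 1.0 + beta * exposure

def excess_relative_risk(exposure, beta):
    """ERR is the relative risk minus the baseline of 1."""
    return relative_risk(exposure, beta) - 1.0

# With an invented slope beta = 0.02 per unit exposure, an exposure of
# 10 units gives ERR = 0.2, i.e. a 20% increase over baseline risk.
```

Uncertainty analysis of the kind the article describes would treat beta (and the exposure distribution) as uncertain quantities and propagate them through this model, rather than plugging in single values.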

  • Surveys and statistical programs – Documentation: 11-522-X19990015660
    Description:

    There are many different situations in which one or more files need to be linked. With one file, the purpose of the linkage is to locate duplicates within the file. When there are two files, the linkage is done to identify the units that are the same on both files and thus create matched pairs. Often records that need to be linked do not have a unique identifier. Hierarchical record linkage, probabilistic record linkage and statistical matching are three methods that can be used when there is no unique identifier on the files to be linked. We describe the major differences between the methods. We consider how to choose variables to link, how to prepare files for linkage and how the links are identified. As well, we review tips and tricks used when linking files. Two examples will be illustrated: the probabilistic record linkage used in the reverse record check, and the hierarchical record linkage of the Business Number (BN) master file to the Statistical Universe File (SUF) of unincorporated tax filers (T1).

    Release date: 2000-03-02
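Probabilistic record linkage of the kind mentioned above typically scores candidate pairs with Fellegi-Sunter agreement weights, log2(m/u) when a field agrees and log2((1-m)/(1-u)) when it disagrees; a minimal sketch with invented m- and u-probabilities (not those of any Statistics Canada application):

```python
from math import log2

# Illustrative m-probabilities (field agrees given a true match) and
# u-probabilities (field agrees given a non-match) per comparison field.
FIELDS = {
    "surname":   {"m": 0.95, "u": 0.01},
    "birthdate": {"m": 0.90, "u": 0.05},
    "postcode":  {"m": 0.85, "u": 0.10},
}

def match_weight(rec_a, rec_b):
    """Total Fellegi-Sunter log2 weight for one candidate record pair."""
    total = 0.0
    for field, p in FIELDS.items():
        if rec_a.get(field) == rec_b.get(field):
            total += log2(p["m"] / p["u"])          # agreement weight > 0
        else:
            total += log2((1 - p["m"]) / (1 - p["u"]))  # disagreement < 0
    return total
```

Pairs scoring above an upper threshold are declared links, those below a lower threshold non-links, and the band in between is sent to clerical review.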

  • Surveys and statistical programs – Documentation: 11-522-X19990015664
    Description:

    Much work on probabilistic methods of linkage can be found in the statistical literature. However, although many groups undoubtedly still use deterministic procedures, not much literature is available on these strategies. Furthermore, there appears to be no documentation comparing the results of the two strategies. Such a comparison is pertinent when we have only non-unique identifiers, such as names, sex and race, as common identifiers on which the databases are to be linked. In this work we compare a stepwise deterministic linkage strategy with the probabilistic strategy, as implemented in AUTOMATCH, for such a situation. The comparison was carried out on a linkage between medical records from the Regional Perinatal Intensive Care Centers database and education records from the Florida Department of Education. Social security numbers, available in both databases, were used to decide the true status of a record pair after matching. Match rates and error rates for the two strategies are compared, and a discussion of their similarities and differences, strengths and weaknesses is presented.

    Release date: 2000-03-02
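Given a gold standard such as the social security numbers mentioned above, match and error rates reduce to set comparisons between declared and true links; a minimal sketch (function and key names are illustrative):

```python
def linkage_error_rates(declared_links, true_links):
    """Compare declared links against a gold-standard truth.

    Both arguments are sets of (id_a, id_b) pairs.
    """
    declared, truth = set(declared_links), set(true_links)
    false_links = declared - truth    # declared but not actually matches
    missed_links = truth - declared   # true matches that were not declared
    match_rate = 1 - len(missed_links) / len(truth) if truth else 1.0
    false_link_rate = len(false_links) / len(declared) if declared else 0.0
    return {"match_rate": match_rate, "false_link_rate": false_link_rate,
            "missed": len(missed_links), "false": len(false_links)}
```

Comparing two strategies then amounts to running both linkages on the same pairs and contrasting these rates.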
Data (0)

No content available at this time.

Analysis (3)


  • Articles and reports: 12-001-X19990024876
    Description:

    Leslie Kish describes the challenges and opportunities of combining data from surveys of different populations. Examples include multinational surveys where the data from surveys of several countries are combined for comparison and analysis, as well as cumulated periodic surveys of the "same" population. He also compares and contrasts the combining of surveys with the combining of experiments.

    Release date: 2000-03-01
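One standard device for combining estimates of the same quantity from independent surveys, in the spirit of the cumulation Kish describes (though not necessarily his scheme), is inverse-variance weighting; a minimal sketch:

```python
def pool_estimates(estimates, variances):
    """Inverse-variance weighted combination of independent estimates.

    Assumes the surveys target the same quantity and that the supplied
    variances are correct; returns the pooled estimate and its variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    return pooled, 1.0 / total
```

The pooled variance is always smaller than the smallest input variance, which is the quantitative payoff of combining surveys; the hard part, as the article notes, is whether the populations are comparable enough for the combination to be meaningful.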

  • Articles and reports: 12-001-X19990024877
    Description:

    In 1999 Statistics Sweden outlined a proposal for improved quality within the European Statistical System (ESS). The ESS comprises Eurostat and the National Statistical Institutes (NSIs) associated with Eurostat. ... Basically, Statistics Sweden proposed the creation of a LEG (Leadership Expert Group) on Quality.

    Release date: 2000-03-01
Reference (23)


  • Surveys and statistical programs – Documentation: 11-522-X19990015666
    Description:

    The fusion sample obtained by a statistical matching process can be considered a sample from an artificial population. The distribution of this artificial population is derived. If the correlation between specific variables is the only focus, the strong demand for conditional independence can be weakened. In a simulation study, the effects of violating some of the assumptions leading to the distribution of the artificial population are examined. Finally, some ideas concerning establishing the claimed conditional independence by latent class analysis are presented.

    Release date: 2000-03-02