Statistics by subject – Statistical methods

All (25 of 109 results)

  • Articles and reports: 11F0019M2002181
    Description:

    We use data from the Canadian National Longitudinal Survey of Children and Youth to address two questions. To what extent do parents and children agree when asked identical questions about child well-being? To what extent do differences in their responses affect what one infers from multivariate analysis of the data? The correspondence between parent and child in the assessment of child well-being is only slight to fair. Agreement is stronger for more observable outcomes, such as schooling performance, and weaker for less observable outcomes, such as emotional disorders. We regress both sets of responses on a standard set of socio-economic characteristics. We also conduct formal and informal tests of the differences in what one would infer from these two sets of regressions.

    Release date: 2002-10-23

  • Articles and reports: 82-005-X20020016479
    Description:

    The Population Health Model (POHEM) is a policy analysis tool that helps answer "what-if" questions about the health and economic burden of specific diseases and the cost-effectiveness of administering new diagnostic and therapeutic interventions. This simulation model is particularly pertinent in an era of fiscal restraint, when new therapies are generally expensive and difficult policy decisions are being made. More importantly, it provides a base for a broader framework to inform policy decisions using comprehensive disease data and risk factors. Our "base case" models comprehensively estimate the lifetime costs of treating breast, lung and colorectal cancer in Canada. Our cancer models have shown the large financial burden of diagnostic work-up and initial therapy, as well as the high costs of hospitalizing those dying of cancer. Our core cancer models (lung, breast and colorectal cancer) have been used to evaluate the impact of new practice patterns. We have used these models to evaluate new chemotherapy regimens as therapeutic options for advanced lung cancer; the health and financial impact of reducing the hospital length of stay for initial breast cancer surgery; and the potential impact of population-based screening for colorectal cancer. To date, the most interesting intervention we have studied has been the use of tamoxifen to prevent breast cancer among high-risk women.

    Release date: 2002-10-08

  • Technical products: 11-522-X2001001
    Description:

    Symposium 2001 was the eighteenth in Statistics Canada's series of international symposia on methodological issues. Each year the symposium focuses on a particular theme. In 2001, the theme was: "Achieving Data Quality in a Statistical Agency: a Methodological Perspective".

    Symposium 2001 was held from October 17 to October 19, 2001 in Hull, Quebec and it attracted over 560 people from 21 countries. A total of 83 papers were presented. Aside from translation and formatting, the papers, as submitted by the authors, have been reproduced in these proceedings.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016297
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists. The Danish National Institute of Social Research is an independent institution under the Ministry of Social Affairs. The Institute carries out surveys on social issues encompassing a broad range of subjects. The Sustainable Financing Initiative Survey (SFI-SURVEY) is an economically independent section within the Institute. SFI-SURVEY carries out scientific surveys for the Institute, for other public organizations and for the private sector. The SFI-SURVEY interviewer body has 450 interviewers spread throughout Denmark. There are five supervisors, each with a regional office, who are in contact with the interviewer body. On a yearly basis, SFI-SURVEY conducts 40 surveys. The average sample size (gross) is 1,000 persons. The average response rate is 75%. Since January 1999, the following information about each survey has been recorded:

    • type of method used (face-to-face or telephone);
    • length of questionnaire (interviewing time in minutes);
    • whether or not a folder was sent to the respondents in advance;
    • whether or not an interviewer instruction meeting was given;
    • number of interviews per interviewer per week;
    • whether or not the subject of the survey was of interest to the respondents;
    • interviewing month;
    • target group (random selection of the total population or special groups).

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016285
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The three papers presented in this session offer excellent insight into the issues concerning the quality of hospital morbidity data. Richards, Brown, and Homan sampled hospital records to evaluate administrative data in Canada; Hargreaves sampled persons in hospitals to evaluate administrative data in Australia; and McLemore and Pokras describe the quality assurance practices of an ongoing sample survey of hospital records in the United States. Each paper is discussed, along with the issues and challenges for the future.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016289
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Increasing demand for electronic reporting in establishment surveys has placed additional emphasis on incorporating usability into electronic forms. We are just beginning to understand the implications surrounding electronic forms design. Cognitive interviewing and usability testing are analogous in that both types of testing have similar goals: to build an end instrument (paper or electronic) that reduces both respondent burden and measurement error. Cognitive testing has greatly influenced paper forms design and can also be applied towards the development of electronic forms. Usability testing expands on existing cognitive testing methodology to include examination of the interaction between the respondent and the electronic form.

    The upcoming U.S. 2002 Economic Census will offer businesses the ability to report information using electronic forms. The U.S. Census Bureau is creating an electronic forms style guide outlining the design standards to be used in electronic form creation. The style guide's design standards are based on usability principles, usability and cognitive test results, and Graphical User Interface standards. This paper highlights the major electronic forms design issues raised during the preparation of the style guide and describes how usability testing and cognitive interviewing resolved these issues.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016280
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Survey response rates serve as one key measure of the quality of a data set. However, they are only useful to a statistical agency in the evaluation of ongoing data collections if they are based on a predefined set of formulas and definitions that are uniformly applied across all data collections (a toy illustration of how the choice of definition changes the computed rate follows this entry).

    In anticipation of a revision of the current National Center for Education Statistics (NCES) statistical standards, several agency-wide audits of statistical practices were undertaken in the late 1990s. In particular, a compendium documenting major survey design parameters of NCES surveys was drafted. Related to this, NCES conducted a targeted audit of the consistency in response rate calculations across these surveys.

    Although NCES has had written statistical standards since 1988, the audit of the reported response rates from 50 survey components in 14 NCES surveys revealed considerable variability in procedures used to calculate response rates. During the course of the response rate audit, the Statistical Standards Program staff concluded that the organization of the 1992 Standards made it difficult to find all of the information associated with response rates in the standards. In fact, there are references to response rate in a number of separate standards scattered throughout the 1992 Statistical Standards.

    Release date: 2002-09-12
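
    The point about predefined formulas can be made concrete with a toy computation. The sketch below is illustrative only and does not reproduce NCES's actual formulas; the function name and case counts are invented, and the parameter e is the assumed share of unknown-eligibility cases treated as eligible.

        # Hypothetical illustration: the same case counts yield different
        # "response rates" under different rules for unknown-eligibility cases.
        def response_rate(completes, refusals, noncontacts,
                          unknown_eligibility, e=1.0):
            """e = assumed fraction of unknown-eligibility cases that are
            eligible (e = 1.0 is the most conservative choice)."""
            return completes / (completes + refusals + noncontacts
                                + e * unknown_eligibility)

        counts = dict(completes=700, refusals=150, noncontacts=100,
                      unknown_eligibility=50)
        print(response_rate(**counts, e=1.0))  # 0.700: unknowns all eligible
        print(response_rate(**counts, e=0.0))  # ~0.737: unknowns excluded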

  • Technical products: 11-522-X20010016278
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The first round of quality reporting on the statistics produced by Eurostat is almost complete. This paper presents the experiences so far and, in particular, some of the methodological problems encountered when measuring the quality of statistics that are produced for international comparisons. A proposal is also presented for indicators that summarize the detailed information provided in these quality reports. Two sets of indicators are discussed: the first more producer-oriented, the second more user-oriented.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016260
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Canadian Vehicle Survey (CVS), which began in 1999, is designed to collect information about the usage of motor vehicles registered in Canada. The CVS target population includes all on-road vehicles (except special equipment, trailers and motorcycles) registered in Canada. A sample of vehicles is drawn each quarter and a seven-day trip log is used to gather detailed vehicle usage patterns. The log includes questions on kilometres driven, number of passengers, vehicle characteristics, trip purpose and travel times, driver and passenger demographics and fuel usage. Since this is a voluntary survey and the log takes seven days to complete, every effort is made to ensure a good response rate and prevent response errors. The first part of this paper describes the current survey design, data collection, and editing and imputation methodology. Then it goes on to explain the challenges associated with the different steps of the survey. Finally, findings from the research carried out to minimize the effects of non-sampling errors are presented.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016254
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    At Statistics Netherlands, the design and organization of the statistical process is changing rapidly, motivated by the need to produce more consistent data and to cut down the response burden. The ideas behind the new production process are the integration of all survey and administrative data into a limited number of micro-databases and the development of an estimation strategy for those databases.

    This paper provides the initial impetus for an estimation strategy per micro-database. The proposed strategy ensures that all estimated m-way tables are numerically consistent with respect to common margins, even if these tables are estimated from different surveys. Although still based on the calibration principle, it is not necessarily centred on a fixed set of weights per survey. The practicability of the strategy is tested by means of a fictitious example.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016245
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper summarizes recent Australian Bureau of Statistics (ABS) methodological developments and other experiences with electronic data reporting (EDR). It deals particularly with the part of EDR loosely defined as 'e-forms', or screen-based direct collection instruments, where the respondent manually enters all or most of the data. In this context, the paper covers recent ABS experiences and current work, but does not revisit the historical EDR work or cover other developments in Australia outside the ABS.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016269
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In surveys with low response rates, non-response bias can be a major concern. While it is not always possible to measure the actual bias due to non-response, different approaches can help identify its potential sources. At the National Center for Education Statistics (NCES), a non-response bias analysis must be conducted for any survey with a response rate below 70%. This paper discusses the different approaches to non-response bias analyses using examples from NCES (a small numerical illustration of the basic bias decomposition follows this entry).

    Release date: 2002-09-12
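
    One building block common to such analyses is the decomposition of the bias of the respondent mean into the non-response rate times the respondent/non-respondent gap on a variable known for the full sample. The sketch below is a generic illustration with made-up numbers, not a method taken from the paper.

        def nonresponse_bias(mean_respondents, mean_nonrespondents,
                             nonresponse_rate):
            """bias(ybar_r) = (1 - r) * (ybar_r - ybar_nr), where 1 - r is
            the non-response rate."""
            return nonresponse_rate * (mean_respondents - mean_nonrespondents)

        # Respondents average 0.62 on a frame variable, non-respondents 0.48,
        # with 35% non-response: the respondent mean overshoots by ~0.05.
        print(nonresponse_bias(0.62, 0.48, 0.35))  # 0.049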

  • Technical products: 11-522-X20010016293
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper presents the Second Summit of the Americas Regional Education Indicators Project (PRIE), whose basic goal is to develop a set of comparable indicators for the Americas. This project is led by the Ministry of Education of Chile and has been developed in response to the countries' needs to improve their information systems and statistics. The countries need to construct reliable and relevant indicators to support decisions in education, both within their individual countries and the region as a whole. The first part of the paper analyses the importance of statistics and indicators in supporting educational policies and programs, and describes the present state of the information and statistics systems in these countries. It also discusses the major problems faced by the countries and reviews their experiences in participating in other education indicators projects or programs, such as the INES Program, the WEI Project, MERCOSUR and CREMIS. The second part of the paper examines PRIE's technical co-operation program, its purpose and implementation, and emphasizes how technical co-operation responds to the needs of the countries and supports them in filling the gaps in available and reliable data.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016301
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Integrated Metadatabase is a corporate repository of information for each of Statistics Canada's surveys. The information stored in the Integrated Metadatabase includes a description of data sources and methodology, definitions of concepts and variables measured, and indicators of data quality. It provides an effective vehicle for communicating data quality to data users. Its coverage of Statistics Canada's data holdings is exhaustive, the provided information on data quality complies with the Policy on Informing Users of Data Quality and Methodology, and it is presented in a consistent and systematic fashion.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016234
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    With the goal of obtaining a complete enumeration of the Canadian agricultural sector, the 2001 Census of Agriculture has been conducted using several collection methods. Challenges to the traditional drop-off and mail-back of paper questionnaires in a household-based enumeration have led to the adoption of supplemental methods using newer technologies to maintain the coverage and content of the census. Overall, this mixed-mode data collection process responds to the critical needs of the census programme at various points. This paper examines these data collection methods, several quality assessments, and the future challenges of obtaining a co-ordinated view of the methods' individual approaches to achieving data quality.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016233
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    From January 2000, the data collection method of the Finnish Consumer Survey was changed from a Labour Force Survey panel design mode to an independent survey. All interviews are now carried out centrally from Statistics Finland's Computer Assisted Telephone Interview (CATI) Centre. There have been suggestions that the new survey mode has been influencing the respondents' answers. This paper analyses the extent of obvious changes in the results of the Finnish Consumer Survey. This is accomplished with the help of a pilot survey. Furthermore, this paper studies the interviewer's role in the data collection process. The analysis is based on cross-tabulations, chi-square tests and multinomial logit models. It shows that the new survey method produces more optimistic estimations and expectations concerning economic matters than the old method did.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016265
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Several key strategies contributed to the success of the United States' Census 2000. This paper describes the strategy for the Address Building Process, which incorporated numerous address lists and update activities. The Field Interview Process created close to 900,000 jobs that needed to be filled; two key strategies used to achieve this are also described. The Formal Quality Control Process established principles to guide the quality assurance (QA) programs. These programs are presented, as are some examples of their implementation. The Coverage Measurement and Correction Process was used to increase census accuracy through the use of statistical methods. The steps taken to ensure the accuracy and quality of the Accuracy and Coverage Evaluation (A.C.E.) are described, and the preliminary estimates of the undercount are reported.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016286
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    It is customary for statistical agencies to audit tables containing suppressed cells in order to ensure that there is sufficient protection against inadvertent disclosure of sensitive information. If the table contains rounded values, this fact may be ignored by the audit procedure. This oversight can result in over-protection, reducing the utility of the published data. This paper provides a correct auditing formulation and gives examples of over-protection.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016282
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Discharge Abstract Database (DAD) is one of the key data holdings of the Canadian Institute for Health Information (CIHI). The institute is a national, not-for-profit organization that plays a critical role in the development of Canada's health information system. The DAD contains acute care discharge data from most Canadian hospitals. The data generated are essential for determining, for example, the number and types of procedures and the length of hospital stays. CIHI is conducting the first national data quality study of selected clinical and administrative data from the DAD. This study evaluates and measures the accuracy of the DAD by returning to the original data sources and comparing that information with what exists in the CIHI database, in order to identify any discrepancies and their associated reasons. This paper describes the DAD data quality study and some preliminary findings. The findings are also briefly compared with those of another, similar study. In conclusion, the paper discusses subsequent steps for the study and how the findings from the first year are contributing to improvements in the quality of the DAD.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016238
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Research programs building on population-based, longitudinal administrative data and record-linkage techniques are found in England, Scotland, the United States (the Mayo Clinic), Western Australia and Canada. These systems can markedly expand both the methodological and the substantive research in health and health care.

    This paper summarizes published Canadian data quality studies regarding registries, hospital discharges, prescription drugs, and physician claims. It makes suggestions for improving registries, facilitating record linkage and expanding research into social epidemiology. New trends in case identification and health status measurement using administrative data are also noted, and the differing needs for data quality research in each province are highlighted.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016242
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    "Remembering Leslie Kish" provides us with a personal view of his many contributions to the international development of statistics. One of the elements that made his contributions so special and effective was the "Kish approach". The characteristic features of this approach include: identifying what is important; formulating and answering practical questions; seeking patterns and frameworks; and above all, persisting in the promotion of good ideas. Areas in which his technical contributions have made the most impact on practical survey work in developing countries have been identified. A unique aspect of Leslie's contribution is the motivation he created for the development of a world-wide community of survey samplers.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016303
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In large-scale surveys, it is almost guaranteed that some level of non-response will occur. Generally, statistical agencies use imputation to treat item non-response. A common preliminary step to imputation is the formation of imputation cells. In this article, the formation of these cells is studied using two methods. The first method is similar to that of Eltinge and Yansaneh (1997) in the case of weighting cells, and the second is the method currently used in the Canadian Labour Force Survey. Using Labour Force data, simulation studies are performed to test the impact of the response rate, the response mechanism, and constraints on the quality of the point estimator in both methods.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016309
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper proposes a method for estimating simple and correlated measurement variance components when a re-interview is available for a subsample of respondents. However, the two measurements cannot be considered to have been collected under the same conditions, and they are therefore subject to different measurement error variances. This assumption is realistic when it is impossible to ensure that the same measurement conditions are implemented in the two interviews, as when operational and budget constraints suggest adopting a different survey mode for the second interview.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016250
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper describes the Korea National Statistics Office's (KNSO) experiences in data quality assessment and introduces the strategies of institutionalizing the assessment procedure. This paper starts by briefly describing the definition of quality assessment, quality dimensions and indicators at the national level. It introduces the current situation of the quality assessment process in KNSO and lists the six dimensions of quality that have been identified: relevance, accuracy, timeliness, accessibility, comparability and efficiency. Based on the lessons learned from these experiences, this paper points out three essential elements required in an advanced system of data quality assessment: an objective and independent planning system, a set of appropriate indicators and competent personnel specialized in data quality assessment.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016235
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Police records collected by the Federal Bureau of Investigation (FBI) through the Uniform Crime Reporting (UCR) Program are the leading source of national crime statistics. Recently, audits to correct UCR records have raised concerns about how to handle the errors discovered in these files. Concerns centre on the methodology used to detect errors and the procedures used to correct errors once they have been discovered. This paper explores these concerns, focusing on sampling methodology, establishment of a statistical-adjustment factor, and alternative solutions. The paper distinguishes between sample adjustment and sample estimates of an agency's data, and recommends sample adjustment as the most accurate way of dealing with errors.

    Release date: 2002-09-12

Data (0)

Analysis (25)

  • Articles and reports: 12-001-X20020016414
    Description:

    Census-taking by traditional methods is becoming more difficult. The possibility of cross-linking administrative files provides an attractive alternative to conducting periodic censuses (Laihonen 2000; Borchsenius 2000). This was proposed in a recent article by Nathan (2001). The redesign by the Institut national de la statistique et des études économiques (INSEE) is based on the idea of a 'continuous census,' originally suggested by Kish (1981, 1990) and Horvitz (1986). A first approach that would be feasible in France can be found in Deville and Jacod (1996). This article reviews methodological developments since INSEE started its population census redesign program.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016408
    Description:

    Regression and regression-related procedures have become common in survey estimation. We review the basic properties of regression estimators, discuss implementation of regression estimation, and investigate variance estimation for regression estimators. The role of models in constructing regression estimators and the use of regression in non-response adjustment are also explored (a minimal sketch of a regression estimator of a total appears after this entry).

    Release date: 2002-07-05
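
    As background to this entry, the generalized regression (GREG) estimator adjusts the Horvitz-Thompson total by the gap between known and estimated auxiliary totals. The sketch below is a minimal illustration under simple assumptions (single-stage design weights, ordinary weighted least squares), with synthetic data; it is not any specific agency's implementation.

        import numpy as np

        def greg_total(y, X, d, tx):
            """t_greg = t_HT + (tx - tx_HT)' B, with B from weighted least
            squares of y on X using the design weights d."""
            y, X, d, tx = map(np.asarray, (y, X, d, tx))
            W = np.diag(d)
            B = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            t_ht = d @ y          # Horvitz-Thompson estimate of the y total
            tx_ht = d @ X         # Horvitz-Thompson estimates of the x totals
            return t_ht + (tx - tx_ht) @ B

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(20), rng.uniform(1, 5, 20)])
        y = 2 + 3 * X[:, 1] + rng.normal(0, 0.5, 20)
        d = np.full(20, 50.0)              # n = 20 drawn from N = 1,000 by SRS
        tx = np.array([1000.0, 3000.0])    # known population totals of (1, x)
        print(greg_total(y, X, d, tx))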

  • Articles and reports: 12-001-X20020016424
    Description:

    A variety of estimators for the variance of the General Regression (GREG) estimator of a mean have been proposed in the sampling literature, mainly with the goal of estimating the design-based variance. Under certain conditions, estimators can be easily constructed that are approximately unbiased for both the design-variance and the model-variance. Several dual-purpose estimators are studied here in single-stage sampling. These choices are robust estimators of a model-variance even if the model that motivates the GREG has an incorrect variance parameter.

    A key feature of the robust estimators is the adjustment of squared residuals by factors analogous to the leverages used in standard regression analysis (a sketch of this adjustment follows this entry). We also show that the delete-one jackknife estimator implicitly includes the leverage adjustments and is a good choice from either the design-based or model-based perspective. In a set of simulations, these variance estimators have small bias and produce confidence intervals with near-nominal coverage rates for several sampling methods, sample sizes and populations in single-stage sampling.

    We also present simulation results for a skewed population where all variance estimators perform poorly. Samples that do not adequately represent the units with large values lead to estimated means that are too small, variance estimates that are too small and confidence intervals that cover at far less than the nominal rate. These defects can be avoided at the design stage by selecting samples that cover the extreme units well. However, in populations with inadequate design information this will not be feasible.

    Release date: 2002-07-05
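
    A minimal sketch of the leverage-adjustment idea, assuming single-stage sampling with weights d and a with-replacement-style variance form; the exact adjustment factors studied in the paper differ, so treat this as an illustration of the mechanism only.

        import numpy as np

        def leverage_adjusted_variance(y, X, d):
            """Inflate each weighted working-model residual by 1/(1 - h_i),
            where h_i is the unit's leverage in the weighted regression,
            then sum the squares (one simple form, not the paper's exact
            estimator)."""
            y, X, d = map(np.asarray, (y, X, d))
            W = np.diag(d)
            H = X @ np.linalg.solve(X.T @ W @ X, X.T @ W)  # weighted hat matrix
            h = np.diag(H)                                  # leverages h_i
            e = y - H @ y                                   # working residuals
            a = d * e / (1.0 - h)                           # adjusted terms
            return np.sum(a ** 2)

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(20), rng.uniform(1, 5, 20)])
        y = 2 + 3 * X[:, 1] + rng.normal(0, 0.5, 20)
        d = np.full(20, 50.0)
        print(leverage_adjusted_variance(y, X, d))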

  • Articles and reports: 12-001-X20020019499
    Description:

    "In this Issue" is a column where the Editor briefly presents each paper of the current issue of Survey Methodology. As well, it sometimes contains informations on structure or management changes in the journal.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016422
    Description:

    In estimating variances so as to account for imputation for item non-response, Rao and Shao (1992) originated an approach based on adjusted replication. Further developments (particularly the extension to Balanced Repeated Replication of the jackknife replication of Rao and Shao) were made by Shao, Chen and Chen (1998). In this article, we explore how these methods can be implemented using replicate weights.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016413
    Description:

    Leslie Kish long advocated a "rolling sample" design, with non-overlapping monthly panels which can be cumulated over different lengths of time for domains of different sizes. This enables a single survey to serve multiple purposes. The Census Bureau's new American Community Survey (ACS) uses such a rolling sample design, with annual averages to measure change at the state level, and three-year or five-year moving averages to describe progressively smaller domains. This paper traces Kish's influence on the development of the American Community Survey, and discusses some practical methodological issues that had to be addressed in implementing the design.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016488
    Description:

    Sampling is a branch of and a tool for statistics, and the field of statistics was founded as a new paradigm in 1810 by Quetelet (Porter 1987; Stigler 1986). Statistics and statisticians deal with the effects of chance events on empirical data. The mathematics of chance had been developed centuries earlier to predict gambling games and to account for errors of observation in astronomy. Data were also compiled for commerce, banking, and government purposes. But combining chance with real data required a new theoretical view; a new paradigm. Thus, statistical science and its various branches, which are the products of the maturity of human development (Kish 1985), arrived late in history and academia. This article examines the new concepts in diverse aspects of sampling, which may also be known as new sampling paradigms, models or methods.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016420
    Description:

    The post-stratified estimator sometimes has empty strata. To address this problem, we construct a post-stratified estimator with post-strata sizes set in the sample. The post-strata sizes are then random in the population. The next step is to construct a smoothed estimator by calculating a moving average of the post-stratified estimators. Using this technique, it is possible to construct an exact theory of calibration on distribution. The estimator obtained is not only calibrated on distribution, it is also linear and completely unbiased. We then compare the calibrated estimator with the regression estimator. Lastly, we propose an approximate variance estimator that we validate using simulations.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016419
    Description:

    Since some individuals in a population may lack telephones, telephone surveys using random digit dialling within strata may result in asymptotically biased estimators of ratios. The impact of not being able to sample the non-telephone population is examined. We take into account a household's propensity to own a telephone when proposing a post-stratified, telephone-weighted estimator, which seems to perform better than the typical post-stratified estimator in terms of mean squared error. Such coverage propensities are estimated using the Public Use Microdata Samples provided by the United States Census. Non-post-stratified estimators are considered when sample sizes are small. The asymptotic mean squared error of each estimator, along with its sample-based estimate, is derived. Real examples are analysed using the Public Use Microdata Samples. Other forms of non-response are not examined herein.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016417
    Description:

    An approach to exploiting the data from multiple surveys and epochs by benchmarking the parameter estimates of logit models of binary choice and semiparametric survival models has been developed. The goal is to exploit the relatively rich source of socio-economic covariates offered by Statistics Canada's Survey of Labour and Income Dynamics (SLID), and also the historical time-span of the Labour Force Survey (LFS), enhanced by following individuals through each interview in their six-month rotation. A demonstration of how the method can be applied is given, using the maternity leave module of the LifePaths dynamic microsimulation project at Statistics Canada. The choice of maternity leave over job separation is specified as a binary logit model, while the duration of leave is specified as a semiparametric proportional hazards survival model with covariates together with a baseline hazard permitted to change each month. Both models are initially estimated by maximum likelihood from pooled SLID data on maternity leaves beginning in the period from 1993 to 1996, then benchmarked to annual estimates from the LFS from 1976 to 1992. In the case of the logit model, the linear predictor is adjusted by a log-odds estimate from the LFS. For the survival model, a Kaplan-Meier estimator of the hazard function from the LFS is used to adjust the predicted hazard in the semiparametric model. (A toy sketch of a log-odds adjustment of this kind follows this entry.)

    Release date: 2002-07-05
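
    The log-odds adjustment mentioned above can be illustrated with a toy, hypothetical example: shift the logit model's linear predictor by a constant so the implied probability matches a benchmark rate from the other source. This is a generic sketch, not the paper's estimation procedure, and all numbers are invented.

        import math

        def logit(p):
            return math.log(p / (1 - p))

        def sigmoid(x):
            return 1 / (1 + math.exp(-x))

        def benchmark_offset(model_rate, benchmark_rate):
            """Constant added to the linear predictor so the baseline
            predicted probability moves to the benchmark rate."""
            return logit(benchmark_rate) - logit(model_rate)

        # Pooled-data model implies a 30% rate; the benchmark source says 38%.
        delta = benchmark_offset(0.30, 0.38)
        eta = logit(0.30) + delta     # adjusted linear predictor at baseline
        print(sigmoid(eta))           # 0.38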

  • Articles and reports: 12-001-X20020016421
    Description:

    As in most other surveys, non-response often occurs in the Current Employment Survey conducted monthly by the U.S. Bureau of Labor Statistics (BLS). In a given month, imputation using reported data from previous months generally provides more efficient survey estimators than ignoring non-respondents and adjusting survey weights. However, imputation also has an effect on variance estimation: treating imputed values as reported data and applying a standard variance estimation method lead to negatively biased variance estimators. In this article, we propose some variance estimators using the Grouped Balanced Half Sample method and re-imputation to take imputation into account. Some simulation results on the finite sample performance of the imputed survey estimators and their variance estimators are presented.

    Release date: 2002-07-05

  • Articles and reports: 88-003-X20020026371
    Description:

    When constructing questions for questionnaires, one of the rules of thumb has always been "keep it short and simple." This article is the third in a series of lessons learned during cognitive testing of the pilot Knowledge Management Practices Survey. It studies the responses given to long questions, thick questionnaires and too many response boxes.

    Release date: 2002-06-14

  • Articles and reports: 88-003-X20020026369
    Description:

    Eliminating the "neutral" response in an opinion question not only encourages the respondent to choose a side, it gently persuades respondents to read the question. Learn how we used this technique to our advantage in the Knowledge Management Practices Survey, 2001.

    Release date: 2002-06-14

  • Articles and reports: 12-001-X20010026095
    Description:

    In this paper, we discuss the application of the bootstrap with a re-imputation step to capture the imputation variance (Shao and Sitter 1996) in stratified multistage sampling. We propose a modified bootstrap that does not require rescaling so that Shao and Sitter's procedure can be applied to the case where random imputation is applied and the first-stage stratum sample sizes are very small. This provides a unified method that works irrespective of the imputation method (random or nonrandom), the stratum size (small or large), the type of estimator (smooth or nonsmooth), or the type of problem (variance estimation or sampling distribution estimation). In addition, we discuss the proper Monte Carlo approximation to the bootstrap variance when using re-imputation together with resampling methods. In this setting, more care is needed than is typical. Similar results are obtained for the method of balanced repeated replications, which is often used in surveys and can be viewed as an analytic approximation to the bootstrap. Finally, some simulation results are presented to study the finite sample properties of various variance estimators for imputed data. (A bare-bones sketch of the re-imputation idea follows this entry.)

    Release date: 2002-02-28
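
    A bare-bones sketch of the general re-imputation principle: each bootstrap replicate re-draws the random imputations instead of treating imputed values as observed data, so the imputation variance enters the replicate spread. This simplified version omits the design-based rescaling and the stratified multistage structure that the paper actually addresses.

        import numpy as np

        def mean_with_random_imputation(y, observed, rng):
            """Fill missing items by random donor imputation, then average."""
            donors = y[observed]
            filled = y.copy()
            filled[~observed] = rng.choice(donors, size=(~observed).sum())
            return filled.mean()

        def bootstrap_variance(y, observed, B=1000, seed=1):
            rng = np.random.default_rng(seed)
            n = len(y)
            reps = np.empty(B)
            for b in range(B):
                idx = rng.integers(0, n, n)   # resample units with replacement
                reps[b] = mean_with_random_imputation(y[idx], observed[idx], rng)
            return reps.var(ddof=1)

        rng = np.random.default_rng(0)
        y = rng.normal(10.0, 2.0, 200)
        observed = rng.random(200) < 0.8      # roughly 20% item non-response
        print(bootstrap_variance(y, observed))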

  • Articles and reports: 12-001-X20010026092
    Description:

    To augment the amount of available information, data from different sources are increasingly being combined. These databases are often combined using record linkage methods. When there is no unique identifier, a probabilistic linkage is used. In that case, a record on a first file is associated with a probability that it is linked to a record on a second file, and then a decision is taken on whether a possible link is a true link or not. This usually requires a non-negligible amount of manual resolution. It might then be legitimate to evaluate whether manual resolution can be reduced or even eliminated. This issue is addressed in this paper, where one tries to produce an estimate of a total (or a mean) of one population when using a sample selected from another population linked somehow to the first population. In other words, having two populations linked through probabilistic record linkage, we try to avoid any decision concerning the validity of links and still be able to produce an unbiased estimate for a total of one of the two populations. To achieve this goal, we suggest the use of the Generalised Weight Share Method (GWSM) described by Lavallée (1995).

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010026097
    Description:

    A compositional time series is defined as a multivariate time series in which each of the series has values bounded between zero and one and the sum of the series equals one at each time point. Data with such characteristics are observed in repeated surveys when a survey variable has a multinomial response but interest lies in the proportion of units classified in each of its categories. In this case, the survey estimates are proportions of a whole subject to a unity-sum constraint. In this paper we employ a state space approach for modelling compositional time series from repeated surveys taking into account the sampling errors. The additive logistic transformation is used in order to guarantee predictions and signal estimates bounded between zero and one which satisfy the unity-sum constraint. The method is applied to compositional data from the Brazilian Labour Force Survey. Estimates of the vector of proportions and the unemployment rate are obtained. In addition, the structural components of the signal vector, such as the seasonals and the trends, are produced. (A minimal sketch of the additive logistic transformation follows this entry.)

    Release date: 2002-02-28
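
    A minimal sketch of the additive logistic transformation on its own: proportions are mapped to unconstrained log-ratios for modelling, and the inverse map returns values in (0, 1) that sum to one by construction. The state space model and the sampling-error treatment from the paper are not shown, and the example composition is invented.

        import numpy as np

        def alr(p):
            """Map a k-part composition (sums to 1) to k-1 log-ratios."""
            p = np.asarray(p, dtype=float)
            return np.log(p[:-1] / p[-1])

        def alr_inverse(z):
            """Inverse map; the output is always a valid composition."""
            e = np.exp(np.append(z, 0.0))
            return e / e.sum()

        p = np.array([0.55, 0.30, 0.15])  # e.g. employed/unemployed/inactive
        z = alr(p)                        # model or forecast in this space
        print(alr_inverse(z))             # recovers [0.55, 0.30, 0.15]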

  • Articles and reports: 12-001-X20010026096
    Description:

    Local polynomial regression methods are put forward to aid in exploratory data analysis for large-scale surveys. The proposed method relies on binning the data on the x-variable and calculating the appropriate survey estimates for the mean of the y-values at each bin. When binning on x has been carried out to the precision of the recorded data, the method is the same as applying the survey weights to the standard criterion for obtaining local polynomial regression estimates. The alternative of using classical polynomial regression is also considered, and a criterion is proposed to decide whether the nonparametric approach to modelling should be preferred over the classical approach. Illustrative examples are given from the 1990 Ontario Health Survey.

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010026093
    Description:

    This paper presents weighting procedures that combine information from multiple panels of a repeated panel household survey for cross-sectional estimation. The dynamic character of a repeated panel survey is discussed in relation to estimation of population parameters at any wave of the survey. A repeated panel survey with overlapping panels is described as a special type of multiple frame survey, with the frames of the panels forming a time sequence. The paper proposes weighting strategies suitable for various multiple panel survey situations. The proposed weighting schemes involve an adjustment of weights in domains of the combined panel sample that represent identical time periods covered by the individual panels. A weight adjustment procedure that deals with changes in the panels over time is discussed. The integration of the various weight adjustments required for cross-sectional estimation in a repeated panel household survey is also discussed.

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010026090
    Description:

    The number of calls in a telephone survey is used as an indicator of how difficult an intended respondent is to reach. This permits a probabilistic division of the non-respondents into non-susceptibles (those who will always refuse to respond), and the susceptible non-respondents (those who were not available to respond) in a model of the non-response. Further, it permits stochastic estimation of the views of the latter group and an evaluation of whether the non-response is ignorable for inference about the dependent variable. These ideas are implemented on the data from a survey in Metropolitan Toronto of attitudes toward smoking in the workplace. Using a Bayesian model, the posterior distribution of the model parameters is sampled by Markov Chain Monte Carlo methods. The results reveal that the non-response is not ignorable and those who do not respond are twice as likely to favor unrestricted smoking in the workplace as are those who do.

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010026094
    Description:

    This article reviews the methods that may be used to produce direct estimates for small areas, including stratification and oversampling, and forms of dual-frame estimation.

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010026091
    Description:

    The theory of double sampling is usually presented under the assumption that one of the samples is nested within the other. This type of sampling is called two-phase sampling. The first-phase sample provides auxiliary information (x) that is relatively inexpensive to obtain, whereas the second-phase sample can be used to improve the estimate using a difference, ratio or regression estimator, or to draw a sub-sample of non-respondent units. However, it is not necessary for one of the samples to be nested in the other or selected from the same frame. The case of non-nested double sampling is dealt with in passing in the classical works on sampling (Des Raj 1968; Cochran 1977). This method is now used in several national statistical agencies. This paper consolidates double sampling by presenting it in a unified manner. Several examples of surveys used at Statistics Canada illustrate this unification. (A small numerical sketch of the two-phase ratio estimator follows this entry.)

    Release date: 2002-02-28
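
    For the nested case, the classical ratio estimator under two-phase sampling can be shown in a few lines. This is a textbook-style illustration with invented numbers, not an example from the paper.

        def two_phase_ratio_mean(xbar_phase1, x2, y2):
            """ybar_ratio = ybar_2 * (xbar_1 / xbar_2): the cheap, large
            first phase supplies xbar_1; the small second phase measures
            both x and y."""
            xbar2 = sum(x2) / len(x2)
            ybar2 = sum(y2) / len(y2)
            return ybar2 * (xbar_phase1 / xbar2)

        # Phase 1 (n = 1,000) yields xbar_1 = 52.0; a second-phase subsample
        # measures x and y (four illustrative pairs shown):
        x2 = [50.0, 54.0, 52.5, 49.5]
        y2 = [101.0, 109.0, 104.0, 98.0]
        print(two_phase_ratio_mean(52.0, x2, y2))  # ~104.0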

  • Articles and reports: 12-001-X20010026089
    Description:

    Telephone surveys are a convenient and efficient method of data collection. Bias may be introduced into population estimates, however, by the exclusion of non-telephone households from these surveys. Data from the U.S. Federal Communications Commission (FCC) indicate that five and a half to six percent of American households are without phone service at any given time. The bias introduced can be significant, since non-telephone households may differ from telephone households in ways that are not adequately handled by post-stratification. Many households, called "transients," move in and out of the telephone population during the year, sometimes for economic reasons or because of relocation. The transient telephone population may be representative of the non-telephone population in general, since its members have recently been in the non-telephone population.

    Release date: 2002-02-28

  • Articles and reports: 12-001-X20010029567
    Description:

    "In this Issue" is a column where the Editor briefly presents each paper of the current issue of Survey Methodology. It also sometimes contains information on structural or management changes at the journal.

    Release date: 2002-02-28

Reference (84)

  • Technical products: 11-522-X2001001
    Description:

    Symposium 2001 was the eighteenth in Statistics Canada's series of international symposia on methodological issues. Each year the symposium focuses on a particular theme. In 2001, the theme was: "Achieving Data Quality in a Statistical Agency: a Methodological Perspective".

    Symposium 2001 was held from October 17 to October 19, 2001 in Hull, Quebec and it attracted over 560 people from 21 countries. A total of 83 papers were presented. Aside from translation and formatting, the papers, as submitted by the authors, have been reproduced in these proceedings.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016297
    Description:

    This paper discusses in detail issues dealing with the technical aspects in designing and conducting surveys. It is intended for an audience of survey methodologists. The Danish National Institute of Social Research is an independent institution under the Ministry of Social Affairs. The Institute carries out surveys on social issues on encompassing a broad range of subjects. The Sustainable Financing Initiative Survey (SFI-SURVEY) is an economically independent section within the institute. SFI-SURVEY carries out scientific surveys both for the Institute, for other public organizations, and for the private sector as well. The SFI-SURVEY interviewer body has 450 interviewers spread throughout Denmark. There are five supervisors, each with a regional office, who are in contact with the interviewer body. On a yearly basis, SFI-SURVEY conducts 40 surveys. The average sample size (gross) is 1,000 persons. The average response rate is 75%. Since January 1999, the following information about the surveys have been recorded: · Type of method used (face-to-face or telephone) · Length of questionnaire (interviewing time in minutes) · Whether or not a folder was sent to the respondents in advance · Whether or not an interviewer instruction meeting was given · Number of interviews per interviewer per week · Whether or not the subject of the survey was of interest to the respondents · Interviewing month · Target group (random selection of the total population or special groups)

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016285
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The three papers presented in this session offer excellent insight into the issues concerning the quality of hospital morbidity data. Richards, Brown, and Homan sampled hospital records to evaluate administrative data in Canada; Hargreaves sampled persons in hospitals to evaluate administrative data in Australia; and McLemoreand Pokras describe the quality assurance practices of an ongoing sample survey of hospital records in the United States. Each paper is discussed, along with the issues and challenges for the future.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016289
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Increasing demand for electronic reporting in establishment surveys has placed additional emphasis on incorporating usability into electronic forms. We are just beginning to understand the implications surrounding electronic forms design. Cognitive interviewing and usability testing are analogous in that both types of testing have similar goals: to build an end instrument (paper or electronic) that reduces both respondent burden and measurement error. Cognitive testing has greatly influenced paper forms design and can also be applied towards the development of electronic forms. Usability testing expands on existing cognitive testing methodology to include examination of the interaction between the respondent and the electronic form.

    The upcoming U.S. 2002 Economic Census will offer businesses the ability to report information using electronic forms. The U.S. Census Bureau is creating an electronic forms style guide outlining the design standards to be used in electronic form creation. The style guide's design standards are based on usability principles, usability and cognitive test results, and Graphical User Interface standards. This paper highlights the major electronic forms design issues raised during the preparation of the style guide and describes how usability testing and cognitive interviewing resolved these issues.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016280
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Survey response rates serve as one key measure of the quality of a data set. However, they are only useful to a statistical agency in the evaluation of ongoing data collections if they are based on a predefined set of formulas and definitions that are uniformly applied across all data collections.

    In anticipation of a revision of the current National Center for Education Statistics (NCES) statistical standards, several agency-wide audits of statistical practices were undertaken in the late 1990s. In particular, a compendium documenting major survey design parameters of NCES surveys was drafted. Related to this, NCES conducted a targeted audit of the consistency in response rate calculations across these surveys.

    Although NCES has had written statistical standards since 1988, the audit of the reported response rates from 50 survey components in 14 NCES surveys revealed considerable variability in procedures used to calculate response rates. During the course of the response rate audit, the Statistical Standards Program staff concluded that the organization of the 1992 Standards made it difficult to find all of the information associated with response rates in the standards. In fact, there are references to response rate in a number of separate standards scattered throughout the 1992 Statistical Standards.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016278
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The first round of quality reporting on the statistics produced by Eurostat is almost complete. This paper presents the experiences to date and, in particular, some of the methodological problems encountered in measuring the quality of statistics produced for international comparisons. A proposal is also presented for indicators that summarize the detailed information provided in these quality reports. Two sets of indicators are discussed: the first more producer-oriented, the second more user-oriented.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016260
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Canadian Vehicle Survey (CVS), which began in 1999, is designed to collect information about the usage of motor vehicles registered in Canada. The CVS target population includes all on-road vehicles (except special equipment, trailers and motorcycles) registered in Canada. A sample of vehicles is drawn each quarter and a seven-day trip log is used to gather detailed vehicle usage patterns. The log includes questions on kilometres driven, number of passengers, vehicle characteristics, trip purpose and travel times, driver and passenger demographics and fuel usage. Since this is a voluntary survey and the log takes seven days to complete, every effort is made to ensure a good response rate and prevent response errors. The first part of this paper describes the current survey design, data collection, and editing and imputation methodology. Then it goes on to explain the challenges associated with the different steps of the survey. Finally, findings from the research carried out to minimize the effects of non-sampling errors are presented.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016254
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    At Statistics Netherlands, the design and organization of the statistical process is changing rapidly, motivated by the need to produce more consistent data and to cut down the response burden. The ideas behind the new production process are the integration of all survey and administrative data into a limited number of micro-databases and the development of an estimation strategy for those databases.

    This paper presents an initial proposal for an estimation strategy for each micro-database. The proposed strategy ensures that all estimated m-way tables are numerically consistent with respect to common margins, even if the tables are estimated from different surveys. Although still based on the calibration principle, the strategy is not necessarily centred on a fixed set of weights per survey. Its practicability is tested by means of a fictitious example.
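
    For readers unfamiliar with the calibration principle mentioned above, the sketch below shows iterative proportional fitting (raking), one standard way of forcing an estimated two-way table to agree with fixed common margins. It illustrates the general idea only, not the specific strategy proposed in the paper:

    ```python
    import numpy as np

    def rake(table, row_margins, col_margins, tol=1e-10, max_iter=1000):
        """Scale a two-way table until its row and column sums match the
        target margins (iterative proportional fitting)."""
        t = table.astype(float).copy()
        for _ in range(max_iter):
            t *= (row_margins / t.sum(axis=1))[:, None]  # match row sums
            t *= (col_margins / t.sum(axis=0))[None, :]  # match column sums
            if np.allclose(t.sum(axis=1), row_margins, atol=tol):
                break
        return t

    # A table estimated from one survey, margins fixed from another source:
    est = np.array([[20.0, 30.0], [35.0, 15.0]])
    adj = rake(est, row_margins=np.array([55.0, 45.0]),
               col_margins=np.array([60.0, 40.0]))
    print(adj.sum(axis=1), adj.sum(axis=0))  # both now agree with the margins
    ```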

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016245
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper summarizes recent Australian Bureau of Statistics (ABS) methodological developments and other experiences with electronic data reporting (EDR). It deals particularly with the part of EDR loosely defined as 'e-forms', or screen-based direct collection instruments, where the respondent manually enters all or most of the data. In this context, the paper covers recent ABS experiences and current work, but does not revisit the historical EDR work or cover other developments in Australia outside the ABS.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016269
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In surveys with low response rates, non-response bias can be a major concern. While it is not always possible to measure the actual bias due to non-response, different approaches can help identify its potential sources. At the National Center for Education Statistics (NCES), any survey with a response rate below 70% must include a non-response bias analysis. This paper discusses the different approaches to non-response bias analysis using examples from NCES.
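
    One widely used approach, sketched below with hypothetical variable names, compares respondents with the full selected sample on a characteristic that is available for every sampled unit (a frame variable); the difference is a direct estimate of the non-response bias in that characteristic:

    ```python
    import numpy as np

    def nonresponse_bias(frame_value, responded):
        """Bias of the respondent mean for a variable known for the whole
        sample, e.g. a characteristic taken from the sampling frame."""
        frame_value = np.asarray(frame_value, dtype=float)
        responded = np.asarray(responded, dtype=bool)
        return frame_value[responded].mean() - frame_value.mean()

    # Example: enrolment size is known for all sampled schools
    enrolment = np.array([200, 450, 800, 1200, 300, 650])
    responded = np.array([True, True, False, False, True, True])
    print(nonresponse_bias(enrolment, responded))  # -200: respondents skew small
    ```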

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016293
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper presents the Second Summit of the Americas Regional Education Indicators Project (PRIE), whose basic goal is to develop a set of comparable indicators for the Americas. The project is led by the Ministry of Education of Chile and was developed in response to the countries' need to improve their information systems and statistics, and to construct reliable, relevant indicators to support educational decisions both within individual countries and across the region as a whole. The first part of the paper analyses the importance of statistics and indicators in supporting educational policies and programs, describes the present state of the countries' information and statistics systems, discusses the major problems they face, and reviews their experiences in other education indicator projects or programs, such as the INES Program, the WEI Project, MERCOSUR and CREMIS. The second part examines PRIE's technical co-operation program, its purpose and its implementation, emphasizing how technical co-operation responds to the countries' needs and supports them in filling the gaps in available, reliable data.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016301
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Integrated Metadatabase is a corporate repository of information on each of Statistics Canada's surveys. It stores a description of data sources and methodology, definitions of the concepts and variables measured, and indicators of data quality, and thus provides an effective vehicle for communicating data quality to data users. Its coverage of Statistics Canada's data holdings is exhaustive, the information it provides on data quality complies with the Policy on Informing Users of Data Quality and Methodology, and that information is presented in a consistent and systematic fashion.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016234
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    With the goal of obtaining a complete enumeration of the Canadian agricultural sector, the 2001 Census of Agriculture was conducted using several collection methods. Challenges to the traditional drop-off and mail-back of paper questionnaires in a household-based enumeration led to the adoption of supplemental methods, using newer technologies, to maintain the coverage and content of the census. Overall, this mixed-mode data collection process responds to the critical needs of the census programme at various points. This paper examines these data collection methods, several assessments of their quality, and the future challenge of reconciling the methods' individual approaches to data quality into a co-ordinated view.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016233
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In January 2000, the data collection method of the Finnish Consumer Survey was changed from a Labour Force Survey panel design to an independent survey. All interviews are now carried out centrally from Statistics Finland's Computer Assisted Telephone Interview (CATI) Centre. There have been suggestions that the new survey mode has influenced respondents' answers. With the help of a pilot survey, this paper analyses the extent of the apparent changes in the results of the Finnish Consumer Survey; it also examines the interviewer's role in the data collection process. The analysis is based on cross-tabulations, chi-square tests and multinomial logit models. It shows that the new survey method produces more optimistic estimates and expectations concerning economic matters than the old method did.
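
    As a minimal illustration of the kind of cross-tabulation test involved (the counts are invented), a chi-square test can check whether the distribution of answers depends on the collection mode:

    ```python
    from scipy.stats import chi2_contingency

    # Rows: survey mode; columns: answer categories (invented counts)
    observed = [[120, 300, 180],   # old panel mode: pessimistic/neutral/optimistic
                [ 90, 280, 230]]   # new CATI mode
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
    # A small p-value suggests the answers depend on the survey mode.
    ```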

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016265
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Several key strategies contributed to the success of the United States' Census 2000. This paper describes the strategy for the Address Building Process, which incorporated numerous address lists and update activities. The Field Interview Process created close to 900,000 jobs that needed to be filled; two key strategies for filling these positions are also described. The Formal Quality Control Process established principles to guide the quality assurance (QA) programs, which are presented along with examples of their implementation. The Coverage Measurement and Correction Process was used to increase census accuracy through statistical methods: the steps taken to ensure the accuracy and quality of the Accuracy and Coverage Evaluation (A.C.E.) are described, and the preliminary estimates of the undercount are reported.
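
    The coverage measurement described rests on capture-recapture logic. The sketch below, with invented numbers, shows the dual-system estimator that underlies undercount estimates of this kind; it illustrates the general method only, not the Census Bureau's production system:

    ```python
    def dual_system_estimate(census_count, survey_count, matched):
        """Capture-recapture estimate of the true count:
        N_hat = (census * survey) / matches."""
        return census_count * survey_count / matched

    census = 9_500   # persons counted by the census in a block cluster
    survey = 1_000   # persons found by the independent coverage survey
    matched = 930    # persons found by both
    n_hat = dual_system_estimate(census, survey, matched)
    print(round(n_hat), round(n_hat - census))  # estimated total, net undercount
    ```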

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016286
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    It is customary for statistical agencies to audit tables containing suppressed cells in order to ensure sufficient protection against inadvertent disclosure of sensitive information. If the table contains rounded values, the audit procedure may ignore this fact, and the oversight can result in over-protection, reducing the utility of the published data. This paper provides a correct audit formulation and gives examples of over-protection.
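
    The effect of ignoring rounding can be made concrete with a small sketch (illustrative arithmetic, not the paper's formulation). When a published total is rounded to base b, the true total may lie anywhere in the rounding interval, so the feasibility range of a suppressed cell is wider than an exact-value audit assumes; treating the rounded total as exact makes the cell look fully disclosed and triggers unnecessary extra suppression:

    ```python
    def suppressed_cell_range(rounded_total, published_cells, base):
        """Feasibility interval for one suppressed cell in a row whose total
        was rounded to the nearest multiple of `base` (base=0: treat as exact)."""
        lo_total = rounded_total - base / 2
        hi_total = rounded_total + base / 2
        other = sum(published_cells)
        return max(0, lo_total - other), hi_total - other

    # Row: x + 120 + 45 = total; total published as 200, rounded to base 10
    print(suppressed_cell_range(200, [120, 45], base=0))   # (35, 35): "disclosed"
    print(suppressed_cell_range(200, [120, 45], base=10))  # (30, 40): true range
    ```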

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016282
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Discharge Abstract Database (DAD) is one of the key data holdings of the Canadian Institute for Health Information (CIHI). The institute is a national, not-for-profit organization that plays a critical role in the development of Canada's health information system. The DAD contains acute care discharge data from most Canadian hospitals; these data are essential for determining, for example, the number and types of procedures performed and the length of hospital stays. CIHI is conducting the first national data quality study of selected clinical and administrative data from the DAD. The study evaluates and measures the accuracy of the DAD by returning to the original data sources and comparing that information with what exists in the CIHI database, in order to identify any discrepancies and the reasons for them. This paper describes the DAD data quality study and some preliminary findings, briefly compares the findings with those of a similar study, and concludes by discussing the next steps for the study and how the first year's findings are contributing to improvements in the quality of the DAD.
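
    Reabstraction studies of this kind typically report field-level agreement rates between the original database record and the record rebuilt from the source chart; a minimal sketch with hypothetical fields and values:

    ```python
    import pandas as pd

    # Hypothetical records: values originally submitted to the database
    # versus values reabstracted from the hospital chart
    original = pd.DataFrame({"diagnosis": ["I21", "J18", "K35"],
                             "length_of_stay": [5, 7, 3]})
    reabstracted = pd.DataFrame({"diagnosis": ["I21", "J15", "K35"],
                                 "length_of_stay": [5, 7, 4]})

    agreement = (original == reabstracted).mean()  # per-field agreement rate
    print(agreement)  # each field agrees on 2 of 3 records here
    ```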

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016238
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Research programs building on population-based, longitudinal administrative data and record-linkage techniques are found in England, Scotland, the United States (the Mayo Clinic), Western Australia and Canada. These systems can markedly expand both the methodological and the substantive research in health and health care.

    This paper summarizes published Canadian data quality studies regarding registries, hospital discharges, prescription drugs and physician claims. It makes suggestions for improving registries, facilitating record linkage and expanding research into social epidemiology. It also notes new trends in case identification and health status measurement using administrative data, and highlights the differing needs for data quality research in each province.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016242
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    "Remembering Leslie Kish" provides us with a personal view of his many contributions to the international development of statistics. One of the elements that made his contributions so special and effective was the "Kish approach". The characteristic features of this approach include: identifying what is important; formulating and answering practical questions; seeking patterns and frameworks; and above all, persisting in the promotion of good ideas. Areas in which his technical contributions have made the most impact on practical survey work in developing countries have been identified. A unique aspect of Leslie's contribution is the motivation he created for the development of a world-wide community of survey samplers.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016303
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In large-scale surveys, it is almost guaranteed that some level of non-response will occur, and statistical agencies generally use imputation to treat item non-response. A common preliminary step to imputation is the formation of imputation cells. In this article, the formation of these cells is studied using two methods: the first is similar to that of Eltinge and Yansaneh (1997) for weighting cells; the second is the method currently used in the Canadian Labour Force Survey. Using Labour Force Survey data, simulation studies are performed to test the impact of the response rate, the response mechanism, and imposed constraints on the quality of the point estimator under both methods.
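
    As a minimal sketch of the general technique (not of either method studied in the paper), imputation cells can be formed by cross-classifying auxiliary variables and imputing the within-cell mean:

    ```python
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "region":  ["E", "E", "E", "W", "W", "W"],
        "age_grp": ["<40", "<40", "40+", "<40", "<40", "40+"],
        "hours":   [35.0, np.nan, 40.0, 36.0, np.nan, 42.0],
    })

    # Imputation cells = region x age group; fill gaps with the cell mean
    cell_mean = df.groupby(["region", "age_grp"])["hours"].transform("mean")
    df["hours_imputed"] = df["hours"].fillna(cell_mean)
    print(df)
    ```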

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016309
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper proposes a method for estimating simple and correlated measurement variance components when a re-interview is available for a subsample of respondents but the two measurements cannot be considered as collected under the same conditions and are therefore subject to different measurement error variances. This assumption is more realistic when it is impossible in practice to ensure the same measurement conditions in the two interviews, as when operational and budget constraints suggest adopting a different survey mode for the second interview.
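
    The identity underlying such re-interview designs can be illustrated with a small simulation: when measurement errors are independent, the variance of the interview/re-interview difference equals the sum of the two error variances. Splitting it in half, as an equal-conditions design would, misstates both components when the conditions actually differ (illustration only):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    true_value = rng.normal(50, 10, size=5000)
    y1 = true_value + rng.normal(0, 2.0, size=5000)  # first interview, error sd 2
    y2 = true_value + rng.normal(0, 4.0, size=5000)  # re-interview (other mode), sd 4

    d_var = np.var(y1 - y2, ddof=1)
    print(d_var)      # ~20 = 4 + 16, the sum of the two error variances
    print(d_var / 2)  # the equal-variance assumption would report ~10 for each
    ```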

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016250
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper describes the Korea National Statistics Office's (KNSO) experiences in data quality assessment and introduces the strategies of institutionalizing the assessment procedure. This paper starts by briefly describing the definition of quality assessment, quality dimensions and indicators at the national level. It introduces the current situation of the quality assessment process in KNSO and lists the six dimensions of quality that have been identified: relevance, accuracy, timeliness, accessibility, comparability and efficiency. Based on the lessons learned from these experiences, this paper points out three essential elements required in an advanced system of data quality assessment: an objective and independent planning system, a set of appropriate indicators and competent personnel specialized in data quality assessment.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016235
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Police records collected by the Federal Bureau of Investigation (FBI) through the Uniform Crime Reporting (UCR) Program are the leading source of national crime statistics. Recently, audits to correct UCR records have raised concerns about how to handle the errors discovered in these files. Concerns centre on the methodology used to detect errors and the procedures used to correct them once they have been discovered. This paper explores these concerns, focusing on sampling methodology, the establishment of a statistical adjustment factor, and alternative solutions. It distinguishes between sample adjustment and sample estimates of an agency's data, and recommends sample adjustment as the most accurate way of dealing with errors.
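
    A sample-based adjustment of the kind recommended can be sketched as a simple ratio estimator (the names and numbers are hypothetical):

    ```python
    def audit_adjustment_factor(audited_counts, reported_counts):
        """Ratio of audited to reported totals in the audit sample, used to
        correct an agency's full reported count."""
        return sum(audited_counts) / sum(reported_counts)

    # Audit sample of records: counts after correction vs. as originally reported
    factor = audit_adjustment_factor([98, 45, 120], [100, 50, 115])
    adjusted_total = factor * 10_000   # agency's full-year reported total
    print(round(factor, 3), round(adjusted_total))  # 0.992, 9925
    ```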

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016302
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This session provides three more contributions to the continuing discussion of national statistical offices' response to the topic of quality, in particular the subtopic of communicating quality. The three papers share the important and necessary premise that national statistical offices have an obligation to report the limitations of their data; that users should know and understand those limitations; and that, having understood them, users ought to be able to determine whether the data are fit for their purposes.

    Release date: 2002-09-12

  • Technical products: 11-522-X20010016284
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Since 1965, the National Center for Health Statistics has conducted the National Hospital Discharge Survey (NHDS), a national probability sample survey of discharges from non-federal, short-stay and general hospitals. A major aspect of the NHDS redesign in 1988 was to use electronic data from abstracting service organizations and state data systems. This paper presents an overview of the development of the NHDS and the 1988 redesign. Survey methodologies are reviewed in light of the data collection and processing issues arising from the combination of "manually" abstracted data and "automated" data. Methods for assessing the overall quality and accuracy of the NHDS data are discussed for both data collection modes. These methods include procedures to ensure that incoming data meet established standards and that abstracted data are processed and coded according to strict quality control procedures. These procedures are presented in the context of issues and findings from the broader literature about the quality of hospital administrative data sets.

    Release date: 2002-09-12
