This section is organized in subsections that correspond to the main activities of a typical survey. Each subsection follows the same structure, describing the Scope and purpose, Principles, Guidelines and Quality indicators related to its activity, as defined below. The first subsection addresses the stage at which objectives, uses and users are identified. The subsections that follow cover the remaining survey steps roughly in the chronological order in which they would typically take place. There are, however, significant interdependencies between some steps, for example between questionnaire design and the data collection and capture operations. As well, survey steps as discussed here do not always proceed strictly sequentially: some activities, such as frame development, sampling plans and questionnaire design, can take place concurrently, while others, such as data quality evaluation and documentation, touch on most other activities and do not take place as discrete activities on their own.
Under the heading of Scope and purpose, the concepts and key terms of the main activity or survey step are described, and the objective of the step and the reason it is important are stated briefly.
Principles are the broad, underlying policies, directions and approaches which govern the design of the activity in question, with emphasis on those that relate to quality.
Guidelines are known good practices that have evolved in the design and implementation of statistical surveys. Not all of these guidelines can be applied to every survey. They provide checklists to aid survey design. Judgment is still needed in deciding how to weigh the considerations that these Guidelines suggest.
In contrast to guidelines, Statistics Canada has policies that bear on many aspects of statistical activities in the Agency and that may place requirements on the way particular activities are carried out. These are documented separately in the Statistics Canada Policy Manual. Wherever a policy bears on a particular topic covered by these Guidelines, its existence and relevance are indicated.
Quality measures assess the quality of data directly but, in practice, they can rarely be calculated explicitly. In the case of accuracy, for example, nonresponse bias is almost impossible to measure because the characteristics of those who do not respond are difficult to ascertain. Instead, certain information can be provided to help "indicate" quality. Quality indicators usually consist of information that is a by-product of the statistical process; they do not measure quality directly, but they provide enough information to offer valuable insight into it. Included in this section are both quality measures, when they exist, and quality indicators.
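As a concrete illustration of an indicator produced as a by-product of the statistical process, the sketch below computes a weighted response rate, a widely used nonresponse indicator. It is not taken from these Guidelines; the record structure, field names and figures are hypothetical.

```python
# Illustrative sketch only: a weighted response rate as a quality indicator
# for nonresponse. The data structure and values here are hypothetical.

def weighted_response_rate(units):
    """Share of the total design weight accounted for by responding units."""
    total = sum(u["weight"] for u in units)
    responded = sum(u["weight"] for u in units if u["responded"])
    return responded / total if total else 0.0

# Hypothetical sample: each unit carries its design weight and response status.
sample = [
    {"weight": 10.0, "responded": True},
    {"weight": 15.0, "responded": False},
    {"weight": 25.0, "responded": True},
]
print(weighted_response_rate(sample))  # 35.0 / 50.0 = 0.7
```

An indicator like this does not measure nonresponse bias itself, but a low or falling rate signals a greater risk of bias and so offers indirect insight into accuracy.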
Information presented in this section will be useful to methodologists tasked with producing quality measures to accompany statistical outputs. It will also be of interest to survey managers and data users, who can use the indicators to assess and compare the quality of statistical products, and to directors of program areas and production managers, for whom it provides a basis for monitoring performance in terms of the quality of the processes and products in their program area.
Objectives, uses and users
Concepts, variables and classifications
Coverage and frames
Data collection, capture and coding
Use of administrative data
Response and nonresponse
Weighting and estimation
Seasonal adjustment and trend-cycle estimation
Benchmarking and related techniques
Data quality evaluation
Data dissemination and communication
Data analysis and presentation