Objectives, scope and organization of the review

The nine programs chosen as the subject of the review are the following:

  • Monthly Consumer Price Index (CPI)
  • Monthly Labour Force Survey (LFS)
  • Monthly GDP by Industry (GDP)
  • Monthly Retail Trade Survey (MRTS)
  • Monthly Survey of Manufacturing (MSM)
  • Monthly International Trade (IT)
  • Quarterly Income and Expenditure Accounts (IEA)
  • Quarterly Balance of Payments (BOP)
  • Quarterly Labour Productivity (LP)

These programs were selected not because they were believed to be particularly prone to errors, but because they represent the most “mission critical” sub-annual programs of Statistics Canada, providing current information on economic conditions. The quality of these “key indicators” is therefore of the utmost importance to a wide range of users.

There was considerable variety in the characteristics of the programs reviewed. Six of the programs are monthly, while the other three are quarterly. Four of the programs (Consumer Price Index, Labour Force Survey, Retail Trade Survey and Monthly Survey of Manufacturing) are survey-based programs, one (International Trade) is an administrative records-based program, three (Monthly GDP, Income and Expenditure Accounts and Labour Productivity) are derived statistics programs, and Balance of Payments incorporates characteristics of both surveys and derived statistics. This variety made the review challenging, but it also meant that the results of the review would have wide applicability.

To oversee the review, a Steering Committee was formed, consisting of the Director General (DG) of the Methodology Branch, who chaired the Committee, the DG of Economy-wide Statistics, the DG of the System of National Accounts, and the DG of Labour and Household Surveys. The Steering Committee invited the head of the Quality Secretariat, also located in the Methodology Branch, to join the Committee as an assistant-coordinator. The Steering Committee met weekly to design and monitor the review.

With direction from Policy Committee, the Steering Committee established its objectives and developed an approach for reviewing these programs within the very tight schedule prescribed. The specific objectives of the review were twofold:

  1. to identify any specific weaknesses where action is needed, and
  2. to identify “best practices” that should be promoted to other programs.

This balanced approach, looking at both positive and negative aspects, was a key principle of our work.

The focus of the review was on the implementation of the programs, not on their design. We were specifically interested in factors affecting the accuracy of the data, rather than in the other five dimensions of quality (relevance, timeliness, coherence, interpretability and accessibility), except where these affect the risk of producing inaccurate results. Particular attention was paid to the certification of outputs, regarded as the last check on the accuracy of the information, and to the data release process, through which this information is communicated to the public.

Ten managers, primarily at the Assistant Director level, were recruited from across the Bureau to form the review teams. One team was formed for each of the nine programs listed above, with a separate team formed to review the data release activities of the Dissemination Division and Communications and Library Services Division (hereafter referred to as DCD). With the exception of the latter team, each team consisted of three managers, so that each manager was involved in the review of three different programs. This interlocking assignment of reviewers to review teams was designed specifically to expose each reviewer to several programs and thereby achieve a more uniform approach. For the same reason, a member of the Steering Committee was assigned to be a member of each review team. For the DCD review, the team consisted of an Assistant Director and all of the members of the Steering Committee.

A lead reviewer (not one of the Steering Committee members) was assigned for each program and was generally responsible for organizing the review, collating all of the materials and preparing a report on the program. To avoid any conflict of interest, care was taken to ensure that the lead reviewer did not work in the program being reviewed. In some cases, another team member who was familiar with a specific program was assigned to that review team in order to provide insight into the program. Appendix 1 shows a complete list of the teams for each program.

During the review process, all the review team members and Steering Committee members met every month to review progress and to share findings. This also helped to keep the reviews on track and fostered a uniform approach.

The Steering Committee developed a semi-structured questionnaire to be used in the main interviews with the programs (see Appendix 2). The questionnaire was not designed to be self-completed by the programs; rather, it was intended to provide the review teams and the programs with suggested lines of questioning, without being so structured as to confine the review teams to asking pre-defined questions. The questionnaire had seven parts:

  1. information on the various steps in the production process and the process flows;
  2. for each step, the kinds of checks in place, the indicators of quality, and the risk factors;
  3. data certification, for example what data confrontation or internal checks are done to confirm that the data are accurate;
  4. the release process and the checks in place to ensure that no errors are introduced at this step;
  5. the program’s experiences (if any) where incorrect information had been released in the past, the reasons, and what had been done since then to prevent errors in the future;
  6. factors that might cause checks to fail more frequently; and
  7. how changes to the program, whether internal or imposed from outside, are managed.

The questionnaire was sent to the programs in advance of the first meeting to familiarize them with the content of the review. A slightly modified questionnaire (not included in this report) was developed for the Dissemination and Communications program.

The first meeting with each program was intended to explain the review process and to obtain information and documentation on the program’s operations (i.e., part 1 of the questionnaire). The meeting was normally with the assistant-coordinator and the lead reviewer, and the Director and/or Assistant Director for the program. Based on the results of these initial meetings, the Steering Committee developed a list of seven standard steps that were common to all programs. These were:

  1. Preparation for certification
  2. Data collection
  3. Editing/transformation of data
  4. Imputation and estimation
  5. Certification
  6. Release
  7. Post-release

While the details of these steps varied, the activities themselves were common to all nine programs, and this framework proved to be a very useful way of organizing the remainder of the review.

Following the development of this framework, a series of meetings was held between the full review team and the program to cover the remainder of the topics in the questionnaire. The program was typically represented by the Director, the Assistant Director and/or the production manager. Although the initial estimate was that one two-hour meeting would suffice, it quickly became evident that at least three such meetings would be required to cover all the material. Because the meetings had to be scheduled around the production schedules of these monthly and quarterly programs, it was often difficult to find meeting times, and as a result the interviews took about four weeks longer to complete than originally planned.

While the review teams were conducting the interviews in November and December 2006, the Steering Committee developed a standard format for the reports on each program (see Appendix 3). Following a standard introduction describing the background to the review, each report was to describe the program, enumerate the various quality assurance checks at each of the seven standardized steps in the process, provide a summary, describe other considerations (optional) and finish with Appendices. As a guideline, the Steering Committee suggested to the review teams that each report should be approximately 15 pages, excluding any Appendices.

Once each team had drafted its report, the report was reviewed with the management of the program. The purpose of this review was not to ensure that program management agreed with all of the findings and recommendations, but to confirm that the review team had accurately reflected what it had been told about the program. Following this process, the reports were finalized and passed to the Steering Committee. The Steering Committee reviewed the reports and in some cases asked for further follow-up or clarification.

The final step of the review teams’ involvement was an all-day debriefing session in early February, where each team presented a summary of its findings and answered questions from the Steering Committee and the other reviewers. As well, the review process was discussed and suggestions were solicited for improvements to future reviews of this type. The Steering Committee then used the reports and the all-day debriefing session to prepare this Summary Report.