Evaluation of the review process

In this section we present an evaluation of the review process itself (described in Section 2) and make recommendations for future reviews of this nature. This evaluation is based on discussions with the review teams during the all-day debriefing in early February, a short questionnaire completed by the reviewers, and our own impressions.

Steering Committee
Review teams
Program staff
Schedule
Documents developed for the review
Summary

Steering Committee

The Steering Committee consisted of four Directors General, plus the head of the Quality Secretariat in the Methodology Branch (an Assistant Director) as an assistant-coordinator. The varied experiences brought to the project meshed well and made for an effective team. In particular, the assistant-coordinator’s knowledge of quality assessments and audits in other organizations proved very useful. His assistance in developing materials and in being a day-to-day interface between the review teams and the Steering Committee was especially valuable.

The Committee found that it was necessary to meet on a weekly basis throughout most of the review period, in order to develop a plan, schedule, questionnaire, review templates and other materials, as well as to monitor the work of the review teams. The Steering Committee members were also called on to give numerous presentations and progress reports on the project, both internally and to external advisory groups. If such a review is conducted in future based on the same materials, such frequent meetings may not be required.

Having a Steering Committee member on each of the review teams, while time-consuming, was beneficial in three ways. First, it helped to standardize the process, since Steering Committee members could give guidance to the review teams. Second, it gave the review teams increased legitimacy with the programs by having someone at a senior level present for the interviews. Third, it gave direct exposure and feedback on the programs to the Steering Committee members. This is an important aspect of the approach that should be retained.

Review teams

The ten review team members consisted primarily of Assistant Directors. Given the short amount of time available to organize the review, most of the reviewers were identified and “volunteered” by members of the Steering Committee. Despite this, all of the reviewers were enthusiastic and quickly dedicated themselves to the project. They clearly recognized the importance of the project to the organization and were interested in participating, even though they had to fit this work into their already busy schedules. They expected that this would take a considerable amount of time, and they were correct: by the end of the process they estimated that they had each spent eight full working days on the review for which they were the lead reviewer. When one adds the time spent in meetings and reviewing reports for the other two reviews in which they participated, preparation for and participation in the one-day debriefing session, and subsequent editing of the reports, it is not unreasonable to assume that the average commitment approached the equivalent of three weeks full-time.

Despite some initial uncertainty, the review process – essentially peer review teams conducting a structured exploration of a program – was an amazing learning experience. The reviewers had the breadth of experience and depth of knowledge to effectively engage program managers. The fact that the review teams were not immersed in the programs in question meant they had to ask questions as “outsiders”. The process was iterative and structured, but with enough flexibility to make it meaningful across a diverse set of programs.

Having a fairly large number of interviewers, each working on separate teams, created an atmosphere of learning together and making adjustments “as we go” that offset most of the drawbacks of conducting the reviews simultaneously. It alleviated the pressure on any one individual reviewer and provided a sense of not being alone on this important project.

The review teams had overlapping membership – a feature adopted for expediency that turned into an enormous benefit. The fact that each reviewer saw three programs and that all reviewers periodically sat around the same table was tremendously useful in dealing with difficult issues and potential unevenness in evaluation standards. It also meant that reviewers were learning about best practices and risks in other programs. At the same time, assigning a “lead reviewer” for each program made it clear who was responsible for preparing the final report on each program.

The Steering Committee and review team members met on a monthly basis to report on progress, share experiences and receive further instructions as needed. The reviewers found these meetings quite useful and of appropriate frequency. They particularly valued the direct communication with the Steering Committee members, either at these monthly meetings, the interviews with programs, or on more informal occasions.

Finally, all reviewers were very happy to have participated in this project (see Table 1). All of the reviewers found the project interesting and a good learning experience. Despite the extra workload, almost all were interested in repeating the experience (!) and nine out of ten highly recommended such a project to colleagues. The reviewers also felt that this type of evaluation should be applied to other programs.

Table 1: Responses of the ten reviewers to an evaluation questionnaire
Aspect of project | Not at all | A little | A lot | Totally
Project was interesting | 0 | 0 | 5 | 5
Project was good learning experience | 0 | 0 | 4 | 6
Have a better knowledge of risks that can affect quality | 0 | 1 | 6 | 3
Glad to have participated | 0 | 0 | 4 | 6
Interested again in a year or two | 0 | 2 | 6 | 2
Would recommend a similar project to colleagues | 0 | 1 | 7 | 2
Should apply similar evaluation to other programs | 0 | 1 | 3 | 6

Based on these results, we highly recommend this type of project as an excellent learning experience for the Assistant Director level (see recommendation 8). Statistics Canada could build on this exercise by recognizing the intense interest of Assistant Directors in comparing QA practices across programs, with a view to importing and exporting those practices. Although this was an ad hoc exercise, it demonstrated clearly that these managers recognize the benefits of this type of peer review process and will willingly engage in it.

Program staff

The managers and senior staff of the programs under review also found the review useful. It was an opportunity to take stock and identify real risks. Some long-standing practices received kudos; other areas of their operations were examined through a different lens. Overall, the programs were very cooperative and took the review seriously. All ten reviewers rated the degree of cooperation as either “a lot” or “totally.”

A first meeting with the Director of the program, the lead interviewer and the assistant-coordinator took place in the last half of October to introduce the review process to the program and to introduce the program to the lead interviewer. The cooperation was such that the discussion often started getting into the actual review and had to be cut off until the full review team was available.

We found that the amount of preparation by the programs varied. Some programs came very well prepared, with well-documented processes, while others were more reactive. The initial meeting with the programs was very useful in equalizing this by letting programs know what was expected of them. For future reviews of this type, it would be useful to spend more time defining what is expected from programs in terms of documentation. Examples from the individual program reports can serve as a useful guide in this respect.

The initial meeting was also very valuable in collecting enough information about the processes that we were able to categorize the activities for all programs into the seven standard steps listed in Section 2. This served as an excellent way of standardizing the remaining steps in the review (e.g., interviews, report writing) and could probably be applied to future reviews without change.

One of the limitations of the exercise was that the review teams did not have time to speak to the various service areas involved in supporting these programs. The conclusions of the reports are based primarily on information and documentation obtained during a series of interviews with the management of the programs. The one exception is the support provided by the Dissemination Division and the Communications and Library Services Division; their role was judged important enough to warrant an interview. In future, if time permits, the interviews for a specific program should extend to the other service areas that support the programs, such as Methodology, Systems Development, Business Register and Collection.

Recommendation 26: Future reviews should include direct interviews with all significant service areas, time permitting.

Schedule

To plan the project, the Steering Committee developed a simple schedule of activities (see Table 2). With the exception of the interview phase, the project was generally successful in sticking to the schedule. The interview stage took about four weeks longer than initially anticipated, and a one-month extension for the project had to be sought from Policy Committee.

Table 2: Planned and actual schedule
Milestone | Planned date | Actual date
Presentation of initial plan to Policy Committee | September 15 | September 15
Recruitment and briefing session for reviewers | October 11 | October 11
Questionnaire design completed | October 18 | October 25
Interviews with program managers completed | November 30 | December 22 (note 1)
Assessment reports drafted and vetted with programs | December 22 | February 7
Debriefing session with reviewers | Early January | February 7
Summary report, presentation to Policy Committee | End January | February 28, March 21 (note 2)
Note 1: In a small number of cases the interviews went into early January.
Note 2: Presentation to Policy Committee was constrained by its availability and could have been a week earlier.

There were two reasons why the interviews took longer than expected. First, more interview meetings were needed than anticipated: on average, each review required 3.2 meetings (not counting the initial meeting), for a total average of 6.1 hours per review, rather than the single two-hour meeting originally planned.

The second reason was that the programs under review are monthly or quarterly production operations and therefore had certain periods when they were simply not available to meet with the review teams. Seven of the ten reviewers reported difficulties in scheduling meetings around the programs’ production schedules.

The interviews also ran into the pre-Christmas holiday period and in a few cases into January. The majority of reviewers mentioned that the holiday break hindered the momentum of the review and a few felt that it had a serious negative effect on the schedule. Overall, half of the reviewers felt that the project deadlines were at least somewhat unreasonable.

In addition to the interviews, finalizing the reports also took somewhat longer than planned. There was no explicit step built into the schedule for the Steering Committee members to review and comment on the reports, as there should have been. When the Steering Committee members did review the reports, we realized that some needed to be reworked: in some cases too much focus had been placed on one aspect of the program, and in others the authors had not provided the arguments needed to support their conclusions and recommendations. In future reviews, approximately three weeks should be built into the schedule for this step. Now that we have some good examples of reports, we will be better prepared to specify what is wanted and to provide examples.

Table 3 shows a breakdown of the average number of hours spent by each lead reviewer on their program. The time estimates are approximate, based on self-reporting by the reviewers, so they should probably be interpreted only in relative terms. Somewhat surprisingly, additional research on the program took almost twice as long as the interviews themselves: the lead reviewers spent an average of almost 11 hours researching and consulting other materials to complete their reports. The most relevant documents were those provided by the program, the integrated program reports, the Integrated Metadata Base record and official releases by the program. The reviewers also had to contact the programs on occasion to clarify points discussed in the interviews. As expected, drafting the report was the most time-consuming activity.

Table 3: Time taken for selected major activities (hours per lead reviewer)
Activity | Hours
Interviews with programs (excluding introductory meeting) | 6.1
Conducting additional research on the program | 10.9
Writing first draft of report | 21.8
Vetting and finalization of report | 4.6
Review and commenting on other reports | 6.9

Documents developed for the review

The Steering Committee developed a number of documents for the review process. The purpose of these was to standardize the review process as much as possible across the nine programs plus the DCD program, while maintaining sufficient flexibility for the review teams to adapt their review to the specifics of the programs.

Aside from the schedule (described in Section 6.4), the main materials were an initial briefing to the review teams, the questionnaire, the report template, and an agenda and instructions for the one-day debriefing session. The briefing seemed to work well, as nine of the ten reviewers found the mandate clear. Our impression is that the one-day debriefing session also went well and was appreciated as a good way to wrap up the reviewers’ involvement.

The questionnaire (see Appendix 2) was a semi-structured document designed to suggest lines of questioning or prompts during the interviews. It was generally successful, although it became clear during the interviews that it contained much more detail than could be covered in a few meetings. The reviewers thus had to do considerable additional research to complete their reports. Nevertheless, all of the reviewers reported that the materials provided were “a lot” or “totally” useful, and nine of them said the same about the materials being sufficient.

The report template (see Appendix 3) was designed to standardize the format of the reports. It provided a general outline, but did not specify particular charts or tables that should be included. While several of the reports did follow the format, others did not. The template came somewhat late in the process, after some of the reviewers may already have started to formulate the outlines of their reports. In other cases (e.g., DCD), the content simply did not lend itself to the format we had developed.

A number of the reviewers came up with excellent ways of summarizing information in the form of graphics and summary tables. We would recommend that future reviews examine these reports and try to develop a more detailed template that would improve the standardization of the reports.

Summary

In our view, three key factors made the project a success. The most important was having a knowledgeable, experienced and dedicated group of reviewers at our disposal; we highly recommend that the Assistant Director level be used for future reviews of this type. The second factor was the excellent cooperation from the programs under review, which viewed the exercise as an opportunity to learn. The third was good communication among all the players: the Steering Committee, the review teams, the programs, senior management, and advisory groups.