Review of the July 2014 Labour Force Survey release

The original version was signed by Claude Julien, Director General, Methodology Branch, and Executive responsible for the Quality Secretariat, and Craig Kuntz, Director General, Economy-wide Statistics Branch.

Introduction

This review was initiated by the Chief Statistician of Canada in response to an error in Canada's Labour Force Survey (LFS) release for the reference period of July 2014. The objective of this report is to determine what happened, why it occurred, why it was not caught in the quality assurance process, and what should be done to mitigate the risk of such errors in the future.

The review was conducted by Claude Julien, Director General of Methodology Branch and Executive responsible for the Quality Secretariat, and Craig Kuntz, Director General of the Economy-wide Statistics Branch.

Methodology

To understand what happened, why it occurred, and why the error was not identified sooner in the process, interviews were conducted with LFS employees and the management team, as well as with the systems and methodology teams assigned to support the LFS. The review also included an analysis of the LFS results, supporting documentation and operations diagnostics.

Background

The LFS is a key economic indicator. Every month, the LFS collects labour force data from a random sample of 56,000 households. The sample consists of six panels of dwellings, and each panel is contacted for six consecutive months, or survey occasions. Every month, the oldest panel is dropped and a new panel is introduced. Hence, two consecutive survey occasions have five panels in common.
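
To illustrate the rotation, the following minimal sketch simulates a six-panel design; the panel numbering and starting point are invented and do not correspond to actual LFS identifiers.

```python
# A minimal sketch of the six-panel rotation described above. Panel
# numbering is illustrative, not an actual LFS identifier scheme.
from collections import deque

def simulate_rotation(n_occasions: int) -> None:
    """Print the panels in sample for each survey occasion."""
    panels = deque(range(1, 7))    # six panels currently in sample
    next_panel = 7
    for occasion in range(1, n_occasions + 1):
        print(f"Occasion {occasion}: panels {sorted(panels)}")
        panels.popleft()           # drop the panel that has completed six occasions
        panels.append(next_panel)  # introduce a newly selected panel
        next_panel += 1

simulate_rotation(3)
# Occasion 1: panels [1, 2, 3, 4, 5, 6]
# Occasion 2: panels [2, 3, 4, 5, 6, 7]
# Occasion 3: panels [3, 4, 5, 6, 7, 8]
# Any two consecutive occasions share five of their six panels.
```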

Like all surveys, the LFS must deal with incomplete or missing data due to a certain percentage of non-response. One of the methods used to deal with this non-response is imputation: using the various data available, plausible values are determined to "fill in" for missing data. Having five panels in common between two survey occasions allows the LFS to use the previous month's responses to check the validity of the current month's responses and to determine the most plausible values for any missing data due to non-response. The previous month's responses used in imputation include demographic characteristics (e.g., age, sex) and labour force status (employed, unemployed, out of the labour force). Given the known dynamics of the labour market, a person who reported being employed the previous month and who does not provide an update to his or her labour force status in the current month is likely to be imputed by the processing system as still being employed. At the same time, to respect those dynamics, the processing system will impute some previously employed individuals as either unemployed or out of the labour force.
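
To make the mechanism concrete, the following deliberately simplified sketch imputes a current-month status from the previous month's reported status. The transition probabilities are invented for illustration only; the actual LFS imputation is more sophisticated and draws on many more characteristics.

```python
# A simplified sketch of month-over-month status imputation. The transition
# probabilities below are invented; they merely mimic the idea that a
# previously employed non-respondent is usually, but not always, imputed
# as still employed.
import random

TRANSITIONS = {  # hypothetical P(current status | previous status)
    "employed": [("employed", 0.94), ("unemployed", 0.03), ("out of labour force", 0.03)],
    "unemployed": [("unemployed", 0.55), ("employed", 0.25), ("out of labour force", 0.20)],
    "out of labour force": [("out of labour force", 0.90), ("employed", 0.06), ("unemployed", 0.04)],
}

def impute_status(previous_status: str, rng: random.Random) -> str:
    """Draw a plausible current status given last month's reported status."""
    statuses, weights = zip(*TRANSITIONS[previous_status])
    return rng.choices(statuses, weights=weights, k=1)[0]

rng = random.Random(2014)
print([impute_status("employed", rng) for _ in range(10)])
# Mostly "employed", with occasional transitions to the other two statuses.
```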

Like all programs at Statistics Canada, the LFS is carried out in accordance with the agency's Quality Assurance Framework and the best practices suggested in its Quality Guidelines. In the spring of 2014, a performance audit conducted by the Office of the Auditor General of Canada on the LFS and three other key programs concluded that "Statistics Canada applied its quality assurance framework to ensure the quality of the statistical programs we examined." The Auditor General's recommendations focused on improvements to "ensure the continued relevance of its data products."

To ensure it remains relevant and accurately portrays the Canadian labour market, the LFS undergoes important updates after every census and a redesign every 10 years. Since 2011, the LFS program has been working on a major redesign that will renew the survey infrastructure by adopting a large number of corporate systems and services.

The LFS redesign project is divided into two phases. The first phase includes the rebasing of the estimates to the latest census of population results and redesigning the sample to, among other objectives, further integrate the corporate address register into the LFS. This rebasing sub-project also includes the introduction of new standard occupation codes and a review of the imputation methodology. The second phase of the redesign consists of reviewing the survey content and developing new systems that adopt corporate systems and services to replace the existing LFS infrastructure.

The current LFS systems were originally developed in 1997 and have since been maintained and updated as required. The LFS production environment comprises three main systems: front-end (sample preparation, collection and transmission), processing (reception, coding, editing, imputation, derivation, weighting and reporting), and aggregation and dissemination (tabulation, seasonal adjustment, and preparation of data outputs). The front-end system changes frequently and is tested every month to ensure that it is working properly. The processing system and the aggregation and dissemination system are complex and contain many modules and programs. They undergo infrequent changes and are tested only when changes are made.

In preparation for the first phase of the LFS redesign, the existing aggregation and dissemination system needed to be modified to incorporate a revised dwelling identifier number (DIN), enabling the LFS to further integrate the corporate address register into its sampling frame. The required changes to incorporate the DIN had already been made to the front-end and processing systems as part of the continuous listing project, completed in January 2013. A record layout was changed in four areas of the processing system: a central data dictionary and three modules (imputation, derivation and weighting). According to plan, the aggregation and dissemination system was not changed at that time, as the required changes were viewed as too complex and beyond the scope of the continuous listing project. An interim solution was implemented to ensure that the aggregation and dissemination system continued to function properly. The quality of the LFS was not affected.

In March 2013, a solution was developed to implement the revised DIN in the aggregation and dissemination system in a manner that would reduce the risk of introducing an error in the system. The solution consisted of modifying the layout of the fixed record file, known as the Tabulation Systems File (TABS), by moving one of the variables from the front to the back of the file. This created space for the revised DIN at the front of the file and left the remainder of the file untouched and readable by the aggregation and dissemination system with minimal change. However, the solution required going back and making another modification to the record layouts in the processing system. The project to implement the DIN changes was undertaken as a systems maintenance activity, rather than an activity that was part of the redesign.
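
The mechanics of this solution can be pictured with a small sketch. All field names and widths below are invented; only the underlying idea is taken from the events described above: one leading variable is moved to the end of the record so that the wider revised DIN fits at the front, leaving the positions of the remaining fields unchanged.

```python
# A schematic illustration of the TABS layout change, with invented fields.
# (name, width) pairs, in record order.
OLD_LAYOUT = [("moved_var", 3), ("din", 9), ("lf_status", 1), ("age", 3)]
NEW_LAYOUT = [("din_revised", 12), ("lf_status", 1), ("age", 3), ("moved_var", 3)]

def offsets(layout):
    """Map each field name to its (start, width) in the fixed-width record."""
    result, position = {}, 0
    for name, width in layout:
        result[name] = (position, width)
        position += width
    return result

old, new = offsets(OLD_LAYOUT), offsets(NEW_LAYOUT)

# Fields after the DIN keep their byte positions, so most of the aggregation
# and dissemination system can read the file with minimal change.
assert old["lf_status"] == new["lf_status"] == (12, 1)
assert old["age"] == new["age"] == (13, 3)
```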

Findings

What happened?

To implement the changes to the processing system, the team believed that they only needed to modify the derivation and weighting modules. What they did not realize was that the imputation module had a reference to the TABS file record layout.

The changes and associated testing were limited to what were believed to be the relevant modules (derivation, weighting, aggregation, seasonal adjustment and preparation of data outputs) prior to implementation for the July 2014 reference period. The imputation module was neither changed nor tested.

As a result, in the July 2014 production run, the imputation module did not pick up the labour force status of individuals from the June 2014 survey occasion. This meant that the June labour force status was not used, as designed, to determine the most plausible values for individuals whose July labour force status was missing due to non-response. Consequently, a higher number of individuals who reported being employed in June, and who did not respond in July, were imputed as either out of the labour force or unemployed in July. The LFS system as a whole ran, and the production logs did not return any error codes, because labour force status is only one of several characteristics used in imputation. There were no operations diagnostics to detect this type of error.
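
A toy example, using the invented field widths from the earlier sketch, shows why such a failure is silent: a module reading a fixed-width record with a stale layout still obtains a value for every field, so nothing crashes; the values are simply wrong.

```python
# Why the failure produced no error codes: stale offsets read valid-looking
# but wrong values. Fields and widths are invented for illustration.
def read_field(record: str, start: int, width: int) -> str:
    return record[start:start + width]

# A June record written under the new layout: 12-character revised DIN first.
june_record = "100000000042" + "E"         # identifier, then labour force status

# An updated module reads the identifier correctly...
assert read_field(june_record, 0, 12) == "100000000042"

# ...but a module still using the old layout (9-character DIN at offset 3)
# quietly extracts the wrong identifier.
stale_din = read_field(june_record, 3, 9)  # "000000042"
assert stale_din != "100000000042"

# No exception is raised: the June and July records simply fail to link, and
# the previous month's labour force status is never carried into imputation.
```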

Why did it occur?

Based on the facts that we have gathered, we conclude that several factors contributed to the error in the July 2014 LFS results. There was an incomplete understanding of the LFS processing system on the part of the team implementing and testing the change to the TABS file. This change was perceived as systems maintenance and the oversight and governance were not commensurate with the potential risk. The systems documentation was out of date, inaccurate and erroneously supported the team's assumptions about the system. The testing conducted was not sufficiently comprehensive and operations diagnostics to catch this type of error were not present. As well, roles and responsibilities within the team were not as clearly defined as they should have been. Communications among the team, labour analysts and senior management around this particular issue were inadequate.

How was the error detected?

The first indication that there might be a problem with the July 2014 LFS results occurred on the morning of Friday, August 8, 2014, after the data had been publicly released. A member of the systems team was trying to understand, as part of the review of the imputation methodology in the rebasing sub-project, why the results from a previous month's imputation could not be replicated in the test environment. The programmer realized that the imputation module contained a reference to the TABS file record layout and recognized the potential consequences. The programmer immediately escalated the concerns, and a decision was taken that morning to make the appropriate systems changes and rerun the July 2014 production in the test environment to assess the impact on the results. Once the error and its magnitude were confirmed to the satisfaction of the Chief Statistician on Tuesday, August 12, 2014, a decision was taken, as per the agency's Directive on Corrections to Daily Releases and Statistical Products, to remove the erroneous data, notify the public of the error, and begin the process of releasing the corrected data on August 15, 2014.

What are the LFS quality assurance procedures?

All Statistics Canada data go through a rigorous data validation process. The basic mechanisms for managing quality are described in Statistics Canada's Quality Assurance Framework and Statistics Canada's Quality Guidelines. Their effectiveness does not depend on any one mechanism or process, but on the collective effect of many interdependent measures.

The LFS production follows a strict monthly release schedule that is set annually. The main production steps comprise collection applications testing, sample selection, collection, processing, and dissemination (analysis and validation). Our review focused on processing (where the error occurred) and analysis and validation (where the error could have been detected).

The processing system comprises nine modules; the latter five are fully automated and include the module where the error occurred. The system, run by a production officer, produces systems logs, operations diagnostics and survey results. On every survey occasion, the systems logs and the operations diagnostics are reviewed by the production officer and the production manager.

The production team delivers main tables in paper format and detailed tables in electronic format to the labour analysts in the Current Labour Analysis Section of the Labour Statistics Division. The main tables are usually delivered nine days prior to the official release; the detailed tables are delivered the next day. The production manager and methodologists from the Household Survey Methods Division validate the survey weights. To ensure an independent review of the LFS numbers, the labour analysts are not involved in the production process. This is an agency best practice.

Labour analysts review the numbers for anomalies, either unexpectedly high or unexpectedly low month-to-month changes. They can request additional tables or system checks from the production manager. Seasonally adjusted numbers are produced and validated by labour analysts, time series analysis experts and the production manager. Analysts also consult the Analytical Studies Branch to challenge the numbers and their interpretation from a broad economic perspective. One of the labour analysts is designated to lead the writing of the article to be released in The Daily, the official release vehicle for Statistics Canada data.

On the Friday, one week prior to official release, the analysts communicate high-level numbers and their interpretation to the Assistant Director and Director responsible for the LFS. The Director then communicates the results to the Director General and the Assistant Chief Statistician. At each of these meetings, the numbers and their interpretation are challenged; additional production checks or analyses can be requested.

On the Monday prior to official release, a first draft of the article for The Daily is reviewed and validated line by line during a 3- to 4-hour session attended by labour analysts, the Assistant Director and the Director responsible for the LFS. Changes and further analysis or research are usually recommended; these are carried out and incorporated into a second draft of the article, which is reviewed and validated line by line during a second 3- to 4-hour session with the same group on the Tuesday. Throughout the analysis and validation of the numbers, analysts request more detailed tables or scan the Internet for additional information that could explain or provide more context for the numbers produced by the survey.

A draft communiqué is delivered to the Communications Division for preparation and translation of all the material for official release. Prior to release, this material is reviewed by the labour analyst and Communications staff. Two days prior to official release, the results of the LFS are presented by the Chief of the Current Labour Analysis Section to the Executive Management Board members (Chief Statistician and Assistant Chief Statisticians); the Director General, Education, Labour and Income Statistics; the Director General and the Principal Researcher, Analytical Studies Branch; and the Director General, Communications. The results are challenged, and additional checks and analyses can be requested.

A few days after official release, the LFS holds a monthly post-mortem to review data quality. It is attended by the Director or Assistant Director, collection managers, processing managers, labour analysts and methodologists. This is an agency best practice. The meeting usually focuses on the collection environment, in particular response rates, and the demographic coverage of the LFS sample.

Why was the error not detected earlier?

In the July 2014 occasion, all data production checks were executed as planned and the tables were delivered on schedule. Because of the change that had been made to the processing system, the production manager double-checked the systems logs and operations diagnostics. On the surface, the production appeared to have run without incident. The systems logs and operations diagnostics, including those in the imputation module, did not return any error codes or atypical results, despite the changes to the TABS file record layout. Our review assessed the operations diagnostics produced in the imputation module and confirmed that they did not reveal any irregular results that could have indicated a problem. Without a proper diagnostic in the imputation module to assess the match rate between current and previous month records, the production team was not aware that the module had not run as designed. Such a diagnostic was added to the module during the correction process.
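
The general form such a match-rate diagnostic could take is sketched below. The function name and the alert threshold are assumptions, not the actual LFS implementation; with five of six panels in common between consecutive occasions, the expected match rate is high, so a sharp drop is a clear warning sign.

```python
# A sketch of a month-over-month match-rate diagnostic. The threshold is an
# assumption chosen for illustration.
def check_match_rate(current_ids: set, previous_ids: set, min_rate: float = 0.75) -> float:
    """Halt the run if too few current records link to the previous month."""
    rate = len(current_ids & previous_ids) / len(current_ids)
    if rate < min_rate:
        raise RuntimeError(
            f"Imputation input check: only {rate:.1%} of current-month records "
            f"matched the previous month (threshold {min_rate:.0%})"
        )
    return rate
```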

Prior to starting the validation process, three sectors were identified as requiring particular attention: construction, education services, and health care and social assistance. A special in-depth analysis was planned and executed for the health-care sector. The numbers in these sectors, as well as the low net increase in employment (200 jobs) and the decrease in full-time employment (59,700 jobs), were challenged and discussed throughout the validation process, up to the final presentation to the Executive Management Board members. At the end of each step, the LFS results were deemed plausible given the current economic context and the sampling variability inherent in the LFS.

Were the results plausible? To check this conclusion, our review analyzed the LFS results available on Statistics Canada's socioeconomic database, CANSIM, since January 1985 (a total of 355 survey occasions). The following LFS results are all seasonally adjusted. We observed that when total employment is growing, the LFS has produced a net change of 200 jobs or fewer (or even a negative change) nearly one-fifth of the time. The median net change when employment has grown is a gain of 22,000 jobs, similar to the consensus forecast provided by external economists for July 2014. As for full-time employment, the LFS results show that it has recently been stable. In similar past situations, the LFS has produced a net decrease of 60,000 full-time jobs or more roughly one-sixth of the time. Based on this analysis, we judge that the initial, incorrect results produced by the LFS were plausible given the sampling variability inherent in the survey process.
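
The outline of this historical check is straightforward to reproduce. The sketch below assumes a seasonally adjusted monthly employment series has already been extracted from CANSIM (the series itself is not reproduced here) and simplifies the report's conditioning on periods of employment growth.

```python
# An outline of the historical plausibility check, given a seasonally
# adjusted monthly employment series (levels, in jobs).
import statistics

def summarize_net_changes(employment: list) -> dict:
    changes = [current - prior for prior, current in zip(employment, employment[1:])]
    gains = [c for c in changes if c > 0]
    return {
        # How often is the monthly net change 200 jobs or fewer (or negative)?
        "share_at_or_below_200": sum(c <= 200 for c in changes) / len(changes),
        # The typical monthly gain when employment grows (the report cites 22,000).
        "median_gain": statistics.median(gains),
    }
```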

While the data validation steps were executed as planned, our review identified two key weaknesses. First, the analytical team involved in the validation process was not fully aware that a change had been made to the production system. While the labour analysts knew that such a change was in the works, they found out that it had been implemented only during the analytical validation. Moreover, when the results were presented to senior management and questioned, no reference was made to the change to the processing system. Second, the analytical process lacked formal reports to assess the contribution of imputation to the LFS results. A report detailing non-response and the contribution of imputation to the estimates by labour market status might have raised questions about the results. Reviewing more detailed reports on processing indicators at the monthly data quality meetings could have further raised awareness of the potential impact of imputation.

It is clear that the context in which any survey is conducted and processed has a bearing on the quality of the results. In this regard, the survey analysts and senior management reviewing the overall data quality of the results should be aware of this contextual information. As well, analysts should have reports on imputation outcomes and the impact on survey results.

The LFS data quality post-mortem should address any issues in the processing environment, including imputation, and should also take the opportunity to communicate any planned changes that may affect future survey cycles.

Recommendations

1. Governance

Given the importance of the LFS, and the complexity and age of its systems, all changes should be undertaken with a heightened sense of the risks involved and should not be treated as regular maintenance. We recommend that proper governance and oversight be put in place, regardless of the size or perceived simplicity of the changes being made to the LFS. When implementing changes, the roles of the production manager and project manager, as well as those of developer, tester and acceptor, should be clearly separated. These roles and responsibilities should be formally documented and understood by the team.

2. Testing protocol

The testing of systems changes should be conducted in line with a set protocol. In this incident, the scope of the testing was constrained by the perceived importance and impact of the change being made. Had a protocol been in place to systematically test all the components rather than a subset, the error might have been avoided. We recommend that a formal testing protocol be developed and systematically implemented at Statistics Canada. It should take into consideration the importance of the program, as well as the age and complexity of the systems, and should clearly articulate the different roles and responsibilities of those involved in testing, as well as the scope of the testing to be conducted. This testing protocol, as well as other corporate management framework components, should be clearly referenced when updating the agency's Quality Assurance Framework.

3. Diagnostics

Additional diagnostics and reports should be added to the LFS production process to ensure that systems are not just running, but running as expected and doing what they are supposed to do. Specifically, additional diagnostics should be included in the imputation module and the analysts should have a report on, and be aware of, the contribution of imputation to the estimates.
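
One possible shape for such a report is sketched below; the record structure (status, survey weight, imputation flag) is an assumption made for illustration.

```python
# A sketch of an imputation-contribution report: the weighted share of each
# labour force status estimate that comes from imputed records.
from collections import defaultdict

def imputation_contribution(records) -> dict:
    """records: iterable of (status, weight, was_imputed) tuples."""
    total, imputed = defaultdict(float), defaultdict(float)
    for status, weight, was_imputed in records:
        total[status] += weight
        if was_imputed:
            imputed[status] += weight
    return {status: imputed[status] / total[status] for status in total}

# An unusual jump in the imputed share of a status in a given month would
# prompt questions before release.
```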

4. Documentation

Accurate and up-to-date systems documentation might have prevented this error from occurring. Steps should be taken to ensure a centralized set of documentation is systematically updated and reviewed for accuracy when system changes are made. We recommend a review of best practices around the use of systems documentation and how it can best be incorporated in the change management process.

5. Communication

The context in which a survey is conducted and processed has a bearing on the quality of the results. While we clearly support the separation of the production, analysis and management functions, the survey analysts and management reviewing the overall data quality should be fully aware of the contextual events surrounding the survey production cycle. The monthly LFS data quality meeting is also a best practice, and should include a discussion of all known upcoming events that could affect the next production cycles, including system changes, regardless of their perceived importance.

Briefings to the Executive Management Board members on mission critical survey results should start with a brief overview of the contextual events related to the survey environment. For any events identified, there should be an explanation of steps that were taken to mitigate the risks to data quality. The briefings should also include information on any upcoming events that may affect the future production cycles.

Conclusion

Our review of the July 2014 LFS incident indicates that a number of factors contributed to the publication of incorrect data. The primary factor was the team members' incomplete understanding of the impact that the changes made to accommodate the revised dwelling identifier number would have on the system as a whole.

If the project had not been conducted as a systems maintenance activity, but rather as a formal sub‑project attached to the redesign, would it have received additional scrutiny and oversight, would the potential risks have been better understood, and would the testing plan have been challenged? We can only speculate what the answers to these questions are, but in hindsight it is clear that the project would have benefitted from increased oversight and governance, a heightened awareness of the risks involved, a more formal testing protocol, and enhanced communication.

The LFS is a complex system with numerous checks and balances. The recommendations in this report are meant to address the shortcomings that led to the July 2014 LFS error and ensure stronger quality assurance in the future.
