Labour Statistics: Technical Papers

2011 Workplace Survey
Summary and Lessons Learned


1. Introduction

The Workplace Survey was an experimental survey conducted in early 2012 by Statistics Canada. The survey was funded by Human Resources and Skills Development Canada, now known as Employment and Social Development Canada (ESDC). The goal of the survey was to determine the ability of employers to provide information on a wide range of labour characteristics, including job vacancies and occupations in high demand. Funding of the survey ended on March 31, 2012, with the completion of collection.

This document provides background on the survey and describes the survey methodology, processing and lessons learned. In addition, some measures of data quality are provided based on the survey results.

2. Background

The Workplace Survey (WS) was originally conceived as the successor to the Workplace and Employee Survey (WES), but with a more limited scope. Even with this limited content, the WS was still designed to collect many different types of information on Canadian workplaces. It was planned to be an annual survey with core content focusing on workplace demographics and vacancies.

Unfortunately, budgetary constraints forced ESDC to end the financing of the WS after the collection of the first cycle. Funding was recently received to complete the processing of the survey. To this end, a microdata file has been made available to researchers for further analysis. This quality report completes the survey documentation.

3. Survey methodology

3.1 Survey universe and frame

The in-scope population for the Workplace Survey was the same as that of the former WES, that is, all active employers categorized within the 2012 North American Industry Classification System (NAICS) codes, with the following exceptions:

NAICS Industry title
111 Crop production
112 Animal production and aquaculture
114 Fishing, hunting, and trapping
814 Private households
911 Federal government public administration
912 Provincial and territorial public administration
919 International and other extra-territorial public administration
1151 Support activities for crop production
1152 Support activities for animal production
8131 Religious organizations

The frame used for the WS was the Statistics Canada Business Register (BR), which contains the most up-to-date enterprise structures available for the population of businesses operating in Canada. These structures on the BR have up to four levels of hierarchy where applicable: the top-level enterprise, the companies contained within the enterprise, the establishments within the companies, and finally the locations within the establishments. For the WS, the establishment and location levels were targeted for sampling and estimation.
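To make the hierarchy concrete, the sketch below models the four BR levels as nested records. All class and field names are illustrative assumptions; the actual BR schema is not public.

```python
# A minimal sketch of the four-level BR hierarchy described above.
# All class and field names are illustrative, not the actual BR schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Location:                 # sampling and estimation unit for the WS
    location_id: str
    employment: int

@dataclass
class Establishment:            # first-stage unit in the WS design
    establishment_id: str
    locations: List[Location] = field(default_factory=list)

@dataclass
class Company:
    company_id: str
    establishments: List[Establishment] = field(default_factory=list)

@dataclass
class Enterprise:               # top level of the hierarchy
    enterprise_id: str
    companies: List[Company] = field(default_factory=list)

    def all_locations(self) -> List[Location]:
        """Flatten the structure down to the location level."""
        return [loc
                for company in self.companies
                for establishment in company.establishments
                for loc in establishment.locations]
```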

Demedash, Jang, and Lothian (2013) provide a more detailed examination of the challenges faced in the design of the Workplace Survey.

3.2 Sample design

The sampling unit for the WS was the business location. This unit was chosen in an attempt to reach the respondent most likely to be able to answer the survey questions (i.e. the person responsible for Human Resources at the location).

Based on the results of the pilot survey and budgetary constraints, the WS sample was set at 25,000 locations. The sample was stratified by industry (at the two-digit NAICS level) and by establishment size (five categories). The sample was selected in two stages: first, a sample of establishments was drawn, and then locations were selected within these establishments. To reduce response burden, restrictions were placed on the total number of surveys assigned to an enterprise: a maximum of five locations could be selected from an establishment, with up to 25 locations within an enterprise. To realize the total sample, the selection process was run iteratively to ensure that the sample respected the response-burden guidelines.
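A simplified sketch of this two-stage selection with the burden caps is shown below. The frame layout, the simple random selection within each stage and all identifiers are assumptions for illustration; the production design also allocated the sample across the industry-by-size strata.

```python
# Illustrative two-stage selection respecting the response-burden caps:
# at most 5 locations per establishment and 25 per enterprise.
import random

MAX_PER_ESTABLISHMENT = 5
MAX_PER_ENTERPRISE = 25

def two_stage_select(frame, n_establishments, seed=2011):
    """frame: dict establishment_id -> (enterprise_id, [location_ids]).
    Stage 1 samples establishments; stage 2 samples locations within
    each, never exceeding either burden cap."""
    rng = random.Random(seed)
    stage1 = rng.sample(list(frame), min(n_establishments, len(frame)))
    sample, enterprise_counts = [], {}
    for establishment_id in stage1:
        enterprise_id, location_ids = frame[establishment_id]
        # Room left under both caps for this establishment and enterprise.
        room = min(MAX_PER_ESTABLISHMENT,
                   MAX_PER_ENTERPRISE - enterprise_counts.get(enterprise_id, 0),
                   len(location_ids))
        if room <= 0:
            continue  # enterprise cap already reached; skip
        sample.extend(rng.sample(location_ids, room))
        enterprise_counts[enterprise_id] = enterprise_counts.get(enterprise_id, 0) + room
    return sample
```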

Prior to data collection, a pre-contact stage was used to verify the contact information for the survey units. The sample selected for pre-contact was inflated by 5,000 units, to 30,000 locations, in order to maximize the number of live, in-scope units in the final sample. The pre-contact stage identified about 1,900 units that were either inactive or out-of-scope for the WS. After eliminating these 1,900 units from the pre-contact sample, a final subsample of 25,038 units was drawn for the survey. The distribution of this sample, as well as the response rates by industry and province, are shown in Tables 1 and 2.

3.3 Survey questionnaire

The pilot test of the WS questionnaire revealed one important fact: the questionnaire was far too long and complicated for respondents to answer readily, especially for the larger locations. After the pilot, the questionnaire was re-thought and reduced in size. The final WS questionnaire was nonetheless 22 pages long.

The WS questionnaire was divided into five sections:

  1. Workforce Characteristics;
  2. Job Vacancies and Labour Turnover (hirings, separations, vacancies, hard-to-fill jobs);
  3. Specific Occupations Filled and Unfilled in 2011 (detailed occupation data);
  4. Future Skill Shortages; and
  5. Final Verifications (used for quality evaluations and combined reports).

Together, these sections were intended to provide a detailed overview of the Canadian workforce, a detailed analysis of hiring practices over the 2011 calendar year, and very detailed information on which occupations were most in demand in the Canadian job market.

3.4 Collection

The collection period for the WS ran from January 1, 2012 to March 31, 2012. Respondents had two options for completing the questionnaire: by mail or by computer-assisted telephone interview (CATI). The CATI application was rarely used and was in fact best suited to the smaller locations; large locations generally had more hirings, and a CATI interview would therefore take too much time to complete.

The collection process was monitored from beginning to end. Tables 1 and 2 show the response rate by industry and geography. The final response rate for the survey was 71.6%. Amongst the major industries, the response rate varied from 60.4% to 77.8%, while by province, it varied from 61.1% to 74.8%.

After approximately six weeks of collection, priority was put on increasing response from the larger units, as the response rate for these locations was lagging behind the others in the sample (Table 3). This lower response rate among larger units was also observed in the pilot. The final response rate for the largest locations was 57.8% compared to 79.9% for the smallest locations.

A second issue with the collection of data was having a knowledgeable respondent. Despite the attempts in pre-contact to identify the most knowledgeable respondent for the subject matter, there were many cases where the questionnaires were routed to the company accountant, for example, rather than the Human Resources manager. This is good practice for surveys that require financial information on the business, but for the WS, the response rate and the quality of the data were compromised.

4. Processing

The collection of the survey ended on March 31, 2012, at which point the funding for the survey ended. Later, funding was made available to process and release some of the data.

In the end, it was determined that Statistics Canada would:

  1. Capture all questionnaires received after the end of the collection period (about 1,000 questionnaires).
  2. Code the occupations reported in questions C1 and C2 to the National Occupational Classification – Statistics (NOC-S) 2011 structure.
  3. Edit and impute information related to workforce characteristics, labour turnover and data on vacancies by occupation (equivalent to Sections A and B and the first two questions of Section C in the questionnaire). Note that questions on foreign workers and green jobs are part of the labour turnover section of the questionnaire (Section B). While these questions were processed, the low frequency of response made it difficult to analyze the data. The possibility of processing the rest of the questionnaire was also investigated, but after weighing the additional cost that would be incurred against the timeliness of the data, it was not pursued.
  4. Produce the preliminary weights for the data as well as summary tables of the results for internal evaluation (as a first step).
  5. Document the lessons learned from the survey.
  6. Make a microdata file available for researchers.

The remainder of Section C and all of Section D were not processed in any way, for two main reasons. For Section C, the confusion seen in responses to questions C1 and C2 would be carried through to the remainder of the section. For Section D, the number of responses received was low, as only about 30% of respondents reported expecting difficulty finding workers.

After capture and coding of the data, preliminary inspections revealed that many inconsistencies and issues remained. A few examples of such inconsistencies include:

  1. Although the overall response rates for the workforce characteristics questions in Section A were very strong (over 95% among responding units for most questions), the reported values of employment were not always consistent between questions. This issue arose in about 20% of the cases.
  2. Fewer than 50% of the questionnaires were consistent between Sections A and B, where consistency was defined as the difference between the number of hirings and separations in Section B being equal to the difference between the 2011 and 2010 employment from Section A.
  3. Respondent confusion between questions C1 (most frequently hired occupations) and C2 (most recently hired occupations) was prevalent. This confusion arose when the respondent did not recognize that occupations reported in C1 should be excluded from reporting in C2. This resulted in duplicated reporting in 13% of the responding records and required a major manual effort to correct these two questions. (A sketch of checks for these inconsistencies follows the list.)
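A minimal sketch of automated checks for these three inconsistencies is given below. The record layout and field names are hypothetical; they simply stand in for the relevant questionnaire cells.

```python
# Hedged sketch of the three consistency checks described above.
# The record layout and key names are assumptions for illustration.
def check_record(rec):
    """Return a list of flags for the inconsistencies found in one record."""
    flags = []
    # 1. Employment reported in different Section A questions must agree.
    if rec["emp_2011_total"] != sum(rec["emp_2011_by_category"]):
        flags.append("SECTION_A_EMPLOYMENT_MISMATCH")
    # 2. Section B flows must reconcile with the Section A stock change:
    #    hirings - separations == employment_2011 - employment_2010.
    if rec["hirings"] - rec["separations"] != rec["emp_2011_total"] - rec["emp_2010_total"]:
        flags.append("A_B_FLOW_INCONSISTENT")
    # 3. Occupations reported in C1 must not be repeated in C2.
    if set(rec["c1_occupations"]) & set(rec["c2_occupations"]):
        flags.append("C1_C2_DUPLICATION")
    return flags

# Example record: the flows reconcile, but one occupation is duplicated.
record = {"emp_2011_total": 40, "emp_2011_by_category": [25, 15],
          "emp_2010_total": 35, "hirings": 8, "separations": 3,
          "c1_occupations": ["6421"], "c2_occupations": ["6421", "1111"]}
print(check_record(record))   # ['C1_C2_DUPLICATION']
```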

To resolve the issues with questionnaire consistency, some basic editing decisions were made. Among these were:

  1. Once the 2011 employment for Section A was resolved, that value became an anchor for the rest of the questionnaire.
  2. Discrepancies between Sections A and B were resolved by editing the 2010 employment value.
  3. Section C total hires could not be larger than Section B total hires; if this occurred, values in Section C were ratio-adjusted to bring them in line with the Section B total (see the sketch after this list).
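The sketch below applies these three rules to the same hypothetical record layout used above; the rounding at the end is a simplification, since a production system would need a final step to restore an exact match.

```python
# Sketch of the three editing rules, using the hypothetical record
# layout from the consistency-check example above.
def apply_edits(rec):
    # Rule 1: the resolved 2011 employment anchors the record.
    anchor = rec["emp_2011_total"]
    # Rule 2: resolve any A/B discrepancy by editing the 2010 employment
    # so that hirings - separations equals the change in employment.
    rec["emp_2010_total"] = anchor - (rec["hirings"] - rec["separations"])
    # Rule 3: Section C hires cannot exceed the Section B total; if they
    # do, ratio-adjust every Section C value toward the Section B total.
    c_total = sum(rec["c_hires"])
    if c_total > rec["hirings"]:
        factor = rec["hirings"] / c_total
        rec["c_hires"] = [round(h * factor) for h in rec["c_hires"]]
    return rec
```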

As well, the confusion between questions C1 and C2 mentioned above created an additional workload of 2,240 cases to be resolved manually.

Survey weights were calibrated to the Survey of Employment, Payrolls and Hours (SEPH) employment estimates to ensure consistency between the two employment totals, and then applied to the edited file. Bootstrap weights were also produced for the purpose of variance estimation.
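The following is a generic sketch of a one-dimensional ratio calibration to a benchmark employment total, together with a naive with-replacement bootstrap for variance estimation. It illustrates the general technique only; the production calibration and the survey bootstrap used for the WS are more elaborate.

```python
# Generic sketch: ratio calibration of design weights to a benchmark
# employment total, plus naive bootstrap replicate weights for variance.
import random

def calibrate(weights, employment, benchmark_total):
    """Scale all weights so the weighted employment hits the benchmark."""
    g = benchmark_total / sum(w * e for w, e in zip(weights, employment))
    return [w * g for w in weights]

def bootstrap_replicates(weights, n_replicates=500, seed=42):
    """With-replacement bootstrap: resample units, then multiply each
    weight by the number of times its unit was drawn."""
    rng = random.Random(seed)
    n = len(weights)
    replicates = []
    for _ in range(n_replicates):
        counts = [0] * n
        for _ in range(n):
            counts[rng.randrange(n)] += 1
        replicates.append([w * c for w, c in zip(weights, counts)])
    return replicates

def bootstrap_variance(replicates, values):
    """Variance of a weighted total across the bootstrap replicates."""
    totals = [sum(w * v for w, v in zip(rep, values)) for rep in replicates]
    mean = sum(totals) / len(totals)
    return sum((t - mean) ** 2 for t in totals) / (len(totals) - 1)
```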

An additional output from the survey is a microdata file that is available for researchers at the Canadian Centre for Data Development and Economic Research (CDER). This file contains all processed responses and a record layout documenting the file contents, as well as its limitations for data users.

Several of the activities above were sponsored in 2015 by ESDC.

5. Lessons learned

There were many lessons learned about what did and did not function well. These lessons came from many different stages of the survey, and will be of use when designing surveys of this type in the future.

5.1 Planning

The most important lesson learned from this survey with respect to the planning stage is the need to better co-ordinate survey activities with the Enterprise Statistics Division (ESD) at Statistics Canada. The portfolio managers within ESD are responsible for maintaining the corporate structures of the largest companies in Canada and have good working relationships with these businesses. By making better use of ESD knowledge and contacts from the beginning of the process, larger companies may be more receptive to this type of survey content and better able to direct the survey to the most knowledgeable person for response. This would positively affect response rates among the larger companies.

5.2 Questionnaire design

While the survey sample design was sound, there were several aspects of the questionnaire design that should be reconsidered prior to conducting another survey of this nature.

The first issue with the questionnaire was that it was simply too long. Despite the many skip patterns, filling out 22 pages was a daunting task for many respondents, especially those in medium to large companies. It is highly recommended that future questionnaire designers push clients harder to eliminate non-essential questions in order to reduce respondent burden. For example, while Section A of the questionnaire was reduced and simplified from the pilot survey, most of the information could have been obtained from another source, such as the Labour Force Survey. As well, questions on very rare characteristics with little or no possibility of being published should not be retained, unless the sample design accounts for this by specifically targeting businesses with these characteristics. Examples are the green jobs and temporary foreign workers questions, which had response rates of 2.7% and 7.1% respectively.

A second concern with the questionnaire design was the interdependency of questions. The WS questionnaire had many questions whose totals should have matched those of previous questions. More than 60% of respondents who filled out the questionnaire had at least one inconsistency between these totals, requiring edits to address it. Reducing these inconsistencies by making each question self-contained, where possible, would greatly improve the questionnaire.

A final questionnaire design issue was the confusion caused by asking for both the most frequent hires in question C1 and the most recent hires at the location in C2. The purpose of these questions was to obtain information on the occupations that are hired less frequently than others, but are still important. Unfortunately, about 13% of respondents did not recognize the difference and simply repeated their answers for both questions. Although the idea of collecting information on all occupations hired during the year has merit, it may be better to limit the survey to one concept (for example, the four most recently hired occupations). The concept chosen should align with what can be measured and estimated from the survey.

5.3 Collection

As mentioned earlier, having a knowledgeable respondent answer the questionnaire was an issue in data collection. For many businesses, Statistics Canada surveys are filled out by people who work in the finance or accounting areas of the company. While such a respondent may be best placed to answer questions on items such as assets, revenues or payroll, he or she is not the most knowledgeable person with respect to the hiring practices and needs of the company. Even though the survey was intended for the person with the most knowledge of hiring practices, as requested in the pre-contact phase, many of the questionnaires ended up with the “usual” Statistics Canada respondent. In addition, contacting locations rather than the establishment level of the enterprise presented a problem during collection. Many businesses were used to reporting data at the establishment level, and tried to do the same for the WS questionnaire. These two problems led to delayed collection, as the questionnaire was passed around to get the correct information, and most likely increased non-response rates as well. Perhaps a combination of the previously mentioned co-operation with ESD, more in-depth probing during pre-contact and an emphasis on getting data at the location level would reduce these issues.

The response rate of the larger locations also provides some insight into improving the survey collection. As shown in Table 3, the response rate declined significantly as the size of the location increased. While there are several reasons for this, one thing that could be improved is the timing of the priority given to these cases. After six weeks of collection, it was noted that the response rate of locations with 100 or more employees was falling behind. At that point it was decided to focus more on these cases during the last two weeks of collection. This approach helped increase the large-location response rate in the final weeks, and it would have been better to implement it earlier.

A final important lesson from data collection is that the survey should have been collected using an electronic questionnaire (EQ). At the time of survey design, there were very few EQ applications available. The content and flow of the questionnaire would have fit very well into the EQ format. In particular, edits could have been programmed to reduce inconsistencies at the time of the data entry.

5.4 Sample size

One of the main objectives of this survey was to produce job vacancy statistics by occupation. Unfortunately, given that the frequency of vacancies reported among the responding locations was around one in seven, the sample size was not large enough to accomplish this reliably. Of the approximately 500 occupations covered, job vacancy estimates were deemed of sufficient quality to be released for 105 of them. However, 82 of these occupations would have a coefficient of variation between 15% and 35%. The remaining occupations would either be too unreliable to publish or suppressed to meet the confidentiality requirements of the Statistics Act.
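The release decision can be summarized as a simple rule on the coefficient of variation (CV). The sketch below assumes cut-offs of 15% and 35%, consistent with the figures quoted above, and represents confidentiality suppression only as a placeholder flag.

```python
# Sketch of CV-based release rules; the 15% and 35% cut-offs are
# assumptions consistent with the figures quoted in the text.
import math

def release_status(estimate, variance, suppressed_for_confidentiality=False):
    """Classify a job vacancy estimate for publication."""
    if suppressed_for_confidentiality:
        return "suppressed (Statistics Act)"
    if estimate <= 0:
        return "not releasable"
    cv = math.sqrt(variance) / estimate   # coefficient of variation
    if cv <= 0.15:
        return "releasable"
    if cv <= 0.35:
        return "releasable, to be used with caution"
    return "too unreliable to publish"
```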

If another survey is undertaken to target job vacancy estimates at the occupation code level, it is recommended that the sample size be increased to ensure a sufficient number of responses to publish more occupational detail.

6. Comparison with the Job Vacancy Statistics

The Job Vacancy Statistics (JVS) program has been published every month since March 2011. The JVS makes use of the existing Business Payrolls Survey (BPS) sample used as part of the Survey of Employment, Payrolls and Hours (SEPH) program.

The BPS is a monthly survey consisting of 15,000 establishments drawn from the Statistics Canada Business Register. Most units are in sample for a period of one year, at which point they rotate out of the sample and are replaced by other establishments. The industries covered by the BPS survey are the same industries that are in-scope for the WS survey.

BPS information on the various categories of employees (paid by the hour, salaried, other) is collected for the last pay period of the month. This information includes the number of employees, the total pay, the hours worked, the overtime pay and hours, and any special payments such as bonuses that were paid during the month. This information is transformed to a weekly equivalent, and then combined via regression with administrative data to produce survey estimates for the SEPH program. As the information targeted for the survey relates to pay, the preferred respondent is a person from the payroll department. Overall, the BPS response rate is generally around 85%.
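As a small illustration of the weekly-equivalent step, the sketch below divides a last-pay-period figure by the number of weeks the pay period covers; the set of pay period types and their durations are assumptions for illustration.

```python
# Hedged sketch of the weekly-equivalent transformation: a last-pay-period
# figure is divided by the number of weeks that pay period covers.
# The pay-period durations below are illustrative assumptions.
WEEKS_PER_PERIOD = {
    "weekly": 1.0,
    "biweekly": 2.0,
    "semi-monthly": 52.0 / 24.0,   # 24 pay periods per year
    "monthly": 52.0 / 12.0,        # 12 pay periods per year
}

def weekly_equivalent(amount, pay_period_type):
    """Convert a pay-period amount (pay or hours) to a weekly figure."""
    return amount / WEEKS_PER_PERIOD[pay_period_type]

print(weekly_equivalent(4000.0, "biweekly"))   # 2000.0
```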

To collect job vacancy information for the sampled establishment, two additional questions were added to the end of the BPS survey. These questions ask if there were any vacancies at the establishment, and if so, how many. As the JVS and the WS are attempting to measure the same concept, and because the two surveys covered the December 2011 period, it is possible to make a sound comparison of their vacancy estimates.

The following two tables show the JVS and the WS job vacancy rates for the December reference month. Also included for each estimate is the coefficient of variation, a 95% confidence interval for the estimate and an indication as to whether or not the difference between the two sources is statistically significant.
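A sketch of the comparison behind these tables follows: each survey's standard error is recovered from its estimate and CV, a 95% confidence interval is built, and a two-sided z-test is applied to the difference, treating the two samples as independent (an assumption of this sketch). The input values in the example are hypothetical.

```python
# Sketch of the Table 4/5 comparison: 95% confidence intervals from
# estimate and CV, plus a z-test on the difference of the two rates,
# assuming the two samples are independent.
import math

Z_95 = 1.96   # two-sided critical value at the 5% level

def compare_rates(rate_ws, cv_ws, rate_jvs, cv_jvs):
    se_ws, se_jvs = rate_ws * cv_ws, rate_jvs * cv_jvs   # SE = estimate * CV
    ci_ws = (rate_ws - Z_95 * se_ws, rate_ws + Z_95 * se_ws)
    ci_jvs = (rate_jvs - Z_95 * se_jvs, rate_jvs + Z_95 * se_jvs)
    z = (rate_ws - rate_jvs) / math.sqrt(se_ws ** 2 + se_jvs ** 2)
    return {"ci_ws": ci_ws, "ci_jvs": ci_jvs, "z": z,
            "significant": abs(z) > Z_95}

# Hypothetical vacancy rates of 2.5% (CV 8%) and 1.8% (CV 12%).
print(compare_rates(0.025, 0.08, 0.018, 0.12))
```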

When comparing the provincial estimates from the two surveys, a pattern emerges (Table 4). The WS has consistently higher job vacancy rates than the JVS, although not to the same degree across all provinces. The differences between the estimates from the two sources are not always significant, but they are for the four largest provinces (Quebec, Ontario, Alberta and British Columbia).

The same type of analysis was done for the industrial breakdown (Table 5).

As with the provincial data, the WS estimates of the job vacancy rate by industry are consistently higher than the JVS estimates. When the variability of the estimates is considered, only eight of the 17 industries are significantly different. Unlike the provincial figures, the data from the two sources in the largest industries are not all significantly different; notable exceptions are retail trade and accommodation and food services.

Investigations into these differences focused on two main avenues: the definition of a vacancy and collection differences between the two sources. For the two surveys, the definition of a vacancy is almost the same, i.e. the position exists and could be filled within the next 30 days. However, the WS vacancies require the employer to be looking for someone from outside the sampled location, while the JVS vacancies require the employer to be looking for someone from outside the organization. This difference, although slight, could mean that more positions would be considered vacant in the WS. The second source of difference considered was the collection methodology of the two surveys. For the JVS, the establishment is the sampled unit and the respondent is often the person responsible for the payroll of the company. For larger companies in particular, this person is not necessarily aware of the number of vacancies, and will either have to find the answer or send the questionnaire to the person capable of responding to those questions.

This mismatch between the subject matter and the survey respondent may lead to underreporting in the JVS. In contrast, the WS is location-based and targets the person most knowledgeable about the hiring practices of the business. Despite the previously-mentioned problems with reaching the best respondent during collection, it was felt that the WS was more often able to contact the correctly-targeted respondent. This combination of being location-based and specifically targeting Human Resource Managers most likely increased the number of vacancies reported in the WS survey.

Beyond the differences in the estimates, Tables 4 and 5 show that the coefficients of variation obtained from the WS are generally lower than those derived from the JVS. This is most likely because the WS sample is larger than the JVS sample, although the difference in sample designs could also contribute.

7. Conclusion

The Workplace Survey was a very ambitious project that was complex and may have tried to accomplish too much given the sample size. However, even with the challenges faced, some very interesting data were derived. In addition, the lessons learned during the development and implementation of this survey will guide other surveys of this type. To summarize the main recommendations:

  1. Ensure that the survey is location-based;
  2. Co-ordinate with Enterprise Statistics Division to help with collection from the largest enterprises and build in special procedures for these units;
  3. Keep the questionnaire as short and simple as possible;
  4. In pre-contact, make it a priority to get the “right” respondent;
  5. To target detailed occupation data, ensure that the sample is large enough; and
  6. Make use of electronic questionnaires for collection.

Many of these recommendations have fed into the development of the new Job Vacancy and Wage Survey. This new quarterly survey was launched in February 2015 and its first release is expected later this year.
