
    Programme for the International Assessment of Adult Competencies Series

    Skills in Canada: First Results from the Programme for the International Assessment of Adult Competencies (PIAAC)

    Annex A

    Methodology

    Canada is a participant in the Programme for the International Assessment of Adult Competencies (PIAAC). The Canadian component was carried out in accordance with the standards in the PIAAC guidelines. These standards set out the minimum requirements for the survey design and the implementation of all phases of the survey, from planning to documentation.

    Target population
    The target population consists of all Canadian residents aged 16 to 65 inclusive, with the exception of long-term residents of collective dwellings (institutional and non-institutional), families of members of the Armed Forces living on military bases, and people living on Indian reserves. Because of operational constraints, sparsely populated regions were also excluded from the target population. Together, these exclusions made up no more than 2% of the total population of Canada, which easily met the international requirement that less than 5% of the target population be excluded from the survey.

    Coverage of the survey’s target population by the 2011 Census of Population was determined to be about 96% at the national level and between 94% and almost 100% at the provincial/territorial level (except for Nunavut). Table A.1 shows preliminary estimates (as of March 2013) of the coverage rate of the population aged 15 to 64 based on 2011 Census coverage studies (Note 1) for Canada and each province and territory. It should be noted, however, that the fact that someone was missed in the Census does not mean that he or she was also missed in the PIAAC, since Statistics Canada’s interviewers had to prepare a roster of the members of the selected households before choosing the respondent.

    Sampling frame
    The response databases of the 2011 Census of Population and Housing and the National Household Survey (NHS) were used as sampling frames to construct the PIAAC sample.

    These databases provided recent information about dwellings’ usual residents so that people who are members of the survey’s target populations could be selected. The Census was used for the general sample, the 16-to-24 age group in British Columbia, and linguistic minorities. NHS data were used to identify recent immigrants, Aboriginal people and Métis people. Only dwellings of Census or NHS respondents and dwellings whose residents were members of the target populations according to Census or NHS data were considered.

    Sampling plan
    A multi-stage probabilistic sampling plan was used to select a sample from each frame. The design produced sufficiently large samples for both official languages (English and French). In addition, the sample size was augmented to produce reliable estimates for a number of population subgroups, including young people (the 16-to-24 age group in British Columbia), linguistic minorities (Anglophones in Quebec and Francophones in New Brunswick, Ontario and Manitoba), immigrants who had been in Canada 10 years or less (i.e., since 2002), urban Métis in Ontario and urban Aboriginal people.

    In the territories, the initial sample was designed so that the final sample would contain at least 450 Aboriginal respondents in Yukon and the Northwest Territories and 600 in Nunavut. Initially, Aboriginal people in the territories were not explicitly targeted on the basis of their NHS answers; instead, households were stratified and sample sizes calculated in such a way that a sufficient number of Aboriginal individuals would be interviewed to produce reliable estimates in each territory. As collection proceeded, however, reports showed that these targets would not be met in Yukon and the Northwest Territories. As a consequence, the initial sample in Yukon was replaced by another random sample, selected among NHS-responding households, that explicitly targeted Aboriginal people according to the same criteria used in the provinces. In the Northwest Territories, a portion of the sample selected in Yellowknife was replaced by a random sample selected in communities known to have a higher percentage of Aboriginal people in their population.

    In the provinces (Note 2), the primary sampling units (PSUs) were defined by updating the PSUs constructed for the 2003 International Adult Literacy and Skills Survey (IALSS).

    At the time, Statistics Canada’s Generalized Area Delimitation System was used to create PSUs with a sufficiently large population based on the number of dwellings within limited, reasonably compact areas. A general indication of the population’s level of education according to the 1996 Census had been added to generate PSUs that reflected the distribution of levels of education in the province.

    Since the enumeration area geography used in the 2001 Census was replaced, additional work was required to define the boundaries of each PSU in terms of dissemination areas before stratification and selection.

    Using these boundaries and exclusions similar to the IALSS exclusions, the PSUs were allocated to the following strata: A (urban), B (rural) or E (excluded). PSUs were excluded when they were too large, did not have enough residents or were too far north. Reserves were also excluded. Further clean-up resolved cases of PSUs that were in more than one stratum. A few PSUs were divided or combined with others so that they would have an area and number of dwellings comparable to other PSUs.

    In addition, 2006 Census data and the PIAAC’s sample and target population sizes were used to update the stratum boundaries. To derive these boundaries, communities were formed from dissemination areas or urban areas, depending on whether the community was in a census metropolitan area (CMA) or whether the area of the CMA or urban area exceeded 5,625 km². The 2006 Census long questionnaire (2B) counts and the PIAAC’s final sample sizes were also used to divide the communities into an urban stratum (A) and a rural stratum (B). The sample was divided among the PSUs on a preliminary basis using a Neyman allocation. Communities in which at least 15 dwellings had been selected were assigned to the urban stratum.

    Stratification was then completed by assigning some PSUs to a new stratum, C, for which they were selected with certainty because of their size. The PSUs chosen for this stratum were those in which at least 80 dwellings had been selected for the general and special samples taken together, or in which 40 dwellings had been selected for a subsample.

    After the final stratification was determined, a sample of PSUs was selected at the first stage in the rural stratum by sampling with probability proportional to the number of eligible persons in the PSU. In each province, the sample was distributed among the strata in proportion to actual population size, with a conservative design effect of 2.0 for the rural stratum and 1.5 for the urban stratum. The latter adjustment was made to compensate for the effect that the multistage sample design has on the variance of the estimates produced with the survey data.
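
    For illustration, the following Python sketch shows one plausible reading of this adjustment: a proportional allocation inflated by the stratum design effects quoted above, so that the stratum with the larger design effect receives proportionally more sample. The function name and population counts are hypothetical.

        def allocate(total_sample, pop, deff):
            """Allocate a sample across strata in proportion to
            population size times the stratum design effect."""
            scores = {h: pop[h] * deff[h] for h in pop}
            total = sum(scores.values())
            return {h: round(total_sample * s / total)
                    for h, s in scores.items()}

        pop = {"urban": 800_000, "rural": 200_000}  # made-up counts
        deff = {"urban": 1.5, "rural": 2.0}         # values from the text

        print(allocate(1_000, pop, deff))
        # The rural stratum gets more than its population share because
        # its larger design effect implies greater variance inflation.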

    In the urban stratum, the number of dwellings was estimated by allocating the initial sample size to strata A and C on the basis of the proportion of the general sample or the subsample for that PSU. In the rural stratum, the same sample size was allocated to all PSUs in the sample to equalize collection workloads.

    In the urban stratum in the provinces, as well as in the three territories, two-stage sampling was used. In the first stage, households were selected systematically with probability proportional to size, where size was defined as the number of adults in the household who, according to 2011 Census data, would be aged 16 to 65 at some point during the PIAAC collection period. The upper limit was set at four eligible adults for the core sample and three for the supplementary samples. In the second stage, the computer-assisted personal interview (CAPI) application used a simple random sampling algorithm to select one person from the roster of eligible adults that the interviewers made for each household during collection.

    In the rural stratum, three-stage sampling was used. In the first stage, PSUs were selected with probability proportional to the number of adults aged 16 to 65 according to the 2011 Census. In the second and third stages, the selection method was the same as the one used for the urban stratum.
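
    Both probability-proportional-to-size stages can be illustrated with the classic systematic selection algorithm: cumulate the size measures, draw a random start, then walk through the cumulated list at a fixed interval. The sketch below is illustrative; the function name and size measures are hypothetical.

        import random

        def systematic_pps(sizes, n):
            """Systematic probability-proportional-to-size sampling.
            Returns the indices of the n selected units."""
            total = sum(sizes)
            step = total / n
            start = random.uniform(0, step)
            points = [start + k * step for k in range(n)]
            selected, cum, i = [], 0.0, 0
            for idx, size in enumerate(sizes):
                cum += size
                while i < n and points[i] < cum:
                    selected.append(idx)
                    i += 1
            return selected

        # Hypothetical size measures: eligible adults (capped at 4) per unit.
        print(systematic_pps([3, 1, 4, 2, 2, 4, 1, 3, 2, 4], n=3))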

    Sample size
    The PIAAC sample was constructed from a general sample of 5,400 units, which were distributed among the provinces using a Kish allocation (Kish 1976) to obtain a sample of at least 5,000 English-speaking respondents at the national level. Then, 3,600 units from Quebec were added to produce a sample of 4,500 French-speaking respondents (required to meet the international consortium’s standards). Supplementary units were added to this sample to produce more precise estimates for some provinces and territories and some subpopulations of interest (Note 3). Following adjustments for expected non-response and target population mobility, an overall sample of nearly 50,000 units was obtained. The samples were selected one by one in sequence, following the core sample. After each sample was selected, the households chosen from the frame were removed before the next selection process, which made the samples dependent. Sequential selection of several samples in the same province or territory can be considered multiphase sampling.
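
    The Kish (1976) allocation is commonly described as a compromise between proportional and equal allocation, with each province's share proportional to sqrt(W_h^2 + 1/H^2), where W_h is the province's population share and H is the number of provinces. The sketch below illustrates this form with made-up population figures; the survey's exact variant may differ.

        import math

        def kish_allocation(pops, total_sample):
            """Kish compromise allocation across provinces."""
            H = len(pops)
            N = sum(pops.values())
            scores = {p: math.sqrt((n / N) ** 2 + (1 / H) ** 2)
                      for p, n in pops.items()}
            s = sum(scores.values())
            return {p: round(total_sample * v / s)
                    for p, v in scores.items()}

        # Hypothetical provincial populations (millions).
        pops = {"ON": 13.4, "QC": 8.0, "BC": 4.5, "AB": 3.8, "MB": 1.2,
                "SK": 1.1, "NS": 0.9, "NB": 0.8, "NL": 0.5, "PE": 0.15}
        print(kish_allocation(pops, 5_400))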

    In the final stage before sample selection, the size of the primary samples was augmented to compensate for a 6% vacancy rate among the selected dwellings and a 4% rate of households with no eligible members for the general sample, which made for a combined rate of approximately 11%.

    The supplementary samples covered populations with specific characteristics, and because of natural mobility, a household selected for inclusion in one of these samples was more likely to have no eligible members at the time of contact with the interviewer, compared with the general sample. For example, persons aged 16 to 65 who moved out of a dwelling selected in the general sample shortly after the Census are very likely to have been replaced by other persons in the same age group; however, recent immigrants in that age group are less likely to have been replaced by other recent immigrants before the PIAAC was conducted. For this reason, the percentages used for the supplementary samples were different from the percentage used for the general sample. For example, the combined rate of vacant dwellings and dwellings containing no members of the target group for the official-language-minorities sample was set at 15% in New Brunswick and 20% in Quebec, Ontario and Manitoba. A single response rate of 65% was also assumed, along with a single 8% rate of refusal to share (Note 4).
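
    As a minimal arithmetic sketch, assuming the vacancy, ineligibility, response and refusal-to-share rates apply multiplicatively and independently (the survey's exact computation may have differed), the inflation from a target respondent count to an initial selection count looks like this:

        def initial_sample_size(target, vacancy, no_eligible,
                                response_rate, refusal_to_share):
            """Inflate a target respondent count into an initial
            selection count under multiplicative, independent losses."""
            retention = ((1 - vacancy) * (1 - no_eligible)
                         * response_rate * (1 - refusal_to_share))
            return round(target / retention)

        # Rates quoted in the text for the general sample.
        print(initial_sample_size(5_000, vacancy=0.06, no_eligible=0.04,
                                  response_rate=0.65, refusal_to_share=0.08))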

    Table A.2 shows the expected number of 2012 PIAAC respondents by sample type for Canada and the provinces and territories.

    Supplementary samples
    The supplementary samples were constructed with the Census or NHS response database. A dwelling could be included in one of these samples if data from the survey (the Census or the NHS) indicated that it contained at least one person with the desired characteristics. The criteria used for each supplementary sample are shown in List A.1.

    At the time of the visit, the selected household was interviewed, and the interviewer checked that it was still eligible—i.e., that it had at least one person from the target population—using the same questions as the Census or the NHS. If more than one person was eligible, one of them was chosen at random. If the household was ineligible, it was coded as out of scope.

    As a result, some households (for example, some of those selected in the Métis sample) were reported as out of scope because they no longer had any members with the desired profile.

    Note that some members of the specific populations (recent immigrants, Aboriginals and so on) are also present in the sample of the general population, since they are members of the Canadian population aged 16 to 65.

    Data collection

    PIAAC survey design, assessment design and application
    The Programme for the International Assessment of Adult Competencies (PIAAC) is a survey of adult skills that consists of three main stages: the background questionnaire (BQ), the core modules and the direct assessment of literacy, numeracy and problem solving in technology-rich environments. While conceived primarily as a computer-based assessment (CBA), the option of taking the literacy and numeracy components through a paper-based assessment (PBA) had to be provided for those adults who had insufficient experience with computers to attempt the assessment in CBA mode.

    Respondents were initially asked to complete a set of basic questions about all household members, including their gender and age, in order to permit the random selection of one member from each dwelling. This “screener” also collected, as required, additional demographic information aimed at identifying the survey’s targeted subpopulations. The background questionnaire (BQ) was then asked of the selected respondent. The BQ included questions about respondents’ computer experience, which were essential for branching them to either the paper or computer assessment at the end of the BQ. Respondents with no computer experience, based on the BQ questions, and respondents who failed the Information and Communication Technology (ICT) core assessment were routed to the paper branch. Respondents with some computer experience also had the option to opt out of the CBA without attempting it and take the PBA. Most respondents, however, were routed to the computer branch of the survey. At the beginning of the survey, respondents were given the option of completing the survey in the official language of their choice (English or French). Prior to beginning the assessment, respondents were again asked in which of the official languages they preferred to complete the assessment; from this point forward, respondents could not change their mind and had to complete the entire assessment in the language selected at that time. This necessitated a relatively complex design, which is presented graphically in Figure A.1 below.

    As seen in Figure A.1, there are several pathways through the assessment. Respondents with no experience in using computers, as indicated by their response to the relevant questions in the background questionnaire, were directed to the pencil and paper version of the assessment. Respondents with some experience of computer use were directed to the CBA where they took a short test of their ability to use the basic features of the test application (use of a mouse, typing, use of highlighting, and drag and drop functionality) – the CBA core Stage 1. Those who “failed” this component were directed to the pencil and paper pathway.

    Respondents taking the computer path then took a short test (the CBA core Stage 2) composed of three literacy and three numeracy items of low difficulty to determine whether or not they should continue with the full assessment. Those who “failed” this module were directed to the reading components assessment. Respondents who passed this module continued on to take the full test and were randomly assigned to a first module of literacy, numeracy or problem solving items. Following completion of the first module, respondents who had completed a literacy module were randomly assigned to a numeracy or problem-solving module, respondents who had completed a numeracy module were randomly assigned to a literacy or problem-solving module, and respondents who had completed a problem-solving module were randomly assigned to a literacy, a numeracy or a second problem-solving module.
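
    The pathways described above can be summarized in the following Python sketch. The function and argument names are hypothetical, and the routing is simplified from Figure A.1.

        import random

        def route_respondent(has_computer_experience, opted_out_of_cba,
                             passed_cba_core_stage1, passed_cba_core_stage2):
            """Simplified routing through the assessment pathways."""
            if not has_computer_experience or opted_out_of_cba:
                return "PBA"
            if not passed_cba_core_stage1:   # basic computer skills failed
                return "PBA"
            if not passed_cba_core_stage2:   # basic literacy/numeracy failed
                return "reading components"
            # Full CBA: first module at random, second conditional on the first.
            first = random.choice(["literacy", "numeracy", "problem solving"])
            options = {
                "literacy": ["numeracy", "problem solving"],
                "numeracy": ["literacy", "problem solving"],
                "problem solving": ["literacy", "numeracy", "problem solving"],
            }
            return ("CBA", first, random.choice(options[first]))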

    The assessment design assumed that the respondents taking the PBA path would be either those who had no prior experience with computers (as assessed on the basis of responses to the relevant questions in the background questionnaire) or those who failed the CBA core. It was, however, possible for respondents with some computer experience to take the PBA pathway if they refused the CBA.

    Respondents taking the pencil and paper path first took a “core” test of four simple literacy and four simple numeracy items. Those who passed this test were randomly assigned to a module of either 20 literacy tasks or 20 numeracy tasks. Once the module was completed, respondents were given the reading-components test. Respondents who failed the initial “core” test proceeded directly to the reading-components test.

    In Canada, the majority of respondents had enough computer skills to carry out the PIAAC assessment on the computer. Approximately 85% of respondents completed the Computer-based Assessment (CBA), and 15% completed the Paper-based Assessment (PBA).

    The average times taken to complete the different stages of the PIAAC survey in Canada are as follows:

    • Background Questionnaire (BQ): approximately 45 minutes;
    • Paper-based Assessment (PBA): approximately 30 minutes;
    • Reading Component Assessment: approximately 20 minutes;
    • Computer-based Assessment (CBA): approximately 60 minutes.

    PIAAC adaptive design
    One of the unique aspects of the PIAAC was the adaptive design of the computer branch of the survey within the domains of literacy and numeracy.

    Respondents were directed to different blocks of items on the basis of their estimated ability. Individuals who were estimated to have greater proficiency were more likely to be directed to groups of more difficult items than those who were estimated to be less proficient. Each of the literacy and numeracy modules was composed of two stages containing testlets (groups of items) of varying difficulty. Stage 1 contained three different testlets of nine items each, while Stage 2 contained four different testlets of 11 items each. Respondents’ chances of being assigned to testlets of a certain difficulty depended on their level of educational attainment, whether their native language was the same as the test language (i.e. whether the language of the test was the first language or birth language of the respondent), their score on the literacy/numeracy core (CBA core Stage 2) and, if relevant, their score on a Stage 1 testlet.
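
    The sketch below illustrates the idea of this adaptive assignment. The composite signal, thresholds and probabilities are invented for illustration only; the actual probability tables were set by the international consortium and are not reproduced here.

        import random

        def assign_stage1_testlet(education_high, native_speaker, core_score):
            """Shift the odds toward harder testlets as background
            variables and the core score (0-6) increase."""
            signal = (core_score / 6.0 + 0.2 * education_high
                      + 0.1 * native_speaker)
            if signal > 1.0:
                probs = [0.1, 0.3, 0.6]   # favour the hardest testlet
            elif signal > 0.6:
                probs = [0.2, 0.5, 0.3]
            else:
                probs = [0.6, 0.3, 0.1]   # favour the easiest testlet
            return random.choices(["easy", "medium", "hard"],
                                  weights=probs)[0]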

    Problem Solving in Technology-Rich Environments (PS-TRE) is unique because of the nature of the domain; there was only one testlet per module, organized as two fixed sets of tasks: seven tasks in Module 1 and seven in Module 2.

    Respondents directed to the paper booklet path started with a paper core booklet consisting of a set of items designed to determine whether they had the basic literacy and numeracy skills to proceed to the main assessment. This booklet was scored by the interviewer, and if the respondent correctly answered a sufficient number of questions (at least four), they were then randomly assigned either a literacy or a numeracy booklet.

    Finally, PIAAC can provide more information about individuals with low proficiency levels by assessing reading component skills. This portion of the paper assessment was an international option, and Canada was one of the participating countries. It measured basic reading skills through short exercises covering word meaning, sentence processing and basic passage comprehension.

    With the exception of the reading components section (for which the time taken by respondents to complete the tasks was recorded), no time limit was imposed on respondents completing the assessment, and they were urged to try each item, whether on the computer or in the paper booklets. Respondents were given maximum leeway to demonstrate their skill levels, even if their measured skills were minimal.

    PIAAC quality control
    To ensure high-quality data, the International Technical Standards and Guidelines were followed and supplemented by adherence to Statistics Canada’s own internal policies and procedures. The interviews were conducted in the respondent’s home in a neutral, non-pressured manner. Interviewer training and supervision were provided, emphasizing the importance of precautions against non-response bias. Interviewers were specifically instructed to return several times to non-respondent households in order to obtain as many responses as possible. Extensive effort was expended to ensure that the home address information provided to interviewers was as complete as possible, in order to reduce potential household identification problems. Finally, the interviewers’ work was supervised by using frequent quality checks throughout collection and by having help available to interviewers during the data collection period. In total, Canada employed 786 interviewers over the duration of the survey.

    The paper-based assessment was scored and captured at Statistics Canada. Explicit guidelines and a standard data capture tool were provided by the International Consortium to complete this work. As a condition of participation in the international study, Canada was required to capture and process files using procedures that ensured logical consistency and acceptable levels of data capture error. Specifically, complete verification of the captured scores was done (i.e., each record was entered twice) in order to minimize error rates.

    The International Consortium regarded Quality Control (QC) as an integral component of the overall success of the PIAAC survey. Various guidelines were established to ensure that the data collected by participating countries were reliable and valid.

    The guidelines stipulated that throughout collection PIAAC countries routinely conduct validations to verify that an interview was indeed conducted or attempted as reported by the interviewer. Countries were required to validate at least 10 percent of each interviewer’s finalized work to ensure that the case was handled according to study procedures. Validation included completed cases and those finalized with other outcome codes, such as vacant or refusal. Validation cases were selected randomly.

    In Canada, the Quality Control Validation was done through a computer-assisted telephone interview (CATI). The interview consisted of a series of questions about the respondent’s experience with the PIAAC survey, and the responses were then compared with the PIAAC survey data to determine whether:

    • the data matched (month and year of birth, education, address, demographics on household members, etc.);
    • procedures were followed (length of interview, composure of the interviewer, interviewer using the laptop, respondent completing the assessment, interviewer helping the respondent);
    • the correct outcome code was assigned (correct vacant, no contact, absent, seasonal dwelling, etc.).

    If inconsistencies were discovered, the interviewer’s entire completed caseload was then selected and subject to further validation in order to ascertain whether other cases were also compromised.

    PIAAC coding
    Industry, occupation, and education variables were coded using standard schemes such as the International Standard Industrial Classification (ISIC), the International Standard Classification of Occupations (ISCO) and the International Standard Classification for Education (ISCED). Coding schemes were provided for all open-ended items, as were specific instructions about coding of such items.

    PIAAC data collection period
    Data collection began in 2011 with the planning of interviewer assignments by the regional offices coordinating the collection activities. The first contacts with respondents were initiated in November 2011 across the country and the last interviews were completed in June 2012, with all survey-related materials being returned to head office by August of 2012.

    Scoring of tasks
    The overall performance of items from the assessment was evaluated during the field test. The field test was used to evaluate scoring procedures, including scoring standards and scorer training for paper-based instruments and automated scoring procedures for the computer-based instruments. Items that did not appear to be working as expected were examined and either revised or replaced for the PIAAC main study.

    For the large majority of respondents who took the assessment in its CBA format, scoring was done automatically. Manual scoring was necessary in the case of respondents taking the PBA version.

    Computer-based instruments automated scoring procedures
    The purpose of this section is to explain in detail the scoring procedures within the computer branch of the assessment, focusing on the CBA Core, CBA Module 1 and CBA Module 2:

    • The Core: The word “core” is used in PIAAC to refer to two different sets of basic skills. Below are the scoring procedures for the CBA Core stages:
      • CBA core Stage 1 (Basic computer skills): In the computer branch, the CBA core Stage 1 focused on basic computer skills, including clicking, typing, scrolling, dragging, using pull-down menus and highlighting – skills respondents needed to complete the CBA main assessment. This module therefore considered whether respondents completed each task and was scored on completion of the action rather than on correct content. For example, one of the tasks asked the respondent to select “May” from a pull-down menu; the task was scored as correct if the respondent used the pull-down menu to select any month. Out of the six tasks, respondents had to complete at least four to move to the next stage. That is, respondents had to receive a score of 4, 5 or 6 AND they had to complete the highlighting task. Respondents who failed to demonstrate the necessary basic computer skills were routed to the paper branch. Successful completion of the CBA core Stage 1 led respondents to the CBA core Stage 2.
      • CBA core Stage 2 (Basic literacy and numeracy skills): CBA core Stage 2 in the computer branch was designed to ensure that respondents had the basic literacy and numeracy skills necessary to proceed to the main assessment. CBA core Stage 2 contained six items with a passing score of at least 3; respondents with a score of 0, 1 or 2 were routed to the paper branch. For example, to get a score of “4”, a respondent had to answer 4 of the 6 items correctly. The score received in the CBA core Stage 2 was used as a variable determining the choice of the first and second testlets (i.e., the Stage 1 and Stage 2 testlets) within literacy and numeracy.
    • The Modules: The CBA main assessment assessed the domains of literacy, numeracy and problem solving. Each respondent took two modules (Module 1 and Module 2), which each included two stages; Stage 1 contained three different testlets of nine items each, while Stage 2 contained four different testlets of 11 items each.

    For the computer branch, the selection of a domain (literacy, numeracy or problem solving) for the first module (Module 1) was random. After completing Module 1 (either the two testlets for literacy or numeracy, or the problem-solving module), the respondent proceeded to Module 2; the domain for Module 2 was also selected at random. As noted above, each of the literacy and numeracy modules was composed of two stages containing testlets (groups of items) of varying difficulty. All items were scored automatically.

    Below are the scoring procedures for the CBA Module Stage 1:

    • CBA Module Stage 1: A respondent’s score on each stage of a module was the number of items answered correctly. For instance, in literacy and numeracy, the possible values of the Stage 1 score of a module were 0 through 9; to get a score of “7”, a respondent had to answer seven of the nine items correctly. The Stage 1 score was used as a variable determining the testlet assignment for Stage 2 within literacy and numeracy. A minimal sketch of these core and module scoring rules follows.
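
    Below is a minimal Python sketch of the pass and scoring rules quoted above (the function names are hypothetical; the thresholds are those given in the text).

        def pass_cba_core_stage1(tasks_completed, highlighting_done):
            """Stage 1 pass rule: at least 4 of the 6 tasks completed
            AND the highlighting task completed."""
            return tasks_completed >= 4 and highlighting_done

        def pass_cba_core_stage2(items_correct):
            """Stage 2 pass rule: at least 3 of 6 items correct;
            scores of 0-2 route the respondent to the paper branch."""
            return items_correct >= 3

        def module_stage1_score(answers_correct):
            """Module Stage 1 score: the count of correct answers out
            of nine items; used to select the Stage 2 testlet."""
            assert len(answers_correct) == 9
            return sum(1 for a in answers_correct if a)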

    Paper-based instruments scoring procedures
    Persons charged with scoring received intensive training on scoring responses to the paper-based items using the PIAAC scoring manual. To help maintain scoring accuracy and comparability between countries, the PIAAC survey used an electronic bulletin board, where countries could post their scoring questions and receive scoring decisions from the domain experts. This information could be seen by all participating countries, and they could then adjust their scoring. To further ensure quality, monitoring of the scoring was done in two ways.

    First, a certain proportion of booklets had to be re-scored. A minimum of 600 sets of Core Booklet/Exercise Booklet 1 or Core Booklet/Exercise Booklet 2 had to be double scored within each country. The first score was considered the main score; the second was considered the reliability score. In Canada, 1,000 sets of English and 1,000 sets of French Core Booklet/Exercise Booklet 1 or Core Booklet/Exercise Booklet 2 were double scored. This accounted for about 43% of the total number of booklets scored. The structure of the scoring design involved rescoring a large portion of booklets at the beginning and middle of the scoring process to identify and rectify as many scoring problems as possible. The goal in PIAAC scoring was to reach a within-country inter-rater reliability of 0.95 (95% agreement) across all items, with at least 85% agreement for each item. In fact, most of the within-country scoring reliabilities were above 95%. Where errors occurred, booklets were reviewed, and problem questions associated with a systematic scoring error by a particular scorer were rescored.
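
    The agreement checks described above amount to computing exact-agreement rates overall and per item, as in the following sketch (the function name and data layout are hypothetical).

        def agreement_rates(main_scores, reliability_scores):
            """Exact-agreement rates between the main and reliability
            scorers. Both arguments map an item id to the list of
            scores over the double-scored booklets."""
            per_item, matches, total = {}, 0, 0
            for item in main_scores:
                pairs = list(zip(main_scores[item],
                                 reliability_scores[item]))
                same = sum(1 for a, b in pairs if a == b)
                per_item[item] = same / len(pairs)
                matches += same
                total += len(pairs)
            return matches / total, per_item  # overall and per-item rates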

    Second, the Consortium developed a cross-country reliability study where a set of anchor booklets were used to check the consistency of scorers across countries and to ensure they were applying the same criteria when scoring the items. The anchor booklets consisted of a set of 180 “completed” English booklets that were scored and rescored by every country.

    Once Canada met the requirements of the two reliability studies (Canada had a within-country agreement above 97% across items), the remaining Core, Exercise 1 and Exercise 2 booklets were single scored.

    The section below explains the scoring procedures within the paper branch of the assessment, focusing on the paper core booklet (PPC), the literacy booklet (PP1), the numeracy booklet (PP2) and the reading components booklet (PRC):

    • PPC core (Basic literacy and numeracy skills): the paper core booklet in the paper branch was designed to ensure that respondents had the basic literacy and numeracy skills necessary to proceed to the main paper-based assessment. The paper core contained eight items with a passing score of at least 4 (so scores 4, 5, 6, 7, and 8 were passing scores).
    • In the literacy booklet (PP1) and the numeracy booklet (PP2), items were scored within Statistics Canada and each assigned a score of 1, 7 or 0. In general, a score of “1” was assigned for a correct response, a score of “7” was assigned for an incorrect response, and a score of “0” was assigned if no response was provided.
    • The Exercise Booklet RC (Reading Components) was not scored within Canada; instead, a procedure known as response capture was required. For each part of the reading components assessment, the actual responses given by the respondent were captured on appropriate scoring sheets. During data processing at the International Consortium, a response key was applied that assigned consistently coded scores for all reading component items. The following scheme was used: 0 = question refused / not done, 1 = correct response, 7 = incorrect response and 8 = any other response.
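
    A minimal sketch of how such a response key might be applied during processing follows; the key, the item identifiers and the exact split between codes 7 and 8 are invented for illustration.

        # Hypothetical response key for two reading-component items.
        RESPONSE_KEY = {"rc01": "cat", "rc02": "ran"}

        def code_response(item, captured):
            """Assign the coding scheme quoted above to a captured
            response (None means the question was refused/not done)."""
            if captured is None:
                return 0                       # refused / not done
            if captured == RESPONSE_KEY[item]:
                return 1                       # correct response
            if isinstance(captured, str):
                return 7                       # incorrect response
            return 8                           # any other response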

    Survey response and weighting
    The Canadian PIAAC sample has a very complex design, involving stratification, multiple phases, multiple stages, systematic sampling, probability-proportional-to-size sampling, and several overlapping samples. It is also necessary to adjust for non-response at various levels. As a result, the estimation of population parameters and the corresponding standard errors depends on weighting coefficients, or weights. Two types of weighting coefficients were calculated: population weights, which are used to produce population estimates, and jackknife replicate weights, which are used to derive the corresponding standard errors.

    Population weights
    Since the PIAAC is a sample survey, each respondent was selected by means of a random process and represents a portion of the survey’s target population. Each respondent’s weight, i.e., the number of members of the target population that he or she represents, is calculated at the outset as the inverse of each person’s probability of being selected in the sample. A sampling unit’s overall probability of selection is the product of its probabilities of selection in all phases and stages of selection. The sequential selection of multiple samples in a province was taken into account by factoring in the probability that a unit selected in a given sample was not chosen in any previously selected samples. The initial weight was then adjusted to compensate for the various types of non-response in the survey.
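
    In code, the initial weight and the sequential-selection adjustment described above might look like the following sketch (function names hypothetical, probabilities illustrative).

        def design_weight(stage_probs):
            """Initial weight: the inverse of the product of the
            selection probabilities over all phases and stages."""
            p = 1.0
            for prob in stage_probs:
                p *= prob
            return 1.0 / p

        def sequential_selection_prob(p_current, earlier_probs):
            """Selection probability in the current sample, given the
            unit was not chosen in any previously selected sample."""
            p = p_current
            for q in earlier_probs:
                p *= 1 - q
            return p

        # e.g. PSU stage, household stage, person stage:
        print(design_weight([0.05, 0.02, 1 / 3]))  # -> 3000.0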

    There are four phases of weight adjustments for non-response: two apply to the weights before they are adjusted for the number of eligible members of the household, and two apply to the weights after that calculation.

    For each type of weight adjustment, persons (respondents and non-respondents) with similar response probabilities were divided into response homogeneity groups (RHGs) for adjustment. For the adjustment of literacy-related non-response cases, the RHGs are composed of province–subsample combinations, because the number of literacy-related non-response cases in the sample is so small. For every other phase, in each province–subsample combination, an algorithm similar to the chi-square automatic interaction detection (CHAID) algorithm (Kass 1980) was used to form the RHGs. The RHGs were constructed so that each one had at least 30 households and a weighted response rate (or known eligibility rate for adjusting for the household’s unknown eligibility at the household composition stage) of at least 40%.

    The households selected in the sample were assigned to one of the following five response groups: respondent, literacy-related non-respondent (at this stage, only language problems were considered), non-literacy-related non-respondent, ineligible and unknown eligibility. They were allocated to the groups on the basis of the result codes selected by the interviewer when he or she contacted the people living in the selected dwellings and made a roster of the usual residents.

    The first adjustment involves distributing part of the weight of dwellings of unknown eligibility among the dwellings that are ineligible (because they are vacant at the time of the interviewer’s visit, they are being renovated, etc.). The second adjustment involves redistributing the weights of the dwellings of non-literacy-related non-respondents and ineligible dwellings among the weights of respondent dwellings.

    After the roster of household members has been prepared and the respondent has been selected from the eligible members, a second code indicates whether the interview took place, and if not, why not. After the household composition stage, the members of a respondent household are in one of the following five response groups: respondent, literacy-related non-respondent, non-literacy-related non-respondent, ineligible member or disabled member (Note 5). The non-response adjustment stages that follow are applied to the weights, which reflect the number of eligible persons in the household.

    The third non-response adjustment involves distributing the weights of disabled persons and selected non-respondents across the weights of respondents. Lastly, after the roster of household members is made, the fourth adjustment distributes the weights of pre-roster literacy-related non-respondents across the same type of non-respondents identified as persons selected to complete the survey.
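
    The following sketch illustrates a single adjustment within one response homogeneity group: the weight of non-respondents is transferred to respondents so that the group's weighted total is preserved. This is a simplification of the four adjustment phases described above.

        def adjust_for_nonresponse(weights, responded):
            """Redistribute non-respondent weight to respondents
            within one response homogeneity group."""
            total = sum(weights)
            resp_total = sum(w for w, r in zip(weights, responded) if r)
            factor = total / resp_total
            return [w * factor if r else 0.0
                    for w, r in zip(weights, responded)]

        w = adjust_for_nonresponse([10, 10, 20, 10],
                                   [True, False, True, True])
        print(w, sum(w))  # the group total of 50 is preserved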

    Because of the overlap of the populations associated with the various samples, the weights had to be combined so that estimates could be produced using all units from all samples. The situation is similar to that of a survey with multiple frames, except that in this case, the samples are dependent. The weights were combined using the Hartley method (Hartley 1962) for multiple frames: The entire sample was allocated on the basis of the subpopulations targeted in the supplementary samples, and the weights were adjusted using coefficients proportional to the size of the various samples within the partition.
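
    A minimal sketch of a Hartley-style combination for a single unit follows, assuming a two-sample overlap and a mixing coefficient proportional to the sample sizes; the actual partition and coefficients used in the PIAAC were more elaborate.

        def combined_weight(weight, from_sample, in_overlap, theta):
            """Hartley (1962) dual-frame sketch: units in the overlap
            domain have their weight multiplied by theta (sample A) or
            1 - theta (sample B); other units keep their weight."""
            if not in_overlap:
                return weight
            return weight * (theta if from_sample == "A" else 1 - theta)

        n_a, n_b = 3_000, 1_200    # hypothetical sample sizes
        theta = n_a / (n_a + n_b)  # coefficient proportional to size
        print(combined_weight(250.0, "B", in_overlap=True, theta=theta))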

    Lastly, the weights for each province and territory were calibrated separately using the calibration variables shown in Table A.3.

    The calibration totals used are population estimates based on the 2006 Census. They are official totals for the province, age and sex dimensions and simulation-based estimates for the other dimensions. Some missing data were imputed so that the variables used for calibration would be complete for all respondents.
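
    Raking (iterative proportional fitting) is one common way to calibrate weights to known margins; the sketch below illustrates the idea, though the survey's exact calibration method may have differed. Function and variable names are hypothetical, and every sampled category is assumed to appear in the margins.

        def rake(weights, categories, margins, iters=50, tol=1e-8):
            """Adjust weights so weighted totals match the calibration
            margins for each categorical dimension in turn.
            categories: dimension -> list of each respondent's category
            margins:    dimension -> {category: population total}"""
            w = list(weights)
            for _ in range(iters):
                max_change = 0.0
                for dim, cats in categories.items():
                    totals = {}
                    for wi, c in zip(w, cats):
                        totals[c] = totals.get(c, 0.0) + wi
                    factors = {c: margins[dim][c] / t
                               for c, t in totals.items()}
                    w = [wi * factors[c] for wi, c in zip(w, cats)]
                    max_change = max(max_change,
                                     max(abs(f - 1) for f in factors.values()))
                if max_change < tol:
                    break
            return w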

    The sample size and response rate for each province and territory are presented in Table A.4.

    As required by the international consortium, two non-response bias analyses were carried out: a “basic” analysis, to assess the relationship between response status and available auxiliary variables correlated with the skill measures, and an “extended” analysis, to measure the effect of the various weight adjustments and assess the impact of non-response bias on key statistics (or correlated variables). These analyses showed that the various weight adjustments and the use of variables known to be correlated with the skill measures in the calibration stage minimized the effects that non-response had on the survey results.

    Jackknife weights
    A set of jackknife weights was generated to estimate the variance of the estimates produced with the survey data. The jackknife method with one unit removed (JK1) was selected because of its ease of implementation (Landry 2012). In the application of this method, each selected dwelling was assigned to a variance group. The sample PSUs were divided into 80 variance groups, or “replicates”, and each replicate’s jackknife weight was calculated by assigning a weight of 0 to the replicate’s dwellings and multiplying the weights of the other dwellings by 80/79.
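
    In code, the replicate weights and the JK1 variance estimator can be sketched as follows (function names hypothetical).

        def jk1_replicate_weights(weights, groups, g, n_groups=80):
            """Replicate g: zero out the weights of group g's dwellings
            and scale the remaining weights by 80/79."""
            scale = n_groups / (n_groups - 1)
            return [0.0 if grp == g else w * scale
                    for w, grp in zip(weights, groups)]

        def jk1_variance(full_estimate, replicate_estimates):
            """JK1 variance with G replicates:
            v = (G - 1) / G * sum_g (theta_g - theta)^2."""
            G = len(replicate_estimates)
            return (G - 1) / G * sum((t - full_estimate) ** 2
                                     for t in replicate_estimates)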

    The method used to allocate the variance groups differs depending on whether the stratum is take-all (strata A and C) or take-some (stratum B). For a take-all stratum, the dwelling serves as the PSU, and each dwelling was assigned to a replicate independently. Thus, the first dwelling was assigned to a replicate at random, the next dwelling to the next replicate, and so on for all the dwellings in the stratum. The set of 80 replicates was split between the take-all PSUs and the take-some PSUs on the basis of a measure of the size (size of the PIAAC’s target population) of the take-all or take-some PSUs. For example, if the take-all PSUs made up 50% of the PIAAC’s target population, then 40 (80 * 0.5) replicates were allocated to the take-all PSUs. The remaining 40 replicates were assigned to the take-some PSUs. This process was performed independently for each province/territory–subsample combination.

    Then the number of replicates to be allocated to each take-all PSU was determined so that the number of variance units assigned to each take-all PSU reflected the ratio of the PSU’s size to a particular limit (the boundary between the take-all PSUs and the take-some PSUs). If a take-all PSU’s size was about six times the limit, it received 6+1 replicates (i.e., six degrees of freedom). After the number of replicates was determined for each take-all stratum, the dwellings were sorted on the basis of the order in which they were sampled and the variance unit assigned to them. If the first take-all PSU in the sort received four replicates, its dwellings were assigned a variance unit of 1, 2, 3, 4, 1, 2, 3, 4, and so on. If the next PSU in the sort received two replicates, its dwellings were assigned a variance unit of 5, 6, 5, 6, and so on. The variance unit allocation for the take-all PSUs starts over when it reaches replicate n (in the example given above, replicate 40 would be followed by replicate 1).

    The take-some PSUs were sorted into the order in which they were sampled. Then they were numbered sequentially from n+1 to 80 (in the above example, n would be 40) to form the variance units.

    The presence of a second-phase sample among NHS respondents was also taken into account in the calculation of the jackknife weights by using the method described by Kim and Yu (2011).

    The jackknife weights were produced from the PIAAC’s entire initial sample, and the initial jackknife weights were calculated with the weights determined by the sampling plan. The entire weighting process was repeated for each of the 80 jackknife weights, including non-response weighting adjustments, combining of weights, and calibration.


    Notes

    1. Undercoverage estimates for the population aged 16 to 65 were not available at the time of writing.
    2. In the territories, a two-stage sample design was used. As a consequence, the PSUs consist of households rather than geographical areas.
    3. As noted previously, these supplementary units were added to meet the needs of federal, provincial and territorial government departments and ministries.
    4. The rate of refusal to share is the proportion of persons who responded to the survey but withheld consent for the transmission of their responses to organizations other than Statistics Canada and to the organizations responsible for processing the data collected. Those persons are treated as non-respondents.
    5. This category includes only persons whose disability, such as deafness or blindness, was considered incommensurate with participation in the survey.