Publications

Education Indicators in Canada: Handbook for the Pan-Canadian Education Indicators Program
May 2013

Section C:
Elementary-secondary education

C1 Early years and school readiness
C2 Elementary-secondary school: enrolments and educators
C4 Student achievement
Programme for International Student Assessment (PISA)
Pan-Canadian Assessment Program (PCAP)
C5 Information and communications technologies (ICT)

C1 Early years and school readiness

Tables C.1.1 and C.1.2

Indicator C1 assesses the early years and school readiness of 4- and 5-year-old children by examining their health status (including any health limitations), participation in activities, exposure to reading and reading materials (Table C.1.1), and their language scores/vocabulary skills (Table C.1.2).

Concepts and definitions

  • The child’s general health was classified as: excellent; very good; good; or fair or poor. The categories were read to the adult respondents who answered on behalf of their children in the National Longitudinal Survey of Children and Youth (NLSCY).

  • This indicator also considers certain health limitations affecting the child. One set of questions asked about the child’s day-to-day health and focused on his or her abilities relative to other children of the same age. The adult respondents were told that these same questions would be asked of everyone. This indicator considers the following: difficulty seeing; difficulty hearing; difficulty being understood when speaking; difficulty walking; and pain or discomfort. Pain or discomfort reflects the “no” responses to a question asking if the child is “usually free of pain or discomfort.” These questions are part of an index called the Health Utility Index.

    Before being asked about chronic conditions, the adult who was responding on behalf of the child was told that this referred to “conditions that have lasted or are expected to last six months or more and have been diagnosed by a health professional” and was instructed to mark all that apply. This indicator presents information for long-term allergies and long-term bronchitis, as well as asthma. The questions for asthma were asked separately, and the information presented reflects the percentage of children aged 4 or 5 who had ever been diagnosed with asthma, not just those who had had an asthma attack in the 12 months before the survey interview.

  • Weekly physical activities outside of school hours refers to weekly participation (ranging from most days to about once a week) in: sports that involved a coach or instructor (except dance, gymnastics or martial arts); lessons or instruction in organized physical activities such as dance, gymnastics or martial arts; lessons or instruction in music, art or other non-sport activities; and participation in any clubs, groups or community programs with leadership (for example, Beavers, Sparks or church groups). The adults who responded on behalf of these young children were asked to provide information on the children’s physical activities for the 12-month period leading up to the survey interview.

  • Daily reading activities outside of school hours reflects some of the information obtained from questions about literacy, including how often a parent read aloud to the child or listened to the child read (or try to read). Respondents were also asked how often the child looked at books, magazines, comics, etc. on his/her own, or tried to read on his/her own (at home).

  • The Peabody Picture Vocabulary Test-Revised (PPVT-R) measures children’s receptive vocabulary, which is the vocabulary that is understood by the child when he or she hears the words spoken. It is a “normed” test; that is, a child’s performance is scored relative to that of an overall population of children at the same age level as the child. A wide range of scores represents an average level of ability, taking the age of the child into consideration. Scores below the lower threshold of this average range reflect a delayed receptive vocabulary, and scores above the higher threshold demonstrate an advanced receptive vocabulary.

    The PPVT-R is scaled to an average of 100. The range of average receptive vocabulary measured by the PPVT-R covers scores from 85 to 115. A score below 85 is considered to indicate delayed receptive vocabulary; a score above 115, advanced. Scoring is adjusted to reflect the different abilities of 4- and 5-year-olds. English and French scores are assessed separately and are not directly comparable.
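
    As a minimal illustration of how these thresholds translate into the categories described above, the following Python sketch classifies a standardized PPVT-R score (the function name and example score are illustrative, not part of the NLSCY):

      def classify_ppvt_r(score):
          """Classify a standardized PPVT-R score using the thresholds described above."""
          if score < 85:
              return "delayed receptive vocabulary"
          if score > 115:
              return "advanced receptive vocabulary"
          return "average receptive vocabulary"

      # Hypothetical example; English and French scores are scaled separately
      # and should not be compared directly.
      print(classify_ppvt_r(101))  # average receptive vocabulary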

Methodology

  • The National Longitudinal Survey of Children and Youth (NLSCY) is a long-term study of Canadian children that follows their development and well-being from birth to early adulthood. The survey was designed to collect information about factors influencing a child’s social, emotional and behavioural development and to monitor the impact of these factors on the child’s development over time.

  • This indicator is based on nationally representative data for 4- and 5-year-olds from cycle 8 of the NLSCY, which was conducted in 2008/2009.

  • The information presented was obtained from the NLSCY child component; specifically, the questions on child health, activities (sports, lessons, clubs, etc.) and literacy. Responses were provided by the person most knowledgeable (PMK) about the child, which is usually the mother.

Limitations

  • The NLSCY relies on the perceptions of the adult most familiar with the child to report on the child’s general health and development, and such reports may not always be entirely objective or accurate.

  • The following are possible sources of non-sampling errors in the NLSCY: response errors due to sensitive questions, poor memory, translated questionnaires, approximate answers, and conditioning bias; non-response errors; and coverage errors.

Data source

National Longitudinal Survey of Children and Youth (NLSCY), Statistics Canada. For more information, consult “Definitions, data sources and methods”, Statistics Canada Web site, survey 4450.

C2 Elementary-secondary school: enrolments and educators

Tables C.2.1 through C.2.7

Information on enrolment in public schools at the elementary-secondary level (Table C.2.1), as well as on the number of full-time educators (Table C.2.2), is captured in Indicator C2. A student–educator ratio, which measures the total human resources available to students, is also presented (Table C.2.3), along with some characteristics of the educator work force (Table C.2.4, Table C.2.5, Table C.2.6 and Table C.2.7).

Concepts and definitions

  • Public schools are publicly funded elementary and secondary schools that are operated by school boards or the province or territory. They include all regular publicly funded schools (graded and ungraded), provincial reformatory or custodial schools and others that are recognized and funded by the province or territory. This indicator includes data for public elementary and secondary schools only and does not include private schools, federal schools and schools for the visually and hearing impaired.

  • Full-time equivalent (FTE) enrolments represent the number of full-time students enrolled as of September 30 (or as close as possible thereafter) of the school year, plus the sum of part-time enrolments according to the portion of time spent in the classroom and for which students are funded (determined by the province or territory) (Table C.2.1).

  • Educators refer to personnel involved in direct student instruction on a group or one-on-one basis. They include: classroom teachers; special education teachers; specialists (music, physical education); and other teachers who work with students as a whole class in a classroom, in small groups in a resource room, or one-on-one inside or outside a regular classroom, including substitute/supply teachers. Chairpersons of departments who spend the majority of their time teaching and personnel temporarily not at work (e.g., for reasons of illness or injury, maternity or parental leave, holiday or vacation) should also be reported in this category. It excludes teacher’s aides or student teachers, as well as other personnel who are not paid for their employment.

    School administrators include all personnel who support the administration and management of the school, such as principals, vice-principals and other management staff with similar responsibilities, but only if they do not spend the majority of their time teaching. They do not include those in higher-level management; receptionists, secretaries, clerks and other staff who support the administrative activities of the school; or those who are reported under “other than educators”.

    Pedagogical support staff includes professional non-teaching personnel who provide services to students to support their instruction program, such as educational assistants, paid teacher’s aides, guidance counselors and librarians. It does not include those in health and social support, who should be reported under “other than educators”.

  • Full-time equivalent (FTE) educators is defined as the number of full-time educators as of September 30 (or as close as possible thereafter) of the school year, plus the sum of part-time educators according to their percentage of a full-time employment allocation (determined by the province or territory) (Table C.2.2). For example, if a normal full-time work allocation is 10 months per year, an educator who works for 6 months of the year would be counted as 6/10 (0.6) of a full-time equivalent, or an employee who works part-time for 10 months at 60% of full-time would be 0.6 of an FTE.
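
    The arithmetic described above can be sketched in Python as follows; the function and figures are illustrative only, and the full-time allocation rules are determined by each province or territory:

      def fte_educator(months_worked, full_time_months=10, share_of_full_time=1.0):
          """Approximate an educator's FTE as the fraction of a full-time work allocation."""
          return (months_worked / full_time_months) * share_of_full_time

      # Examples from the text: 6 months full-time, or 10 months at 60% of full-time,
      # each count as 0.6 of a full-time equivalent.
      print(fte_educator(6))                           # 0.6
      print(fte_educator(10, share_of_full_time=0.6))  # 0.6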

  • Full-time educators (headcount) (Table C.2.4) refers to the number of educators as of September 30 (or as close as possible thereafter) of the school year who are responsible for providing services to the students.

  • The labour force comprises the portion of the civilian, non-institutional population 15 years of age and over who form the pool of available workers in Canada. To be considered a member of the labour force, an individual must be working either full- or part-time or be unemployed but actively looking for work. The age distribution of the full-time and part-time employed labour force is presented in Table C.2.5.

Methodology

  • The Elementary-Secondary Education Survey (ESES, formerly called the Elementary-Secondary Education Statistics Project) is a national survey that enables Statistics Canada to provide information on enrolments (including minority and second language programs), graduates, educators and finance of Canadian elementary-secondary public educational institutions. Every year, Statistics Canada conducts a survey of the departments/ministries of education in all 10 provinces and 3 territories, collecting data on enrolments, graduates, educators and finances of public elementary-secondary schools.

  • The full-time equivalent (FTE) enrolment rate represents the time fraction spent in the classroom and for which students are funded. If this fraction is not known, an estimate should be used. For example, for junior kindergarten and kindergarten students taking a half-time program and where a half-time program is being funded, the FTE enrolment would be the headcount enrolment divided by two (0.5). If a student is only taking one-quarter of the usual course load and is funded on that basis, the FTE enrolment would be the headcount enrolment divided by 4, which is 0.25.

  • The student–educator ratio (Table C.2.3) is calculated using full-time equivalent enrolment in Grades 1 to 12 (OAC in Ontario) and ungraded programs, plus pre-elementary full-time equivalent enrolment, divided by the full-time equivalent number of educators, both teaching and non-teaching.
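
    A minimal Python sketch of this calculation is shown below; the enrolment and educator figures are hypothetical, and the funded fractions applied to part-time students are determined by each jurisdiction:

      def fte_enrolment(headcount_by_funded_fraction):
          """Sum headcounts weighted by the funded fraction of classroom time."""
          return sum(fraction * count for fraction, count in headcount_by_funded_fraction.items())

      # Hypothetical jurisdiction: 1,000 full-time students, 200 half-time
      # kindergarten students and 40 students funded for a quarter of the course load.
      enrolment = fte_enrolment({1.0: 1000, 0.5: 200, 0.25: 40})  # 1,110 FTE

      educators = 62.5  # hypothetical FTE educators, teaching and non-teaching
      print(round(enrolment / educators, 1))  # student-educator ratio of 17.8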

  • The Labour Force Survey data used to compare the age distribution of the overall full-time and part-time employed labour force with that of the full-time and part-time educator work force are based on a monthly average from September to August (Table C.2.5).

Limitations

  • Due to the nature of the Elementary-Secondary Education Survey (ESES) data collection, these data are updated on an ongoing basis and are therefore subject to further revisions.

  • Care should be taken with cross-jurisdictional comparisons. The composition of the educator category (a mix of teachers, administrators and pedagogical support) differs in each jurisdiction.

  • The student–educator ratio should not be taken as a measure of classroom size, nor should it be interpreted as a student–teacher ratio. Average classroom size depends not only on the number of teachers and students, but also on the hours of instructional time per week, the per-teacher hours worked, and the division of time between classroom instruction and other activities. The number of educators in this indicator includes both teaching and non-teaching educators (such as school principals, librarians, guidance counselors, etc.).

Data sources

  • Elementary-Secondary Education Survey, Statistics Canada. For more information, consult “Definitions, data sources and methods”, Statistics Canada Web site, survey 5102.

  • Labour Force Survey, Statistics Canada. For more information, consult “Definitions, data sources and methods”, Statistics Canada Web site, survey 3701.

C4 Student achievement

Programme for International Student Assessment (PISA)

Tables C.4.2, C.4.4, C.4.5, C.4.10 and C.4.17

Indicator C4 reports on student achievement in three key areas—reading, mathematics, and science—and looks at changes in results over time. Performance was examined using results from the Programme for International Student Assessment (PISA), an international program of the Organisation for Economic Co-operation and Development (OECD).

This sub-indicator presents detailed information on the performance of 15-year-old students in Canada in the major PISA domain of reading, assessed in 2009, by looking at average scores and the distribution of students by proficiency levels on the combined reading scale (Table C.4.2) and at average scores on the reading subscales (Table C.4.17). It also compares performance over time in reading (Table C.4.4), science (Table C.4.5) and mathematics (Table C.4.10).

Concepts and definitions

  • The Programme for International Student Assessment (PISA) is a collaborative effort of member countries of the OECD along with partner countries to regularly assess youth outcomes, using common international tests, for three domains: reading, mathematics, and science. The goal of PISA is to measure students’ skills in reading, mathematics, and science not only in terms of mastery of the school curriculum, but also in terms of the knowledge and skills needed for full participation in society.

    Reading: An individual’s capacity to understand, reflect on, and engage with written texts, in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society.

    Mathematics: An individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgments and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen.

    Science: An individual’s capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity.

Methodology

  • Internationally, around 470,000 students from 65 countries and economies participated in PISA 2009. PISA’s target population comprises 15-year-olds who are attending school. In Canada, the student sample is drawn from Canada’s 10 provinces; the territories have not participated in PISA to date. The PISA assessments are administered in schools, during regular school hours, in the spring. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled. In 2009, the PISA assessment was a two-hour paper-and-pencil test (Note 1). It was administered in English and in French, according to the respective school system.

  • While all three of the PISA domains are tested in each assessment, only one forms the major domain in each cycle, meaning it includes more assessment items than the others. In each cycle, two-thirds of testing time is devoted to the major domain. Reading was the major domain in 2000, mathematics in 2003, and science in 2006. With the repetition of the cycle, the major focus of the 2009 assessment was again on reading.

  • Results for the major domains are available in a combined domain scale (which represents students’ overall performance across all the questions in the assessment for that domain), as well as on the sub-domains that make up each overall scale. As fewer items are tested as part of the minor domains, only combined or overall results are available from PISA.

  • In 2009, the reading sub-scales refer to three aspects of reading—accessing and retrieving information, integrating and interpreting, and reflecting and evaluating—and two text formats—continuous and non-continuous.

    • Reading aspect sub-scales:
      Accessing and retrieving: Involves going to the information space provided and navigating in that space to locate and retrieve one or more distinct pieces of information.
      Integrating and interpreting: Involves processing what is read to make internal sense of a text.
      Reflecting and evaluating: Involves drawing upon knowledge, ideas or attitudes beyond the text in order to relate the information provided within the text to one’s own conceptual and experiential frames of reference.

    • Reading text format sub-scales:
      Continuous texts: Consist of documents that are formed by sentences organized into paragraphs. These include newspaper articles, essays, short stories, reviews or letters.
      Non-continuous texts: Consist of documents that combine several text elements such as lists, tables, graphs, diagrams, advertisements, schedules, catalogues, indexes or forms.

  • In PISA, student performance is expressed as a number of points on a scale constructed so that the average score for the major domains for students in all participating countries was 500 and its standard deviation was 100. This means that about two-thirds of the students scored between 400 and 600. This average was established in the year in which the domain became the main focus of the assessment. Due to changes in performance over time, the OECD average scores in PISA 2009 differ slightly from 500.
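
    A minimal Python sketch of this kind of standardization, assuming a simple linear rescaling of scores to a mean of 500 and a standard deviation of 100 (the actual PISA scaling uses item response modelling, and the raw scores shown are hypothetical):

      import statistics

      def to_pisa_like_scale(raw_scores, target_mean=500, target_sd=100):
          """Linearly rescale scores so that their mean is 500 and standard deviation is 100."""
          mean = statistics.mean(raw_scores)
          sd = statistics.pstdev(raw_scores)
          return [target_mean + target_sd * (score - mean) / sd for score in raw_scores]

      # Hypothetical raw proficiency estimates for five students.
      print([round(s) for s in to_pisa_like_scale([0.4, 0.9, 1.1, 1.6, 2.0])])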

  • PISA results can also be presented as the distribution of student performance across levels of proficiency. In PISA 2009, seven levels were used in reporting reading achievement, identifying the most difficult test items a student could answer; a student at one level could therefore be assumed to have the ability to answer questions at all lower levels. To help in interpretation, these levels were linked to specific score ranges on the original scale (a minimal sketch of this score-to-level mapping follows the list):

    • Below Level 1b (scores lower than or equal to 262 points)
    • Level 1b (scores higher than 262 but lower than or equal to 335 points)
    • Level 1a (scores higher than 335 but lower than or equal to 407 points)
    • Level 2 (scores higher than 407 but lower than or equal to 480 points)
    • Level 3 (scores higher than 480 but lower than or equal to 553 points)
    • Level 4 (scores higher than 553 but lower than or equal to 626 points)
    • Level 5 (scores higher than 626 but lower than or equal to 698 points)
    • Level 6 (scores higher than 698 points)

    According to the OECD, Level 2 can be considered a baseline level of proficiency, at which students begin to demonstrate the reading competencies that will enable them to participate effectively and productively in life. Students performing below Level 2 can still accomplish some reading tasks successfully, but they lack some of the fundamental skills needed to enter the workforce or pursue postsecondary education.
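
    As a minimal Python sketch of the score-to-level mapping above (the cut-points are taken directly from the listed ranges; the function name and example score are illustrative only):

      # Upper bound of each PISA 2009 reading proficiency level, from the list above.
      READING_LEVEL_BOUNDS = [
          (262, "Below Level 1b"),
          (335, "Level 1b"),
          (407, "Level 1a"),
          (480, "Level 2"),
          (553, "Level 3"),
          (626, "Level 4"),
          (698, "Level 5"),
      ]

      def reading_proficiency_level(score):
          """Return the PISA 2009 reading proficiency level for a combined reading scale score."""
          for upper_bound, level in READING_LEVEL_BOUNDS:
              if score <= upper_bound:
                  return level
          return "Level 6"  # scores higher than 698 points

      print(reading_proficiency_level(520))  # Level 3
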
  • When comparing student performance among countries, provinces, or population subgroups, the PISA tables identify statistically significant differences. Statistical significance is determined by mathematical formulas and considers issues such as sampling and measurement errors. Sampling errors relate to the fact that performance was computed from the scores of random samples of students from each country and not from the entire population of students in each country. Consequently, it cannot be said with certainty that a sample average has the same value as a population average that would have been obtained had all 15-year-old students been assessed. Additionally, a degree of error is associated with the scores describing student skills as these scores are estimated based on student responses to test items.

  • Standard errors and confidence intervals have been used as the basis for performing comparative statistical tests. The standard error expresses the degree of uncertainty around the survey results associated with sampling and measurement errors. The standard error is used to construct a confidence interval, which indicates the probability that a given error range (given by the standard error) around the sample statistic includes the population value. The PISA survey results are statistically different if the confidence intervals do not overlap. Furthermore, an additional t-test was conducted to confirm statistical difference.
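
    A minimal Python sketch of this comparison rule, assuming 95% confidence intervals of roughly plus or minus 1.96 standard errors around each average score (the averages and standard errors shown are hypothetical):

      def confidence_interval(average, standard_error, z=1.96):
          """Approximate 95% confidence interval around an estimated average score."""
          return (average - z * standard_error, average + z * standard_error)

      def intervals_overlap(ci_a, ci_b):
          """True if the two confidence intervals overlap (difference not significant)."""
          return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

      # Hypothetical averages and standard errors for two jurisdictions.
      ci_a = confidence_interval(524, 1.5)
      ci_b = confidence_interval(506, 2.2)
      print(intervals_overlap(ci_a, ci_b))  # False: the difference would be reported as significant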

  • It is possible to compare changes in student performance over time in each PISA domain because a number of common test questions are used in each survey. However, the limited number of such common test items used increases the chances of measurement error. To account for this, an extra error factor, known as the linking error, is introduced into the standard error. The standard errors with linking errors should be used whenever comparing performance across assessments (but not when comparing results across countries/economies or subpopulation within a particular assessment).
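
    One way to sketch this in Python, assuming the linking error is combined in quadrature with the sampling standard errors of the two assessments being compared (an assumed convention, not necessarily the exact PISA procedure; the values shown are hypothetical):

      import math

      def standard_error_of_change(se_earlier, se_later, linking_error):
          """Standard error for a change in average scores between two assessments,
          with the linking error added in quadrature (an assumed convention)."""
          return math.sqrt(se_earlier**2 + se_later**2 + linking_error**2)

      # Hypothetical sampling standard errors and linking error.
      print(round(standard_error_of_change(1.6, 1.5, 2.6), 2))  # approximately 3.4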

  • This indicator compares the performance of students in the 2009 PISA assessment with the first major assessment in each domain: reading in 2000 (Table C.4.4), mathematics in 2003 (Table C.4.10), and science in 2006 (Table C.4.5). It is not possible to include in this comparison the results from any minor assessments that took place before the first major (full) assessment of a domain. This is because the framework for the domain is not fully developed until the cycle in which it is assessed as a major domain. Consequently, the results measured as a minor domain beforehand are not comparable.

Limitations

  • Looking at the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. Obviously, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.

  • Since data are compared for only two points in time, it is not possible to assess to what extent the observed differences are indicative of longer term trends.

  • Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations in which a difference that is perceived to have educational significance may not, in fact, have statistical significance.

Data sources

  • Human Resources and Skills Development Canada, Statistics Canada, and Council of Ministers of Education, Canada. 2010. Measuring Up: Canadian Results of the OECD PISA Study: The Performance of Canada’s Youth in Reading, Mathematics and Science. 2009 First Results for Canadians Aged 15. Statistics Canada. Catalogue no. 81-590-XIE-4.

  • Organisation for Economic Co-operation and Development, 2010. PISA 2009 Results: What Students Know and Can Do – Student Performance in Reading, Mathematics and Science (Volume I).

  • Programme for International Student Assessment (PISA), Statistics Canada. For more information, consult “Definitions, data sources and methods,” Statistics Canada web site, survey 5060.

Pan-Canadian Assessment Program (PCAP)

Tables C.4.13, C.4.14, C.4.15, C.4.16, C.4.18, C.4.19, and C.4.20

Indicator C4 reports on student achievement in three core learning areas (also referred to as domains): mathematics, science, and reading. It also examines the process of mathematics problem-solving. This sub-indicator examines performance by presenting results from the Pan-Canadian Assessment Program (PCAP), an initiative of the provinces and territories conducted through the Council of Ministers of Education, Canada (CMEC).

Detailed information on the performance of Grade 8 students in Canada (Note 2) in the major PCAP domain of mathematics, assessed in 2010, is presented. Mean scores and the distribution of students by performance levels for the overall mathematics domain, as well as mean scores for the mathematics sub-domains and problem-solving process, are also outlined (Tables C.4.18 and C.4.19). The performance of students in science and reading in 2010 (Table C.4.13) is also shown, in addition to performance over time for reading (Table C.4.20). Results are presented by the language of the school system.

Concepts and definitions

  • The Pan-Canadian Assessment Program (PCAP) is a cyclical program of assessments that measures the achievement of Grade 8 students in Canada. It is conducted by the Council of Ministers of Education, Canada (CMEC). PCAP provides a detailed look at each of three core learning areas, or domains, in the years when it is a major focus of the assessment (reading in 2007, mathematics in 2010, and science in 2013), along with a minor focus on the other two domains. PCAP, which was first conducted in 2007, has replaced CMEC’s School Achievement Indicators Program (SAIP). PCAP was designed to determine whether students across Canada reach similar levels of performance in these core learning areas at about the same age, and to complement existing assessments in each jurisdiction.

  • Mathematics: Mathematics is assessed as a conceptual tool that students can use to increase their capacity to calculate, describe, and solve problems.

  • The PCAP mathematics domain was divided into four sub-domains, which reflect traditional groupings of mathematics skills and knowledge: numbers and operations; geometry and measurement; patterns and relationships; and data management and probability. The mathematics assessment also allowed for the demonstration of five processes associated with how students acquire and use mathematics knowledge: problem-solving; communication; representation; reasoning; and connections.

  • Science: The assessment of science is based on the concept of “scientific literacy” as the general goal of science curricula across Canada. Scientific literacy refers to how students use competencies to apply science-related attitudes, skills and knowledge, as well as to how they understand the nature of science, all of which enables them to conduct inquiries, solve problems, and make evidence-based decisions about science-related issues.

  • The PCAP concept of scientific literacy assumes that students have knowledge of the life sciences, physical sciences, and earth and space sciences, as well as an understanding of the nature of science as a human endeavour.

  • Reading: Reading is considered a dynamic, interactive process during which the reader constructs meaning from texts. The process of reading involves the interaction of reader, text, purpose and context, before, during, and after reading.

  • While all three of the PCAP domains are tested in each assessment, each cycle places a major focus on only one domain, meaning it will include more assessment items than the other two. PCAP has been, and will be, administered to students as follows:

    Three Pan-Canadian Assessment Program (PCAP) domains tested

    Domain focus   2007          2010          2013
    Major          Reading       Mathematics   Science
    Minor          Mathematics   Science       Reading
    Minor          Science       Reading       Mathematics

Methodology

  • Approximately 32,000 Grade 8 students from Canada’s 10 provinces and Yukon participated in PCAP 2010. The Northwest Territories and Nunavut have not yet participated in the PCAP assessments.
  • When PCAP began in 2007, its target population was all 13-year-old students. In 2010, the target was modified to capture all Grade 8 students, regardless of age. This simplified the selection of students and reduced disruptions to the schools and in the classrooms. In 2007, 13-year-old students accounted for most of the PCAP sample, although these students may not have all been in Grade 8 at the time—some could have been in either Grade 7 or Grade 9.
  • The following process was used to select PCAP participants:

    • The random selection of schools from each jurisdiction, drawn from a complete list of publicly funded schools provided by the jurisdiction.

    • The random selection of Grade 8 classes, drawn from a list of all eligible Grade 8 classes within the school.

    • The selection of all students enrolled in the selected Grade 8 class.

    • When intact Grade 8 classes could not be selected, a random selection of Grade 8 students.

  • The PCAP participation rate was over 85% of sampled students. The school determined whether or not a student could be exempted from participating in the PCAP assessment. Students were excused from the assessments if they had, for example: functional disabilities; intellectual disabilities; socio-emotional conditions; or limited language proficiency in the target language of the assessment.
  • The PCAP structure was designed to align with that used for the Programme for International Student Assessment (PISA), which is conducted by the Organisation for Economic Co-operation and Development (OECD). A significant portion of the Grade 8 student cohort from PCAP 2010 will likely participate in the PISA 2012 assessment, when they will be around 15 years old. Since mathematics will be the major domain in PISA 2012, it will be possible to compare the performance patterns of the two assessments.
  • PCAP 2010 tested approximately 24,000 students in English, and about 8,000 students in French. The results for students in the French school system were reported as French language, and the results for students in the English school system were reported as English language. The overall results for a jurisdiction represent those for students in both systems. Results for French immersion students who wrote in French were calculated as part of the English results since these students are considered part of the English-language cohort. (Caution is advised when comparing achievement results based on assessment instruments that were prepared in two different languages. Despite extensive efforts to produce an equivalent test in both languages, each language has unique features that may make direct comparisons difficult.)
  • Results for the major domains are available in an overall domain scale (which represents students’ overall performance across all the questions in the assessment for that domain), as well as on the sub-domains that make up each overall scale. As fewer items are tested as part of the minor domains, only combined or overall results are available from PCAP.
  • When scores obtained from different populations and on different versions of a test are compared over time, a common way of reporting achievement scores that will allow for direct comparisons is needed. One such commonly used method numerically converts the raw scores to “standard scale scores”. For PCAP 2010, raw scores were converted to a scale on which the average for the Canadian population was set at 500, with a standard deviation of 100. From this conversion, the scores of two-thirds of all participating students fell within the range of 400 to 600 points, which represents a “statistically normal distribution” of scores.
  • Results for a major domain in PCAP can also be presented as the percentage of students who had different performance levels. Performance levels represent how well students were doing based on the cognitive demand and degree of difficulty of the test items. Cognitive demand is defined by the level of reasoning required by the student to correctly answer an item, from high demand to low demand; degree of difficulty is defined by a statistical determination of the collective performance of the students on the assessment. There were four levels of performance in the mathematics component of PCAP 2010:

    • Level 4 (scores higher than 668)

    • Level 3 (scores between 514 and 668)

    • Level 2 (scores between 358 and 513)

    • Level 1 (scores below 358)

  • Level 2 represents the expected level of performance for Grade 8 students, and Level 1, a level below that expected of Grade 8 students. Levels 3 and 4 represent higher levels of performance. These definitions of the expected levels of performance were established by a panel of assessment and education experts from across Canada, and were confirmed as reasonable given the actual student responses from the PCAP assessments.
  • When comparing student performance among provinces and territories, or across population sub-groups, statistically significant differences must be considered. Standard errors and confidence intervals were used as the basis for performing comparative statistical tests. The standard error expresses the degree of uncertainty around the survey results associated with sampling and measurement errors. The standard error is used to construct a confidence interval. The confidence interval represents the range within which the score for the population is likely to fall, with 95% probability. It is calculated as a range of plus or minus about two standard errors around the estimated average score. The differences between estimated average scores are statistically significant if the confidence intervals do not overlap.
  • This indicator compares the performance of students in reading on the 2010 PCAP assessment with the first major assessment of this domain in PCAP 2007. It is not possible to compare the results from any minor assessments that took place before the first major (full) assessment of a domain because the framework for the domain is not fully developed until the cycle in which it is assessed as a major domain. Consequently, the results measured as a minor domain beforehand are not comparable.
  • The 2007 results for reading may be compared with those from the 2010 assessment, but they should not be compared directly with the original 2007 results. The 2007 scores used for the comparison have been rescaled onto the 2010 metric using common items (also referred to as “anchor items”) that link the two (2007 and 2010) reading assessments. Also, the 2007 scores are based on only those Grade 8 students who completed the test, and not on the complete 2007 population of 13-year-olds. In 2010, there may have been a range of ages for students in Grade 8.
  • In addition to the assessment of students’ knowledge and skills in mathematics, reading, and science, PCAP also administers accompanying contextual questionnaires to students, teachers, and schools.

Limitations

  • An examination of the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. However, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.
  • Since data are compared for only two points in time, it is not possible to assess to what extent the observed differences are indicative of longer term trends.
  • Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations in which a difference that is perceived to have educational significance may not, in fact, have statistical significance.

Data source

  • Pan-Canadian Assessment Program, PCAP-2010: Report on the Pan-Canadian Assessment of Mathematics, Science, and Reading, Council of Ministers of Education, Canada (CMEC), 2011.

C5 Information and communications technologies (ICT)

Tables C.5.1, C.5.6, C.5.7 and C.5.8

Indicator C5 reports on computer and software availability in schools (Tables C.5.1 and C.5.6), computer use among students at school (Table C.5.7), and student self-confidence in performing computer tasks (Table C.5.8). Information is presented for Canada, the provinces, and selected member countries of the Organisation for Economic Co-operation and Development (OECD) using results from the OECD’s 2009 Programme for International Student Assessment (PISA).

Concepts and definitions

  • Information for this indicator is obtained through the 2009 Programme for International Student Assessment (PISA), which evaluates the skills and knowledge of 15-year-old students that are considered to be essential for full participation in modern economies, and sheds light on a range of factors that contribute to successful students, schools, and education systems. Information on computer and software availability in schools is obtained through the PISA school context questionnaire in which principals provided information on the availability of computers at their schools and whether they felt a lack of computers or software hindered instruction. Information on computer use among students at school and student self-assessment of their confidence in performing computer tasks was obtained from the optional ICT familiarity component of the PISA student context questionnaire.
  • The number of computers per student is often used as a proxy to indicate the technology available to students. It refers to the total number of computers available for educational purposes to students in schools in the national modal grade for 15-year-olds (Grade 10 or equivalent in Canada) divided by the total number of students in the modal grade.
  • A shortage or inadequacy of computers or software for instruction was explored in the PISA 2009 school context questionnaire as another way of looking at student access to ICT resources. In this questionnaire, principals reported on their perceptions of whether their school’s capacity to provide instruction was hindered by a shortage of computers or computer software for instruction. Schools are considered to have a shortage or inadequacy of computers or software for instruction when school principals reported that this situation was hindering instruction to “some extent” or “a lot”. The principals’ subjective perceptions of shortages should be interpreted with some caution, because cultural factors and expectations, along with pedagogical practices, may influence the degree to which principals consider shortages a problem. Perceptions of inadequacy may be related to higher expectations among principals for ICT-based instruction rather than fewer computers available for learning.
  • The Index of self-confidence in information and communications technologies high-level tasks was constructed to summarize students’ self-confidence in performing certain computer tasks. This index reflects a composite score based on students’ indications of the extent to which they could perform the following five types of technical tasks: edit digital photographs or other graphic images; create a database; use a spreadsheet to plot a graph; create a presentation; and create a multimedia presentation. For each task there were four possible responses: I can do this very well by myself; I can do this with help from someone; I know what this means but I cannot do it; and I don't know what this means. This index was constructed so that the average OECD student would have an index value of zero, and about two-thirds of the OECD student population would be between -1 and 1. For this index, a negative score indicates a level of confidence that is lower than the average calculated for students across OECD countries. Students' subjective judgments of task competency may vary across jurisdictions. Each index is self-contained; that is, a jurisdiction’s score on one index cannot be directly compared with its score on another.
  • The Index of computer use at school was constructed to summarize how frequently students perform different types of ICT activities at school. This index reflects a composite score based on students’ responses when asked how frequently they perform the following nine activities: chat on-line; use e-mail; browse the Internet for schoolwork; download, upload or browse material from the school Web site; post work on the school’s Web site; play simulations; practice and do drills (e.g., for mathematics or learning a foreign language); do individual homework; and do group work and communicate with other students. For each activity there were four possible responses: never or hardly ever; once or twice a month; once or twice a week; every day or almost every day. This index was constructed so that the average OECD student would have an index value of zero, and about two-thirds of the OECD student population would be between -1 and 1. Index points above zero indicate a frequency of use above the OECD average. Each index is self-contained; that is, a jurisdiction’s score on one index cannot be directly compared with its score on another.
  • The modal grade attended by 15-year-olds is the grade attended by most 15-year-olds in the participating country or economy. In Canada, most 15-year-olds attend Grade 10 (or equivalent).
  • Students’ socio-economic status is measured by the PISA Index of Economic, Social and Cultural Status (ESCS). It is important to emphasize that this indicator presents information organized according to the socio-economic status of the student, not of the school attended by the student.
  • The PISA Index of Economic, Social and Cultural Status (ESCS) provides a measure of the socio-economic status of the student. This index was constructed based on information provided by the representative sample of 15-year-old students who participated in the PISA student background questionnaire, in which information on students’ backgrounds was obtained from their answers to a 30-minute questionnaire that covered topics such as educational background, family and home situation, reading activities, and school characteristics. The PISA ESCS index was derived from the following variables:

    • the international socio-economic index of occupational status of the father or mother, whichever is higher;
    • the level of education of the father or mother, whichever is higher, converted into years of schooling; and
    • the index of home possessions, obtained by asking students whether they had a desk at which they studied at home, a room of their own, a quiet place to study, a computer to use for school work, educational software, a link to the Internet, their own calculator, classic literature, books of poetry, works of art (e.g., paintings), books to help them with their school work, a dictionary, a dishwasher, a DVD player, three other country-specific items, and the number of cellular phones, televisions, computers, cars and bathrooms at home.

    The rationale for choosing these variables is that socio-economic background is usually seen as being determined by occupational status, education, and wealth. As no direct measure of parental income or wealth was available from PISA, information on access to household items was used as a proxy, as students would have knowledge of these items within the home. These questions were selected to construct the indices based on theoretical considerations and previous research. Structural equation modeling was used to validate the indices.

    Greater values on the Index of Economic, Social and Cultural Status (ESCS) represent a more advantaged social background, while smaller values represent a less advantaged social background. A negative value indicates that the socio-economic status is below the OECD mean. The index is divided into quarters based on students’ values on the ESCS index. Therefore students in the bottom quarter are in the lowest quarter of students in the ESCS index, and students in the top quarter are in the highest quarter of students based on their ESCS value.
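
    A minimal Python sketch of dividing students into quarters on such an index, assuming simple quartile cut-points (the index values shown are hypothetical):

      import statistics

      def escs_quarter(value, index_values):
          """Assign an ESCS value to the bottom, second, third or top quarter of the distribution."""
          q1, q2, q3 = statistics.quantiles(index_values, n=4)
          if value <= q1:
              return "bottom quarter"
          if value <= q2:
              return "second quarter"
          if value <= q3:
              return "third quarter"
          return "top quarter"

      # Hypothetical ESCS index values; negative values lie below the OECD mean.
      values = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]
      print([escs_quarter(v, values) for v in values])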

Methodology

  • The target population for PISA 2009 comprised 15-year-olds who were attending schools in one of Canada’s 10 provinces; the territories have not participated in PISA to date. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled.
  • In 2009, PISA was administered in 65 countries and economies, including Canada and all other OECD member countries. Between 5,000 and 10,000 students aged 15 from at least 150 schools were typically tested in each country. In Canada, approximately 23,000 students from about 1,000 schools participated in the 10 provinces. This large Canadian sample was needed to produce reliable estimates for each province.
  • The information for this indicator is obtained from certain responses to three contextual questionnaires that were administered along with the main PISA skills assessment: a student background questionnaire that provided information about students and their homes; a questionnaire on familiarity with ICT that was administered to students; and a questionnaire administered to school principals. The questionnaire framework that is the basis of the context questionnaires and the questionnaires themselves are found in PISA 2009 Assessment Framework: Key Competencies in Reading, Mathematics and Science (OECD 2010), available at www.oecd.org.
  • All member countries of the OECD participated in the PISA 2009 main assessment (including the student and school background questionnaires that are a main source of data for this indicator), and 29 member countries chose to administer the optional ICT familiarity questionnaire. This indicator presents information for a subset of these participating countries; namely, the G-8 countries (Canada, France, Germany, Italy, Japan, the Russian Federation, the United Kingdom, and the United States) and nine selected OECD countries that were deemed to be among Canada’s social and economic peers and therefore of key comparative interest (Australia, Denmark, Finland, Ireland, Korea, New Zealand, Norway, Sweden, and Switzerland).
  • The statistics in this indicator represent estimates based on samples of students, rather than values obtained from the entire population of students in each country. This distinction is important as it cannot be said with certainty that a sample estimate has the same value as the population parameters that would have been obtained had all 15-year-old students been assessed. Consequently, it is important to measure the degree of uncertainty of the estimates. In PISA, each estimate has an associated degree of uncertainty, which is expressed through the standard error. In turn the standard error can be used to construct a confidence interval around the estimate—calculated as the estimate +/- 1.96 x standard error—which provides a way to make inferences about the population parameters in a manner that reflects the uncertainty associated with the sample estimates. Using this confidence interval, it can be inferred that the population parameter would lie within the confidence interval in 95 out of 100 replications of the measurement, using different samples randomly drawn from the same population.
  • When comparing sample estimates among countries, provinces and territories, or population subgroups, statistically significant differences must be considered in order to determine if the true population parameters are likely different from each other. Standard errors and confidence intervals are used as the basis for performing comparative statistical tests. Results are statistically different if the confidence intervals do not overlap.

    In Table C.5.6, differences in the percentage of students whose principals reported a shortage or inadequacy of computers or software between the top and bottom quarters of the PISA Index of Economic, Social, and Cultural Status were tested for statistical significance at Statistics Canada’s Centre for Education Statistics. The testing method involved calculating the confidence intervals surrounding the percentage of students whose principals reported computer or software inadequacies for both the top and bottom quarters of the index. If these confidence intervals did not overlap, then the difference was determined to be statistically significant at the 95% confidence level.

Limitations

  • Some data previously presented in Indicator C5 of the Pan-Canadian Education Indicators Program (PCEIP) are not available from PISA 2009 as some of the questions were not repeated, or the information is not comparable with that used in past iterations of the PISA assessment.
  • The PISA background questionnaires that explored ICT topics were not designed to assess the quality of ICT use at school, or the integration of ICT in pedagogy and its impact on students’ cognitive skills.
  • The territories have not participated in PISA to date.

Data sources

  • Statistics Canada, Programme for International Student Assessment (PISA), 2009 database; Organisation for Economic Co-operation and Development (OECD), 2009 PISA database.

Notes:

1. For the first time in 2009, 20 countries elected to give their students additional questions via computer to assess their capacity to read digital texts. Canada did not administer these extra questions.

2. In Quebec, Secondary II is the equivalent of Grade 8.
