Statistics by subject – Time series

Data (1 result)

  • Table: 62-010-X19970023422
    Description:

    The current official time base of the Consumer Price Index (CPI) is 1986=100. This time base was first used when the CPI for June 1990 was released. Statistics Canada is about to convert all price index series to the time base 1992=100. As a result, all constant dollar series will be converted to 1992 dollars. The CPI will shift to the new time base when the CPI for January 1998 is released on February 27th, 1998.

    Release date: 1997-11-17
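
Converting an index to a new time base is simple arithmetic: each value is divided by the series' average over the new base year and multiplied by 100. A minimal sketch with hypothetical figures (not actual CPI values):

```python
import pandas as pd

# Hypothetical monthly CPI values on the 1986=100 time base, for the new base year.
cpi_1986 = pd.Series(
    [119.5, 120.1, 120.4, 120.8, 121.2, 121.3,
     121.5, 121.7, 121.9, 122.1, 122.3, 122.4],
    index=pd.date_range("1992-01-01", periods=12, freq="MS"),
)

# Rebasing to 1992=100: divide every value in the series by the 1992
# annual average and multiply by 100 (shown here for the 1992 months only).
base = cpi_1986.loc["1992"].mean()
cpi_1992 = 100 * cpi_1986 / base

print(cpi_1992.round(1))  # the rebased 1992 values average exactly 100
```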

Analysis (25 of 33 results)

  • Journals and periodicals: 11-633-X
    Description:

    Papers in this series provide background discussions of the methods used to develop data for economic, health, and social analytical studies at Statistics Canada. They are intended to provide readers with information on the statistical methods, standards and definitions used to develop databases for research purposes. All papers in this series have undergone peer and institutional review to ensure that they conform to Statistics Canada's mandate and adhere to generally accepted standards of good professional practice.

    Release date: 2018-01-22

  • Articles and reports: 12-001-X201700254871
    Description:

    This paper addresses the question of how alternative data sources, such as administrative and social media data, can be used in the production of official statistics. Since most surveys at national statistical institutes are conducted repeatedly over time, a multivariate structural time series modelling approach is proposed to model the series observed by a repeated survey together with related series obtained from such alternative data sources. Generally, this improves the precision of the direct survey estimates by using sample information observed in preceding periods and information from related auxiliary series. The model also makes it possible to exploit the higher frequency of the social media data to produce more precise estimates for the sample survey in real time, at moments when statistics for the social media are available but the sample data are not yet. The concept of cointegration is applied to address the question of the extent to which the alternative series represent the same phenomena as the series observed with the repeated survey. The methodology is applied to the Dutch Consumer Confidence Survey and a sentiment index derived from social media.

    Release date: 2017-12-21
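
A minimal sketch of the kind of cointegration check the paper describes, using the Engle-Granger test from statsmodels on simulated data (the paper itself works with the Dutch Consumer Confidence Survey, not this toy example):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)

# Simulate a random-walk "survey" series and a noisy auxiliary series
# that shares its stochastic trend, so the two should be cointegrated.
n = 200
survey = np.cumsum(rng.normal(size=n))
auxiliary = 0.8 * survey + rng.normal(scale=0.5, size=n)

# Engle-Granger two-step cointegration test (null: no cointegration).
t_stat, p_value, _ = coint(survey, auxiliary)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p suggests cointegration
```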

  • Articles and reports: 12-001-X201700114819
    Description:

    Structural time series models are a powerful technique for variance reduction in the framework of small area estimation (SAE) based on repeatedly conducted surveys. Statistics Netherlands implemented a structural time series model to produce monthly figures about the labour force with the Dutch Labour Force Survey (DLFS). Such models, however, contain unknown hyperparameters that have to be estimated before the Kalman filter can be launched to estimate state variables of the model. This paper describes a simulation aimed at studying the properties of hyperparameter estimators in the model. Simulating distributions of the hyperparameter estimators under different model specifications complements standard model diagnostics for state space models. Uncertainty around the model hyperparameters is another major issue. To account for hyperparameter uncertainty in the mean squared error (MSE) estimates of the DLFS, several estimation approaches known in the literature are considered in a simulation. Apart from the MSE bias comparison, this paper also provides insight into the variances and MSEs of the MSE estimators considered.

    Release date: 2017-06-22
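
A minimal sketch of the workflow the abstract describes: a structural time series model whose variance hyperparameters are estimated by maximum likelihood before the Kalman filter and smoother deliver the state estimates. The data are simulated, not the DLFS:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated monthly series: slowly drifting level + fixed seasonal + noise.
n = 120
level = np.cumsum(rng.normal(scale=0.1, size=n))
seasonal = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), n // 12)
y = level + seasonal + rng.normal(scale=0.3, size=n)

# Local-level-plus-seasonal structural model. fit() estimates the variance
# hyperparameters by maximum likelihood; the Kalman filter and smoother
# then deliver estimates of the unobserved states (level and seasonal).
model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
result = model.fit(disp=False)

print(result.params)                     # estimated variance hyperparameters
smoothed_level = result.level.smoothed   # smoothed level, one value per month
```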

  • Articles and reports: 13-604-M2015077
    Description:

    This new dataset increases the information available for comparing the performance of provinces and territories across a range of measures. It combines provincial time series data that are often fragmented and, as a result, of limited utility for examining the evolution of provincial economies over extended periods. More advanced statistical methods, and models with greater breadth and depth, are difficult to apply to existing fragmented Canadian data. The longitudinal nature of the new provincial dataset remedies this shortcoming. This report explains the construction of the latest vintage of the dataset, which contains the most up-to-date information available.

    Release date: 2015-02-12

  • Articles and reports: 12-001-X201400214110
    Description:

    In developing the sample design for a survey we attempt to produce a good design for the funds available. Information on costs can be used to develop sample designs that minimise the sampling variance of an estimator of total for fixed cost. Improvements in survey management systems mean that it is now sometimes possible to estimate the cost of including each unit in the sample. This paper develops relatively simple approaches to determine whether the potential gains arising from using this unit level cost information are likely to be of practical use. It is shown that the key factor is the coefficient of variation of the costs relative to the coefficient of variation of the relative error on the estimated cost coefficients.

    Release date: 2014-12-19

  • Articles and reports: 11-010-X201000311141
    Description:

    A review of what seasonal adjustment does, and how it helps analysts focus on recent movements in the underlying trend of economic data.

    Release date: 2010-03-18

  • Articles and reports: 12-001-X200900211040
    Description:

    In this paper a multivariate structural time series model is described that accounts for the panel design of the Dutch Labour Force Survey and is applied to estimate monthly unemployment rates. Compared to the generalized regression estimator, this approach results in a substantial increase in accuracy, due to a reduction of the standard error and the explicit modelling of the bias between subsequent waves.

    Release date: 2009-12-23

  • Articles and reports: 12-001-X200900110885
    Description:

    Peaks in the spectrum of a stationary process are indicative of the presence of stochastic periodic phenomena, such as a stochastic seasonal effect. This work proposes to measure and test for the presence of such spectral peaks via assessing their aggregate slope and convexity. Our method is developed nonparametrically, and thus may be useful during a preliminary analysis of a series. The technique is also useful for detecting the presence of residual seasonality in seasonally adjusted data. The diagnostic is investigated through simulation and an extensive case study using data from the U.S. Census Bureau and the Organization for Economic Co-operation and Development (OECD).

    Release date: 2009-06-22
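
The paper's aggregate slope-and-convexity diagnostic is more elaborate, but the underlying idea can be illustrated crudely: compute the periodogram and compare the power at the seasonal frequency with its immediate neighbours:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)

# Monthly series with a seasonal cycle buried in noise.
n = 240
t = np.arange(n)
y = np.sin(2 * np.pi * t / 12) + rng.normal(scale=1.0, size=n)

freqs, power = periodogram(y)

# Crude peak check at the fundamental seasonal frequency 1/12: a positive
# second difference of the power (sign flipped) indicates a local peak.
k = np.argmin(np.abs(freqs - 1 / 12))
is_peak = power[k] > power[k - 1] and power[k] > power[k + 1]
convexity = 2 * power[k] - power[k - 1] - power[k + 1]
print(f"peak at 1/12: {is_peak}, peak measure: {convexity:.2f}")
```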

  • Articles and reports: 11F0027M2007047
    Description:

    This paper examines the effect of aberrant observations in the Capital, Labour, Energy, Materials and Services (KLEMS) database and a method for dealing with them. The level of disaggregation, data construction and economic shocks all potentially lead to aberrant observations that can influence estimates and inference if care is not exercised. Commonly applied pre-tests, such as the augmented Dickey-Fuller and the Kwiatkowski, Phillips, Schmidt and Shin tests, need to be used with caution in this environment because they are sensitive to unusual data points. Moreover, widely known methods for generating statistical estimates, such as Ordinary Least Squares, may not work well when confronted with aberrant observations. To address this, a robust method for estimating statistical relationships is illustrated.

    Release date: 2007-12-05
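
A sketch of the contrast the abstract draws, with Huber M-estimation standing in for whichever robust estimator the paper illustrates (an assumption on our part), applied to simulated contaminated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# A linear relationship contaminated by a handful of aberrant observations.
n = 100
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
idx = np.argsort(x)[-5:]   # contaminate five high-leverage points
y[idx] += 15.0

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

print("OLS slope:   ", round(ols.params[1], 3))     # pulled up by the outliers
print("Huber slope: ", round(robust.params[1], 3))  # close to the true 0.5
```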

  • Articles and reports: 12-001-X20050029053
    Description:

    A spatial regression model in a general mixed effects model framework has been proposed for the small area estimation problem. A common autocorrelation parameter across the small areas has resulted in the improvement of the small area estimates. It has been found to be very useful in the cases where there is little improvement in the small area estimates due to the exogenous variables. A second order approximation to the mean squared error (MSE) of the empirical best linear unbiased predictor (EBLUP) has also been worked out. Using the Kalman filtering approach, a spatial temporal model has been proposed. In this case also, a second order approximation to the MSE of the EBLUP has been obtained. As a case study, the time series monthly per capita consumption expenditure (MPCE) data from the National Sample Survey Organisation (NSSO) of the Ministry of Statistics and Programme Implementation, Government of India, have been used for the validation of the models.

    Release date: 2006-02-17

  • Articles and reports: 12-001-X20030016610
    Description:

    In the presence of item nonresponse, unweighted imputation methods are often used in practice but they generally lead to biased estimators under uniform response within imputation classes. Following Skinner and Rao (2002), we propose a bias-adjusted estimator of a population mean under unweighted ratio imputation and random hot-deck imputation and derive linearization variance estimators. A small simulation study is conducted to study the performance of the methods in terms of bias and mean square error. Relative bias and relative stability of the variance estimators are also studied.

    Release date: 2003-07-31

  • Articles and reports: 12-001-X20010026097
    Description:

    A compositional time series is defined as a multivariate time series in which each of the series has values bounded between zero and one and the sum of the series equals one at each time point. Data with such characteristics are observed in repeated surveys when a survey variable has a multinomial response but interest lies in the proportion of units classified in each of its categories. In this case, the survey estimates are proportions of a whole subject to a unity-sum constraint. In this paper we employ a state space approach for modelling compositional time series from repeated surveys taking into account the sampling errors. The additive logistic transformation is used in order to guarantee predictions and signal estimates bounded between zero and one which satisfy the unity-sum constraint. The method is applied to compositional data from the Brazilian Labour Force Survey. Estimates of the vector of proportions and the unemployment rate are obtained. In addition, the structural components of the signal vector, such as the seasonals and the trends, are produced.

    Release date: 2002-02-28
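
A minimal implementation of the additive logistic (log-ratio) transformation and its inverse, which is what guarantees predictions bounded between zero and one that satisfy the unity-sum constraint:

```python
import numpy as np

def alr(p):
    """Additive log-ratio transform of a composition p (entries sum to 1)."""
    return np.log(p[:-1] / p[-1])

def alr_inverse(z):
    """Inverse transform: returns a composition whose entries sum to 1."""
    e = np.exp(np.append(z, 0.0))
    return e / e.sum()

# A three-category labour force composition: employed, unemployed, inactive.
p = np.array([0.60, 0.05, 0.35])
z = alr(p)             # unconstrained real values, safe to model directly
print(alr_inverse(z))  # recovers [0.60, 0.05, 0.35], summing to one
```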

  • Articles and reports: 12-001-X20000025536
    Description:

    Many economic and social time series are based on sample surveys which have complex sample designs. The sample design affects the properties of the time series. In particular, the overlap of the sample from period to period affects the variability of the time series of survey estimates and the seasonally adjusted and trend estimates produced from them.

    Release date: 2001-02-28

  • Articles and reports: 12-001-X19990014709
    Description:

    We develop an approach to estimating variances for X-11 seasonal adjustments that recognizes the effects of sampling error and errors from forecast extension. In our approach, seasonal adjustment error in the central values of a sufficiently long series results only from the effect of the X-11 filtering on the sampling errors. Towards either end of the series, we also recognize the contribution to seasonal adjustment error from forecast and backcast errors. We extend the approach to produce variances of errors in X-11 trend estimates, and to recognize error in estimation of regression coefficients used to model, e.g., calendar effects. In empirical results, the contribution of sampling error often dominated the seasonal adjustment variances. Trend estimate variances, however, showed large increases at the ends of series due to the effects of fore/backcast error. Nonstationarities in the sampling errors produced striking patterns in the seasonal adjustment and trend estimate variances.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19980024351
    Description:

    To calculate price indexes, data on "the same item" (actually a collection of items narrowly defined) must be collected across time periods. The question arises whether such "quasi-longitudinal" data can be modeled in such a way as to shed light on what a price index is. Leading thinkers on price indexes have questioned the feasibility of using statistical modeling at all for characterizing price indexes. This paper suggests a simple state space model of price data, yielding a consumer price index that is given in terms of the parameters of the model.

    Release date: 1999-01-14

  • Articles and reports: 12-001-X199600114383
    Description:

    The estimation of the trend-cycle with the X-11-ARIMA method is often done using the 13-term Henderson filter applied to seasonally adjusted data modified by extreme values. This filter, however, produces a large number of unwanted ripples in the final or “historical” trend-cycle curve, which are interpreted as false turning points. The use of a longer Henderson filter, such as the 23-term, is not an alternative, because that filter is sluggish in detecting turning points and consequently is not useful for current economic and business analysis. This paper proposes a new method that enables the use of the 13-term Henderson filter with the advantages of: (i) reducing the number of unwanted ripples; (ii) reducing the size of the revisions to preliminary values; and (iii) no increase in the time lag in detecting turning points. The results are illustrated with nine leading indicator series of the Canadian Composite Leading Index.

    Release date: 1996-06-14
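
For reference, applying the symmetric 13-term Henderson filter is a simple convolution. The weights below are the rounded values as commonly tabulated; the series ends, which need asymmetric weights or the forecast extension discussed in the paper, are left untreated here:

```python
import numpy as np

# Rounded symmetric 13-term Henderson filter weights as commonly tabulated.
h13 = np.array([-0.01935, -0.02786, 0.0, 0.06549, 0.14736, 0.21434,
                0.24006,
                0.21434, 0.14736, 0.06549, 0.0, -0.02786, -0.01935])

rng = np.random.default_rng(8)
y = np.linspace(0, 10, 120) + np.sin(np.arange(120) / 6) \
    + rng.normal(scale=0.5, size=120)

# Smooth the central part of the series; the first and last six points
# are dropped because they would need the asymmetric end weights.
trend = np.convolve(y, h13, mode="valid")
print(len(y), "->", len(trend))  # 120 -> 108
```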

  • Articles and reports: 12-001-X199500214391
    Description:

    Statistical process control can be used as a quality tool to assure the accuracy of sampling frames that are constructed periodically. Sampling frame sizes are plotted in a control chart to detect special causes of variation. Procedures to identify the appropriate time series (ARIMA) model for serially correlated observations are described. Applications of time series analysis to the construction of control charts are discussed. Data from the United States Department of Labor’s Unemployment Insurance Benefits Quality Control Program is used to illustrate the technique.

    Release date: 1995-12-15
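
A sketch of the idea on simulated data: fit an ARIMA model to the serially correlated frame sizes and flag observations whose residuals exceed three standard deviations. The exact charting procedure in the paper may differ:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)

# Simulated AR(1) frame sizes with one injected special cause at t = 60.
n = 100
sizes = np.empty(n)
sizes[0] = 1000.0
for t in range(1, n):
    sizes[t] = 1000 + 0.7 * (sizes[t - 1] - 1000) + rng.normal(scale=10)
sizes[60] += 80  # special cause of variation

# Fit an AR(1) model; the one-step-ahead residuals should be white noise,
# so points beyond +/- 3 sigma signal special causes.
res = ARIMA(sizes, order=(1, 0, 0)).fit()
resid = res.resid
out_of_control = np.flatnonzero(np.abs(resid) > 3 * resid.std())
print("flagged periods:", out_of_control)  # should include 60
```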

  • Articles and reports: 12-001-X199400114436
    Description:

    This paper identifies some technical issues in the provision of small area data derived from censuses, administrative records and surveys. Although the issues are of a general nature, they are discussed in the context of programs at Statistics Canada. For survey-based estimates, the need for developing an overall strategy is stressed and salient features of survey design that have an impact on small area data are highlighted in the context of redesigning a household survey. A brief review of estimation methods with their strengths and weaknesses is also presented.

    Release date: 1994-06-15

  • Articles and reports: 12-001-X199300214457
    Description:

    The maximum likelihood estimation of a non-linear benchmarking model, proposed by Laniel and Fyfe (1989; 1990), is considered. This model takes into account the biases and sampling errors associated with the original series. Since the maximum likelihood estimators of the model parameters are not obtainable in closed forms, two iterative procedures to find the maximum likelihood estimates are discussed. The closed form expressions for the asymptotic variances and covariances of the benchmarked series, and of the fitted values are also provided. The methodology is illustrated using published Canadian retail trade data.

    Release date: 1993-12-15

  • Articles and reports: 12-001-X199100214505
    Description:

    The X-11-ARIMA seasonal adjustment method and the Census X-11 variant use a standard ANOVA F-test to assess the presence of stable seasonality. This F-test is applied to a series consisting of estimated seasonals plus irregulars (residuals) which may be (and often are) autocorrelated, thus violating the basic assumption of the F-test. This limitation has long been known by producers of seasonally adjusted data, and the nominal value of the F statistic has rarely been used as a criterion for seasonal adjustment. Instead, producers of seasonally adjusted data have used rules of thumb, such as F equal to or greater than 7. This paper introduces an exact test which takes into account autocorrelated residuals following an SMA process of the (0, q) (0, Q)_s type. Comparisons of this modified F-test and the standard ANOVA test of X-11-ARIMA are made for a large number of Canadian socio-economic series.

    Release date: 1991-12-16
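
The classical stable-seasonality test is a one-way ANOVA of seasonal-irregular values grouped by month. A sketch on simulated data; note the independence assumption, which the paper shows is violated by autocorrelated residuals:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)

# Seasonal-plus-irregular (SI) values for 10 years of monthly data.
years, months = 10, 12
seasonal = np.sin(2 * np.pi * np.arange(months) / months)
si = seasonal + rng.normal(scale=0.4, size=(years, months))

# Stable-seasonality test: one-way ANOVA across the 12 monthly groups.
# This assumes independent irregulars; autocorrelation (the paper's
# concern) distorts the F statistic, hence rules of thumb like F >= 7.
f_stat, p_value = f_oneway(*[si[:, m] for m in range(months)])
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```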

  • Articles and reports: 12-001-X199000214534
    Description:

    The common approach to small area estimation is to exploit the cross-sectional relationships of the data in an attempt to borrow information from one small area to assist in the estimation in others. However, in the case of repeated surveys, further gains in efficiency can be secured by modelling the time series properties of the data as well. We illustrate the idea by considering regression models with time varying, cross-sectionally correlated coefficients. The use of past relationships to estimate current means raises the question of how to protect against model breakdowns. We propose a modification which guarantees that the model dependent predictors of aggregates of the small area means coincide with the corresponding survey estimators and we explore the statistical properties of the modification. The proposed procedure is applied to data on home sale prices used for the computation of housing price indexes.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214531
    Description:

    Benchmarking is a method of improving estimates from a sub-annual survey with the help of corresponding estimates from an annual survey. For example, estimates of monthly retail sales might be improved using estimates from the annual survey. This article deals first with the problem posed by the benchmarking of time series produced by economic surveys, and then reviews the most relevant methods for solving it. Next, two new statistical methods are proposed, based on a non-linear model for sub-annual data. The benchmarked estimates are then obtained by applying weighted least squares.

    Release date: 1990-12-14
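
A sketch of the benchmarking idea in its simplest linear form, not the authors' non-linear model: adjust the sub-annual series as little as possible, in the least-squares sense, subject to its annual sums matching the benchmarks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two years of monthly survey estimates and annual benchmark totals.
s = 100 + rng.normal(scale=5, size=24)     # sub-annual series
C = np.kron(np.eye(2), np.ones(12))        # annual-sum constraint matrix
b = C @ s + np.array([30.0, -18.0])        # benchmarks differ from the sums

# Least-squares benchmarking: the smallest adjustment (in the L2 sense)
# to s such that the annual sums match the benchmarks exactly:
#   minimise ||x - s||^2  subject to  C x = b
x = s + C.T @ np.linalg.solve(C @ C.T, b - C @ s)

print(np.allclose(C @ x, b))  # True: the benchmarked series hits the totals
```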

  • Articles and reports: 12-001-X199000214535
    Description:

    Papers by Scott and Smith (1974) and Scott, Smith, and Jones (1977) suggested the use of signal extraction results from time series analysis to improve estimates in repeated surveys, what we call the time series approach to estimation in repeated surveys. We review the underlying philosophy of this approach, pointing out that it stems from recognition of two sources of variation - time series variation and sampling variation - and that the approach can provide a unifying framework for other problems where the two sources of variation are present. We obtain some theoretical results for the time series approach regarding design consistency of the time series estimators, and uncorrelatedness of the signal and sampling error series. We observe that, from a design-based perspective, the time series approach trades some bias for a reduction in variance and a reduction in average mean squared error relative to classical survey estimators. We briefly discuss modeling to implement the time series approach, and then illustrate the approach by applying it to time series of retail sales of eating places and of drinking places from the U.S. Census Bureau’s Retail Trade Survey.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214533
    Description:

    A commonly used model for the analysis of time series is the seasonal ARIMA model. However, the survey errors of the input data are usually ignored in the analysis. We show, through the use of state-space models with partially improper initial conditions, how to estimate the unknown parameters of this model using maximum likelihood methods. As well, the survey estimates can be smoothed using an empirical Bayes framework, and model validation can be performed. We apply these techniques to an unemployment series from the Labour Force Survey.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214532
    Description:

    Births by census division are studied via graphs and maps for the province of Saskatchewan for the years 1986-87. The goal of the work is to see how births are related to time and geography by obtaining contour maps that display the birth phenomenon in a smooth fashion. A principal difficulty arising is that the data are aggregate. A secondary goal is to examine the extent to which the Poisson-lognormal can replace, for data that are counts, the normal regression model used for continuous variates. To this end a hierarchy of models for count-valued random variates is fit to the birth data by maximum likelihood. These models include: the simple Poisson, the Poisson with year and weekday effects, and the Poisson-lognormal with year and weekday effects. The use of the Poisson-lognormal is motivated by the idea that important covariates are unavailable for inclusion in the fitting. As the discussion indicates, the work is preliminary.

    Release date: 1990-12-14
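
A sketch of the simpler end of the model hierarchy: a Poisson GLM with year and weekday effects fit by maximum likelihood on hypothetical counts (the Poisson-lognormal extension is not shown):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical daily birth counts for two years with a weekday effect.
dates = pd.date_range("1986-01-01", "1987-12-31", freq="D")
weekday = dates.dayofweek
mu = np.exp(3.0 + 0.1 * (weekday < 5))  # fewer births on weekends
df = pd.DataFrame({"births": rng.poisson(mu),
                   "weekday": weekday.astype(str),
                   "year": dates.year.astype(str)})

# Poisson GLM with year and weekday effects, fit by maximum likelihood.
res = smf.glm("births ~ C(weekday) + C(year)", data=df,
              family=sm.families.Poisson()).fit()
print(res.params.filter(like="weekday"))  # estimated weekday effects
```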

Reference (10 results)

  • Technical products: 12-539-X
    Description:

    This document brings together guidelines and checklists on many issues that need to be considered in the pursuit of quality objectives in the execution of statistical activities. Its focus is on how to assure quality through effective and appropriate design or redesign of a statistical project or program from inception through to data evaluation, dissemination and documentation. These guidelines draw on the collective knowledge and experience of many Statistics Canada employees. It is expected that Quality Guidelines will be useful to staff engaged in the planning and design of surveys and other statistical projects, as well as to those who evaluate and analyze the outputs of these projects.

    Release date: 2009-12-02

  • Technical products: 11-522-X20030017707
    Description:

    The paper discusses the structure and the quality measures Eurostat uses to provide the European Union and the euro zone with seasonally adjusted economic series.

    Release date: 2005-01-26

  • Technical products: 11-522-X20030017695
    Description:

    This paper proposes methods to correct a seasonally adjusted series so that its annual totals match those of the raw series. The methods are illustrated with a seasonally adjusted series obtained with either X-11-ARIMA or X-12-ARIMA.

    Release date: 2005-01-26
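
For contrast with the methods proposed in the paper, the crudest correction is to pro-rate each year of the seasonally adjusted series so that its total matches the raw annual total; the steps this can create between years are what more refined methods avoid:

```python
import numpy as np

rng = np.random.default_rng(6)

# Raw monthly series with seasonality, and a hypothetical seasonally
# adjusted (SA) version whose annual totals drift off the raw totals.
season = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), 3)
raw = 100 + season + rng.normal(scale=1.0, size=36)
sa = raw - season + 0.5

# Crude pro-rating baseline (not the paper's method): scale each year of
# the SA series so its total matches the raw annual total.
sa_adj = sa.copy()
for year in range(3):
    idx = slice(12 * year, 12 * (year + 1))
    sa_adj[idx] *= raw[idx].sum() / sa[idx].sum()

print(np.allclose(sa_adj.reshape(3, 12).sum(axis=1),
                  raw.reshape(3, 12).sum(axis=1)))  # True
```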

  • Technical products: 11-522-X20030017693
    Description:

    This paper evaluates changes in the quality performance of two different and widely used programs for seasonal adjustment, X-12-RegARIMA and TRAMO-SEATS, when the length of the time series is progressively reduced.

    Release date: 2005-01-26

  • Technical products: 11-522-X20030017694
    Description:

    This paper evaluates the performance of diagnostics for seasonal adjustment as they relate to the performance of X-12-ARIMA and SEATS on a large sample of real and simulated economic time series.

    Release date: 2005-01-26

  • Technical products: 11-522-X19990015688
    Description:

    The geographical and temporal relationship between outdoor air pollution and asthma was examined by linking together data from multiple sources. These included the administrative records of 59 general practices widely dispersed across England and Wales for half a million patients and all their consultations for asthma, supplemented by a socio-economic interview survey. Postcodes enabled linkage with: (i) computed local road density; (ii) emission estimates of sulphur dioxide and nitrogen dioxide; and (iii) measured or interpolated concentrations of black smoke, sulphur dioxide, nitrogen dioxide and other pollutants at the practice level. Parallel Poisson time series analysis took into account between-practice variations to examine daily correlations in practices close to air quality monitoring stations. Preliminary analyses show small and generally non-significant geographical associations between consultation rates and pollution markers. The methodological issues relevant to combining such data, and the interpretation of these results, will be discussed.

    Release date: 2000-03-02

  • Technical products: 11-522-X19990015656
    Description:

    Time series studies have shown associations between air pollution concentrations and morbidity and mortality. These studies have largely been conducted within single cities, and with varying methods. Critics of these studies have questioned the validity of the data sets used and the statistical techniques applied to them; the critics have noted inconsistencies in findings among studies and even in independent re-analyses of data from the same city. In this paper we review some of the statistical methods used to analyze a subset of a national database of air pollution, mortality and weather assembled during the National Morbidity and Mortality Air Pollution Study (NMMAPS).

    Release date: 2000-03-02

  • Technical products: 11-522-X19990015648
    Description:

    We estimate the parameters of a stochastic model for labour force careers involving distributions of correlated durations employed, unemployed (with and without job search) and not in the labour force. If the model is to account for sub-annual labour force patterns as well as advancement towards retirement, then no single data source is adequate to inform it. However, it is possible to build up an approximation from a number of different sources.

    Release date: 2000-03-02

  • Technical products: 11-522-X19980015033
    Description:

    Victimizations are not randomly scattered through the population, but tend to be concentrated in relatively few victims. Data from the U.S. National Crime Victimization Survey (NCVS), a multistage rotating panel survey, are employed to estimate the conditional probabilities of being a crime victim at time t given the victimization status in earlier interviews. Models are presented and fit to allow use of partial information from households that move in or out of the housing unit during the study period. The estimated probability of being a crime victim at interview t given the status at interview (t-1) is found to decrease with t. Possible implications for estimating cross-sectional victimization rates are discussed.

    Release date: 1999-10-22
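
A sketch of the basic quantity being estimated, the conditional probability of victimization given prior status, computed from a hypothetical panel (the paper's models additionally handle movers and partial information):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)

# Hypothetical panel: victimization status (0/1) at interviews t-1 and t,
# with prior victims more likely to be victimized again.
n = 5000
prev = rng.binomial(1, 0.10, size=n)
curr = rng.binomial(1, np.where(prev == 1, 0.35, 0.08))

# Estimated P(victim at t | status at t-1), row-normalized.
print(pd.crosstab(prev, curr, normalize="index"))
```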

  • Technical products: 11-522-X19980015031
    Description:

    The U.S. Third National Health and Nutrition Examination Survey (NHANES III) was carried out from 1988 to 1994. This survey was intended primarily to provide estimates of cross-sectional parameters believed to be approximately constant over the six-year data collection period. However, for some variables (e.g., serum lead, body mass index and smoking behavior), substantive considerations suggest the possible presence of nontrivial changes in level between 1988 and 1994. For these variables, NHANES III is potentially a valuable source of time-change information, compared to other studies involving more restricted populations and samples. Exploration of possible change over time is complicated by two issues. First, some variables displayed substantial regional differences in level, which was of practical concern. Second, nontrivial changes in level over time can lead to nontrivial biases in some customary NHANES III variance estimators. This paper considers these two problems and discusses some related implications for statistical policy.

    Release date: 1999-10-22
