Benchmarking and related techniques


Scope and purpose
Principles
Guidelines
Quality indicators
References

Scope and purpose

Statistical programs often have two sources of data measuring the same target variable: 1) a more frequent measurement with an emphasis on accurate estimation of the period-to-period movement, and 2) a less frequent measurement with an emphasis on accurate estimation of the level. Without loss of generality, the more frequent series will hereafter be referred to as the sub-annual series, whereas the less frequent series will be used as the benchmark and considered to be an annual series.

Benchmarking refers to techniques used to ensure coherence between time series data of the same target variable measured at different frequencies, for example, sub-annually and annually. Benchmarking consists of imposing the level of the benchmark series while minimizing the revisions to the observed movement of the sub-annual series. Consequently, the annual levels and year-to-year growth rates of the benchmarked series are coherent with those of the benchmarks. In certain situations, benchmarking can improve the accuracy and timeliness of statistical output.
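
As a simple illustration (with hypothetical quarterly and annual figures, written in Python), the following sketch compares the annual sums of a sub-annual series with the corresponding benchmarks; the resulting annual discrepancies are what a benchmarking adjustment must remove while disturbing the sub-annual movement as little as possible.

    import numpy as np

    # Hypothetical quarterly indicator series (two complete years) and the
    # corresponding annual benchmarks from a more reliable source.
    quarterly = np.array([98.0, 100.0, 102.0, 104.0, 103.0, 105.0, 108.0, 110.0])
    annual_benchmarks = np.array([420.0, 440.0])

    # Annual sums of the sub-annual series
    annual_sums = quarterly.reshape(-1, 4).sum(axis=1)      # [404., 426.]

    # Annual discrepancies and the ratios of benchmarks to annual sums
    discrepancies = annual_benchmarks - annual_sums         # [16., 14.]
    ratios = annual_benchmarks / annual_sums                # approx. [1.040, 1.033]

    print(discrepancies, ratios)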

Nonbinding benchmarking, interpolation, temporal distribution, calendarization, linkage and reconciliation are related techniques based on methodological principles and guidelines similar to those of benchmarking. Nonbinding benchmarking is used when the benchmark series can also be revised. Interpolation is the estimation of intermediate terms between known values and can also be used to benchmark stock series. Temporal distribution is the disaggregation of the benchmark series into more frequent observations. Calendarization is a special case of temporal distribution. Linkage is used to join different time series segments into a consistent single time series. Reconciliation is used to impose cross-sectional additive constraints among the components of a system of time series. More details on these techniques can be found in Dagum and Cholette (2006).
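
As an illustration of the reconciliation idea only, the following sketch applies a deliberately simplified one-dimensional proportional raking (not the in-house SAS Proc TSraking mentioned in the Guidelines) to hypothetical component series so that they add up to a given aggregate series in every period.

    import numpy as np

    # Hypothetical component series (rows) and the aggregate series they
    # should sum to in every period (cross-sectional additive constraint).
    components = np.array([[50.0, 52.0, 55.0],
                           [30.0, 29.0, 31.0],
                           [22.0, 23.0, 24.0]])
    aggregate = np.array([100.0, 105.0, 112.0])

    # Simple proportional raking: scale each period's components so that
    # their sum equals the aggregate for that period.
    raked = components * (aggregate / components.sum(axis=0))

    print(raked.sum(axis=0))   # equals the aggregate series

In practice, reconciliation methods such as those in Dagum and Cholette (2006) also aim to preserve the movement of each component series; the proportional scaling above only restores additivity in each period.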

Benchmarking in the context of time series should not be confused with weight adjustments that can be applied at the estimation stage for calibration purposes.

Principles

One should make sure that there are as few conceptual, methodological and operational discrepancies as possible between the two data sources at the design stage.  Discrepancies between the series should be thoroughly investigated and understood, after which an informed decision on whether to publish the series as is or to benchmark the series to ensure full numerical coherence can be made.  In the former case, the discrepancies should be explained to users as per the Policy on Informing Users of Data Quality and Methodology (Statistics Canada, 1998).

In cases where the data sources' designs are compatible, or when external constraints impose the need for full numerical coherence, benchmarking methods can, and from a statistical point of view should, be used. In typical situations, the sub-annual series is implicitly assumed to be less reliable than the annual data. By its nature, the benchmarking process will cause revisions to the sub-annual data, and thus a willingness to revise is necessary.

All the related techniques are based on similar principles: the underlying assumptions should first be understood, and the applicability of the methods should be verified, most frequently through a thorough analysis of the data before and after the techniques are applied.

Guidelines

  • Before considering benchmarking, investigate, document and quantify the discrepancies between the two sources of data. These differences should be minimized as much as possible at the design stage.

  • Before considering benchmarking, examine differences in the microdata for common sample units, if any. Any corrections made must respect the time series nature of the data. For the sub-annual data, corrections should aim to improve the accuracy of the period-to-period movement; for the annual data, corrections must consider both the accuracy of the level and the accuracy of the year-to-year movement.

  • Be aware that the design of the annual series may not be compatible with the goals of benchmarking. For benchmarking, the annual survey has to provide both an accurate measurement of the annual level and an accurate measurement of the annual change since they will be imposed on the benchmarked series.

  • Do not benchmark when annual values are less reliable than the annual sums from the sub-annual series. In this case, imposing the annual benchmarks will essentially produce less reliable benchmarked series.

  • When the data sources are designed differently, consider benchmarking only if strong external constraints impose the need for full numerical consistency. Be aware that the resulting coherence may come at the cost of reduced accuracy.

  • Benchmarking will result in revisions of the sub-annual data.  Only consider benchmarking when the gain in consistency strongly reduces confusion among the users or the improved accuracy gained from a high quality annual series outweighs the burden of repeated revisions.

  • Benchmark in the context of seasonal adjustment when there are unwanted discrepancies between the yearly sums of the raw series and the corresponding yearly sums of the seasonally adjusted series. When required, seasonally adjusted series may be benchmarked to yearly totals derived from the raw series.

  • Use an appropriate benchmarking method such as the regression-based techniques described in Dagum and Cholette (2006) or one of the various enhanced Denton methods described in the Quarterly National Accounts Manual from the International Monetary Fund (Bloem et al. 2001). Avoid simple prorating techniques as they introduce discontinuities between the years (known as the step problem). A minimal sketch of a proportional Denton adjustment is given after this list.

  • Understand the underlying assumptions when the most recent observations of the sub-annual series are missing a corresponding annual value either because the year is incomplete or the annual data is not yet available.  Benchmarking techniques will use either an implicit or explicit projection of the next annual value; projections can be based on short-term or long-term historical data or on external considerations.

  • When implementing benchmarking or related techniques, consider the use of generalized software. This minimizes programming errors and reduces development cost and time. Support from Methodology and Informatics is available within Statistics Canada, especially for the following software: in-house SAS Proc Benchmarking – for benchmarking, temporal distribution and linkage; in-house SAS Proc TSraking – for reconciliation; SAS Proc Expand – for interpolation; the US Bureau of the Census X-12-ARIMA program, SAS Proc X12 or SAS Proc ARIMA – for statistical inference methods on time series.

  • Assistance with the interpretation and implementation of these guidelines can be obtained from the Time Series Research and Analysis Centre (TSRAC), Business Survey Methods Division.
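
As referenced in the guideline on benchmarking methods above, the following sketch implements a textbook proportional first-difference Denton adjustment on hypothetical data. It is a simplified illustration, not the enhanced variants of Bloem et al. (2001) nor the in-house SAS Proc Benchmarking; it keeps the ratio of the benchmarked series to the indicator as smooth as possible while forcing the annual sums to equal the benchmarks.

    import numpy as np

    def proportional_denton(indicator, benchmarks, periods_per_year=4):
        """Textbook proportional first-difference Denton benchmarking.

        Minimizes the period-to-period changes in the ratio of the
        benchmarked series to the indicator, subject to the benchmarked
        series summing to the annual benchmarks.  Assumes a positive flow
        indicator and complete years.
        """
        x = np.asarray(indicator, dtype=float)
        b = np.asarray(benchmarks, dtype=float)
        n, m = x.size, b.size
        if n != m * periods_per_year:
            raise ValueError("this sketch handles complete years only")

        D = np.diff(np.eye(n), axis=0)            # first-difference operator
        W = np.diag(1.0 / x)                      # rescales the series to ratios
        A = np.kron(np.eye(m), np.ones((1, periods_per_year)))  # annual sums

        # Equality-constrained least squares: min (z/x)' D'D (z/x) s.t. A z = b,
        # solved through its Karush-Kuhn-Tucker linear system.
        H = 2.0 * W @ D.T @ D @ W
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([np.zeros(n), b])
        return np.linalg.solve(K, rhs)[:n]

    quarterly = np.array([98.0, 100.0, 102.0, 104.0, 103.0, 105.0, 108.0, 110.0])
    annual_benchmarks = np.array([420.0, 440.0])
    benchmarked = proportional_denton(quarterly, annual_benchmarks)
    print(benchmarked.reshape(-1, 4).sum(axis=1))  # [420. 440.]

Simple prorating would instead multiply every quarter of a year by that year's ratio of benchmark to quarterly sum, which leaves a break in the growth rate between the fourth quarter of one year and the first quarter of the next (the step problem); the least-squares formulation above spreads the adjustment smoothly across the years.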

Quality indicators

The following indicators may be used to document and quantify the discrepancies:

  • Descriptive statements of conceptual aspects: reporting periods of the annual source; definition of the variables measured and of the target population, etc.

  • Descriptive statements of operational aspects: sampling frame, collection process, etc.

  • Descriptive statements of methodological aspects: sampling, use and source of administrative data, etc.

  • When appropriate, quantitative descriptions of the discrepancies: counts and weighted counts by reporting period to estimate the impact of the non-calendarization of annual data; differences between administrative data; sampling errors of the two annual estimates and of the corresponding annual changes; etc.

For more details and a case study, see Yung et al. (2008).

Applying the benchmarking techniques and thoroughly analysing the results may also provide a good indicator of the appropriateness of the method. Annual discrepancies, revisions to the series through the benchmarked-to-indicator (BI) ratios, and revisions to the growth rates may all be studied with graphs or summary statistics.
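
For example, given the original and benchmarked versions of a quarterly series (hypothetical figures below), the BI ratios and the revisions to the period-to-period growth rates can be summarized as follows.

    import numpy as np

    # Hypothetical sub-annual series before and after benchmarking.
    original    = np.array([98.0, 100.0, 102.0, 104.0, 103.0, 105.0, 108.0, 110.0])
    benchmarked = np.array([101.5, 103.8, 106.2, 108.5, 106.3, 108.3, 111.5, 113.9])

    # Benchmarked-to-indicator (BI) ratios: how much each period was adjusted.
    bi_ratios = benchmarked / original

    # Revisions to the period-to-period growth rates, in percentage points.
    growth_original    = 100 * np.diff(original) / original[:-1]
    growth_benchmarked = 100 * np.diff(benchmarked) / benchmarked[:-1]
    growth_revisions   = growth_benchmarked - growth_original

    print(np.round(bi_ratios, 3))
    print(np.round(growth_revisions, 2), np.abs(growth_revisions).max())

BI ratios close to one and growth-rate revisions close to zero indicate that the movement of the sub-annual series has been well preserved.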

References

Bloem, A. M., R. J. Dippelsman and N. Ø. Mæhle. 2001. Quarterly National Accounts Manual: Concepts, Data Sources, and Compilation. International Monetary Fund, Washington, D.C.

Dagum, E.B. and P.A. Cholette. 2006. Benchmarking, Temporal Distribution, and Reconciliation Methods for Time Series. Lecture Notes in Statistics, no. 186. New York: Springer. 410 p.

Statistics Canada. 1998. "Policy on Informing Users of Data Quality and Methodology." Statistics Canada Policy Manual. Last updated March 4, 2009.

Yung, W., B. Brisebois, C. Tardif, G. Kuromi and C. Rondeau. 2008. "Should Sub-Annual Surveys be Benchmarked to their Annual Counterparts? A Case Study of Manufacturing Surveys." Working Paper BSMD-2008-001. Statistics Canada, Ottawa, Ontario.
