Summary

Efforts to rank the health status of the Canadian population and the performance of Canada's health care systems will inevitably increase as standard health indicators become well established and data become uniformly available across the country and internationally. When report cards or other publications that include rankings of jurisdictions according to their health or health care systems are published, the public should be aware of any potential shortcomings in the methods that underlie these reports.

As a first step, the cautious reader should assess whether important aspects of health and health care are integral to the ranking scheme. Measures of health status, although essential to understanding population health, do not always accurately reflect the success of a health care system. To better understand population health, indicators are needed in the areas of health behaviours, living and working conditions, personal resources such as social support, and environmental factors affecting health. To judge the effectiveness and quality of health care systems, indicators are needed that embody the quality aims central to an optimal health care system, for example, health care that is accessible, appropriate, continuous, effective, efficient and safe. The most helpful indicators are those that capture these dimensions of care while remaining amenable to measurement and to action on the part of health care decision-makers.

Second, readers can carefully examine the meaningfulness and validity of the indicators chosen to quantify the aspects of health and health care included in the ranking scheme. The indicators should reflect important population health objectives or essential aspects of the health system.

Third, consumers can assess whether the data used to support specific indicators are accurate, reliable and comparable. Data that are old, incomplete or otherwise not representative of the intended population or health care institutions should be viewed with caution. Similarly, the sources of data should be examined for potential biases, and any such biases should be ruled out.

Finally, readers of ranking reports can keep an eye out for adherence to sound methodologic principles, including the following:

  • The distribution of the values of the indicators used as part of the ranking scheme must be taken into account before cut points are established that distinguish "good" from "middling" or "bad" performance.

  • In interpreting ranking scores, it is important to keep in mind that ranks are relative measures that can be misleading unless the absolute values of the indicators underlying the ranking method are also examined.

  • When comparing jurisdictions using ranking, adjustments must be made to account for underlying differences in the demographic profile of the respective populations. Adjustments may also be needed to account for underlying differences in health status or other characteristics.

  • When combining the values of indicators as part of a ranking scheme, how each factor is weighted to achieve an overall ranking score needs to be carefully considered and made explicit. Ideally, there will be a set of principles underlying the weights and aggregation formula; otherwise, aggregations of disparate and incommensurable indicators should be avoided.

  • The uncertainty that underlies all measurement should be reflected in the results of the ranking scheme.

  • Numerous statistical issues should be considered in the ranking scheme, including, for example, accounting for any correlations among the set of indicators used in the ranking methodology, handling extreme indicator values, and choosing an appropriate level of precision when determining ties in rank order.

  • Confidence in any ranking scheme depends on the full exposition of analytic methods used (that is, their transparency) and the extent to which steps have been taken to ensure that the methodology is free from bias.
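As an entirely hypothetical sketch of several of the points above, the following Python fragment combines three invented indicator values per jurisdiction using explicit, documented weights, rounds the composite scores to a stated precision, and assigns shared ranks to ties. All jurisdiction names, indicator values and weights are assumptions for illustration only, not data from any real report:

```python
# Hypothetical example: combining three indicators (e.g., access,
# effectiveness, safety) into a composite score with explicit weights,
# then ranking with ties. All names, values and weights are invented.

indicators = {
    # jurisdiction: (access, effectiveness, safety), hypothetical 0-100 scale
    "Jurisdiction A": (82.0, 74.0, 90.0),
    "Jurisdiction B": (75.0, 88.0, 85.0),
    "Jurisdiction C": (82.0, 74.0, 90.0),  # identical to A: should tie
    "Jurisdiction D": (60.0, 95.0, 70.0),
}

# Explicit weights that sum to 1 -- the point is that these choices
# must be principled and made transparent to readers.
weights = (0.4, 0.4, 0.2)

def composite(values, weights):
    """Weighted average of already-comparable indicator values."""
    return sum(v * w for v, w in zip(values, weights))

# Rounding to one decimal place is an explicit precision choice that
# determines when two jurisdictions are considered tied.
scores = {j: round(composite(v, weights), 1) for j, v in indicators.items()}

# Rank from best to worst; tied scores share the earlier rank.
ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
ranks = {}
prev_score, prev_rank = None, 0
for position, (jurisdiction, score) in enumerate(ordered, start=1):
    if score == prev_score:
        ranks[jurisdiction] = prev_rank  # tie: share the earlier rank
    else:
        ranks[jurisdiction] = position
        prev_score, prev_rank = score, position
```

Note that the absolute scores are kept alongside the ranks: two jurisdictions one rank apart may differ by a trivial or a substantial margin, and only the underlying scores reveal which.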

Box 4 includes a summary of these points in the form of a checklist to aid in the review of health ranking reports.

Ranking health and health system performance will improve as better information systems and standardized reporting systems are more fully implemented across Canada and internationally. For example, cross-national surveys have been fielded and have begun to provide insights into public perceptions of the performance of the health care systems of several developed countries.10 The WHO, OECD and other organizations continue to work to develop measures of population health and health system performance that are applicable around the world.6,11,12

Although ranking systems can be informative in gauging population health and the success of health care systems, other approaches are often helpful as supplements to aid in their interpretation. Describing the common attributes of high-ranking (or low-ranking) jurisdictions is one such strategy that augments the value of relative rankings. And until improved measures and methods are available to precisely rank aspects of population health and health care systems, cruder measures, such as whether jurisdictions are performing on, above or below average, have merit.13
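The "on, above or below average" approach can be sketched as follows, assuming a simple normal-approximation confidence interval for a rate. The benchmark rate, jurisdiction rates and population denominators are all invented for illustration:

```python
# Hypothetical example: instead of assigning a precise rank, classify
# each jurisdiction's rate as above, below, or on the overall average,
# using a normal-approximation 95% confidence interval.
# All rates and denominators are invented for illustration.

import math

overall_rate = 0.12  # hypothetical benchmark (e.g., 12 events per 100)

jurisdictions = {
    # name: (observed rate, population denominator)
    "Jurisdiction A": (0.15, 4_000),
    "Jurisdiction B": (0.118, 2_000),
    "Jurisdiction C": (0.09, 30_000),
}

def classify(rate, n, benchmark, z=1.96):
    """Compare a rate's 95% confidence interval to the benchmark."""
    se = math.sqrt(rate * (1 - rate) / n)
    lower, upper = rate - z * se, rate + z * se
    if lower > benchmark:
        return "above average"
    if upper < benchmark:
        return "below average"
    return "on average"  # interval overlaps the benchmark

results = {name: classify(r, n, overall_rate)
           for name, (r, n) in jurisdictions.items()}
```

This crude three-way classification reflects measurement uncertainty directly: a jurisdiction whose confidence interval straddles the benchmark is simply reported as "on average" rather than being pushed up or down a rank order by statistical noise.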

Box 4 – Checklist for Reviewing Health Ranking Reports

Step 1:
Assess the soundness of the conceptual framework

  • Does the ranking scheme's conceptual framework cover the areas of health and health care that are relevant to the purpose of the ranking?

Step 2:
Assess the indicators chosen to measure selected aspects of health and health care

  • Are the indicators of health or health care used in the ranking consistent with the conceptual framework?
  • Are the measures used for the selected indicators meaningful and valid?

Step 3:
Assess the data quality

  • Are data accurate, reliable, complete, comparable and free from bias?
  • Are data elements defined and collected so that "apples to apples" comparisons are being made?

Step 4:
Examine soundness of methods

  • Are meaningful differences in performance distinguishable?
  • Are absolute and relative comparisons available for review?
  • Have appropriate adjustments been made for underlying differences in the populations being compared?
  • Is the way specific measures are combined in the ranking scheme clear?
  • Is the specific formula, along with any weights used to combine individual measures or indicators, based on clear and reasonable principles?
  • Are differences in performance statistically significant?
  • Have other statistical issues been appropriately handled (for example, adjustments for correlated measures or handling outlier values and ties)?
  • Have the authors of the report reduced the potential for bias through full disclosure of ranking methods and peer review?

Ranking reports are popular because they provide a quick, high-level picture of different health care systems and indicate "how we are doing" compared with other systems. At the same time, when done well, these reports can point to differences which, upon more thorough examination, may assist in improving our own health care systems.

However, as illustrated in this paper, ranking reports must avoid a number of methodological and statistical pitfalls to provide an accurate and insightful perspective. Even when these pitfalls are avoided, ranking reports can reach seemingly contradictory conclusions if they are based on different frameworks of health or health care and, as a result, include different measures or indicators. Here we have tried to give readers the tools needed to critically assess the content of ranking reports. We hope this focus on critical assessment will assist both those who produce ranking reports and those who use them, so that we can all learn from these reports and their inter-jurisdictional comparisons.
