MJA

Standards for health care: a necessary but unknown quantity

Caroline A Brand, Joseph E Ibrahim, Peter A Cameron and Ian A Scott
Med J Aust 2008; 189 (5): 257-260. doi: 10.5694/j.1326-5377.2008.tb02017.x
Published online: 1 September 2008
The rationale for having health care standards

In 1995, the Quality in Australian Health Care Study of public hospitals reported high levels of preventable iatrogenic injuries among inpatients.2 The slow pace of health care reform since that time has prompted the monitoring of health care performance and feedback of data to organisations, health care provider groups and consumers to improve quality of care. External regulation in Australia is accepted, and indeed expected, in other domains, such as the food and aviation industries, where operators who do not meet tightly controlled minimum safety standards can have their activities curtailed. External regulation of health care performance is also necessary to ensure accountability and equity in access to high-quality care.3

However, the most effective way to incorporate measurement into health care clinical governance frameworks is yet to be identified,4 as reflected in the variety of measurement, reporting and regulatory processes used worldwide: from profession-led clinical audits to organisation- and system-wide clinical indicators, the latter linked to the use of report cards, league tables and funding incentives such as “pay for performance”.

Recent international and national policy documents predict increased external regulation. In Australia, jurisdictions such as Queensland now require mandatory reporting of safety indicators,5 and, nationally, the Australian Commission on Safety and Quality in Health Care, established in January 2006, aims “to report publicly on the state of safety and quality including performance against national standards”,6 broadening the scope of attention beyond acute in-hospital care to all health settings and health care providers.

Here, we present a point of view regarding the strengths and limitations associated with implementing policies of universal reporting against specified standards. We suggest that a structured framework for developing national health care standards should be applied to high-priority areas in which there are demonstrated gaps in health care performance and where there is strong evidence for the need for effective intervention.

Is there a clear definition of health care standards?

It may be argued that there have always been health care standards: some explicit in nature (in the form of clinical guidelines and position statements), but many implicit, defined and maintained by the professionalism of self-regulating health care providers. From a patient’s perspective, however, there is a reasonable expectation of public accountability for explicitly defined standards that support high-quality health care. In addition, patients may interpret “quality” differently from clinicians.7 Ultimately, the quality domains and standards chosen will reflect community values and social goals.8

Safety standards have been defined as “agreed attributes and processes designed to ensure that a product, service or method will perform consistently at a designated level”.9 A designation of minimally acceptable performance that applies to all cases may attract high-level regulation, even legislation, with which all stakeholders would agree, as is the case for the incorporation of seatbelts in car design. In health care, a similar example would be the use of appropriate sterilisation procedures for surgical instruments. However, there will be many instances where such standards have not been determined or are even undesirable. Quality is a concept rather than a fixed entity, and describes attributes of performance, many of which are amenable to change, preferably improvement. In other jurisdictions, such as the National Health Service in the United Kingdom, improvement measures have been included in national reporting as “optimal” or aspirational standards, in which levels are set using evidence, existing achievement levels or consensus.10

There is a risk that different types of standards will cause conceptual confusion and lead to an unrealistic expectation that aspirational targets are synonymous with fixed minimally acceptable service levels. It will be necessary to review the implications for national reporting of both types of standards, with regard to their limitations and ramifications for regulation.

What is the relationship between health care standards and clinical performance?

In Australia, health care standards monitored within accreditation programs already measure performance across the continuum of care, focusing primarily (but not exclusively) on qualitative structural and process measures rather than on quantitative measures that reflect change over time in response to specific quality improvement interventions.11,12 Despite widespread adoption of accreditation (associated with significant cost, and perceived by organisations as a distraction), there is limited information about the degree to which included standards conform to a commonly agreed-on “standards for standards” development framework, and about whether assessment of such standards correlates with, or improves, quality of care. A recent Australian review of the impact of accreditation reports some evidence for an increased focus on policies and processes to improve quality, but conflicting evidence for an association between accreditation and measures of hospital quality and safety performance.13 Further, accreditation and national safety and quality processes have failed to rectify problems uncovered in well publicised incidents in some Australian hospitals.14,15 Administrative processes such as credentialling and clinical privileging, even when appropriately applied and assessed by accreditation agencies, may also not prevent failures by individual health service providers: the most rigorous structural frameworks and administrative processes are likely to be too far removed from actual service provision to allow assessment of a causal relationship with health outcomes. The current emphasis on structural standards may need to be reviewed from a cost–benefit perspective to select a minimum set of priority standards. Those included would drive improvement in important domains of care; for example, documentation supports communication and other goals of care, but should not burden organisations and health care providers with data collection requirements that distract from efforts to improve quality in high-priority, evidence-based areas.

Quantitative clinical performance measures of both processes and outcomes of care in specific clinical conditions are of increasing interest to regulators. They have a closer causal relationship to quality and safety than structural measures and provide numerical data for monitoring change. However, they also have limitations. Firstly, the reported association between processes of care and patient outcomes is variable.16,17 Bradley et al correlated hospital performance based on measures of care of patients with acute myocardial infarction with National Registry of Myocardial Infarction data from 962 hospitals in the United States and found that the process measures captured only a small proportion of the variation in risk-standardised hospital mortality rates.17 This result contrasts with that of Peterson et al, who investigated individual and composite measures of guideline adherence and found that high performance on composite measures correlated positively with overall organisational performance as measured by in-hospital mortality rates.16 A further study reviewed organisational performance across 18 standardised indicators of care introduced by the Joint Commission on Accreditation of Healthcare Organizations for acute myocardial infarction, heart failure and pneumonia, and found consistent improvement in process-of-care measures but no related improvement in in-hospital mortality.18 Performance indicators based on guideline-endorsed standards of care may also not be the most appropriate measures of high-quality care of specific patient populations.19 Kerr et al reported that lipid levels in patients with diabetes may identify poor control but not necessarily poor care,20 and Guthrie et al reported that high levels of reported adherence to national blood pressure targets by general practitioners may not necessarily translate into clinical action; gathering additional treatment information was required for quality improvement purposes.21

A recent systematic review reported no “consistent nor reliable” relationship between risk-adjusted mortality and quality of hospital care,22 and the validity of using readmission as a measure of quality remains highly controversial.23-25 Even after adjustment for differences in casemix, other confounders that may explain variances between organisations are poorly understood, and methodological differences relating to different data sources (administrative or “coded” data versus clinical data) and data quality can result in erroneous and unfair conclusions regarding organisational performance.26-28 Additional recent work suggests that appropriate patient selection may be even more important than choice of data sources in the assessment of potentially avoidable adverse outcomes for specific clinical conditions.29

How should Australia develop national standards?

Despite the reservations expressed about methodological issues in identifying robust measures for standards, patients are entitled to expect information about their treatment. We suggest that Australian health care standards would be best developed within a broad measurement framework that matches assessment of system-level performance with use of appropriate measurement methods, measures and indicators, reporting and regulation (Box).30,31 Further, we caution against over-reliance on external regulatory systems to drive improvement. Some authors have reported perverse behaviours resulting from the introduction of standards or fixed targets, which undermine a holistic approach to quality improvement in all its domains and divert attention from unmeasured areas of care.32,33 Chasing aspirational targets may incur considerable opportunity costs in the absence of studies that confirm cost-effectiveness from a policy-making perspective.34 Once performance thresholds become entrenched, there may be less flexibility for reviewing and redefining standards according to changing circumstances.

Externally regulated standards should focus on areas with clearly identified major gaps in safety; where these gaps can be accurately measured, and a validated “cut-off” or minimally acceptable threshold can be identified; and where there is good evidence that interventions improve performance in specific gaps. Further, we recommend that in the initial stages of development of Australian health care standards, broad frameworks35 previously used for developing performance indicators could form a basis for setting standards across multiple quality domains system-wide. These broad frameworks should also be modified to guide development of a smaller array of standards and measures targeting a suite of high-priority quality and safety areas. For instance, nosocomial infection is a major safety issue for hospitalised patients and a priority identified for intervention by the Australian Commission on Safety and Quality in Health Care.36 On the rationale that there is evidence for a causal relationship between inadequate hand hygiene and microbial colonisation, a suite of structural standards (policies, environment, building), process standards (credentialling, observational monitoring) and outcome standards (reporting central bloodstream infections) could be considered for development within a defined methodological framework that considers psychometric attributes of performance measures and the implications of data collection, monitoring and remedial intervention on infrastructure development.37 A responsive regulatory process, appropriate to the type of standard and the needs of the setting and providers to which the standard applies, could then be assigned.38 Other priority areas could be addressed in a similar fashion; for instance, venous thromboprophylaxis and prevention of pressure ulcers, wrong-site surgery and handover errors.

Clearly, a diverse group of individuals will be needed to provide the necessary clinical, management, methodological, legal and consumer perspectives and expertise. Engagement of clinicians in quality improvement and in a systems approach to patient safety has been slow and generally narrowly focused on improving evidence-based practice. Despite this, there have been notable examples of the involvement of frontline clinicians in the design and implementation of systems that routinely collect and report high-quality data to improve quality of care.39,40

Ultimately, success in developing effective Australian health care standards will be predicated on access to adequate funding to develop the standards themselves, to adapt or redesign the monitoring systems required for the reporting, review and remediation that underpin them, and to facilitate their regular review and reformulation. It has been suggested that development of standards needs well defined procedures and at least 3 years of preparation and testing.41 Observers in the UK are concerned that lessons learned have not been integrated into the National Health Service plan for the development of standards.42 Let us hope the Australian experience of standards development will not be reported in the same way.
