Providing feedback to clinicians is part of the enterprise of assuring and improving quality. The ideas arose in the 1940s from W E Deming’s improvement principles for manufacturing processes1 and were subsequently applied to healthcare. Initially, the basic principle was that small incremental changes should be applied throughout clinical practice.2 Services were shown to improve steadily with time.3
In the meantime, hospitals were revealed to be unsafe environments,4-7 and the imperative for change suddenly became urgent. Since that time, new methods have been adopted, including the “Breakthrough Series” collaboratives, which harness the “plan–do–study–act” (PDSA) processes promulgated by the US Institute for Healthcare Improvement (www.ihi.org).8 The emphasis is on identifying information called “guides” (including evidence in the form of guidelines, but also tips and techniques); establishing an enabling organisation to drive the changes through smaller collaborating organisations; using agreed standard indicators; and setting big, ambitious targets. These processes, in comparison with the small incremental changes originally favoured, could be called “revolutionary” rather than “evolutionary”.9
Australia, like many other countries, is looking to these methods to improve the healthcare system in both hospital and primary care. For example, Australia’s National Institute of Clinical Studies (NICS) has recently focused quality interventions on emergency departments (Box 1, example A). However, primary care has largely been bypassed in recent endeavours to improve quality of care in Australia.
The use of feedback is central to many quality improvement processes. The main principle of feedback is that clinicians reflect on their performance to encourage whatever change (in their clinical behaviour or in the system) is necessary for improvement.15 Initially, data are collected, perhaps to answer a particular concern about perceived problems. Feedback may then simply consist of presenting data to clinicians, in the hope that deficiencies will be so obvious that improvement will be inevitable. Alternatively, the audited data may be matched against explicit standards, a process that more clearly highlights any shortcomings in current practice.16 The outcomes from the feedback stage may be twofold: changes in performance (to correct perceived deficiencies) and further data collection (as a basis for comparison when the process is repeated in the future).
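To make the cycle concrete, the sketch below models one pass of audit and feedback. It is a minimal illustration under our own assumptions: the function, the record fields and the 80% standard are hypothetical, not drawn from any published tool.

```python
# Minimal sketch of one audit-and-feedback cycle; the function, field
# names and structure are hypothetical, not a published tool.

def audit_feedback_cycle(records, measure, standard=None):
    """Audit records, compare with a standard (if any), and report."""
    # 1. Audit: apply the chosen measure to each clinical record.
    values = [measure(record) for record in records]
    performance = sum(values) / len(values)

    # 2. Feedback: present the data alone, or match them against an
    #    explicit standard to highlight any shortfall.
    report = {"performance": performance}
    if standard is not None:
        report["standard"] = standard
        report["shortfall"] = max(0.0, standard - performance)

    # 3. Outcomes: the report prompts behaviour change, and the values
    #    become the baseline for comparison in the next cycle.
    return report, values

# eg, proportion of eligible patients screened, audited against 80%
report, baseline = audit_feedback_cycle(
    [{"screened": True}, {"screened": False}, {"screened": True}],
    measure=lambda r: 1.0 if r["screened"] else 0.0,
    standard=0.8)
print(report)  # -> performance ~0.67, standard 0.8, shortfall ~0.13
```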
Standards, and the data that reflect them, drive the cycles of improvement. Standards can be set at various levels:
“minimal” standards define the level below which clinicians will be censured. The disadvantage of minimal standards is that they are unlikely to influence the majority unless the fear of censure is great;
“normative” standards are an average, providing a direct impetus to perhaps half the population (but potentially ignoring the other half, who might even feel encouraged to be complacent);
“exemplary” standards encourage everyone to do better (but perhaps discourage those hopelessly below standard).
There are pros and cons to setting standards at each of these levels, but few empirical data to show which approach is the most effective.15,17
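To illustrate how the three levels might be derived from a distribution of clinician performance scores, the sketch below uses decile cut-offs. The 10th and 90th centiles are assumptions for illustration only; the literature cited does not prescribe how each level should be computed.

```python
# Illustrative derivation of the three standard levels from clinician
# performance scores (0-100). The decile cut-offs are assumptions for
# illustration, not taken from the article or its references.
from statistics import mean, quantiles

def derive_standards(scores):
    deciles = quantiles(scores, n=10)  # 9 cut points: 10th..90th centile
    return {
        "minimal": deciles[0],      # floor below which censure would apply
        "normative": mean(scores),  # the group average
        "exemplary": deciles[-1],   # level achieved by the best performers
    }

print(derive_standards([55, 62, 70, 71, 75, 80, 84, 88, 93, 97]))
```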
Feedback data are used in different ways. Clinicians may be given the data and left to make their own decisions about what action to take, or authorities may seek to influence clinicians’ behaviour directly by offering rewards or imposing censure. Multidisciplinary teams may use the data to set new targets, agree on ways of achieving them, and design new sets of data to collect for future cycles.
A Cochrane review has shown that audit and feedback yield small to modest improvements in clinical practice,18 whether used alone or in concert with other forms of intervention. The size of the improvement is highly variable, with some interventions much more effective than others. The best predictor of the size of improvement is the target group’s deviation from best practice at baseline (the greater the deviation, the greater the improvement).18 More complex interventions generally yield greater improvements. Simply providing centrally collected feedback data is less likely to be effective (Box 1, example B).12
Evaluation of quality improvement may be by randomised controlled trial, but often it appears to be acceptable to use less stringent methods, such as before–after comparisons or quasi-experimental designs (Box 1, examples A and D).
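As a toy illustration of a before–after comparison (the numbers below are invented, not drawn from the studies cited), the relative reduction in an indicator is simply the change divided by the baseline value:

```python
# Toy before-after comparison with invented numbers: relative
# reduction in an indicator such as median time to analgesia.
before, after = 120.0, 72.0   # minutes, hypothetical audit results
relative_reduction = (before - after) / before
print(f"Relative reduction: {relative_reduction:.0%}")  # -> 40%
```

Unlike a randomised controlled trial, such a comparison cannot exclude secular trends or regression to the mean, which is why it counts as a less stringent design.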
How should we use feedback data, in view of their moderate but variable effectiveness? One rational way to address the issue is to examine each stage of the feedback process in turn.
The area of clinical practice under examination should determine the data to be collected to assess it. (All too often, the reverse is true: easy sources of data are seized for audit. For example, general practitioners probably audit cervical screening excessively because data are readily available from pathology services.19 See also Box 1, example C.)
One issue is how easily the most appropriate data can be obtained. This is more of a problem in community-based (non-institutional) care than in hospitals (Box 2). However, choosing data simply because they can be collected and fed back does not necessarily target the areas of greatest deviation from best practice. This is a trap for the unwary.
So, too, is relying on important clinical areas (such as the national goals promoted by the Australian Health Ministers’ Advisory Council), which may nevertheless not be seen as relevant to local settings, and so risk being less appealing to clinicians.
In setting targets, the new methods of taking big steps (“Breakthrough Series”) come into their own, forcing bold targets and ambitious gains. This can ensure that optimal measures are selected (Box 1, example D).
One means of disseminating feedback data would be to set up computer systems that automatically link the components of quality improvement activities. A systematic review suggests that computer-based decision support helps clinicians implement evidence, with improved outcomes for patients in several areas.20 One can envision a future in which clinicians receive computer-generated prompts, feedback and advice automatically in response to their clinical behaviour.
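As a hypothetical illustration of what such automatic linkage might look like, the sketch below fires a feedback prompt when an audited indicator breaches its target. The indicator, threshold and wording are all our assumptions rather than any existing system’s behaviour.

```python
# Hypothetical sketch of computer-generated feedback: a simple rule
# fires a prompt when an audited indicator breaches its target. The
# indicator, threshold and wording are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AuditResult:
    clinician: str
    indicator: str   # eg, "median time to analgesia (minutes)"
    value: float
    target: float

def generate_prompt(result: AuditResult) -> Optional[str]:
    """Return a feedback prompt only when the target is breached."""
    if result.value <= result.target:
        return None
    return (f"{result.clinician}: {result.indicator} is {result.value:g} "
            f"against a target of {result.target:g}; consider reviewing "
            f"current practice.")

prompt = generate_prompt(
    AuditResult("Dr A", "median time to analgesia (minutes)", 58, 30))
if prompt:
    print(prompt)
```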
Managing clinicians’ responses to the feedback they receive is important, as it drives any change. In the past, change has often been left to implicit processes, where the responses are unmanaged (Box 1, examples A, D), but, increasingly, explicit processes are being used, in which other people (not the clinicians themselves) decide beforehand what the target should be (Box 1, examples B, C).
Feedback is often a central plank of processes to change clinicians’ behaviour and improve quality. This is becoming such an important activity for clinicians that they need a better understanding of improvement processes. Perhaps quality activities should be incorporated into basic medical training, as they are in most other health-professional training.
Feedback is not always effective, and there are several reasons to expect the variation in effectiveness found empirically. But it can be a powerful motivator if applied with the right kind of process. The challenge is to incorporate quality improvement processes into everyday clinical activities without imposing yet another burden on doctors.
1: Examples of the use of feedback in Australian healthcare settings
A. Feedback data acted upon by PDSA processes, with implicit standards and before-and-after evaluation
Data about “time to analgesia” in several Australian hospital emergency departments were collected, with no standard being set. Before-and-after studies under the auspices of the National Institute of Clinical Studies showed substantial reductions in time to analgesia for patients in pain.10,11
B. Implicit standards applied to feedback data sent from an external agency, evaluated by RCT
In an RCT, half of 2440 Australian general practitioners were sent feedback every 6 months about their levels of prescribing of several drug types, together with educational newsletters. (The data were collected centrally by the prescribing authority.) After 2 years, there was no significant difference in prescribing patterns between intervention and control groups.12
C. Explicit standards applied to clinical data collected by an external agency, evaluated by RCT
A simple quality assurance program was set up to improve GP records. GPs were randomly allocated to receive feedback about the quality of their medical records or to receive no feedback. The intervention involved unstructured meetings with a peer to assess each other’s medical records. Standards were provided to enable the GPs to make comparisons. The improvement in quality of records was small but significant in some areas.13
D. Implicit standards derived from Breakthrough Series, evaluated using before-and-after comparisons
Classical quality methods of audit and feedback, informed by guidelines, were used to improve the quality of care of patients with acute myocardial infarction (AMI) in Queensland public hospitals. Attention to the urgent delivery of thrombolytic drugs to patients suffering AMI (a task requiring full teamwork) achieved a greater benefit than hospitals could have achieved with more conventional, and vastly costlier, approaches. Deciding to measure patient death rates after AMI led to “time to thrombolytic treatment” being chosen as the outcome to be fed back. Evaluation by a quasi-experimental design demonstrated a 40% reduction in in-hospital deaths.14
PDSA = plan–do–study–act. RCT = randomised controlled trial.
2: Issues affecting the appropriateness of data for feedback purposes
| Hospital-based care | Community-based care |
| --- | --- |
| Pre-existing data often available, but generally collected for purposes other than feedback (eg, reporting) | Existing data/clinical records usually (a) sparse; and (b) used for non-reporting reasons (eg, aide-memoire, medicolegal purposes) |
| Large quantity of data | Data sparse |
| Data often quantitative (eg, “time to analgesia” in emergency departments) | Data mostly qualitative |
| Primary function of data is for reporting and clinical purposes | Data rarely used for reporting |
- Chris B Del Mar1
- Geoffrey K Mitchell2
- Centre for General Practice, University of Queensland Medical School, Herston, QLD.
C B D M has been paid to attend quality-of-care meetings on behalf of the National Institute of Clinical Studies and the Royal Australian College of General Practitioners.
- 1. Deming WE. Elementary principles of the statistical control of quality. Tokyo: Nippon Kagaku Gijutsu Renmei, 1950.
- 2. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989; 320: 53-56.
- 3. Berwick DM, Murphy JM, Goldman PA, et al. Performance on a five-item health screening test. Med Care 1991; 29: 169-176.
- 4. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
- 5. Schimmel EM. The hazards of hospitalization. Ann Intern Med 1964; 60: 100-110.
- 6. Brennan TA, Leape LL, Laird N, et al. Incidence of adverse events and negligence in hospitalised patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991; 324: 370-376.
- 7. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalised patients: results of the Harvard Medical Practice Study II. N Engl J Med 1991; 324: 377-384.
- 8. Berwick DM. Disseminating innovations in health care. JAMA 2003; 289: 1969-1975.
- 9. Øvretveit J, Bate P, Cleary P, et al. Quality collaboratives: lessons from research. Qual Saf Health Care 2002; 11: 345-351.
- 10. Toncich G, Cameron P, Virtue E, et al. Institute for Health Care Improvement Collaborative Trial to improve process times in an Australian emergency department. J Qual Clin Pract 2000; 20: 79-86.
- 11. National Institute of Clinical Studies. NICS Projects: Emergency Department Collaborative. 2003. Available at: www.nicsl.com.au/projects_projects_detail.aspx?view=11 (accessed Feb 2004).
- 12. O’Connell DL, Henry D, Tomlins R. Randomised controlled trial of effect of feedback on general practitioners’ prescribing in Australia. BMJ 1999; 318: 507-511.
- 13. Del Mar CB, Lowe JB, Adkins P, et al. Improving general practitioner clinical records with a quality assurance minimal intervention. Br J Gen Pract 1998; 48: 1307-1311.
- 14. Scott IA, Coory MD, Harper CM. The effects of quality improvement interventions on inhospital mortality after acute myocardial infarction. Med J Aust 2001; 175: 465-470.
- 15. Donabedian A. A guide to medical care administration: medical care appraisal — quality and utilization. New York: American Public Health Association, 1969.
- 16. Baker R. General practice in Gloucestershire, Avon and Somerset: explaining variations in standards. Br J Gen Pract 1992; 42: 415-418.
- 17. Donabedian A. The quality of care: how can it be assessed? JAMA 1988; 260: 1743-1748.
- 18. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes. The Cochrane Library, Issue 3, 2003. Chichester, UK: John Wiley and Sons, Ltd.
- 19. Reynolds M, Richards C. Audit of computerised recall scheme for cervical cytology. BMJ 1982; 284: 1375-1376.
- 20. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 1998; 280: 1339-1346.
Abstract
Concern about risks associated with medical care has led to increasing interest in quality improvement processes.
Most quality initiatives derive from manufacturing, where they have worked well in improving quality by small, steady increments.
Adaptations of quality processes to the healthcare environment have included variations emphasising teamwork; large, ambitious increments in targets; and unorthodox approaches.
Feedback of clinical information to clinicians is a central process in many quality improvement activities.
It is important to choose feedback data that support the objectives for quality improvement — and not just what is expedient.
Clinicians need to be better educated about the quality improvement process to maintain the quality of their care.