The rationale for developing evidence-based clinical practice guidelines is that their use will achieve better health outcomes for patients, or better value for money, than would otherwise have been achieved.1 Methods of guideline development should ensure that treating patients according to the guidelines will achieve the desired outcomes. Three important issues underpin the development of valid and usable guidelines. Firstly, there must be a systematic review of the evidence. Secondly, the expert group assembled to translate the evidence into guidelines should be appropriately multidisciplinary. Thirdly, developing guidelines requires adequate resources: people with a wide range of skills, including expert clinicians, health services researchers and group process leaders, as well as financial support.2 In a world in which clinical guidelines now abound, what factors should guideline implementers and users consider when selecting, presenting and delivering clinical guidelines?
The majority of guideline implementers and users will not be developing their own guidelines. What then should guide their choice of guidelines? The principles of guideline development delineated above have been elaborated and enshrined within most guideline assessment tools.3 One of the best, the “Appraisal of Guidelines for Research and Evaluation (AGREE)” instrument,4 has been well developed and validated. It documents the important areas of guideline development (Box).
The availability of such instruments, and an increasing awareness of how clinical guidelines should be developed, have led to the external assessment of published guidelines. External assessment has revealed that published guidelines are not of uniformly high quality5 and that the characteristics of guideline developers cannot be relied on as a proxy for guideline quality; studies relating developer characteristics to the quality of the resulting guidelines have produced mixed findings. Burgers et al,6 assessing 86 guidelines from 11 countries, found that guidelines produced within structured and coordinated programs were likely to be of higher quality than those developed outside such programs. However, Shaneyfelt et al,5 in a study of 279 guidelines published over 12 years by 69 developers, could find no relationship between developer characteristics and the quality of guidelines. Furthermore, in an assessment of 431 guidelines published by specialty societies, Grilli et al7 concluded that their quality was, in most cases, unsatisfactory.
The inescapable conclusion is that “you can’t judge a guideline by its cover”. When planning to adopt a guideline, users first need to critically appraise it using a validated tool such as the AGREE instrument.4
Having found and appraised a guideline, users may find it valuable to know whether there are additional attributes that make a guideline more likely to be used. Two studies8,9 have suggested that a range of factors (eg, complexity, compatibility, the need for new skills) can promote or inhibit the use of a guideline. These factors are broadly compatible with the characteristics of innovations in the diffusion-of-innovations model described by Rogers.10,11
Grol et al8 and Burgers et al9 have both suggested that a guideline reflecting current norms (practice beliefs and attitudes) is more likely to be used. However, as both these studies used a cross-sectional design, they were actually studying performance rather than change in performance. In contrast, Foy et al,12 using a before-and-after design, demonstrated that change in performance was more likely when implementing recommendations that were least compatible with current norms. This suggests that those developing or using guidelines need not shy away from recommendations suggesting new or different behaviours.
Dissemination and implementation of guidelines are closely linked. Dissemination involves communicating information to care providers to increase their knowledge and skills, while implementation involves introducing an innovation into daily routines.13 It was previously held that mere postal distribution of guidelines did not change clinical practice.14 However, in a recent systematic review of guideline dissemination and implementation strategies,15 Grimshaw et al suggested that “educational materials may have a modest effect on guideline implementation that may be short-lived”. Although this effect may be small and somewhat uncertain (not least because of the relatively poor quality of the evidence), postal distribution has the advantage of being relatively inexpensive compared with other implementation strategies, such as outreach visits or small-group educational sessions. Moreover, irrespective of its impact as an implementation strategy, postal distribution, as used by the UK National Institute for Clinical Excellence, will continue to be a common method of disseminating guidelines.
Whether as a dissemination or implementation strategy, is there any evidence about how best to present guidelines?
It is probably reasonable to conclude that a range of presentations can be appropriate, although almost all of the evidence relates to overall effects on behaviour rather than the relative merits of different presentations. In the review by Grimshaw et al15 it is unclear how extensive the documents were — the studies reviewed almost certainly contained a range of guidelines, from lengthy documents through to short summary texts. At least two studies evaluating the impact of policy-orientated evidence summaries have shown a positive effect of these summaries on clinical practice.16,17
Baker and Fraser18 have explored methods of developing review criteria (ie, measurable elements of care derived from guideline recommendations) from clinical guidelines. In a randomised controlled trial, Baker and colleagues19 compared the impact of distributing prioritised review criteria with that of distributing guidelines. They found no difference in effectiveness between the two methods in achieving uptake of guidelines.19
Does the method of presentation make a difference to the uptake of guidelines? The review by Grimshaw et al15 suggests that postal delivery may be effective to some degree, and also provides strong support for various forms of reminder systems. However, the method of presentation with perhaps the greatest potential is computerisation. Computerisation can encompass a variety of methods of guideline presentation, ranging from electronic filing cabinets to computerised reminder systems to the most sophisticated decision-support systems. While there is considerable enthusiasm for and evidence to support the use of computerised reminder systems, sophisticated decision-support systems remain the Holy Grail.
A computerised decision-support (CDS) system is “a system that compares patient characteristics with a knowledge base and then guides a health provider by offering patient-specific and situation-specific advice”.20 We examined two systematic reviews21,22 and one meta-analysis23 of the effectiveness of CDS. The meta-analysis23 focused on randomised controlled trials of computerised reminder systems for preventive care in ambulatory-care settings. The 16 trials showed improved preventive practice for vaccination, breast cancer screening, colorectal cancer screening and cardiovascular risk reduction, but not for cervical cancer screening or “other” preventive activities. The most recent systematic review of CDS22 (an update of an earlier review21) identified 68 controlled trials. These showed benefit in nine of 15 trials evaluating systems to improve drug dosing; one of five trials evaluating diagnostic aids; 14 of 19 trials evaluating systems to improve preventive care; and 19 of 26 trials of CDS in “other medical care”. Of the 14 studies measuring patient outcomes, six showed improvements. However, most of the studies had design or analysis flaws, so their results must be interpreted with caution. In addition, many of the studies were based on computer-generated but paper-based systems (in which remote computer systems generate paper reminders that are then attached to patient records); there is less evidence about the use of real-time computer systems outside the confines of expert settings.
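To make this definition concrete, the sketch below (in Python) shows, in highly simplified form, how a rule-based reminder system might compare patient characteristics with a small knowledge base and emit patient-specific prompts. It is purely illustrative: the patient fields, rules, age ranges and screening intervals are hypothetical placeholders, not clinical recommendations, and it is not drawn from any of the systems reviewed here.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, List, Optional


@dataclass
class Patient:
    # Hypothetical, highly simplified patient record for illustration only
    age: int
    sex: str
    last_flu_vaccine: Optional[date]
    last_mammogram: Optional[date]


@dataclass
class Rule:
    # A rule pairs a condition on patient characteristics with the advice to offer
    applies: Callable[[Patient, date], bool]
    advice: str


# Toy "knowledge base" of preventive-care rules; intervals and age ranges are
# illustrative placeholders, not clinical recommendations
KNOWLEDGE_BASE: List[Rule] = [
    Rule(
        applies=lambda p, today: p.age >= 65
        and (p.last_flu_vaccine is None or (today - p.last_flu_vaccine).days > 365),
        advice="Influenza vaccination appears to be due.",
    ),
    Rule(
        applies=lambda p, today: p.sex == "F"
        and 50 <= p.age <= 74
        and (p.last_mammogram is None or (today - p.last_mammogram).days > 730),
        advice="Breast cancer screening appears to be due.",
    ),
]


def reminders(patient: Patient, today: date) -> List[str]:
    """Compare patient characteristics with the knowledge base and return
    patient-specific prompts (the essence of a computerised reminder system)."""
    return [rule.advice for rule in KNOWLEDGE_BASE if rule.applies(patient, today)]


if __name__ == "__main__":
    example = Patient(age=68, sex="F",
                      last_flu_vaccine=None,
                      last_mammogram=date(2001, 3, 1))
    for prompt in reminders(example, today=date(2004, 1, 15)):
        print(prompt)
```

Even this toy example hints at why fully fledged decision support is difficult: real systems must cope with incomplete records, competing recommendations and the need to deliver prompts within the clinical workflow rather than as a separate step.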
Furthermore, there are almost no studies of CDS in chronic disease management or of CDS integrated into routine computer systems. Somewhat chasteningly, since the abovementioned reviews21,23 were published, two trials evaluating CDS in the management of chronic disease24-26 have found that the intervention had little or no effect.
Thus, while it seems reasonable to conclude that CDS has potential as a method of delivering and implementing guidelines, it is important not to let our enthusiasm blind us to the realities.
Assuming that the technical hardware and software challenges of producing a system that truly supports complex disease management can be overcome, there remains the question of how such systems function within clinical encounters in which patients with complex conditions are managed.
To be widely accepted by practising clinicians, computerised support systems for decision making must be integrated into the clinical workflow. They must present the right information, in the right format, at the right time, without requiring special effort.27
Certainly, based on current systems and their patterns of uptake and use, it seems unlikely that computerisation will become the “magic bullet” for implementing evidence-based care in the near future.28
The Appraisal of Guidelines for Research and Evaluation (AGREE) instrument: a recommended approach to guideline assessment4
Scope and purpose
- The overall objective(s) of the guideline is (are) specifically described.
- The clinical question(s) covered by the guideline is (are) specifically described.
- The patients to whom the guideline is meant to apply are specifically described.
Stakeholder involvement
- The guideline development group includes individuals from all the relevant professional groups.
- The patients’ views and preferences have been sought.
- The target users of the guideline are clearly defined.
- The guideline has been piloted among target users.
Rigour of development
- Systematic methods were used to search for evidence.
- The criteria for selecting the evidence are clearly described.
- The methods used for formulating the recommendations are clearly described.
- The health benefits, side effects and risks have been considered in formulating recommendations.
- There is an explicit link between the recommendations and the supporting evidence.
- The guideline has been externally reviewed by experts before its publication.
- A procedure for updating the guideline is provided.
Clarity and presentation
- The recommendations are specific and unambiguous.
- The different options for management of the condition are clearly presented.
- Key recommendations are easily identifiable.
- The guideline is supported with tools for application.
Applicability
- The potential organisational barriers in applying the guideline have been discussed.
- The potential cost implications of applying the recommendations have been considered.
- The guideline presents key review criteria for monitoring and/or audit purposes.
Editorial independence
- The guideline is editorially independent from the funding body.
- Conflicts of interest of guideline development members have been recorded.
- Martin P Eccles1
- Jeremy M Grimshaw2
- 1 Centre for Health Services Research, School of Population and Health Sciences, University of Newcastle upon Tyne, Newcastle upon Tyne, UK.
- 2 Clinical Epidemiology Programme, Ottawa Health Research Institute, Ottawa, Canada.
Jeremy Grimshaw holds a Canada Research Chair in Health Knowledge Transfer and Uptake funded by the Canada Foundation for Innovation.
The authors received honoraria from the National Institute of Clinical Studies for participation in the workshop “Development of strategies to encourage adoption of best evidence into practice in Australia”.
- 1. Woolf SH, Grol R, Hutchinson A, et al. Clinical guidelines: the potential benefits, limitations and harms of clinical guidelines. BMJ 1999; 318: 527-530.
- 2. Shekelle PG, Woolf SH, Eccles M, Grimshaw J. Developing guidelines. BMJ 1999; 318: 593-596.
- 3. Cluzeau F, Littlejohn P, Grimshaw JM, et al. Development of a valid and reliable methodology for appraising the quality of clinical guidelines. Int J Qual Health Care 1999; 11: 21-28.
- 4. The AGREE Collaboration. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual Saf Health Care 2003; 12: 18-23. Available at: www.agreecollaboration.org (accessed Jan 2004).
- 5. Shaneyfelt TM, Mayo-Smith MF, Rothwangl J. Are guidelines following guidelines? JAMA 1999; 281: 1900-1905.
- 6. Burgers JS, Cluzeau FA, Hanna SE, et al. Characteristics of high-quality guidelines: evaluation of 86 clinical guidelines developed in ten European countries and Canada. Int J Technol Assess Health Care 2003; 19: 148-157.
- 7. Grilli R, Magrini N, Penna A. Practice guidelines developed by specialty societies: the need for a critical appraisal. Lancet 2000; 355: 103-106.
- 8. Grol R, Dalhuijsen J, Thomas S, et al. Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. BMJ 1998; 317: 858-861.
- 9. Burgers JS, Grol RP, Zaat JO, et al. Characteristics of effective guidelines for general practice. Br J Gen Pract 2003; 53: 15-19.
- 10. Rogers EM. Diffusion of innovations. New York: Free Press, 1995.
- 11. Rogers EM. Lessons for guidelines from the diffusion of innovations. Jt Comm J Qual Improv 1995; 21: 324-328.
- 12. Foy R, MacLennan G, Grimshaw J, et al. Attributes of clinical recommendations that influence change in practice following audit and feedback. J Clin Epidemiol 2002; 55: 717-722.
- 13. Davis D, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ 1997; 157: 408-416.
- 14. Freemantle N, Harvey EL, Wolf F, et al. Printed educational materials to improve the behaviour of health care professionals and patient outcomes. In: Bero L, Grilli R, Grimshaw J, Oxman A, editors. Collaboration on Effective Professional Practice module of the Cochrane database of systematic reviews. The Cochrane Library, Issue 1, 1998. Oxford: Update Software.
- 15. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004. In press.
- 16. Mason J, Freemantle N, Young P. The effect of the distribution of Effective Health Care Bulletins on prescribing selective serotonin reuptake inhibitors in primary care. Health Trends 1999; 30: 120-122.
- 17. Mason J, Freemantle N, Browning G. Impact of Effective Health Care bulletin on treatment of persistent glue ear in children: time series analysis. BMJ 2001; 323: 1096-1097.
- 18. Baker R, Fraser RC. Development of review criteria: linking guidelines and assessment of quality. BMJ 1995; 311: 370-373.
- 19. Baker R, Fraser RC, Stone M, et al. Randomised controlled trial of the impact of guidelines, prioritized review criteria and feedback on implementation of recommendations for angina and asthma. Br J Gen Pract 2003; 53: 284-291.
- 20. Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer-Verlag, 1997.
- 21. Johnston ME, Langton KB, Haynes RB, Mathieu A. Effects of computer-based clinical decision support systems on clinician performance and patient outcome: a critical appraisal of research. Ann Intern Med 1994; 120: 135-142.
- 22. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes. JAMA 1998; 280: 1339-1346.
- 23. Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventative care in the ambulatory setting. J Am Med Inform Assoc 1996; 3: 399-409.
- 24. Hetlevik I, Holmen J, Krüger Ø, et al. Implementing clinical guidelines in the treatment of hypertension in general practice. Scand J Prim Health Care 1999; 17: 35-40.
- 25. Hetlevik I, Holmen J, Krüger Ø, et al. Implementing clinical guidelines in the treatment of diabetes mellitus in general practice. Evaluation of effort, process and patient outcome related to implementation of a computer-based decision support system. Int J Technol Assess Health Care 2000; 16: 210-227.
- 26. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ 2002; 325: 941-944.
- 27. James BC. Making it easy to do right. N Engl J Med 2001; 345: 991-992.
- 28. Foy R, Eccles M, Grimshaw J. Why does primary care need more implementation research? Fam Pract 2001; 18: 353-355.
Abstract
There are internationally agreed optimal methods for developing clinical practice guidelines.
The quality of published guidelines varies. A validated assessment instrument should be used to identify well developed guidelines that can be used with confidence.
There are multiple ways of presenting guidelines, including computerised systems.
Computerisation of guidelines can cover a range of formats, from brief prompts through to complex decision-support systems. Integrating guidelines into computerised reminder systems has been shown to be effective in improving patient care, but there is less evidence to support the effectiveness of guidelines integrated into computerised decision-support systems.