MJA

A robust clinical review process: the catalyst for clinical governance in an Australian tertiary hospital

Imogen A Mitchell, Bobby Antoniou, Judith L Gosper, John Mollett, Mark D Hurwitz and Tracey L Bessell
Med J Aust 2008; 189 (8): 451-455. doi: 10.5694/j.1326-5377.2008.tb02120.x
Published online: 20 October 2008

High-profile patient safety inquiries1,2 and persistently high levels of preventable adverse events in health care systems3-5 have led governments to revolutionise their approach to the delivery of safety and quality in health care.6-8 A key component of this revolution has been the adoption of “clinical governance”,6,9 which requires structures and processes that integrate financial control, service performance and clinical quality in ways that engage clinicians and generate service improvement. Good clinical governance ideally shares the responsibility for averting adverse events between clinicians and managers. This includes shared ownership of both the issues and implementation of solutions.

In the Australian Capital Territory, a public review of neurosurgical services at the Canberra Hospital10 recommended that adverse events be identified and monitored to prevent harm to patients. In response to this review, a senior hospital executive decided that both clinicians and managers should be involved in developing a process to not only identify and investigate adverse events but also to create solutions to minimise their recurrence. A multidisciplinary committee was formed to oversee the development and implementation of a hospital-wide clinical review process and to provide the hospital’s Clinical Board with recommendations for reducing the incidence of adverse events.

Here, we report the development and implementation of this clinical review process and its impact on the hospital’s response to adverse patient outcomes.

Methods

We undertook a review of documents pertaining to the set-up and maintenance of the Clinical Review Committee (CRC) and recommendations made to and subsequent actions from the Clinical Board during the period 1 September 2002 – 30 June 2006. We assessed the degree of hospital staff engagement in the clinical review process by using the surrogate measures of CRC membership, the number of specific referrals made by clinicians, the number of departmental committees undertaking clinical review, and the number of staff interviewed during investigation of incidents. Other outcome measures were the numbers of cases reviewed, system issues identified, recommendations made to the hospital board, and ensuing actions.

The clinical review process

CRC members adapted the clinical review process from that of another institution,11 which, at the time, had neither a multidisciplinary approach to clinical review nor the same hospital structure. The main change made to this process was the introduction of a six-tier system of case review, in which the intensity of review depended on the severity of the adverse event, allowing more cases to be reviewed without compromising review outcomes.

Cases were identified for initial review using predetermined flags (Box 1). The clinical reviewers would then screen the medical records of flagged cases for the presence of one or more specified adverse events (Box 2), which were developed from a review of published adverse event data,11,12 national core sentinel events13 and aggregated CRC data after 12 months. If a case involved one or more of the specified adverse events, it was tabled at the CRC Executive meeting and evaluated against a severity assessment code (SAC).14 Along with other predetermined criteria relating more specifically to the nature of the case, the SAC determined the method of review (Box 3). The review aimed to determine if any system issues led to the adverse event.

The CRC was afforded “qualified privilege” under the ACT Health Act (1993), which encouraged frank discussion of the adverse event during the review process. However, qualified privilege did not prevent the CRC from publishing its findings and recommendations to a wider audience, including the Coroner, the Community and Health Services Complaints Commissioner, patients, and their relatives; nor did it prevent the ACT from supporting the national open disclosure policy.15

Results

From September 2002 to June 2006, 179 750 inpatients and 1 370 092 occasions of service were screened, capturing 5925 cases involving adverse patient outcomes; many were captured under more than one criterion. Of these events, 2776 (46.8%) progressed to detailed review and, of these, 342 (12.3%) were classed as serious or major (SAC 1 or 2). Investigation of these 342 cases identified a total of 881 system issues, with at least two associated with each case.
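As a quick consistency check, the proportions reported above can be recomputed from the raw counts. This is a sketch only; the counts are taken directly from the text, and the reported 46.8% appears to be a truncation of the exact ratio:

```python
# Counts reported in the Results section (Sep 2002 - Jun 2006).
captured = 5925      # adverse-outcome cases captured by screening
detailed = 2776      # cases that progressed to detailed review
serious = 342        # cases classed as serious or major (SAC 1 or 2)
system_issues = 881  # system issues found across the 342 serious cases

pct_detailed = 100 * detailed / captured   # ~46.85%, reported as 46.8%
pct_serious = 100 * serious / detailed     # ~12.3%, as reported
issues_per_case = system_issues / serious  # ~2.6, consistent with "at least two each"

print(f"{pct_detailed:.1f}% {pct_serious:.1f}% {issues_per_case:.1f}")
```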

Discussion

The implementation of a hospital-wide clinical review process in our tertiary hospital has demonstrated that all serious adverse events can be detected in a systematic way using predetermined detection flags and screening criteria. With seven methods of detecting clinical incidents, no significant adverse event has been identified outside the CRC processes. The close relationship with the hospital’s Clinical Board has enabled the CRC to bridge the gap between frontline clinical staff, policymakers and managers, by ensuring that system issues identified in serious adverse events are acknowledged and result in actions and hospital-wide projects to improve patient care.

The role of the independent clinical reviewers has been important in the success of this clinical review process. They have been able to work collaboratively with all clinicians, including senior consultants, and, being located in the hospital’s independent Clinical Practice Improvement Unit, have been able to provide impartial and objective reports. Their independence has also facilitated objective feedback to the clinicians, CRC and Clinical Board.

Another potentially important determinant of clinical staff engagement has been that clinicians themselves have driven CRC activities and developed CRC processes, allowing them to “buy into” the CRC and its activities. The CRC has also been able to feed back its findings to clinicians, morbidity and mortality committees, and the hospital Executive, and to develop recommendations for the Clinical Board together with them. This engagement of clinicians (nurses, doctors, allied health workers) in developing recommendations facilitates ownership and makes it more likely that recommendations will be enacted.19

Introduction of the systematic clinical review process has not been without difficulties. Its set-up and maintenance have been time-consuming and dependent on a small number of enthusiastic people. Some craft groups did not initially embrace the clinical review process, but resistance to taking part has declined over time. This change in behaviour occurred through active participation (eg, CRC membership) and also through an understanding, gained from face-to-face meetings, that adverse events are investigated consistently and independently in accord with transparent processes.

The qualified privilege conferred on the CRC appears to have helped with acceptance of the clinical review process. Previously, clinicians were reluctant to discuss adverse events9 for fear of reprisal (defamation, litigation). However, with the knowledge that documents relating to CRC investigations were not admissible in a court of law, only rarely did clinicians refuse to take part. With the clinical review process now embedded in the hospital culture, clinicians have welcomed a consumer representative onto the CRC and the introduction of open disclosure.

The CRC was slow to develop rigorous reporting of identified system issues. In the first 2 years, it was difficult to report to the Clinical Board in a meaningful way, due to lack of grouping or prioritisation of identified system issues. Over time, a data dictionary has been developed to enable accurate grouping of identified system issues, which has been essential for the development of hospital-wide projects.

Despite the apparent success of the CRC, this study only reports surrogate markers for engagement of clinical staff in the clinical review process. The failure to conduct interviews with participants and non-participants in the CRC process weakens the evidence for good clinical engagement. Also, in the absence of a CRC database in the early days, much of the data collection was performed manually, increasing the risk of missing data and incorrect analysis.

The CRC, through its multidisciplinary group of clinicians and links with the Clinical Board, has had a visible impact on patient care. The multi-tiered investigative process has been a practical solution to the overwhelming number of cases identified for initial screening, without compromising review outcomes. We see the success of the CRC as twofold: the engagement of clinicians in the process,20 and the development of actions overseen by the peak decision-making body. The consistent methods used for case review of similar incidents, the independent nature of the dedicated reviewers, the penetration of the CRC into the institution and the local university curriculum, and the visible actions that have arisen from the reviews represent some of the evidence of its success.

The clinical review process is itself continually under review, and substantial resources have been invested to not only support the CRC’s processes, but also for clinical improvement projects driven by clinicians. While the system continues to mature, it has led the development of the clinical governance framework in our institution that is now being used territory-wide.

2 Initial adverse event screening criteria that trigger further review: Australian national core sentinel events13 and other triggers. [Individual criteria not reproduced here.]

3 Clinical Review Committee (CRC) review process — levels of review*

Level 1 (External opinion): actual or potential significant/sentinel/critical incident.

Level 2 (ACT Clinical Audit Committee [CAC] interdivisional [joint] review): any incident involving more than one health agency in the ACT.

Level 3 (CRC extended review): actual or potential significant/sentinel/critical clinical incident in accord with the severity assessment coding process and the Significant Incident Policy.

Level 4 (Review and presentation to CRC): incident involving more than one clinical unit — “not significant” in accord with the severity assessment coding process and the Significant Incident Policy.

Level 5 (Single unit review): incident involving only one clinical unit — “non-significant” in accord with the severity assessment coding process and the Significant Incident Policy.

Level 6 (CRC Executive): any case reviewed by the clinical reviewers that fulfils the screening criteria.

ACT = Australian Capital Territory. * A review may be escalated to another level at the discretion of the CRC Executive.
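The six-tier triage in Box 3 can be sketched as a simple decision routine. This is an illustrative reconstruction only: the `Case` fields, the `review_level` function and the ordering of the checks are hypothetical, not part of the hospital's actual system, and in practice the CRC Executive could escalate any case at its discretion:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """Hypothetical summary of a screened case (illustrative only)."""
    meets_screening_criteria: bool
    significant: bool             # per severity assessment coding / Significant Incident Policy
    agencies_involved: int        # number of ACT health agencies involved
    clinical_units_involved: int  # number of clinical units involved
    needs_external_opinion: bool

def review_level(case: Case) -> int:
    """Map a case to one of the six Box 3 review levels."""
    if case.needs_external_opinion:
        return 1  # external opinion: significant/sentinel/critical incident
    if case.agencies_involved > 1:
        return 2  # ACT Clinical Audit Committee joint review
    if case.significant:
        return 3  # CRC extended review
    if case.clinical_units_involved > 1:
        return 4  # review and presentation to CRC
    if case.clinical_units_involved == 1:
        return 5  # single unit review
    return 6      # CRC Executive: any case fulfilling the screening criteria
```

The point of the sketch is that each case receives exactly one (escalatable) level, so review intensity scales with severity rather than every case receiving the heaviest review.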

4 Frequency of systems associated with adverse events and examples of actions taken

System associated with adverse event | 2003–2004 | 2004–2005 | 2005–2006
Clinical assessment and management | 35% | 57% | 34%
Clinical guidelines/policy procedure | 20% | 43% | 23%
Communication between staff | 37% | 42% | 26%
Skills/education | 33% | 27% | 22%
Patient observation process | 19% | 25% | 9%
Documentation | 20% | 24% | 13%
Coordination of care | 17% | 16% | 20%
Staff supervision | 7% | 14% | 9%
Human resources/staff allocation | 8% | 11% | 4%
Equipment | 5% | 8% | 8%
External factors | 4% | 8% | 5%
Other factors | 6% | 8% | 6%
Physical environment | 4% | 4% | 8%
Communication between staff, patient and family | 3% | 2% | 7%
Security/design | 0 | 1% | 1%
Patient site/identification | 0 | 1% | 3%

[Examples of actions taken not reproduced here.]
