Medication errors are a source of significant morbidity and mortality among hospital patients worldwide. A United States study found that drug complications were the most common type of adverse hospital event, accounting for 19% of all these events.1 An adverse drug event is defined as any injury related to the use of a drug.2 The risk of a medication error causing an adverse drug event is increased by difficult and time-critical circumstances.
A recent qualitative study highlighted a range of factors that may lead to a prescribing error, including incorrect drug choice, route or dose, and drug omission.3 Each drug administration is a complex process, involving up to 40 individual steps.4 The risk of an adverse event is increased when the patient’s condition is unstable, or the drug is administered intravenously.1
Studies in the United Kingdom5,6 and the US7 have investigated hospital doctors’ ability to calculate and prescribe drug doses accurately, and the effect of education programs on this skill. Junior doctors were identified as being at particular risk of making medication errors and highlighted as a target for education.7,8 Studies have also tested calculation and prescribing skills of nurses9,10 and paramedics.11 A study of intensive care physicians found that most medication errors were in dosing, which is consistent with the finding that doctors have difficulty converting between ratios, mass concentration and percentages.5 To our knowledge, no similar research has been conducted in Australia.
This study aimed to describe the ability of doctors to calculate drug doses and their workplace prescribing and calculation habits at an Australian tertiary hospital.
This prospective, observational study was conducted at a 570-bed major metropolitan teaching hospital that serves both adult and paediatric populations. The study was approved by the hospital’s ethics committee.
Data were collected using a two-part questionnaire. The first part asked for demographic data and workplace prescribing habits using five-point Likert scales (Box 1). The second part comprised a drug-dose calculation test. Before undertaking the test, doctors were asked to estimate their score (“predicted” score), and the score they regarded as adequate for their peers (“adequate” score), on visual analogue scales.
The drug-dose calculation test comprised 12 questions (Box 2) modelled on those used in previous studies, which have been validated in study populations of up to nearly 3000 participants.5-7 For the purpose of the study, a drug-dose calculation was defined as the process of formulating a dose based on the patient’s weight, or determining the correct dose of a particular drug for a specific patient where conversion between measurement systems was required (eg, percentage to mass per unit volume). The test used drug doses recognised in Australian clinical practice.12-14
The questions were based on common adult and paediatric clinical scenarios, focusing on parenteral drug administration because of its higher risk. Each question gave enough information for doctors to answer, even if they did not regularly use the drug or practise in similar clinical situations. The questions aimed not to test doctors’ knowledge of drug doses, but rather to isolate their mathematical and problem-solving ability in drug-dose calculation. Four questions were posed on each type of formulation in which drug concentrations in solution are commonly expressed: mass per unit volume, ratios and percentages.
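For illustration, the three labelling conventions relate to one another as shown below; these identities are general and are not reproduced from the test items.

```latex
% General conversions between the three labelling conventions
% (illustrative only; not taken from the test questions).
\begin{align*}
1\% \;(\text{w/v}) &= 1\ \text{g per } 100\ \text{mL} = 10\ \text{mg/mL} \\
1{:}1000           &= 1\ \text{g per } 1000\ \text{mL} = 1\ \text{mg/mL} \\
1{:}10\,000        &= 1\ \text{g per } 10\,000\ \text{mL} = 0.1\ \text{mg/mL}
\end{align*}
```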
Missing answers were scored as incorrect. The score out of a possible 12 was converted to a percentage (actual score). Predicted and adequate scores were also expressed as percentages.
The questionnaire was distributed over a 3-week period in February 2007 to a convenience sample of medical staff with diverse levels of experience. All acute medical and surgical disciplines were included. Medical officers working in psychiatry were excluded, as the questionnaire focused on acute medical scenarios that are less common in psychiatric practice.
The questionnaires were distributed during staff meetings within work hours. Doctors were given no advance notice, to avoid participants either preparing for or avoiding the test. Questionnaires were collected immediately after completion. The survey was anonymous, calculators were permitted, and no rigid time limit was set, although 15 minutes was suggested.
Before analysis, all variables were examined using SPSS, version 14.0 (SPSS Inc, Chicago, Ill, USA) for missing values, outliers and accuracy of data entry.
Paired sample t tests were used to detect any significant difference in mean adequate, predicted and actual percentage scores. Bonferroni correction was applied to these three-way paired comparisons, giving an α value of 0.017 (0.05/3). For continuous variables, one-way analysis of variance (ANOVA) and Student’s independent t test were used to compare demographic groups. For categorical data, the χ2 test was used to detect differences in proportions. For all other comparisons, P values less than 0.05 were deemed statistically significant.
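The published analysis was performed in SPSS; the sketch below shows how the same comparisons could be run in open-source software. The variable names (adequate, predicted, actual, seniority, critical_care, checks_dose) and the data file are hypothetical placeholders, not the authors’ actual dataset or code.

```python
# Illustrative sketch only; the published analysis was performed in SPSS 14.0.
# All column names and the input file are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("dose_calculation_survey.csv")  # hypothetical data file

# Three-way paired comparisons of adequate, predicted and actual scores,
# judged against a Bonferroni-corrected alpha of 0.05 / 3 (approx. 0.017).
alpha = 0.05 / 3
for a, b in [("adequate", "predicted"), ("adequate", "actual"), ("predicted", "actual")]:
    t, p = stats.ttest_rel(df[a], df[b])
    print(f"{a} vs {b}: t = {t:.2f}, P = {p:.4f}, significant = {p < alpha}")

# Actual scores compared between two demographic groups: independent t test.
junior = df.loc[df["seniority"] == "junior", "actual"]
senior = df.loc[df["seniority"] == "senior", "actual"]
print(stats.ttest_ind(junior, senior))

# Actual scores compared across more than two groups: one-way ANOVA.
groups = [g["actual"].to_numpy() for _, g in df.groupby("checks_dose")]
print(stats.f_oneway(*groups))

# Differences in proportions for categorical data: chi-squared test.
table = pd.crosstab(df["seniority"], df["critical_care"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")
```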
The questionnaire was distributed to 190 doctors and returned by 142 (75% response rate). One respondent did not answer any demographic questions and was excluded, leaving 141 valid questionnaires for analysis.
Characteristics of the 141 participants are summarised in Box 3. There were few missing data (less than 5% for any specific variable).
Eighty per cent of doctors considered that a score of 90% or more on the test would be adequate. However, only 28% of participants scored over 90%, and 44% achieved less than 75%.
All but two of the 141 participants answered every test question; each of those two left a single question unanswered, and these were coded as incorrect.
The mean test score achieved by participants is shown in Box 4, along with the mean predicted score and the mean score they considered adequate. The mean score considered adequate (91.6%; 95% CI, 89.5%–93.8%) was significantly higher than both the predicted score (74.7%; 95% CI, 71.0%–78.5%) and actual score (72.5%; 95% CI, 67.8%–77.3%) (P < 0.001 for both comparisons).
Consultant and registrar staff achieved higher actual scores than junior staff (mean score, 85.0% v 57.8%, P < 0.001), as did doctors in “critical care” specialties (intensive care, emergency medicine and anaesthesia) compared with non-critical care doctors (83.0% v 63.5%, P < 0.001). Anaesthetists had the highest scores of all specialty groups. No significant differences in scores were found between the sexes or between different countries of training.
Almost 80% of participants said they had never had formal testing of their drug-dose calculation skills, either as part of their employment conditions or in compulsory continuing medical education, such as college training. Nevertheless, 83% of participants indicated that they needed to calculate a drug dose at least once a week. Participants who undertook drug-dose calculations twice or more per day had a mean actual score of 82.8%, compared with 50.8% for those who stated they never used the skill (P = 0.004).
Participants who indicated they had made a previous drug-dose mistake scored higher than those who indicated they had “never” made a mistake (mean actual score, 90.6% v 62.7%, P = 0.006).
Most doctors (89%) said they “mostly” or “always” double-check their own drug-dose calculations. The 11% of participants who stated they always had another staff member check their calculated doses performed worst in the calculation test (ANOVA, P < 0.001).
Almost all doctors surveyed (96%) preferred milligrams per millilitre (mg/mL) as the formulation for drug labelling. Doctors performed significantly better on questions involving drug concentrations expressed as mg/mL compared with those involving percentages (83.3% v 63.4%, P < 0.001) or ratios (83.3% v 70.2%, P < 0.001). This finding was consistent across the subgroups.
Senior doctors scored higher than junior doctors for all formulations: 83.6% v 40.8% for percentages, 82.6% v 55.8% for ratios, and 88.8% v 77.0% for mg/mL (P < 0.001 for all comparisons). Critical care doctors also scored higher than non-critical care doctors: 76.5% v 53.0% for percentages, 81.5% v 60.5% for ratios, and 90.8% v 77.0% for mg/mL (P < 0.001 for all).
This study found that doctors expected a higher level of skill in drug calculation from their peers than they were able to achieve themselves. Furthermore, junior doctors and those working in non-critical care areas scored lower on a drug-dose calculation test. Both these groups reported that their previous education in drug calculations was less than adequate when compared with more senior doctors and those working in critical care areas.
Doctors’ self-predicted and actual scores were similar, suggesting they have good insight into their own skill and limitations. However, the mean score judged as adequate was significantly higher than the mean score the doctors achieved themselves: 80% of participants expected a colleague to score 90% or more to practise adequately in a clinical environment. These high expectations, and the group’s failure to achieve them, raise medicolegal concerns about the criteria doctors use to judge their peers. In a US study, 83% of 175 respondents believed prescribing errors were unacceptable and should not occur.15
UK studies found that doctors generally had a poor level of skill in calculating drug doses.5,8 We found similarly that junior and newly graduated doctors perform most poorly, and that critical care doctors perform best. Within the critical care specialties, we surveyed a relatively large number of senior anaesthetists, partly explaining the higher scores in this group.
Strikingly, participants who stated they had “never” made, or were “unlikely” to have ever made, a mistake in a drug-dose calculation scored significantly lower (62.7%) in the calculation test than those who admitted to past errors (90.6%). This result may be partly explained by the more experienced doctors, who performed better but have also had longer careers in which to make (and recognise) a mistake. However, it also raises concern that some doctors may lack insight into their ability and overestimate their skill, thus being unaware of their current or past mistakes.
Reassuringly, most doctors in our study (89%) said they “mostly” or “always” double-check their own drug-dose calculations. This is a higher proportion than in a US study,15 which showed that only half of interns always double-checked their calculated doses. It is difficult to know whether our results truly reflect better workplace practices in Australia, as it has been shown repeatedly that self-reported compliance with desired behaviour is higher than objectively measured compliance.15,16 However, doctors who performed worst in the calculation test were most likely to have a second staff member check their calculated doses. This reflects awareness of their deficiencies and supports the belief that the self-reporting of workplace habits was accurate.
Our study also supports previous arguments for standardised drug labelling.5,6,17 Nearly all doctors preferred solutions to be expressed in mg/mL. This preference was supported by significantly higher scores for calculations involving concentrations expressed as mg/mL. Concentrations expressed as percentages or ratios resulted in more calculation errors, potentially leading to adverse events.5,6,18,19
Standardising the units for drug concentrations in solution to mass per unit volume would lessen the risk of error by reducing the complexity of dose calculation, particularly in time-critical, high-stress areas.8 Such risk-reduction strategies have been effective in the aviation and nuclear industries2,5 and are well suited to, but underutilised in, acute care medicine.
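As a purely illustrative calculation (the dose, concentration and weight below are hypothetical and not drawn from the test items), a mass-per-unit-volume label removes the first, error-prone conversion step from a weight-based dose calculation.

```latex
% Hypothetical example: a dose of 1 mg/kg for a 20 kg patient.
% From a solution labelled "0.5%", two steps are needed:
\begin{align*}
0.5\% \;(\text{w/v}) &= 0.5\ \text{g per } 100\ \text{mL} = 5\ \text{mg/mL} \\
\frac{1\ \text{mg/kg} \times 20\ \text{kg}}{5\ \text{mg/mL}} &= 4\ \text{mL}
\end{align*}
% A vial already labelled "5 mg/mL" leaves only the second step.
```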
Some may argue that a written test is a poor predictor of the true performance of doctors in clinical practice. However, residents who show poor calculation skills in a written examination are likely to perform even more poorly under stressful conditions.18
It is of concern that over three-quarters of participants (79%) reported never having been tested in the skill of drug-dose calculation during their careers, suggesting that competence in this skill is simply assumed. One doctor calculated a dose that was 1000 times the correct dose (Question 7, Box 2). Doctors need to be trained to identify “alarms” that a dose calculation is incorrect or dangerous.20 Directly achievable recommendations to reduce errors include encouraging safe workplace practices such as double-checking one’s own calculations, cross-checking with another staff member, and using web-based medication programs.
Our study had a number of limitations. The newly constructed questionnaire was not validated, although it was derived from previously used and validated surveys. We cannot exclude the possibility of selection bias, but the response rate was high (74%), the sample was large and representative of the hospital’s medical staff, and few data were missing. Although some potential participants may have declined to participate if they expected to perform poorly, any such bias would have inflated the actual scores, meaning that true performance may be even poorer than the already low scores observed. Lastly, it was beyond the scope of this study to assess whether incorrect calculations would have led to clinical errors and affected patient outcomes.
This study showed that the doctors surveyed expected a higher level of skill in calculating drug doses from their colleagues than they achieved or expected of themselves. In addition, junior doctors and those in non-critical care specialties performed more poorly, clearly confirming the need for improved teaching of drug-dose calculations to medical students and junior staff.21,22
To address calculation, mathematical process and arithmetic errors, we recommend ongoing training and enforcement via formal, regular assessment of skill in calculating drug doses for all doctors.7,8,15,17,23,24 In this way, the skill levels of individual doctors may be more likely to reflect the high expectations they have of their colleagues. Since the completion of this study, we have been approached by the hospital’s medical education office to run formal training sessions on this skill for intern staff. This will enable us to conduct further, more robust, research.2,25
1 Workplace prescribing and dose-calculation habits and possible replies
2 Drug-dose calculation test and answers, and percentage of doctors who answered correctly
3 Demographic characteristics of the study population (n = 141)
* Values in parentheses are the numbers of junior/senior doctors.
4 Mean of scores achieved by participating doctors (actual score), scores they predicted they would achieve, and scores they judged as adequate
- Chanelle M Simpson1
- Gerben B Keijzers2
- James F Lind3
- Department of Emergency Medicine, Gold Coast Hospital, Gold Coast, QLD.
No formal funding was sought for this study. We thank Dr Julia Crilly (Southern Area Health Service Emergency Department Clinical Network, Gold Coast Hospital, QLD) for her constructive input, and Dr Michael Steele (Bond University, Gold Coast, QLD) for reviewing the statistics.
Competing interests: None identified.
- 1. Peth HA. Medication errors in the emergency department: a systems approach to minimizing risk. Emerg Med Clin North Am 2003; 21: 141-158.
- 2. Wheeler SJ, Wheeler DW. Medication errors in anaesthesia and critical care. Anaesthesia 2005; 60: 257-273.
- 3. Coombes ID, Stowasser DA, Coombes JA, Mitchell C. Why do interns make prescribing errors? A qualitative study. Med J Aust 2008; 188: 89-94. <MJA full text>
- 4. Abeysekera A, Bergman IJ, Kluger MT, Short TG. Drug error in anaesthetic practice: a review of 896 reports from the Australian Incident Monitoring Study database. Anaesthesia 2005; 60: 220-227.
- 5. Wheeler DW, Remoundos DD, Whittlestone KD, et al. Doctors’ confusion over ratios and percentages in drug solutions: the case for standard labelling. J R Soc Med 2004; 97: 380-383.
- 6. Rolfe S, Harper NJN. Ability of hospital doctors to calculate drug doses. BMJ 1995; 310: 1173-1174.
- 7. Glover ML, Sussmane JB. Assessing pediatric residents’ mathematical skills for prescribing medication: a need for improved training. Acad Med 2002; 77: 1007-1010.
- 8. Wheeler DW, Wheeler SJ, Ringrose TR. Factors influencing doctors’ ability to calculate drug doses correctly. Int J Clin Pract 2007; 61: 189-194.
- 9. Bindler R, Bayne T. Medication calculation ability of registered nurses. Image J Nurs Sch 1991; 23: 221-224.
- 10. Grandell-Niemi H, Hupli M, Leino-Kilpi H, et al. Medication calculation skills of nurses in Finland. J Clin Nurs 2003; 12: 519-528.
- 11. Hubble MW, Paschal KR, Sanders TA. Medication calculation skills of practicing paramedics. Prehosp Emerg Care 2000; 4: 253-260.
- 12. Shann F. Drug doses. 13th ed. Melbourne: Collective, 2005.
- 13. Advanced Life Support Group. Advanced paediatric life support: the practical approach. 4th ed. Oxford: Blackwell Publishing, 2005.
- 14. MIMS online [database on the Internet]. MIMS Australia, 2006. http://www.mims.hcn.net.au/ifmx-nsapi/mims-data/?MIval=2MIMS_ssearch (accessed Dec 2006).
- 15. Garbutt JM, Highstein G, Jeffe DB, et al. Safe medication prescribing: training and experience of medical students and housestaff at a large teaching hospital. Acad Med 2005; 80: 594-599.
- 16. Adams AS, Soumerai SB, Lomas J, Ross-Degnan D. Evidence of self-report bias in assessing adherence to guidelines. Int J Qual Health Care 1999; 11: 187-192.
- 17. Wheeler SJ, Wheeler DW. Dose calculation and medication error — why are we still weakened by strengths [editorial]? Eur J Anaesthesiol 2004; 21: 929-931.
- 18. Rowe C, Koren T, Koren G. Errors by paediatric residents in calculating drug doses. Arch Dis Child 1998; 79: 56-58.
- 19. Scrimshire JA. Safe use of lignocaine. BMJ 1989; 298: 1494.
- 20. Dean B, Schachter M, Vincent C, Barber N. Causes of prescribing error in hospital inpatients: a prospective study. Lancet 2002; 359: 373-378.
- 21. Wheeler DW, Remoundos DD, Whittlestone KD, et al. Calculation of drug doses in solution: are medical students confused by different means of expressing drug concentrations? Drug Saf 2004; 27: 729-734.
- 22. Wheeler DW, Whittlestone KD, Salvador R, et al. Influence of improved teaching on medical students’ acquisition and retention of drug administration skills. Br J Anaesth 2006; 96: 48-52.
- 23. Lesar TS, Lomaestro BM, Pohl H. Medication-prescribing errors in a teaching hospital: a 9-year experience. Arch Intern Med 1997; 157: 1569-1576.
- 24. Barber N, Rawlins M, Franklin BD. Reducing prescribing error: competence, control, and culture. Qual Saf Health Care 2003; 12 Suppl 1: i29-i32.
- 25. Nelson LS, Gordon P, Simmons MD, et al. The benefit of house officer education on proper medication dose calculation and ordering. Acad Emerg Med 2000; 7: 1311-1316.
Abstract
Objective: To assess the ability of doctors to calculate drug doses and their workplace prescribing and calculation habits.
Design and setting: Prospective, questionnaire-based observational study conducted at a 570-bed teaching hospital in February 2007.
Participants: Convenience sample of 190 doctors, representing all acute medical and surgical disciplines and diverse levels of experience.
Main outcome measures: Demographic data, self-reported prescribing habits, predicted score on a 12-item test of ability to calculate drug doses, score considered adequate for peers, and actual score.
Results: 141 doctors (74%) completed the questionnaire. The mean actual score on the test was 72.5% (95% CI, 67.8%–77.3%), which was similar to the group’s mean predicted score (74.7%; 95% CI, 71.0%–78.5%) but significantly lower than the mean of the score they considered adequate (91.6%; 95% CI, 89.5%–93.8%) (P < 0.001). Subgroup analyses showed that senior doctors and those in critical care specialties (intensive care, emergency medicine and anaesthesia) achieved significantly higher actual scores than junior doctors and those in non-critical care specialties, respectively.
Conclusions: Doctors expect their colleagues to perform significantly better in a drug-dose calculation test than they expect to, or can achieve, themselves. Junior staff and those in non-critical care specialties should be targeted for education in the skill of drug-dose calculation to reduce the risk of medication error and its consequences.