It is well recognised that there are significant deficiencies in the current process for obtaining informed consent for participation in clinical research.1-7 Various attempts have been made to enhance participant understanding, with limited success.1,8-12 Some strategies have improved the transmission of information to, and its retention by, not only study participants but also patients in general.9,10,13-19 However, complex methods of information provision, such as multimedia methods, may cause confusion and thereby reduce understanding.20,21
Studies of the provision of information in clinical research have usually adopted the perspectives of researchers and regulatory bodies, rather than those of participants.22 There have been a few exceptions;23-26 for example, a comparison of an information statement developed by participants with an information statement developed by researchers showed that the former was associated with greater participant understanding.26 We therefore sought to assess the efficacy of a computer-based method of communicating information to prospective clinical trial participants, with the aim of improving participant understanding.
To allow for interim analysis, one of us (A S K) randomly assigned participants in blocks of 10 to the intervention and control arms of the study. Thus, of every 10 participants, five were assigned to the intervention group and five to the control group. An overview of the study protocol is shown in Box 1.
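The allocation procedure is not described beyond blocks of 10 with five participants per arm; a minimal sketch of block randomisation under that assumption is shown below (Python; the function name and seed are hypothetical, not part of the original protocol).

```python
import random

def block_randomise(n_participants, block_size=10, arms=("computer", "paper"), seed=None):
    """Allocate participants in fixed blocks so that every block of 10
    contains exactly five assignments to each arm (illustrative sketch)."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    for _ in range(0, n_participants, block_size):
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

# 60 participants -> six blocks of 10, giving 30 allocations per arm
allocation = block_randomise(60, seed=1)
print(allocation.count("computer"), allocation.count("paper"))  # 30 30
```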
The computer-based presentation was an interactive program displayed on a 17-inch computer monitor. Its structure and substantive content were identical to those of the paper-based statement. However, the computer-based presentation also included interactive, explanatory features: text boxes linked to keywords to provide further explanation, hyperlinks to diagrammatic and pictorial presentations of procedures, and a video clip of a live right heart catheterisation procedure. The information was presented in concise sections, separated at intervals by quizzes (Box 2). Participants could move forward and backward through each page or skip to a specific page by clicking on a side panel. The text size was larger than in the paper-based statement, and the text was presented on a coloured background.
On the basis of an earlier study,8 the sample size was initially estimated at 100 to achieve a power of 0.8 to detect an expected difference in means of 5%, given a putative population standard deviation of 8.8. However, when the variance was verified using data from the first 10 participants (SD, 7.6), the sample size was re-estimated at 60 to detect the same difference in means with the same power.
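As an illustration only, the dependence of the sample size estimate on the assumed standard deviation can be sketched with the conventional two-sample formula (Python, using scipy for the normal quantiles). The exact formula, significance level and sidedness used in the original calculation are not reported, and the helper `n_per_group` is a hypothetical construction, so its output will not exactly reproduce the published estimates.

```python
from math import ceil
from scipy.stats import norm  # assumed dependency, used only for normal quantiles

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Conventional two-sample estimate of the number of participants per group
    needed to detect a difference in means `delta`, given a common standard
    deviation `sd`, a two-sided significance level `alpha` and the desired power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# A putative SD of 8.8 with a 5-point difference gives roughly 49 per group
# (about 100 overall); the smaller observed SD of 7.6 reduces this to about 37.
print(n_per_group(delta=5, sd=8.8), n_per_group(delta=5, sd=7.6))  # 49 37
```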
Data were gathered from 60 participants (30 assigned to the computer-based task and 30 assigned to the paper-based task), whose characteristics are summarised in Box 3. Most were male (42/60), most used computers on a daily basis (50/60), and 40% were working full-time (24/60). The mean age was 52.0 years (range, 27–70 years). In the group that completed the paper-based task, 21 of 30 had completed tertiary education, compared with 15 of 30 in the group that completed the computer-based task. All participants were fluent in spoken English, and all but one were fluent in written English.
The distributions of assessment scores (percentages of correct answers) in the two groups are shown in Box 4. These scores were clearly different in the two groups: scores of participants who completed the computer-based task were skewed towards the higher percentages, whereas scores of participants who completed the paper-based task were lower and more spread out. The group that completed the computer-based task had a highest individual score of 96% (two participants with 22 correct answers) and a lowest of 65% (two participants with 15 correct answers), compared with 91% (five participants with 21 correct answers) and 17% (one participant with four correct answers), respectively, in the group that completed the paper-based task. The participant who was not fluent in written English achieved the highest assessment score in the group that completed the paper-based task. Multivariate analysis showed no correlation between percentages of correct answers and age or sex.
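For clarity, the percentages quoted above are simply counts of correct answers expressed as a proportion of the 23 assessment questions; a short check (Python) reproduces the rounded figures.

```python
# Counts of correct answers out of 23 questions, as quoted in the text
for correct in (22, 21, 15, 4):
    print(f"{correct}/23 = {100 * correct / 23:.0f}%")  # 96%, 91%, 65%, 17%
```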
The interviews undertaken after participants completed the reading tasks lasted 3–18 minutes. A selection of representative quotes from the interviews is presented in Box 5. The computer-based task received positive feedback, especially about its presentation and special features. Participants stated that these characteristics made them feel better informed and better able to make a decision about being involved in the study. Participants in the group that completed the paper-based task stated that they found the information difficult to understand, and made negative comments about the length and presentation of the document. After the other form of information delivery tested in this study was explained to them verbally, most participants in both groups stated that they believed they would find a computer-based presentation easier to understand (21 in the computer-based task group, 18 in the paper-based task group).
First, we found a major difference between the groups in the understanding of the more complex details of the study. Other studies examining the efficacy of multimedia consent processes in enhancing understanding of clinical trials have shown limited success.29-31
Second, participants who completed the computer-based task felt more comfortable in making a decision about being involved in the study. Building trust between researchers and participants is a cornerstone of any study, and feeling informed about a study may help improve these relationships and allay participant anxiety about taking part.23 The quizzes within the computer-based task allowed participants to self-assess their understanding and affirm, for themselves, their eligibility to participate. This could not only save researchers valuable time in explaining study procedures11 but also facilitate an appropriate emphasis on issues of special concern to individual participants23 without rushing the consent process. This feature may benefit mass screening programs, in which large numbers of individuals can self-assess their understanding and self-select as potential participants, in addition to being contacted by researchers to take part in a study.
Third, the overall lower assessment scores in the group that completed the paper-based task raise concerns about participants’ levels of understanding when this conventional system is used. Further, the wide range of these scores in this group suggests variability in understanding among participants enrolling in research studies with paper-based information statements.9 Participants in the group that completed the computer-based task received and understood uniform and complete information presented in an attractive manner. This is likely to be of significant advantage in multicentre trials, where a computer-based approach could be employed to uniformly and reliably communicate with participants at many locations.
Fourth, participants in the group that completed the computer-based task were more likely to indicate a willingness to participate in the mock study (if it were real). This could indicate a benefit in recruiting (and perhaps even retaining) study participants.8
Our study had some limitations. It was restricted to English-speaking, computer-literate patients with diabetes. Although the computer-based method was successful in our study, further research is necessary to assess its efficacy in other settings and participant groups. Also, we measured participants’ levels of understanding immediately after they completed the reading tasks; this may have demonstrated improvement in information recall rather than understanding,9 but it is more likely that both are improved. In addition, further research is needed to assess whether the findings apply equally to men and women.
Box 1. Protocol used to compare a computer-based presentation with a paper-based statement of information relating to a mock study
Box 4. Assessment scores of participants*
* Assessment scores represent percentages of correct answers to the 23 assessment questions.
Box 5. Quotes from participants regarding the methods of information presentation
General comments about the computer-based presentation: “I don’t think you could get it any easier ...”
Positive aspects of the computer-based presentation
Shortcomings of the paper-based statement
Presentation could be improved: “There is some scope of it missing, in terms of presentation.”
Received 28 April 2009, accepted 19 October 2009
- Asuntha S Karunaratne1
- Stanley G Korenman2
- Samantha L Thomas1
- Paul S Myles3,1
- Paul A Komesaroff1,3
- 1 Monash University, Melbourne, VIC.
- 2 David Geffen School of Medicine, University of California, Los Angeles, Calif, United States.
- 3 Alfred Hospital, Melbourne, VIC.
Competing interests: None identified.
- 1. Davis TC, Holcombe RF, Berkel HJ, et al. Informed consent for clinical trials: a comparative study of standard versus simplified forms. J Natl Cancer Inst 1998; 90: 668-674.
- 2. Stead M, Eadie D, Gordon D, Angus K. “Hello, hello - it’s English I speak!”: a qualitative exploration of patients’ understanding of the science of clinical trials. J Med Ethics 2005; 31: 664-669.
- 3. Dixon-Woods M, Williams SJ, Jackson CJ, et al. Soc Sci Med 2006; 62: 2742.
- 4. Molyneux CS, Peshu N, Marsh K. Trust and informed consent: insights from community members on the Kenyan coast. Soc Sci Med 2005; 61: 1463-1473.
- 5. Cassileth BR, Zupkis RV, Sutton-Smith K, March V. Informed consent — why are its goals imperfectly realised? N Engl J Med 1980; 302: 896-900.
- 6. Wendler D, Rackoff JE. Informed consent and respecting autonomy: what’s a signature got to do with it? IRB 2001; 23: 1-4.
- 7. Cheng JD, Hitt J, Koczwara B, et al. Impact of quality of life on patient expectations regarding phase I clinical trials. J Clin Oncol 2000; 18: 421-428.
- 8. Llewellyn-Thomas HA, Thiel EC, Sem FW, Woermke DE. Presenting clinical trial information: a comparison of methods. Patient Educ Couns 1995; 25: 97-107.
- 9. Dunn LB, Lindamer LA, Palmer BW, et al. Improving understanding of research consent in middle-aged and elderly patients with psychotic disorders. Am J Geriatr Psychiatry 2002; 10: 142-150.
- 10. Raich PC, Plomer KD, Coyne CA. Literacy, comprehension, and informed consent in clinical research. Cancer Invest 2001; 19: 437-445.
- 11. Stalonas PM, Keane TM, Foy DW. Alcohol education for inpatient alcoholics: a comparison of live, videotape and written presentation modalities. Addict Behav 1979; 4: 223-229.
- 12. Fureman I, Meyers K, McLellan T, et al. AIDS Educ Prev 1997; 9: 330-341.
- 13. Gagliano ME. A literature review on the efficacy of video in patient education. J Med Educ 1988; 63: 785-792.
- 14. Benson PR, Roth LH, Appelbaum PS, et al. Information disclosure, subject understanding, and informed consent in psychiatric research. Law Hum Behav 1988; 12: 455-475.
- 15. Wirshing DA, Sergi MJ, Mintz JA. Videotape intervention to enhance the informed consent process for medical and psychiatric treatment research. Am J Psychiatry 2005; 162: 186.
- 16. Coyne CA, Xu R, Raich P, et al. Randomized, controlled trial of an easy-to-read informed consent statement for clinical trial participation: a study of the Eastern Cooperative Oncology Group. J Clin Oncol 2003; 21: 836-842.
- 17. Gray SH. The effect of sequence control on computer assisted learning. J Comput Based Instr 1987; 14: 54-56.
- 18. Kinzie MB, Sullivan HJ, Berdel RL. Learner control and achievement in science computer assisted instruction. J Educ Psychol 1988; 80: 299-303.
- 19. Watters J. Information retrieval and the virtual document. J Am Soc Inf Sci 1999; 50: 1028-1029.
- 20. Lynch PJ, Horton S. Imprudent linking weaves a tangled Web. Computer 1997; 30: 115-117.
- 21. Lawless KA, Kulikowich JM. Understanding hypertext navigation through cluster analysis. J Educ Comput Res 1996; 14: 385-399.
- 22. Ancker J. Assessing patient comprehension of informed consent forms. Control Clin Trials 2004; 25: 72-74.
- 23. Agre P, McKee K, Gargon N, Kurtz RC. Patient satisfaction with an informed consent process. Cancer Pract 1997; 5: 162-167.
- 24. Akkad A, Jackson C, Kenyon S, et al. Informed consent for elective and emergency surgery: questionnaire study. BJOG 2004; 111: 1133-1138.
- 25. Cox K. Informed consent and decision-making: patients’ experiences of the process of recruitment to phases I and II anti-cancer drug trials. Patient Educ Couns 2002; 46: 31-38.
- 26. Guarino P, Elbourne D, Carpenter J, Peduzzi P. Consumer involvement in consent document development: a multicenter cluster randomized trial to assess study participants’ understanding. Clin Trials 2006; 3: 19-30.
- 27. Calisir F, Gurel Z. Influence of text structure and prior knowledge of the learner on reading comprehension, browsing and perceived control. Comput Human Behav 2003; 19: 135-145.
- 28. Kerrigan DD, Thevasagayam RS, Woods TO, et al. Who’s afraid of informed consent? BMJ 1993; 306: 298-300.
- 29. Flory J, Emanuel E. Interventions to improve research participants’ understanding in informed consent for research: a systematic review. JAMA 2004; 292: 1593-1601.
- 30. Jeste DV, Palmer BW, Golshan S, et al. Multimedia consent for research in people with schizophrenia and normal subjects: a randomized controlled trial. Schizophr Bull 2008; 35: 719-729.
- 31. Limacher MC. What prevents women from participating in research studies? J Watch Womens Health 2007; June 28: 3.
Abstract
Objective: To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials.
Design, setting and participants: An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27–70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne.
Intervention: Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages.
Main outcome measures: Understanding of information as assessed by quantitative and qualitative means.
Results: Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than in the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group).
Conclusions: A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.