
Ethics of artificial intelligence in supportive care in cancer

Ian N Olver
Med J Aust || doi: 10.5694/mja2.52297
Published online: 20 May 2024

Supportive care in cancer involves preventing or managing the symptoms of cancer and the side effects of treatment, and encompasses physical, psychosocial and spiritual adverse effects. Supportive care in cancer aims to improve the patient's quality of life from diagnosis through treatment and survivorship care.1 Applying artificial intelligence (AI) to supportive care in cancer involves using AI platforms that combine cancer knowledge bases, precision medicine libraries and guidelines with patient data, including genomic profiles, laboratory tests, and medications.

The use of AI can provide decision support to patients and clinicians in delivering personalised supportive care in cancer.2 This includes, for example, optimising anticancer drug dosing to avoid toxicity and selecting appropriate doses of supportive care drugs based on a patient's pharmacogenomic profile.3 AI enables more accurate prediction of toxicities such as emesis by combining patient‐related factors with the emetogenicity of the cancer treatment. Moreover, AI can monitor patients to detect early signs of toxicity.4 In addition, natural language processing (an application of AI) has been used to extract patient‐reported adverse events such as social isolation, which is not captured routinely and is not encoded in electronic health records but may be mentioned in clinical notes.5 Otherwise, such adverse events would need to be captured separately in patient surveys or questionnaires.6
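
To illustrate the kind of extraction described above, the following is a minimal, purely illustrative Python sketch that flags possible mentions of social isolation in free‐text clinical notes using hand‐written cue phrases. The cited studies used far more sophisticated natural language processing pipelines; every cue phrase and example note below is an assumption made for demonstration only.

```python
import re

# Hand-written cue phrases for possible social isolation. These are
# illustrative assumptions only; a production pipeline would use a curated
# lexicon and trained models rather than this short list.
ISOLATION_CUES = [
    r"lives alone",
    r"no (?:family|social) support",
    r"socially isolated",
    r"feels lonely",
]

# Very crude negation check on the text immediately before a match.
NEGATION = re.compile(r"\b(?:denies|no evidence of|not)\b", re.IGNORECASE)

def flag_social_isolation(note: str) -> list[str]:
    """Return cue phrases found in a clinical note, skipping simple negations."""
    hits = []
    for cue in ISOLATION_CUES:
        for match in re.finditer(cue, note, flags=re.IGNORECASE):
            preceding = note[max(0, match.start() - 30):match.start()]
            if not NEGATION.search(preceding):
                hits.append(match.group(0))
    return hits

# Synthetic example note.
note = "Patient lives alone since spouse died; reports she feels lonely most days."
print(flag_social_isolation(note))  # ['lives alone', 'feels lonely']
```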

AI also involves training computers on large datasets using algorithms that find patterns in the data; with artificial neural networks, these systems can continue to self‐learn, weight parameters and provide a summarised interpretation.7,8
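
As a concrete, non‐clinical illustration of this training process, the sketch below fits a small neural network on synthetic data to estimate the risk of emesis from patient factors and regimen emetogenicity. The features, data and model are assumptions for demonstration only, not a validated clinical tool.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic features: age, female sex, prior motion sickness, regimen emetogenicity (0-1).
X = np.column_stack([
    rng.normal(60, 12, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.uniform(0, 1, n),
])
# Synthetic outcome: emesis more likely with younger age, female sex,
# prior motion sickness and a highly emetogenic regimen.
logits = -0.03 * (X[:, 0] - 60) + 0.8 * X[:, 1] + 0.9 * X[:, 2] + 2.0 * X[:, 3] - 1.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

# The network finds patterns by iteratively adjusting (weighting) its parameters.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(model.predict_proba(X_test[:3]))  # per-patient estimated risk of emesis
```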

Addressing the ethical concerns raised by AI can reassure patients, as its uptake will partly rest on public perceptions of AI. These perceptions will depend on whether patients trust its accuracy, the transparency of how their data are used, the privacy of their data, and their ability to make informed choices about their health information.

The first consideration for trust in AI is non‐maleficence. This is particularly important in supportive care in cancer, which aims to reduce symptoms, so the adverse effects of supportive care treatments must be minimal. Non‐maleficence is an underlying concept in patient‐centred care, which serves to reassure patients about the advice they are given. If AI is used for clinical decision support, it could cause harm by providing inaccurate or inconsistent results.9 An algorithm should not be considered value neutral; it will reflect any biases in the training set used. Common sources of bias are race, sex, and social or economic factors, which means that an algorithm may not be transferable to a dataset with characteristics different from those of the dataset on which it was derived.10 Moreover, in recognising patterns, AI tools do not provide the meaning or context of the outcome. AI decision making is based on features of the input data, whereas human decision making encompasses knowledge, beliefs and values.10
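
The transferability problem can be made concrete with a small synthetic experiment: a model fitted to one population loses accuracy when the relationship between features and outcome differs in another. Everything below is simulated and assumes nothing about the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(n, weights):
    """Simulate a cohort whose outcome depends on two features with the given weights."""
    X = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(X @ np.array(weights))))
    y = (rng.uniform(size=n) < p).astype(int)
    return X, y

# Training population, and a second population with a different feature-outcome relationship.
X_a, y_a = make_population(2000, [2.0, 0.5])
X_b, y_b = make_population(2000, [0.5, 2.0])

model = LogisticRegression().fit(X_a, y_a)
print("accuracy on source population:   ", round(model.score(X_a, y_a), 2))
print("accuracy on different population:", round(model.score(X_b, y_b), 2))
# The drop on the second population illustrates why an algorithm derived from
# one dataset may not transfer to a population with different characteristics.
```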

One recommendation for better informing patients about AI, and improving their acceptance of it, is to seek informed consent for its use. Before patients provide that consent, they will need to know how likely the use of AI is to improve their outcomes, whether by predicting an adverse effect or by providing management advice. However, to date, much of the evaluation of AI tools has been based on their accuracy compared with human clinical decision making, whereas what patients want to know is the impact on outcomes such as their quality of life.9,11 For example, in supportive care in cancer, even if AI is better at predicting vomiting with chemotherapy, will that translate into better control of vomiting? Moreover, as part of trusting AI outputs, patients may expect that AI tools have undergone a full evaluation. However, a recent review of whether clinical studies of AI were comprehensive enough to support a full health technology assessment found that most studies had limitations, and suggested that assessment procedures be modified to specifically evaluate AI before clinical implementation.12

A further recommendation is to have a global governance and regulation framework for AI. This framework would cover the governance of data, such as consent and data protection, how governments can share data and benefits with the private sector, and data ownership. The World Health Organization has a working group examining the regulation of AI, aiming to strike a balance that promotes rather than stifles innovation.13 Other European and United States agencies, such as the Council of Europe and the White House Office of Management and Budget, are also beginning to address governance frameworks.14 The development of these frameworks should include community consultation. Nationally, the Australian Alliance for Artificial Intelligence in Healthcare is updating its roadmap after community consultation to assist with developing policy options and supporting companies investing in AI in health care.15

As publicly available AI chatbots are being used by patients to access information on supportive care in cancer, it is important to ensure that the AI output is accurate, which will depend on the algorithm and the training set. General chatbots often do not provide the source of their information, which makes it difficult to check for accuracy, and quoting it could raise the issue of plagiarism.16 A chatbot giving a single answer, as opposed to making multiple suggestions, may give an impression of false objectivity.

In addition, there is the question of ownership of the data. Patients should be asked to give consent both to the use of their data in training sets (which has not always been sought) and to the use of AI in clinical decision making.16,17 This requires transparency and the ability to inform patients about AI and its limitations and performance gaps.18 The problem is that with deep learning algorithms, which continue to learn from data without further human direction, patients will have to accept reduced transparency of the process, which becomes a “black box” in terms of how a decision was reached.

The privacy and secure storage of an individual's data are also potential concerns with digitised data. Even with large de‐identified datasets, the potential for re‐identification must be communicated.19,20 There is always a balance between maintaining the privacy of health data and making the data available for research and policy generation. One solution is distributed learning: instead of sharing and centralising individual data, clinicians share metadata and the algorithms analyse separate databases. This yields the same solutions as if the data were centralised, with questions and answers shared but not the individual data.21
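
A minimal sketch of this idea, in the style of federated averaging, is shown below: each site fits a model on its own records, and only the model coefficients, never patient data, leave the site to be combined centrally. The sites, data and averaging step are illustrative assumptions; the approaches reviewed in the cited literature add further safeguards such as secure aggregation and repeated training rounds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def local_data(n):
    """Synthetic dataset standing in for one hospital's records."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

sites = [local_data(400) for _ in range(3)]

# Each site trains locally; only the fitted coefficients are shared.
local_models = [LogisticRegression().fit(X, y) for X, y in sites]
avg_coef = np.mean([m.coef_ for m in local_models], axis=0)
avg_intercept = np.mean([m.intercept_ for m in local_models], axis=0)

# A central model carries the averaged parameters without ever seeing raw records.
global_model = LogisticRegression()
global_model.classes_ = np.array([0, 1])
global_model.coef_ = avg_coef
global_model.intercept_ = avg_intercept

X_new, y_new = local_data(200)
print("accuracy of the pooled-parameter model:", round(global_model.score(X_new, y_new), 2))
```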

Some patients may be uncomfortable with the idea of a computer, rather than a clinician, making a decision about their treatment, but would accept AI‐based input into a clinician's decision‐making process. Shared decision making then has three components: doctor, patient and AI. However, with the increasing use of algorithms in which the handling of patient data is more opaque, patients may not be able to exercise autonomous input into an AI‐derived decision.22 The use of AI could free clinician time for engagement in better supportive care in cancer, such as providing psychosocial support, thereby enhancing the doctor–patient relationship. Alternatively, trust in the accuracy of AI could erode trust in a clinician, and patients could access chatbots independently of clinician input.16

Given that adverse outcomes are particularly problematic in supportive care in cancer, another major ethical concern with AI is traceability.17 With the increasingly complex interactions of humans and AI, to whom can moral or ethical responsibility be traced for an adverse outcome arising from an AI‐based decision? So many people are involved in developing the algorithms and training sets, marketing the tools, analysing the output and applying it to a clinical situation that transparent allocation of responsibility and accountability for an adverse outcome is very difficult.

Moreover, there are also legal implications. Tort law is based on human performance or hardware defects, not defects in autonomous software, so it may not cover a patient who suffers an adverse outcome through the use of AI. Suggested solutions include conferring personhood on the AI tool, having everyone involved share a common enterprise liability, or simply applying a standard of care to the implementation and evaluation of the AI tool.23 Harm from an unrecognised AI error could be grounds for negligence but, in future, it may be negligent not to rely on AI when vast amounts of omics and other data become part of the decision‐making process. If humans start to favour AI‐generated decisions (also known as “automation bias”), this may lead to errors of omission, where AI errors are not recognised or are disregarded, or to errors of commission, where the AI decision is accepted despite other evidence to the contrary.17

A more global ethical issue is the just allocation of resources. Are AI‐based tools only going to be available in higher income countries?13 If they do become available in countries where there is a shortage of human clinical resources, will an over‐reliance on AI lead to the pitfalls of automation bias? Even in higher income countries, there could be disparity between availability in the private and the public sectors.16

In conclusion, AI tools have enormous potential in supportive care in cancer as clinical decision support that can analyse vast quantities of data and deliver personalised solutions, as well as providing information and support to patients through chatbots. However, patient acceptance will depend on addressing ethical challenges, and thus there is a need for global standards for governance and for assessing the impact of the use of AI on patient‐related outcomes.

 


Provenance: Commissioned; externally peer reviewed.

  • Ian N Olver1

  • University of Adelaide, Adelaide, SA


Correspondence: ian.olver@adelaide.edu.au

Competing interests:

Ian Olver received a partial travel grant as an invited speaker at the Multinational Association of Supportive Care in Cancer Annual Scientific Meeting in Nara, Japan, in June 2023.

  • 1. Olver I, Keefe D, Herrstedt J, et al. Supportive care in cancer — a MASCC perspective. Support Care Cancer 2020; 28: 3467‐3475.
  • 2. Business Wire. Halo Intelligence, by VieCure, is the first whole knowledge system in oncology that brings together everything clinicians need to find the right treatment path for every patient, every time. BusinessWire (Denver, CO) 2022; 6 June. https://www.businesswire.com/news/home/20220606005818/en/Halo-Intelligence-by-VieCure-Is-the-First-Whole-Knowledge-System-in-Oncology-That-Brings-Together-Everything-Clinicians-Need-to-Find-the-Right-Treatment-Path-for-Every-Patient-Every-Time (viewed Dec 2023).
  • 3. Patel JN, Olver IN, Ashbury F. Pharmacogenomics in cancer supportive care: key issues and future directions. Support Care Cancer 2021; 29: 6187‐6191.
  • 4. Xu L, Sanders L, Li K, Chow JCL. Chatbot for health care and oncology applications using artificial intelligence and machine learning: systematic review. JMIR Cancer 2021; 7: e27850.
  • 5. Lederman A, Lederman R, Verspoor K. Tasks as needs: reframing the paradigm of clinical natural language processing research for real‐world decision support. J Am Med Inform Assoc 2022; 29: 1810‐1817.
  • 6. Zhu VJ, Lenert LA, Bunnell BE, et al. Automatically identifying social isolation from clinical narratives for patients with prostate cancer. BMC Med Inform Decis Mak 2019; 19: 43.
  • 7. Castaneda C, Nalley K, Mannion C, et al. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine. J Clin Bioinforma 2015; 5: 4.
  • 8. Shreve JT, Khanani SA, Haddad TC. Artificial intelligence in oncology: current capabilities, future opportunities, and ethical considerations. Am Soc Clin Oncol Educ Book 2022; 42: 1‐10.
  • 9. Carter SM, Rogers W, Win KT, et al. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast 2020; 49: 25‐32.
  • 10. Geis JR, Brady A, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging 2019; 10: 101.
  • 11. Tzelves L, Manolitsis I, Varkarakis I, et al. Artificial intelligence supporting cancer patients across Europe — the ASCAPE project. PLoS One 2022; 17: e0265127.
  • 12. Farah L, Davaze‐Schneider J, Martin T, et al. Are current clinical studies on artificial intelligence‐based medical devices comprehensive enough to support a full health technology assessment? A systematic review. Artif Intell Med 2023; 140: 102547.
  • 13. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: WHO, 2021. https://www.who.int/publications/i/item/9789240029200 (viewed Apr 2024).
  • 14. Kumar KS, Miskovic V, Blasiak A, et al. Artificial intelligence in clinical oncology: from data to digital pathology and treatment. Am Soc Clin Oncol Educ Book 2023; 43: e390084.
  • 15. Coiera EW, Verspoor K, Hansen DP. We need to chat about artificial intelligence. Med J Aust 2023; 219: 98‐100. https://www.mja.com.au/journal/2023/219/3/we‐need‐chat‐about‐artificial‐intelligence
  • 16. Cohen IG. What should ChatGPT mean for bioethics? Am J Bioeth 2023; 23: 8‐16.
  • 17. Morley J, Machado CCV, Burr C, et al. The ethics of AI in health care: a mapping review. Soc Sci Med 2020; 260: 113173.
  • 18. Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI‐based medical decision‐support tools: a scoping review. Int J Med Inform 2022; 161: 104738.
  • 19. Sharpless NE, Kerlavage AR. The potential of AI in cancer care and research. Biochim Biophys Acta Rev Cancer 2021; 1876: 188573.
  • 20. Culhane C, Rubenstein B, Teague V. Health data in an open world. 18 Dec 2017. https://www.researchgate.net/publication/321873477_Health_Data_in_an_Open_World#fullTextFileContent (viewed Dec 2023).
  • 21. Zerka F, Barakat S, Walsh S, et al. Systematic review of privacy‐preserving distributed machine learning from federated databases in health care. JCO Clin Cancer Inform 2020; 4: 184‐200.
  • 22. Triberti S, Durosini I, Pravettoni G. A “third wheel” effect in health decision making involving artificial entities: a psychological perspective. Front Public Health 2020; 8: 117.
  • 23. Sullivan HR, Schweikart SL. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA J Ethics 2019; 21: E160‐E166.
