Editor’s Note: In Part 2 of our article shedding light on AI use in thoracic oncology, Dr. Mak and Mr. Haugg discuss what the future may hold for AI in lung cancer. For more, read Part 1, Demystifying AI: Current Insights on Artificial Intelligence in Thoracic Oncology.
In the coming years, AI is expected to transform the oncology landscape rapidly, with advancements that promise to reshape the future of patient care. It is important to recognize, however, that medicine is still at an early stage of AI integration: the technology has attracted significant attention in the research community, but its translation into clinical practice remains limited, reflecting high potential with, as yet, limited empirical validation.1
AI shows particular potential in early cancer detection,2 cancer risk assessment, and treatment planning in personalized healthcare.3 AI models can analyze highly complex, multimodal data, including clinical risk factors, multi-omics, and imaging, to generate risk profiles specific to each patient, enabling personalized predictions that conventional tools cannot match. Generative AI in oncology promises to significantly enhance patient care and potentially reduce physician workloads, with early use cases such as the automated drafting of oncologic histories in clinical notes and the generation of radiology reports.
Such models can act as highly effective digital assistants, integrating a diverse array of patient data, including imaging, electronic health records (EHR), and laboratory results, to provide comprehensive, personalized analysis while reducing the documentation burden. Furthermore, these generative AI models will likely be implemented as patient-facing resources via chatbots or digital navigators to provide cancer patients with education and support both at the point of care and at home. In drug development, AI has increasingly been used to identify novel druggable targets, screen existing drugs for repurposing, and elucidate complex network interactions from multi-omics data.4
Lastly, there will likely be further breakthroughs with a transition towards foundation AI models, which are capable of cross-domain learning and task adaptation, thereby offering a unified, reusable AI model for various medical applications. In essence, foundation models will extend the capabilities of generative models by incorporating a more comprehensive range of tasks with minimal need for task-specific labeled data.
For instance, multimodality foundation models trained on medical imaging, genomic data, and text from pathology and radiology reports may then be fine-tuned for multi-purpose tasks, including the automatic generation of detailed radiology reports or clinical summaries, treatment selection, or outcome prediction. These advances could provide several benefits, such as reducing the workload of clinicians, enhancing clinical knowledge and understanding, and offering interactive visualizations to aid medical decision-making or surgical procedures.5
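To make the idea of task adaptation concrete, the minimal sketch below shows one generic pattern for adapting a pretrained encoder to a new task with only a small amount of labeled data: the pretrained backbone is frozen and a lightweight task head is trained on top. It uses an off-the-shelf ImageNet-pretrained PyTorch model as a stand-in for a medical foundation model, and the batch of images and labels is synthetic; this is an illustrative assumption, not any specific published pipeline.

```python
# Minimal sketch: adapting a pretrained ("foundation-style") encoder to a new
# task by freezing the backbone and training only a small task head.
# The ImageNet backbone, the 2-class task, and the dummy batch are placeholders.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads pretrained weights
for p in backbone.parameters():
    p.requires_grad = False                # keep pretrained representations fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new lightweight task head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real data loader):
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning step loss: {loss.item():.3f}")
```

Fine-tuning strategies vary in practice (full fine-tuning, adapters, prompt tuning), but the common thread is that most of the model's knowledge comes from pretraining, so the task-specific labeled dataset can remain small.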
By enhancing clinicians’ capabilities, AI in oncology will empower them to deliver improved patient care without replacing their expertise.
What Are the Challenges and Barriers to Realizing the Potential of AI in Clinical Applications?
To date, datasets used to train AI models often require intensive curation and may not be fully representative of real clinical situations. As a result, AI models may show performance and generalizability gaps during the translation from research testing to real-world clinical settings, particularly if they overfit to the practices of a single institution. The quality of AI models is fundamentally tied to the quality and diversity of the data used for their training. If the training datasets underrepresent certain groups, the AI model can perpetuate biases and underperform in these groups, maintaining existing inequities.6 It is therefore essential that AI models be carefully evaluated for bias and undergo dedicated subgroup testing to ensure that they serve all populations equitably.
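As a concrete illustration of such subgroup testing, the short sketch below computes a discrimination metric (AUC) separately within each demographic subgroup of a held-out validation set so that under-performing groups can be flagged before deployment. The column names (sex, age_group, race_ethnicity, label, risk_score) are hypothetical placeholders, and this is only a minimal starting point, not a complete fairness evaluation.

```python
# Minimal sketch: auditing a trained risk model for subgroup performance gaps.
# Column names are hypothetical placeholders for a held-out validation set.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc_report(df, group_cols, label_col="label", score_col="risk_score"):
    """Return per-subgroup AUC so under-performing groups can be flagged."""
    rows = []
    for col in group_cols:
        for value, sub in df.groupby(col):
            if sub[label_col].nunique() < 2:
                continue  # AUC is undefined when only one outcome class is present
            rows.append({
                "attribute": col,
                "subgroup": value,
                "n": len(sub),
                "auc": roc_auc_score(sub[label_col], sub[score_col]),
            })
    return pd.DataFrame(rows).sort_values("auc")

# Example use on a hypothetical validation dataframe:
# report = subgroup_auc_report(val_df, ["sex", "age_group", "race_ethnicity"])
# print(report)  # subgroups at the top have the lowest AUC and warrant review
```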
In response to these data challenges, generative AI models, especially pre-trained foundation models, are emerging as potential solutions. They can reduce the need for detailed annotation and allow model training with smaller but more representative datasets, thereby enhancing model diversity and potentially reducing bias.
Another key challenge in implementing AI in cancer care is ensuring the reliability, replicability, and safety of AI health solutions, which is further compounded by the lack of “explainability” (how the model processes input data to arrive at its conclusions) inherent in “black box” deep learning algorithms. Without sufficient transparency in these advanced AI models, understanding and trusting their decisions becomes significantly more difficult, thus raising concerns about their practical application in clinical settings.7
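A range of post-hoc techniques exists to partially open the black box. As one generic example (not specific to any model discussed here), the sketch below uses permutation importance, which measures how much held-out performance degrades when each input feature is shuffled, to indicate which inputs a trained classifier relies on; the dataset is synthetic and purely illustrative.

```python
# Minimal sketch: post-hoc probing of a "black box" classifier with
# permutation importance (a model-agnostic technique; data are synthetic).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# How much does held-out performance drop when each feature is shuffled?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Such attributions do not fully explain a model's internal reasoning, but they can support plausibility checks by clinicians and model developers.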
What Questions Remain Unresolved?
While many practical aspects of AI application in oncology are under active investigation, some unresolved questions concern the ethical and regulatory implications of AI decision-making in healthcare.
Who Is Responsible?
While it is clear that in the current healthcare model, physicians are primarily responsible for the consequences of their decisions and errors, the implementation of automated AI decision support or risk predictions will require careful consideration of the shared responsibility between clinician end-users and the data scientists and industry who develop the AI model. Given the rapid implementation of AI into clinical care, though, it is likely that the physician-centric framework of responsibility will persist, which will require clinicians to have an understanding of the underlying training data, performance parameters, and appropriate use of AI models deployed in the clinic.8 Frameworks for clinician end-user education and guidance on how to use a given AI model, akin to regulatory labels for drugs, have been proposed to provide transparency, but have not been uniformly adopted.9
What Level of Evidence Do We Need?
Lastly, the scientific and regulatory framework for the acceptance of AI applications into clinical care will need to evolve as technologies advance. In particular, we will likely move from decision-support applications, in which AI models simply provide prompts or nudges to clinicians, to higher levels of AI autonomy that operate independently of human clinician oversight. Both evidentiary and regulatory frameworks will need to be standardized and adapted to the level of risk, with a greater burden of proof for clinical benefit and harm mitigation at higher levels of AI autonomy.10
Furthermore, a given AI model’s robustness across different clinical environments is a practical challenge, similar to the known gaps in knowledge and performance of existing oncologic interventions as they move from preclinical to clinical-trial to real-world settings, but with the added complexity of needing to account for data that evolve over time and the resulting algorithmic shifts (e.g., updated algorithms retrained on new data).
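One practical building block for managing this is routine monitoring of the data a deployed model receives. The minimal sketch below compares each input feature's distribution in a recent deployment window against the training period using a two-sample Kolmogorov-Smirnov test and flags features that have shifted; the feature names, synthetic data, and significance threshold are illustrative assumptions rather than a validated monitoring protocol.

```python
# Minimal sketch: flagging input-data drift between the training period and a
# later deployment window with a per-feature Kolmogorov-Smirnov test.
# Feature names, synthetic data, and the alpha threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train, live, feature_names, alpha=0.01):
    """Return features whose distribution has shifted since training."""
    drifted = []
    for j, name in enumerate(feature_names):
        stat, p = ks_2samp(train[:, j], live[:, j])
        if p < alpha:
            drifted.append((name, round(stat, 3), p))
    return drifted

# Example with synthetic data in which one feature has shifted:
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
live = rng.normal(size=(500, 3))
live[:, 2] += 0.5  # simulated shift in the third feature
print(drift_report(train, live, ["tumor_size", "age", "lab_value"]))
```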
Advancements in AI for medical applications are progressing at a remarkable pace, with research producing new applications continually and significant hype surrounding novel techniques such as large language models, foundation models, and generative AI.
While an initial wave of AI tools has begun to enter clinical practice, significant work remains to demonstrate the clinical value of these tools to improve lung cancer patient care and outcomes and enhance the day-to-day work of clinicians. Nevertheless, we anticipate that the breadth of AI applications will continue to expand and transform clinical care across the lung cancer patient journey.
References
- 1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7
- 2. Lipkova J, Chen RJ, Chen B, et al. Artificial intelligence for multimodal data integration in oncology. Cancer Cell. 2022;40(10):1095-1110. doi:10.1016/j.ccell.2022.09.012
- 3. Kann BH, Hosny A, Aerts HJWL. Artificial intelligence for clinical oncology. Cancer Cell. 2021;39(7):916-927. doi:10.1016/j.ccell.2021.04.002
- 4. You Y, Lai X, Pan Y, et al. Artificial intelligence in cancer target identification and drug discovery. Signal Transduct Target Ther. 2022;7(1):156. doi:10.1038/s41392-022-00994-0
- 5. Moor M, Banerjee O, Abad ZSH, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616(7956):259-265. doi:10.1038/s41586-023-05881-4
- 6. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38. doi:10.1038/s41591-021-01614-0
- 7. Morley J, Machado CCV, Burr C, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. 2020;260:113172. doi:10.1016/j.socscimed.2020.113172
- 8. Neri E, Coppola F, Miele V, Bibbolino C, Grassi R. Artificial intelligence: Who is responsible for the diagnosis? Radiol Med. 2020;125(6):517-521. doi:10.1007/s11547-020-01135-9
- 9. Bitterman DS, Kamal A, Mak RH. An Oncology Artificial Intelligence Fact Sheet for Cancer Clinicians. JAMA Oncol. Published online March 23, 2023. doi:10.1001/jamaoncol.2023.0012
- 10. Bitterman DS, Aerts HJWL, Mak RH. Approaching autonomy in medical artificial intelligence. Lancet Digit Health. 2020;2(9):e447-e449. doi:10.1016/S2589-7500(20)30187-4