Large language models in healthcare and medicine
Description
Large Language Models (LLMs) have rapidly emerged as transformative tools in healthcare and medicine, following the widespread attention sparked by OpenAI’s ChatGPT in 2022. Their general-purpose adaptability enables applications across clinical practice, patient support, research, and public health, ranging from diagnosis, triage, and treatment planning to literature review, workflow automation, and global health communication. This factsheet reviews pressing ethical concerns raised by the increasing adoption of LLMs in healthcare. Key issues include epistemic risks such as hallucinations, opacity, and the dissemination of misleading information; privacy and data protection challenges related to the handling of sensitive patient data; biases arising from unrepresentative training datasets; and the dangers of uncontrolled experimentation through premature deployment in clinical settings. These concerns highlight tensions between innovation and patient safety, professional responsibility, and equitable access to healthcare technologies. Given their disruptive potential and accelerating adoption, LLMs should be regarded as a form of “social experiment” in medicine, requiring iterative evaluation, robust ethical frameworks, and clearly defined standards for human oversight. Addressing these challenges will be essential to ensure that LLMs contribute to patient benefit, professional integrity, and the equitable advancement of medical care.
Files
- DiMEN-Fact-2_LLMs in Medicine.pdf (1.4 MB, md5:faa25f4d8529b9b64fad99c1081b3753)
Additional details
Related works
- Compiles:
  - Preprint: 10.48550/arXiv.2403.14473 (DOI)
  - Journal article: 10.1038/s41746-024-01157-x (DOI)
Funding
- Volkswagen Foundation
- Digital Medical Ethics Network 9B 233