Generative AI in Health Care and Liability Risks for Physicians and Safety Concerns for Patients
Creators
- 1. Project CLASSICA: Validating AI in Classifying Cancer in Real-Time Surgery, Penn State Dickinson Law, Carlisle, Pennsylvania
- 2. Penn State Dickinson Law, Carlisle, Pennsylvania
Description
Plain language summary: This Viewpoint discusses the potential use of generative artificial intelligence (AI) in medical care and the liability risks for physicians using the technology, and offers suggestions for safeguards to protect patients.
Generative artificial intelligence (AI) is a quickly emerging subfield of AI that can be trained with large data sets to create realistic images, videos, text, sound, 3-dimensional models, virtual environments, and even drug compounds. It has gained more attention recently as chatbots such as OpenAI’s ChatGPT or Google’s Bard display impressive performance in understanding and generating natural language text. Generative AI is being heralded in the medical field for its potential to ease the long-lamented burden of medical documentation by generating visit notes, treatment codes, and medical summaries. Physicians and patients might also turn to generative AI to answer medical questions about symptoms, treatment recommendations, or potential diagnoses. While these tools may improve patient care, the liability implications of using AI to generate health information are still in flux. To date, no court in the United States has considered the question of liability for medical injuries caused by relying on AI-generated information.
Files
- Generative AI in Health Care and Liability Nunez Gerke JAMA accepted paper.pdf (223.6 kB; md5:9cbd76c01506ccb0937ad40af4499db8)
Additional details
Related works
- Is published in: Journal article, DOI 10.1001/jama.2023.9630