How to Prevent Hallucination in Artificial Intelligence-Assisted Clinical Practice
- Author(s)
- DaeHyun Kim
- Keimyung Author(s)
- Kim, Dae Hyun
- Department
- Dept. of Family Medicine
- Journal Title
- Keimyung Med J
- Issued Date
- 2025
- Volume
- 44
- Issue
- 2
- Keyword
- Artificial intelligence hallucination; Clinical decision-making; Ethical artificial intelligence; Machine learning validation; Medical diagnostics
- Abstract
- The integration of artificial intelligence (AI) into clinical practice has opened new frontiers in diagnostic accuracy, operational efficiency, and healthcare accessibility. However, an emerging concern in AI-assisted healthcare is the phenomenon of “hallucination”: the generation of incorrect, fabricated, or unverifiable information that can mislead clinical decision-making. This review examines the causes and implications of hallucinations in AI-generated clinical data and proposes practical mitigation strategies. Hallucinations can be minimized through enhanced model training, validation against high-quality medical datasets, robust human oversight, adherence to ethical design principles, and the implementation of comprehensive regulatory frameworks, thereby supporting the safe, ethical, and effective deployment of AI in clinical settings. Interdisciplinary collaboration is critical to improving model transparency and reliability.