<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://kumel.medlib.dsmc.or.kr/handle/2015.oak/29769" />
  <subtitle />
  <id>https://kumel.medlib.dsmc.or.kr/handle/2015.oak/29769</id>
  <updated>2026-04-04T12:48:15Z</updated>
  <dc:date>2026-04-04T12:48:15Z</dc:date>
  <entry>
    <title>How to Prevent Hallucination in Artificial Intelligence-Assisted Clinical Practice</title>
    <link rel="alternate" href="https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46352" />
    <author>
      <name>DaeHyun Kim</name>
    </author>
    <id>https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46352</id>
    <updated>2026-01-14T00:40:38Z</updated>
    <published>2024-12-31T15:00:00Z</published>
    <summary type="text">Title: How to Prevent Hallucination in Artificial Intelligence-Assisted Clinical Practice
Author(s): DaeHyun Kim
Abstract: The integration of artificial intelligence (AI) into clinical practice has ushered in new frontiers in diagnostic accuracy, operational efficiency, and healthcare accessibility. However, an emerging concern in AI-assisted healthcare is the phenomenon of “hallucination,” the generation of incorrect, fabricated, or unverifiable information, which can mislead clinical decision-making. This review examines the causes and implications of hallucinations in AI-generated clinical data and proposes practical mitigation strategies. Hallucinations can be minimized through enhanced model training, validation using high-quality medical datasets, robust human oversight, adherence to ethical design principles, and the implementation of comprehensive regulatory frameworks, thereby ensuring the safe, ethical, and effective deployment of AI in clinical settings. Interdisciplinary collaboration is critical to improve model transparency and reliability.</summary>
    <dc:date>2024-12-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>The Augmented Clinician: Artificial Intelligence as an Indispensable Co-pilot</title>
    <link rel="alternate" href="https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46349" />
    <author>
      <name>DaeHyun Kim</name>
    </author>
    <id>https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46349</id>
    <updated>2026-01-14T00:40:38Z</updated>
    <published>2024-12-31T15:00:00Z</published>
    <summary type="text">Title: The Augmented Clinician: Artificial Intelligence as an Indispensable Co-pilot
Author(s): DaeHyun Kim</summary>
    <dc:date>2024-12-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Health Outcomes among Heated Tobacco Product Users, Combustible Cigarette Users, and Quitters: a Cohort Study</title>
    <link rel="alternate" href="https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46355" />
    <author>
      <name>BangBu Youn</name>
    </author>
    <author>
      <name>DaeHyun Kim</name>
    </author>
    <id>https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46355</id>
    <updated>2026-01-14T00:40:38Z</updated>
    <published>2024-12-31T15:00:00Z</published>
    <summary type="text">Title: Health Outcomes among Heated Tobacco Product Users, Combustible Cigarette Users, and Quitters: a Cohort Study
Author(s): BangBu Youn; DaeHyun Kim
Abstract: This study aimed to investigate the health effects of heated tobacco product (HTP) and combustible cigarette (CC) use, as well as cessation, by tracking health outcomes using a prospective cohort design. A total of 750 males were included, comprising 250 HTP users, 250 age-matched (± 2 years) CC users, and 250 quitters. The HTP user group was selected from individuals who underwent a health examination between 2021 and 2022. The CC user group was randomly selected from the same age range and examination period. Participants were provided with information on the health hazards of smoking and were advised on smoking cessation for both CCs and HTPs. A follow-up test was conducted 2 years (18 ± 8 months) later. Peak expiratory flow and peak expiratory flow percentage were significantly lower in CC users than in HTP users or quitters. Erythrocyte sedimentation rate and total cholesterol were significantly lower in quitters than in the HTP and CC user groups. Alpha-fetoprotein levels were significantly lower in quitters than in HTP and CC users, whereas carbohydrate antigen 19-9 levels were significantly higher in quitters than in the other two groups. Differences in respiratory function, inflammatory markers, and cancer markers were observed among HTP users, CC users, and quitters. Therefore, ongoing longitudinal follow-up is required.</summary>
    <dc:date>2024-12-31T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>The Potential Applications and Implications of Large Language Models in the Medical Field</title>
    <link rel="alternate" href="https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46356" />
    <author>
      <name>Myung Sub Sim</name>
    </author>
    <author>
      <name>Seung Wan Hong</name>
    </author>
    <id>https://kumel.medlib.dsmc.or.kr/handle/2015.oak/46356</id>
    <updated>2026-01-14T00:40:38Z</updated>
    <published>2024-12-31T15:00:00Z</published>
    <summary type="text">Title: The Potential Applications and Implications of Large Language Models in the Medical Field
Author(s): Myung Sub Sim; Seung Wan Hong
Abstract: Large language models (LLMs) such as ChatGPT have demonstrated remarkable performance, including passing professional exams. However, because they generate responses through probabilistic prediction, their ability to directly replace medical experts remains limited. This study evaluates the applicability of LLMs in medicine using models available as of August 2023. Two medical guidelines were selected, and key questions derived from them were used to assess three offline models (KoVicuna, WizardVicuna, and LLaMa2) and the online ChatGPT model via LangChain. Model performance was evaluated based on accuracy and response time. ChatGPT achieved the highest accuracy with the shortest response time. Among the offline models, WizardVicuna 13B exhibited high accuracy, whereas LLaMa2 7B demonstrated balanced performance with relatively fast responses. Although LLMs cannot provide precise diagnoses or treatment recommendations owing to hallucinations and computational constraints, they show promise as clinical decision-support tools. With further refinement, LLMs may augment rather than replace physicians in medical practice.</summary>
    <dc:date>2024-12-31T15:00:00Z</dc:date>
  </entry>
</feed>