Generative AI can boost health literacy but risks misinformation

CO-EDP, VisionRI | Updated: 28-08-2025 17:54 IST | Created: 28-08-2025 17:54 IST

Generative artificial intelligence is rapidly transforming how patients access medical information, but new research warns that safety, trust, and clinical oversight remain critical. A paper published in Frontiers in Digital Health explores both the promise and the risks of large language models (LLMs) in health communication and proposes a structured evaluation tool to guide their responsible use.

The study, “Generative AI in Consumer Health: Leveraging Large Language Models for Health Literacy and Clinical Safety with a Digital Health Framework”, examines how generative AI can improve health literacy while raising concerns about misinformation, patient over-reliance, and ethical accountability. The authors introduce the RECAP framework, a five-point model emphasizing Relevance, Evidence-based accuracy, Clarity, Adaptability, and Precision, to evaluate consumer-facing AI tools in healthcare.

How are large language models being used in consumer health?

According to the review, generative AI is already being deployed across multiple patient-facing contexts. Chatbots powered by LLMs can simplify medical terminology, generate personalized responses, and help patients prepare questions before visiting a doctor. These systems are being piloted to assist in visit preparation, reinforce discharge instructions, and support adherence to medications.

Condition-specific applications are also emerging. For patients managing chronic illnesses such as diabetes, arthritis, or cancer, AI-driven tools offer tailored educational content. Similarly, mental health is becoming a key area of experimentation. Generative models are being integrated into apps for stress management, motivational prompts, and cognitive-behavioral therapy support, though the authors caution that these tools are less predictable than structured therapy platforms.

The study also identifies opportunities in interactive multimedia education, where LLMs can generate captions, voiceovers, and personalized summaries for platforms like patient portals and health education channels. These innovations could make medical information more accessible to individuals with limited literacy or disabilities. However, without oversight, such outputs risk embedding misinformation into otherwise engaging formats.

Despite their versatility, these applications are effective only when carefully integrated into healthcare ecosystems. The absence of standardized evaluation methods leaves a gap that the RECAP framework aims to fill.

What are the clinical risks of generative AI in health communication?

While generative AI holds significant promise, the study underscores that clinical safety cannot be assumed. Because LLMs generate probabilistic content rather than authoritative medical advice, they carry risks that could affect patient outcomes.

The most pressing concern is hallucination and inaccuracy, where systems fabricate references or deliver unsound medical conclusions with misplaced confidence. This issue is compounded by ambiguity and false reassurance, where language that sounds plausible can mask critical red-flag symptoms, delaying necessary care.

Bias is another key risk. Training data often reflects inequities in race, gender, and socioeconomic status, which can be reproduced in AI-generated responses, further marginalizing vulnerable populations. The study warns that without intervention, such biases could exacerbate existing health disparities.

Patients may also develop over-reliance on AI for diagnosis or decision-making, leading to delays in seeking professional care. Ethical and privacy issues compound the problem. Many AI-powered health tools lack robust data disclosures, leaving patients unaware of how their personal health information is stored, used, or shared. Accountability is another unresolved concern: when an AI tool provides misleading advice that results in harm, it remains unclear who bears responsibility.

These risks echo challenges seen with earlier digital health tools such as WebMD or Babylon, which provided rigid triage assistance but often caused anxiety or confusion. Generative AI differs by offering free-form conversational responses, which increases engagement but magnifies the consequences of errors.

How can the RECAP framework improve AI safety in consumer health?

To address these challenges, the researchers introduce the RECAP framework, a clinician-informed model for evaluating the quality and safety of AI-generated health outputs. The framework is designed to give clinicians, developers, and regulators a pragmatic rubric that reflects the realities of patient communication.

  • Relevance ensures that responses are specific to the patient’s context and culturally appropriate, avoiding generic or tangential advice.
  • Evidence-based accuracy emphasizes grounding outputs in established clinical guidelines rather than probabilistic guesses or fabricated sources.
  • Clarity requires the use of plain language, avoiding jargon, and providing actionable recommendations that support health literacy.
  • Adaptability addresses the need for tailored communication across diverse populations, adjusting for differences in age, education, and health complexity.
  • Precision and Safety underline the importance of accurate information, appropriate urgency, and recognition of the system’s limitations. AI should defer to professional care when necessary rather than overextending its authority.

By embedding these criteria into evaluations, the framework aims to mitigate risks such as misinformation, bias, and overconfidence while ensuring that patient-facing tools enhance, rather than undermine, clinical trust.
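The paper presents RECAP as a conceptual rubric rather than software, but its five criteria map naturally onto a simple scoring structure. The sketch below is a hypothetical Python illustration, not an implementation from the study: the names, the 1-5 rating scale, and the review threshold are all assumptions introduced here to show how a reviewer's RECAP ratings for a single AI-generated response might be recorded and how low-scoring criteria could be flagged.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch only: the study describes RECAP as a conceptual rubric,
# not a software tool. The names, 1-5 scale, and threshold below are assumptions.

RECAP_CRITERIA = {
    "relevance": "specific to the patient's context and culturally appropriate",
    "evidence_based": "grounded in clinical guidelines, no fabricated sources",
    "clarity": "plain language with actionable recommendations",
    "adaptability": "tailored to age, education, and health complexity",
    "precision_safety": "accurate, appropriately urgent, defers to professional care",
}

@dataclass
class RecapScore:
    """A reviewer's ratings for one AI-generated health response (1 = poor, 5 = excellent)."""
    relevance: int
    evidence_based: int
    clarity: int
    adaptability: int
    precision_safety: int

    def flagged(self, threshold: int = 3) -> list[str]:
        """Return criteria rated below the threshold, signalling a need for clinician review."""
        return [f.name for f in fields(self) if getattr(self, f.name) < threshold]

if __name__ == "__main__":
    # Example: a fluent but weakly sourced answer about medication dosing.
    score = RecapScore(relevance=4, evidence_based=2, clarity=5,
                       adaptability=4, precision_safety=3)
    for name in score.flagged():
        print(f"Needs review: {name} ({RECAP_CRITERIA[name]})")
```

In a setup like this, any criterion falling below the threshold would route the response back to a clinician rather than directly to the patient, echoing the paper's emphasis that AI should defer to professional care rather than overextend its authority.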

The RECAP model is conceptual and requires further validation through real-world testing. However, its integration of health literacy, clinical priorities, and ethical safeguards makes it distinct from other frameworks that focus solely on technical performance or accuracy. 

First published in: Devdiscourse