ChatGPT delivers on facts, falls short on emotion in addressing vaccine fears
A new peer-reviewed study has evaluated the performance of ChatGPT in responding to common vaccine-related questions from the general public, offering insights into its potential role as a digital ally in public health communication. Published in Healthcare under the title “Can ChatGPT Counter Vaccine Hesitancy? An Evaluation of ChatGPT’s Responses to Simulated Queries from the General Public”, the research assesses how effectively the large language model (LLM) provides accurate, empathetic, and evidence-based information in a field fraught with misinformation.
Amid global concerns over declining vaccine confidence, the integration of artificial intelligence into public health outreach has been proposed as a scalable solution. However, this study reveals that while ChatGPT delivers generally reliable responses, it frequently lacks the nuance, consistency, and specificity required to meaningfully influence vaccine-hesitant individuals.
How accurate and trustworthy are ChatGPT’s vaccine responses?
The research team used a qualitative coding method to analyze ChatGPT's answers to 50 simulated vaccine-related queries drawn from the Vaccine Confidence Repository. These queries reflected real-world concerns and myths, including fears of infertility, altered DNA, and corporate conspiracy theories. ChatGPT's responses were evaluated based on their accuracy, source attribution, level of empathy, and the presence of disclaimers.
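The paper reports this coding qualitatively and does not publish analysis scripts. Purely as an illustration of how such a scheme can be operationalized, the four dimensions could be captured in a structure like the following; every name here is hypothetical rather than drawn from the study:

```python
from dataclasses import dataclass

@dataclass
class ResponseCoding:
    """One coded ChatGPT answer along the four dimensions the study describes."""
    query_id: int         # which of the 50 simulated queries was asked
    accurate: bool        # free of misinformation, consistent with the evidence
    sources_cited: bool   # names a verifiable study or public health authority
    empathy_shown: bool   # acknowledges the asker's feelings or fears
    has_disclaimer: bool  # includes a "consult your doctor" style caveat

def summarize(codings: list[ResponseCoding]) -> dict[str, float]:
    """Share of responses meeting each criterion across all coded answers."""
    n = len(codings)
    return {
        "accuracy": sum(c.accurate for c in codings) / n,
        "source_attribution": sum(c.sources_cited for c in codings) / n,
        "empathy": sum(c.empathy_shown for c in codings) / n,
        "disclaimers": sum(c.has_disclaimer for c in codings) / n,
    }
```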
The results indicate that ChatGPT consistently avoids misinformation and tends to offer pro-vaccine responses grounded in scientific reasoning. However, its answers often lacked citations or clear source attribution, reducing the transparency and credibility needed for high-stakes health communication. In many cases, ChatGPT offered general summaries rather than citing concrete studies or public health authority guidelines, leaving room for doubt among skeptical readers.
Moreover, the model's explanations sometimes defaulted to vague language, such as “according to health experts” or “studies show,” without anchoring these claims in verifiable sources. This undermines its potential as a trustworthy educational tool, especially for audiences already predisposed to distrust institutional messaging.
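That observation lends itself to a simple automated pre-screen. The sketch below flags answers that lean on vague authority without naming a checkable source; the phrase and source lists are assumptions for illustration, not taken from the paper:

```python
import re

# Illustrative only: a crude pattern-based check for the kind of unanchored
# attribution the study flags ("according to health experts", "studies show").
VAGUE_ATTRIBUTION = re.compile(
    r"\b(according to (health )?experts|studies (show|suggest)|"
    r"research (shows|suggests)|scientists (say|agree))\b",
    re.IGNORECASE,
)

# Stand-in list of named, verifiable sources a response could anchor itself to.
NAMED_SOURCES = ("WHO", "CDC", "World Health Organization",
                 "Centers for Disease Control", "The Lancet", "NEJM")

def lacks_attribution(response: str) -> bool:
    """True if the response invokes vague authority without naming a source."""
    vague = bool(VAGUE_ATTRIBUTION.search(response))
    named = any(s.lower() in response.lower() for s in NAMED_SOURCES)
    return vague and not named
```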
Does ChatGPT convey empathy and build trust with users?
Beyond accuracy, the study probed ChatGPT’s ability to demonstrate empathy, a critical factor in engaging vaccine-hesitant individuals. Many vaccine-hesitant people are not simply misinformed; they are emotionally invested, fearful, or influenced by personal experience. As such, responses that are overly clinical or dismissive can alienate users rather than reassure them.
The findings show that while ChatGPT does not adopt a confrontational tone, it frequently fails to acknowledge users' emotional states. Most responses are delivered in a neutral, detached style, with limited use of empathetic language such as validating feelings or expressing understanding. Although this tone avoids antagonism, it may also fail to foster the emotional connection necessary for trust-building.
For instance, when addressing concerns about vaccine safety in children, ChatGPT tends to reiterate general assurances rather than engaging directly with the parent’s fear or referencing community-level safety outcomes. The absence of narrative framing or personalized messaging makes the responses technically correct but emotionally flat.
This shortfall limits ChatGPT’s effectiveness as a persuasive agent in emotionally charged debates, a key function in combating vaccine hesitancy, which is deeply rooted in personal belief systems and social trust.
What are the broader implications for using AI in public health communication?
The study’s authors emphasize that while ChatGPT can serve as a reliable first-line information tool, it should not be seen as a replacement for human-led communication strategies. Its strengths (scale, speed, and information recall) are offset by weaknesses in contextual awareness, source credibility, and relational communication.
One of the key risks identified is the potential for ChatGPT to unintentionally oversimplify or misrepresent complex health issues. Because it is trained to provide balanced and inoffensive content, it may hedge on controversial topics or avoid making strong normative statements—even when the science is clear. This equivocation can confuse users or create a false sense of ongoing debate where none exists.
The model’s inability to adjust dynamically based on user responses is another limitation. Unlike a trained health communicator, ChatGPT cannot ask follow-up questions, read emotional cues, or tailor messaging strategies in real time. As a result, it lacks the dialogic flexibility needed to navigate complex or resistant attitudes.
Despite these limitations, the study acknowledges that AI tools like ChatGPT could still play a valuable role in the broader ecosystem of health communication. For example, they can support overwhelmed call centers, triage routine questions, or serve as content generators for health campaigns. With improved training data, regulatory oversight, and integration into vetted platforms, AI chatbots may enhance the reach and consistency of pro-vaccine messaging.
The researchers also recommend incorporating feedback loops that allow healthcare professionals and public health authorities to vet and refine the outputs of AI systems. This hybrid approach, combining machine scalability with human judgment, could help mitigate the risks while maximizing the potential benefits.
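The paper describes this loop conceptually rather than as software. As a minimal sketch of what such a gate might look like, with hypothetical names throughout, every model draft could sit behind human sign-off, with reviewer notes recycled into the next round of revisions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    query: str
    answer: str                # raw chatbot output
    approved: bool = False
    reviewer_notes: str = ""   # feedback reusable for prompt or training updates

def review_loop(
    queries: list[str],
    generate: Callable[[str], str],   # the chatbot
    vet: Callable[[Draft], Draft],    # a clinician or public health reviewer
) -> tuple[list[Draft], list[Draft]]:
    """Generate an answer per query, then gate each one behind human review.

    Returns (published, held). Held drafts carry reviewer notes that can be
    folded back into the system prompt or training data, which is the feedback
    loop the researchers recommend.
    """
    reviewed = [vet(Draft(q, generate(q))) for q in queries]
    published = [d for d in reviewed if d.approved]
    held = [d for d in reviewed if not d.approved]
    return published, held
```

The design point is simply that nothing reaches the public without the approved flag, supplying the human judgment the study finds the model cannot provide on its own.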