Online opinion leaders hold key to public confidence in AI technologies

The role of online opinion leaders (OILs) in influencing public perceptions of artificial intelligence is coming under increased scrutiny as technology adoption accelerates. A new study published in Frontiers in Communication reveals how expertise and emotional tone in online messaging affect young people’s attitudes toward AI.
Titled "Expertise and Emotion: How Online Opinion Leaders Shape Public Perceptions of AI—Among University Students in China," the research paper highlights a complex interplay between expert credibility, emotional arousal, and AI-related anxiety, offering new insights for technology communicators and policymakers.
How does expert credibility influence perceptions of AI?
The study found that expert credibility plays a decisive role in shaping how AI is perceived. Based on an experiment involving 102 Chinese university students, the researchers demonstrated that messages from highly credible experts significantly enhanced participants’ evaluations of AI, regardless of whether the messages were positive or negative. This suggests that audiences place a premium on expert authority when forming opinions about emerging technologies.
Participants consistently rated AI as more useful and easy to use when the information came from trusted figures. Even when experts delivered critical or cautionary messages, their credibility led audiences to engage more deeply with the content, ultimately reinforcing positive perceptions of AI. This phenomenon, referred to as an “expert credibility paradox” in the study, reveals that trusted sources can influence attitudes beyond the immediate tone of their communication.
These findings suggest that credible experts, whether academics, engineers, or industry leaders, are powerful agents in shaping public trust in AI. For organizations and policymakers aiming to foster AI adoption, leveraging expert voices in public discourse could prove to be an effective strategy.
What role does emotional arousal play in AI communication?
The study also explored the impact of emotional intensity, or arousal, in online messaging about AI. Emotional arousal refers to the degree to which a message triggers heightened emotional responses, whether positive or negative. The results showed that high-arousal messages, particularly those expressing strong negative emotions, captured more attention and encouraged participants to critically evaluate the technology.
Counterintuitively, these high-arousal negative messages sometimes led to more favorable perceptions of AI. When paired with high expert credibility, emotionally charged critical content heightened engagement and strengthened positive evaluations. The researchers suggest that this effect arises because emotionally intense messages push audiences to think more deeply, increasing the likelihood that they will reassess their assumptions about AI.
This dynamic underscores the importance of carefully balancing emotional tone in AI communication strategies. While emotional content can amplify engagement, its effects depend heavily on the perceived credibility of the source. Without expert backing, strong emotional appeals risk fueling fear or misinformation. With credible voices, however, emotional arousal can drive constructive engagement and greater acceptance of new technologies.
How does AI anxiety influence the relationship between emotion and perception?
A third key factor explored in the study is AI-related anxiety, which mediates the relationship between emotional arousal and perceptions of AI. The researchers found that emotional messages, particularly those with high arousal, tended to increase anxiety about AI. Elevated anxiety, in turn, dampened positive perceptions of AI’s usefulness and ease of use.
However, this suppressive effect was significantly moderated by expert credibility. When messages came from trusted experts, the anxiety triggered by high-arousal content was mitigated, allowing audiences to retain or even strengthen favorable views of AI. This finding reinforces the dual importance of expertise and emotional balance in technology communication.
The interplay between anxiety and perception is particularly relevant in the context of widespread public concerns about AI. From fears about job loss and privacy violations to anxieties about autonomous decision-making, emotional reactions can shape public opinion and policy debates. The study highlights that credible expert communication is critical in addressing these anxieties and guiding audiences toward informed perspectives rather than fear-driven responses.
Implications for technology communication and policy
The study suggests that policymakers, educators, and technology companies should actively engage credible experts to communicate about AI, ensuring that messages are both trustworthy and emotionally resonant.
Furthermore, the research indicates that communication strategies should not avoid emotion entirely. Emotional arousal, when paired with credibility, can be a powerful tool for engagement. However, it must be managed carefully to avoid fueling unnecessary anxiety that could hinder technology adoption.
For policymakers, these findings also underscore the need for targeted interventions to address AI-related anxieties. Public education campaigns, transparent information about AI capabilities and limitations, and platforms that facilitate dialogue with experts can help build a more informed and confident user base.
The study also provides valuable guidance for online opinion leaders themselves. Those with significant digital influence bear a responsibility to communicate about AI ethically and accurately. Their credibility not only affects individual attitudes but also shapes broader societal trust in technology.
FIRST PUBLISHED IN: Devdiscourse