Abuse anxiety found to erode trust and use intentions for generative AI
A new study finds that anxiety over the misuse of generative AI undermines both rational evaluations, such as perceived usefulness, and emotional readiness in the form of trust, jointly weakening acceptance and the intention to use the technology.

Generative artificial intelligence (genAI) is rapidly being integrated into everyday applications, but its growth is shadowed by rising concerns over potential misuse. A new study published in the Journal of Theoretical and Applied Electronic Commerce Research reveals that anxiety over generative AI abuse can significantly erode user trust, perceived usefulness, and willingness to adopt the technology.
The study, "When Generative AI Meets Abuse: What Are You Anxious About?", offers one of the most detailed examinations yet of how negative perceptions surrounding AI misuse affect the path from user awareness to actual adoption. By extending the Technology Acceptance Model (TAM) and incorporating multidimensional trust measures, the authors provide a clear empirical framework for understanding and mitigating the impact of AI-related anxieties.
How abuse anxiety disrupts the AI adoption process
The study defines Generative AI Abuse Anxiety (GAIA) as user apprehension about the misuse of generative AI technologies, including fears over deepfakes, misinformation, privacy breaches, and manipulation. Using a variance-based partial least squares structural equation modeling (PLS-SEM) approach, the authors analyzed 318 valid responses from an online survey distributed via snowball sampling. Participants were asked to evaluate their trust in generative AI systems, perceptions of usefulness, acceptance levels, and intention to use, alongside their anxiety levels.
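To make the modeling approach concrete, the sketch below estimates the kind of directional paths the study hypothesizes, using ordinary least squares on simulated, standardized construct scores as a simplified stand-in for PLS-SEM. The construct names mirror the study (GAIA, trust, perceived usefulness, acceptance, use intention), but the data, the exact path structure, and the resulting coefficients are synthetic illustrations, not the authors' model or findings.

```python
# Illustrative sketch only: the study ran PLS-SEM on 318 survey responses;
# here, ordinary least squares on simulated construct scores stands in for
# the structural paths. All numbers below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 318  # sample size reported in the study

# Simulate standardized construct scores with GAIA exerting negative effects.
gaia = rng.normal(size=n)
trust = -0.4 * gaia + rng.normal(scale=0.9, size=n)
usefulness = -0.3 * gaia + 0.5 * trust + rng.normal(scale=0.8, size=n)
acceptance = -0.2 * gaia + 0.4 * trust + 0.3 * usefulness + rng.normal(scale=0.8, size=n)
intention = -0.2 * gaia + 0.5 * acceptance + rng.normal(scale=0.8, size=n)

def standardize(x):
    return (x - x.mean()) / x.std()

def paths(outcome, **predictors):
    """Standardized regression weights as a rough analogue of path coefficients."""
    X = np.column_stack([standardize(p) for p in predictors.values()])
    y = standardize(outcome)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(predictors, coef.round(2)))

print("trust      <-", paths(trust, GAIA=gaia))
print("usefulness <-", paths(usefulness, GAIA=gaia, trust=trust))
print("acceptance <-", paths(acceptance, GAIA=gaia, trust=trust, usefulness=usefulness))
print("intention  <-", paths(intention, GAIA=gaia, acceptance=acceptance))
```

In the paper itself, path weights come from PLS-SEM with significance testing, so the output above should be read only as the shape of the hypothesized model, not as its results.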
The results were decisive: GAIA exerts a consistent negative influence across all key stages of adoption. Users with higher abuse anxiety exhibited lower trust, diminished perceptions of usefulness, weaker acceptance of generative AI, and reduced intention to use it in the future. This pattern indicates that once anxiety takes hold, it has a cascading effect, undermining both rational evaluations (such as perceived utility) and emotional readiness (trust), which jointly determine actual use decisions.
The findings confirm that abuse-related fears are not just peripheral concerns; they are central factors that can derail adoption pipelines, even for AI systems with strong performance capabilities.
Role of human-like and system-like trust
A distinctive aspect of the research is its breakdown of trust into human-like and system-like categories, each with subdimensions that play unique roles in influencing adoption.
Human-like trust encompasses ability, benevolence, and integrity: traits typically associated with human agents but applied here metaphorically to AI systems. The study found that all three subdimensions were positively linked to both perceived usefulness and acceptance of generative AI. However, GAIA was negatively associated with each, meaning that anxiety undermines these perceptions at their roots.
System-like trust refers to fairness, accountability, transparency, and explainability, the technical and procedural qualities expected of algorithmic systems. The analysis revealed a nuanced picture: fairness and explainability significantly boosted perceived usefulness, while fairness, transparency, and explainability all contributed to acceptance. Accountability, however, did not emerge as a significant driver in either case.
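For readers tracking the two trust families, the snippet below simply re-encodes the pattern described above as a small lookup structure; the groupings and the significant/non-significant split come from the article's summary of the study, while the encoding itself is only an illustrative convenience.

```python
# Compact restatement of the trust structure and effects summarized above.
# Groupings and outcomes follow the article's description; the data structure
# itself is illustrative, not part of the study's instrumentation.
trust_dimensions = {
    "human_like": ["ability", "benevolence", "integrity"],
    "system_like": ["fairness", "accountability", "transparency", "explainability"],
}

# Subdimensions reported as significant positive predictors of each outcome.
significant_paths = {
    "perceived_usefulness": ["ability", "benevolence", "integrity",
                             "fairness", "explainability"],
    "acceptance": ["ability", "benevolence", "integrity",
                   "fairness", "transparency", "explainability"],
}

# Accountability did not emerge as a significant driver of either outcome.
not_significant = {"accountability": ["perceived_usefulness", "acceptance"]}
```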
This divergence suggests that users respond more strongly to qualities that make AI decisions understandable and equitable than to abstract assurances of responsibility. The implication for developers is clear: emphasizing explainability and fairness in system design and communication strategies may yield greater gains in user trust and adoption than focusing solely on accountability frameworks.
Implications for policy, design, and governance
First, reducing abuse anxiety must be treated as a strategic priority. This can involve proactive communication about safeguards, visible implementation of bias mitigation strategies, and education campaigns that address misinformation about AI capabilities and limitations.
Second, the data shows that trust-building is not monolithic. Developers need to attend to both human-like and system-like trust dimensions, tailoring interventions accordingly. For example, demonstrating the “ability” of a generative AI tool might involve showcasing its accuracy across diverse use cases, while enhancing “fairness” and “explainability” could be achieved through transparent model documentation and interpretable outputs.
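As one way to make those interventions concrete, the sketch below shows how transparent model documentation and an interpretable, source-attributed output could travel together with each generated response. All names here are hypothetical illustrations, not an API from the study or from any particular vendor.

```python
# Hypothetical sketch: lightweight transparency metadata attached to output.
# Field names are invented for illustration, not drawn from the study.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Transparent documentation a provider might publish about a model."""
    model_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    bias_evaluations: dict[str, float] = field(default_factory=dict)  # metric -> score

@dataclass
class ExplainedResponse:
    """Generated text bundled with a plain-language rationale and sources."""
    text: str
    rationale: str        # how the answer was produced, in user-facing terms
    sources: list[str]    # provenance shown alongside the output
    documentation: ModelDocumentation

doc = ModelDocumentation(
    model_name="example-genai-1",
    intended_uses=["drafting", "summarization"],
    known_limitations=["may produce incorrect facts", "knowledge cut-off applies"],
    bias_evaluations={"response_quality_gap": 0.04},
)
response = ExplainedResponse(
    text="...generated answer...",
    rationale="Answer assembled from the two cited documents; no external claims added.",
    sources=["https://example.com/doc-a", "https://example.com/doc-b"],
    documentation=doc,
)
```

Surfacing the rationale and sources directly in the response, rather than in a separate policy page, is the kind of explainability signal the study links to higher perceived usefulness and acceptance.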
From a governance perspective, the study’s evidence supports the creation of regulatory and industry standards that require transparency and fairness metrics for generative AI systems. Such measures could be embedded in certification schemes, giving users visible trust signals that counterbalance abuse anxieties.
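To illustrate what a required "fairness metric" could look like in practice, the sketch below computes a simple parity gap in satisfactory-response rates across user groups; the groups, outcomes, and the 0.05 threshold are assumptions made for this example, not requirements from the study or any existing certification scheme.

```python
# Illustrative only: a parity-gap audit a certification scheme might ask a
# generative AI provider to report. Groups, outcomes, and the threshold are
# invented for this sketch.
from collections import defaultdict

def parity_gap(records):
    """records: iterable of (group, outcome), outcome 1 if the prompt received
    a satisfactory response and 0 otherwise. Returns (largest rate gap between
    groups, per-group satisfactory-response rates)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        hits[group] += outcome
    rates = {g: hits[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

audit = [("group_a", 1), ("group_a", 1), ("group_a", 0),
         ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap, rates = parity_gap(audit)
print(rates, "gap:", round(gap, 3), "within 0.05 threshold:", gap <= 0.05)
```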
The authors also note that anxiety’s pervasive influence calls for longitudinal monitoring. As AI systems evolve and public discourse shifts, the specific triggers of GAIA may change, requiring adaptive trust strategies. This aligns with broader calls for dynamic AI governance that keeps pace with technological innovation and social sentiment.
First published in: Devdiscourse