When help hurts: Proactive AI features reduce user confidence and use

Artificial intelligence assistants are being embedded at a rapid pace into workplace technologies, promising to boost productivity and streamline tasks. But a new study warns that unsolicited help from AI can undermine user confidence and reduce adoption. The research provides fresh insight into why proactive AI features may backfire.
The paper, titled “Proactive AI Adoption Can Be Threatening: When Help Backfires,” was published on arXiv. Drawing on psychological theory and two large-scale experiments, the study identifies self-threat as the key mechanism explaining why users often resist proactive AI assistance even when it is useful.
Why does proactive AI reduce adoption?
The research focuses on the difference between reactive help, which is requested by the user, and anticipatory help, which is provided without being asked. The authors argue that unsolicited help undermines people’s sense of competence and autonomy, producing a psychological effect known as self-threat.
In two vignette-based experiments with more than 1,300 participants, the study consistently found that anticipatory help heightened feelings of threat. This in turn lowered willingness to accept assistance, reduced the likelihood of future use, and weakened performance expectations of AI tools. The mechanism was clear: when people felt their independence was challenged, they reacted defensively, even rejecting help that could improve their output.
The findings go beyond earlier explanations that focused mainly on mismatches in trust or usability. Instead, the study shows that the psychological costs of unsolicited assistance can be significant, reshaping whether people embrace or reject AI in their daily work.
How do AI and human help differ?
The study highlights important differences in how users respond to human versus AI assistance. Even when help was requested, AI support was perceived as more threatening than human help. This suggests that the social norms of reciprocity that buffer human–human helping do not extend neatly to interactions with AI systems.
In practice, this means that even when users initiate a request, AI helpers can trigger a stronger sense of threat than human colleagues do. Anticipatory AI help was no less threatening than human anticipatory help, reinforcing the idea that unsolicited intervention carries similar risks regardless of the source.
These findings complicate common assumptions in human–computer interaction design. It is not enough to give users control or explain system actions. The very identity of the helper, human or AI, shapes the emotional response. For AI developers, this underscores the need to account for deeper psychological reactions when designing mixed-initiative systems.
Does framing anticipatory help differently solve the problem?
The second study examined whether different forms of anticipatory AI help might reduce threat. It compared offering, where the AI suggests help and awaits approval, to providing, where the AI acts automatically. Both forms were experienced as equally threatening. Simply adding a confirmation step did not reliably alleviate user concerns.
This null result suggests that the unsolicited nature of proactive assistance carries psychological weight that cannot easily be reduced by small framing changes. Even when minimal agency is preserved, users still perceive the interaction as intrusive.
The authors argue that design solutions must go further. Rather than treating initiative-taking as a purely technical decision, AI systems must be calibrated for psychological readiness. Strategies such as gradual introduction of proactive features, collaborative framing of assistance, cues of reciprocity, and positive affirmations could help reduce threat. Without such measures, well-intentioned proactive features may do more harm than good by discouraging long-term adoption.
The findings call for a rethinking of design principles for proactive systems, placing user psychology at the center. Designers must consider not only when systems are confident enough to act but also when users are ready to accept assistance without feeling diminished.
The research also opens directions for future study, including testing these dynamics in interactive systems over longer timeframes, exploring domain and cultural differences, and assessing whether personalization can adapt initiative levels to individual user preferences.
FIRST PUBLISHED IN: Devdiscourse