Unclear regulations and workflow burdens discourage AI adoption

A new investigation into artificial intelligence (AI) adoption across professional sectors reveals that individual decision-makers weigh a complex mix of factors before integrating AI into their workflows. The study underscores that human judgment, perceptions of technology, and professional stakes strongly shape whether AI tools are embraced or rejected.
The research, titled "Why Do Decision Makers (Not) Use AI? A Cross-Domain Analysis of Factors Impacting AI Adoption," examines how professionals in journalism, law, medicine, and the public sector interact with AI systems. Through 16 in-depth interviews, the authors developed a framework identifying the conditions under which AI adoption thrives and the obstacles that hold it back.
Why do some professionals embrace AI while others resist?
The study found that adoption hinges on four critical factors: the decision-maker’s background, their perceptions of the AI tool, the consequences they face when using it, and how they believe other stakeholders will be affected. These factors collectively explain why adoption rates vary across domains.
In professions like journalism, AI tools are widely embraced to streamline research, data gathering, and content analysis. Journalists are more open to AI assistance when tasks involve topics outside their immediate expertise, allowing them to speed up investigations without compromising editorial standards. Public sector workers also adopt AI where the technology simplifies data processing and supports policy decisions, particularly in resource-constrained environments.
On the other hand, medicine and law demonstrate a much more cautious approach. Clinicians and legal professionals, whose decisions have high stakes and direct consequences for patients or clients, tend to rely on their own expertise. They are wary of delegating tasks to algorithms that may lack the nuance or contextual understanding necessary for complex judgments. In these fields, the value of human interpretation outweighs potential efficiency gains, and mistrust of AI’s opaque mechanisms creates further hesitation.
What drives skepticism about AI tools?
Trust, or the lack of it, emerged as a key barrier in AI adoption. Across domains, professionals expressed concerns about AI flaws, including errors, hallucinations, and hidden biases. These concerns were particularly pronounced in medicine, where the risks of relying on inaccurate AI outputs could endanger patient safety, and in law, where flawed recommendations could influence judicial outcomes.
The study also highlights how perceptions of AI transparency affect adoption. Many decision-makers are reluctant to use tools they view as “black boxes” because the lack of explainability makes errors harder to identify and justify. This mistrust is compounded by a sense that AI is overvalued, enjoying an undeserved reputation for neutrality and objectivity despite its inherent biases.
Legal liability and reputational risks further complicate adoption. In fields like medicine and law, professionals remain accountable for decisions, even when AI tools are involved. Unclear regulations leave users uncertain about who bears responsibility when errors occur, discouraging reliance on AI in high-stakes contexts. Even when organizations or vendors offer indemnity clauses, individual professionals often feel they shoulder the ultimate risk.
Workflow integration also plays a decisive role. If AI tools disrupt established processes or add to workloads rather than simplifying them, adoption declines. Many respondents noted that poorly integrated systems create burdens rather than benefits, making them impractical in demanding work environments.
How do stakeholder pressures influence AI use?
The study reveals that decision-makers do not act in isolation. They carefully consider how AI adoption will affect their organizations, clients, and broader communities. Organizational pressures can encourage adoption when efficiency gains translate to higher output or reduced costs. For example, law firms and news agencies may promote AI use to handle larger caseloads or produce content faster.
However, stakeholder concerns can also deter adoption. In medicine, clinicians prioritize patient well-being above efficiency, avoiding AI tools that might compromise care. In the public sector, fears of perpetuating bias or stereotyping in decision-making make professionals cautious. Similarly, in journalism, ethical considerations and the risk of misinformation weigh heavily in determining whether AI tools are used.
The research also found that adoption varies with the perceived cultural and social impacts of AI outputs. Professionals worry that AI-generated communications may fail to account for cultural sensitivities, perpetuate stereotypes, or alienate target audiences. These concerns are particularly acute when tailoring communications for diverse populations, such as in public health campaigns.
Case studies: E-discovery and tailored communication
The authors applied their framework to two case studies: AI use in e-discovery and in tailoring communication. E-discovery tools, which help gather and analyze digital data, were embraced by journalists, legal professionals, and public sector workers because they streamline research processes. However, medical professionals largely rejected these tools, citing the importance of nuanced, patient-specific decision-making.
For communication tasks, adoption patterns were more mixed. While journalists and public health workers used AI to simplify language and translate content for diverse audiences, legal professionals avoided these tools due to the high value placed on human rhetoric and the risk of misrepresentation.
Recommendations for responsible AI adoption
To support safe and effective use of AI, the authors propose several recommendations. They highlight the need to integrate decision-maker perspectives early in the design process to ensure tools align with real-world workflows. Enhancing AI literacy is also crucial, helping professionals understand the strengths and limitations of these technologies. Additionally, organizations must develop infrastructure that seamlessly incorporates AI into existing processes and provide resources to navigate complex regulatory environments.
Clearer policies and proactive governance are essential to address liability concerns and encourage responsible adoption. The authors warn that without addressing these human, ethical, and systemic factors, AI adoption will remain inconsistent and potentially risky.
FIRST PUBLISHED IN: Devdiscourse