AI in crime reduction: High potential, persistent challenges, and uneven coping across countries

Artificial intelligence is emerging as a key tool for reducing crime, but experts caution that gaps in training, regulation, and infrastructure could hinder its full potential. A new study published in Frontiers in Artificial Intelligence investigates how AI is being used to support crime prevention and social interventions, while also assessing the barriers professionals face in adopting the technology.
The research, titled "Artificial intelligence as a tool to enhance social interventions in reducing crime," draws on survey responses from 170 professionals in Jordan and Saudi Arabia. It evaluates the perceived role of AI in crime reduction, the main challenges that impede its adoption, and how practitioners cope with those challenges.
What role does AI play in crime reduction?
The study finds that professionals consider AI a powerful tool for enhancing security, accelerating crime detection, and supporting law enforcement. Respondents highlighted AI's use in identifying suspects, mapping crime scenes, analyzing smartphone data and call records, performing facial recognition, and predicting crime-prone areas. Many also pointed to AI's role in prevention, noting that predictive models can help allocate resources and anticipate risks before incidents escalate.
The overall rating of AI’s role in crime reduction was high, reflecting a broad consensus that digital technologies are no longer optional but essential in modern policing and social intervention. This view was consistent across both countries studied, underscoring the global relevance of AI in crime prevention.
What challenges limit AI’s effectiveness?
Despite its promise, the study underscores serious obstacles that could blunt AI’s impact in real-world settings. Among the top concerns cited were the absence of specialized AI research centers focused on crime prevention, weak legal accountability for AI-driven decisions, and ongoing risks to data privacy.
Respondents also reported significant skill gaps, with many institutions lacking trained specialists capable of managing advanced AI systems. Limited digital training opportunities were seen as a major barrier, alongside broader issues of insufficient infrastructure and resources.
The challenge of secure data sharing was also emphasized. Without standardized, secure repositories and frameworks for exchanging information, the potential of AI to track and prevent criminal activity is undermined. These barriers were consistent across both Jordan and Saudi Arabia, suggesting that the issues reflect regional and global trends rather than isolated problems.
How do professionals cope with AI-related challenges?
Despite these persistent challenges, the study found that professionals report strong coping strategies when dealing with AI-related barriers. Many adopt problem-solving approaches to overcome technological issues, pursue training opportunities, and seek out specialized literature to expand their knowledge. Persistence in learning and adapting to AI tools was highlighted as a common theme, suggesting resilience among practitioners even in the face of structural gaps.
Interestingly, the study revealed a difference between countries: while the challenges were rated similarly in Jordan and Saudi Arabia, Jordanian professionals reported stronger coping abilities than their Saudi counterparts. This indicates that while barriers may be universal, the strategies for overcoming them can vary depending on institutional support, cultural attitudes, and available resources.
Policy recommendations and future directions
The authors stress that if AI is to fulfill its potential in reducing crime, policymakers and institutions must address structural weaknesses. They recommend scaling up training programs to build a larger pool of AI specialists, particularly in law enforcement and social intervention roles. Establishing dedicated research centers would also support innovation and provide targeted solutions for regional security challenges.
Stronger legal and regulatory frameworks are needed to address issues of accountability, privacy, and transparency in AI systems. The study highlights the importance of clear policies to ensure that AI tools are used responsibly, with safeguards against misuse. Secure data-sharing mechanisms must also be developed, allowing institutions to collaborate without compromising sensitive information.
Beyond technical and regulatory reforms, public awareness campaigns are seen as vital. Increasing understanding of AI's role in crime prevention can help build trust and acceptance, ensuring that AI is viewed as a legitimate, regulated tool rather than an opaque black-box system.
FIRST PUBLISHED IN: Devdiscourse