Legal gaps, cyberattacks undermine public confidence in AI-based crypto regulation

CO-EDP, VisionRI | Updated: 19-05-2025 09:17 IST | Created: 19-05-2025 09:17 IST
Country: Bahrain

Artificial intelligence (AI) is emerging as a crucial tool in Bahrain’s strategy to combat financial fraud and terrorist financing through the tracking of cryptocurrency transactions. However, public trust remains uneven, challenged by legal, technological, and privacy concerns. These are the key findings of a new empirical study titled “Building Public Trust in Bahrain: Leveraging Artificial Intelligence to Combat Financial Fraud and Terrorist Financing Through Cryptocurrency Tracking,” published in the May 2025 issue of Social Sciences.

The study, conducted via a nationwide survey of 403 residents in Bahrain, provides one of the first in-depth analyses of public sentiment toward AI-driven cryptocurrency surveillance. As global interest in RegTech and anti-money laundering (AML) intensifies, the Bahraini case highlights the delicate balance between digital innovation and the need for public legitimacy in financial governance.

How do Bahrainis view the role of AI in fighting financial crime?

Findings from the survey reveal generally favorable perceptions of AI’s ability to detect and manage risks related to financial fraud and money laundering. Nearly 59% of respondents indicated strong confidence in AI’s use for identifying financial fraud, while 48% viewed AI as effective in addressing money laundering. However, support dipped to 42% when it came to terrorist financing, indicating lingering doubts about AI’s ability to handle more opaque and sensitive threat domains.

Respondents expressed the most confidence in AI’s ability to identify suspicious activity in virtual currency transactions. Techniques such as IP address tracking and document auditing garnered approval from over 41% of participants. These findings suggest broad recognition of AI’s technical potential to flag fraudulent behavior, particularly when integrated into transaction-monitoring systems.
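To make the idea concrete, the following minimal sketch shows the kind of rule-based flagging logic such transaction-monitoring systems typically layer under their AI components. It is illustrative only: the thresholds, field names, and watchlist are invented for demonstration and are not drawn from the study.

```python
# Illustrative sketch of rule-based transaction flagging.
# All thresholds, field names, and the IP watchlist below are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount_usd: float
    sender_ip: str
    kyc_verified: bool

FLAGGED_IPS = {"203.0.113.7"}        # hypothetical watchlist (TEST-NET address)
LARGE_TX_THRESHOLD_USD = 10_000.0    # hypothetical reporting threshold

def flag_suspicious(tx: Transaction) -> list[str]:
    """Return the reasons a transaction should be routed to human review."""
    reasons = []
    if tx.amount_usd >= LARGE_TX_THRESHOLD_USD:
        reasons.append("large-amount")
    if tx.sender_ip in FLAGGED_IPS:
        reasons.append("watchlisted-ip")
    if not tx.kyc_verified:
        reasons.append("unverified-identity")
    return reasons

if __name__ == "__main__":
    tx = Transaction("tx-001", 25_000.0, "203.0.113.7", kyc_verified=False)
    # Prints: tx-001 ['large-amount', 'watchlisted-ip', 'unverified-identity']
    print(tx.tx_id, flag_suspicious(tx))
```

In production systems, rules like these usually act as a first filter, with machine-learned risk scoring applied on top; the survey's questions about IP tracking and document auditing map onto this filtering layer.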

Yet, when it came to enforcement functions, such as freezing assets linked to terrorism or crime, respondents were notably more cautious. Only 33% believed AI should play a role in freezing illicit funds. This reflects a hesitancy to delegate punitive authority to algorithmic systems, particularly in high-stakes domains like counterterrorism.

The study also indicates support for AI as a tool for enhancing institutional transparency. Nearly half of the participants acknowledged AI’s potential to reduce auditing workloads, while 27% cited its utility in applying transparency standards to state and financial management systems.

What are the barriers to public trust in AI-based cryptocurrency tracking?

Despite widespread acknowledgment of AI’s capabilities, numerous barriers hinder its broader acceptance. Chief among these are concerns about transparency, privacy, and the complexity of AI decision-making, often referred to as the "black box" problem. More than half of respondents (53.6%) said that algorithmic complexity significantly affects trust, and 59% identified black-box AI as a major risk in cryptocurrency surveillance.

Legal infrastructure gaps were flagged by 42% of respondents as a leading challenge. These include ambiguities around how AI-generated data is used, interpreted, and enforced in court or by financial authorities. Additionally, 30.5% highlighted difficulties in identifying cryptocurrency users as a key concern, exacerbated by decentralized platforms and pseudonymity in blockchain transactions.

Cyberattacks present another major obstacle. The most cited threats included hacking (45%), denial-of-service attacks (32%), and algorithmic manipulation (30%). These concerns reflect fears that AI systems themselves could be compromised, reducing both their reliability and the public’s willingness to support their expansion in financial regulation.

Meanwhile, data privacy remains a polarizing issue. Although only 19% of respondents believed AI directly conflicts with privacy rights, a much larger 39% thought the conflict exists “to some extent,” while 18% were unsure. These views indicate the need for clearer policies and public education on how data is handled and protected within AI ecosystems.

How can Bahrain strengthen public trust in AI for financial oversight?

To close the trust gap, the study proposes a multi-pronged approach centered on regulatory reform, infrastructure development, and public engagement. A key insight is that public confidence rises when AI is implemented within robust e-governance systems. Nearly 49% of respondents agreed that AI-powered e-governance would be “very effective” in preventing terrorist financing via cryptocurrencies, a figure that underscores the importance of institutional accountability in deploying AI.

Another critical factor is technological sovereignty. A substantial 60.7% of participants said that Bahrain’s limited technological capabilities hinder effective AI adoption. Respondents pointed to weak infrastructure (57.1%), a shortage of local expertise (54.8%), and dependence on foreign tech providers (44%) as major barriers. These findings suggest that Bahrain must invest in developing domestic AI talent and infrastructure to reduce reliance on external systems and boost public confidence.

Finally, the research stresses the need for explainable AI. Participants called for mechanisms that allow both authorities and the public to understand how AI systems reach decisions. Enhancing explainability would not only mitigate legal risks but also address public apprehensions around fairness, accountability, and bias.
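As a rough illustration of what such explainability can look like in practice, the sketch below scores a transaction with a simple linear risk model and reports each feature’s additive contribution, so a reviewer can see exactly why a flag was raised. The feature names and weights are hypothetical, chosen for illustration rather than taken from the study.

```python
# Minimal sketch of per-decision explainability with a linear risk score.
# Feature names, weights, and the bias term are hypothetical.
WEIGHTS = {"amount_zscore": 1.2, "new_counterparty": 0.8, "mixer_proximity": 1.5}
BIAS = -2.0

def explain(features: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    """Score a transaction and return each feature's additive contribution,
    sorted by magnitude, so the decision can be inspected and contested."""
    contributions = [(name, WEIGHTS[name] * value) for name, value in features.items()]
    score = BIAS + sum(c for _, c in contributions)
    return score, sorted(contributions, key=lambda kv: -abs(kv[1]))

score, why = explain({"amount_zscore": 2.1, "new_counterparty": 1.0, "mixer_proximity": 0.9})
print(f"risk score {score:.2f}")      # risk score 2.67
for name, contrib in why:
    print(f"  {name}: {contrib:+.2f}")
```

Additive attributions of this kind are one common route to the explainability the study calls for: because every contribution is visible, both regulators and affected users can audit which factors drove a decision.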

Importantly, the study cautions against overreliance on AI for regulatory enforcement without addressing these foundational issues. Without transparent governance and public consensus, even the most advanced technologies may fail to deliver on their promises.

First published in: Devdiscourse