AI safety regulations may be hijacked by big tech

CO-EDP, VisionRI | Updated: 05-08-2025 10:09 IST | Created: 05-08-2025 10:09 IST

Regulatory frameworks meant to ensure artificial intelligence (AI) safety may inadvertently serve the interests of powerful technology companies, warns a new study by Thomas Metcalf, affiliated with the University of Bonn and Spring Hill College. The research, published in AI & Society, argues that regulatory capture (the manipulation of laws and rules by dominant firms) poses a severe threat to the future of AI governance.

The paper, titled "AI Safety and Regulatory Capture," highlights how AI’s rapid growth, combined with complex technical demands and high entry costs, makes the sector particularly prone to capture. Metcalf outlines how entrenched corporate power could use AI safety regulations to stifle competition, deepen inequality, and compromise public welfare.

Why AI safety regulations are at risk

The author identifies several structural factors that make AI safety regulation especially vulnerable. Unlike most industries, the AI sector is dominated by a few firms with immense financial and technical resources: building large language models and frontier AI systems requires heavy investment in computing infrastructure, specialized expertise, and vast datasets. This concentration, he notes, creates a regulatory environment in which dominant players can shape policy to secure their advantage.

The author further stresses the role of information asymmetry: policymakers rely heavily on industry expertise to understand and regulate AI, giving major corporations disproportionate influence over the regulatory process. Revolving doors, in which personnel move between industry and regulatory bodies, and the moral panic surrounding AI risks likewise give powerful firms opportunities to shape rules in their favor under the guise of public safety.

Rapid innovation in AI adds another layer of complexity. Regulatory frameworks struggle to keep pace with technological developments, creating openings for outdated or ill-targeted laws that benefit established companies while hindering new entrants. Metcalf argues that such dynamics echo patterns seen in other sectors, where regulations were captured to protect oligopolies rather than consumers.

What harms could regulatory capture cause?

The study warns that captured AI regulations could produce widespread distributive and democratic injustices. According to the author, regulatory capture typically transfers economic benefits to a small group of shareholders while imposing hidden costs on consumers, taxpayers, and smaller competitors. In AI, this effect could be amplified because the technology is becoming integral to multiple industries and public services.

Disadvantaged groups are likely to bear the brunt of these policies. Regulations that impose high compliance costs could prevent startups, especially those in low-income countries, from entering the market. Bottlenecks in AI model development and engineer training would further restrict participation to those with existing socioeconomic advantages. Such outcomes, Metcalf warns, would exacerbate global inequalities and slow the spread of AI-enabled benefits to developing economies.

The paper also highlights democratic deficits. If powerful AI firms control the design and implementation of regulations, the public’s ability to influence decisions diminishes. Even well-intentioned efforts to democratize AI governance may backfire, as marginalized communities often lack the resources to participate meaningfully in policy processes dominated by corporate interests.

Beyond these immediate harms, the research identifies two critical meta-problems. Captured regulations may be counterproductive, reducing innovation in AI safety itself and discouraging investments in truly protective measures. Furthermore, they may facilitate ethics washing, where companies project an image of ethical compliance without substantive change. These practices not only weaken regulatory effectiveness but also expose economies and societies to long-term risks.

How can the risk be mitigated?

While the study issues a strong cautionary note, it also examines potential pathways to mitigate regulatory capture in AI safety. Metcalf reviews three major proposals from recent scholarship.

One promising approach is the creation of regulatory markets, as suggested by Hadfield and Clark. In this model, private firms would compete to provide regulatory oversight under government-set goals, theoretically fostering innovation in safety practices. However, Metcalf warns that this arrangement could itself be susceptible to capture, as the close ties between industry and regulators might simply shift to a different level.

A second proposal, from Guha and colleagues, advocates for an open AI ecosystem where researchers share information about model capabilities and risks. Such transparency could allow a broader range of experts to detect vulnerabilities and dangers. Yet the author notes that this strategy faces obstacles, including the opacity of large language models and the reluctance of leading firms to embrace openness without strong incentives.

The third approach, recommended by Wei and collaborators, emphasizes strengthening governance through increased monitoring, transparency, and stakeholder access. While these measures are essential, Metcalf cautions that they are not sufficient on their own. Technological complexity limits the effectiveness of watchdogs and reporting mechanisms, and transparency without enforcement may do little to counter entrenched interests.

First published in: Devdiscourse