AI secrecy laws fuel innovation but deepen digital inequality
A new study warns that while stronger protections for trade secrets are helping drive AI innovation and cross-border investment, they are also reinforcing opacity in decision-making systems, raising serious concerns about digital inequality and accountability.
The study, “Harmonising Trade Secret Protection in AI: Innovation, Opacity and Digital Vulnerability,” published in Laws, examines how international intellectual property regimes are reshaping the governance of artificial intelligence. It claims that global convergence around trade secret protection is simultaneously enabling technological growth and intensifying risks for individuals exposed to opaque algorithmic decisions.
Global harmonisation strengthens AI innovation but entrenches secrecy
The study identifies international harmonisation of trade secret laws as a foundational driver of the modern AI economy. As AI systems are developed, trained, and deployed across multiple jurisdictions, consistent legal protections for proprietary knowledge, such as source code, model architectures, and datasets, are essential for reducing uncertainty and enabling global collaboration.
Frameworks such as the TRIPS Agreement, along with guidance from the World Intellectual Property Organization and digital trade agreements like the USMCA and CPTPP, have established a common baseline for protecting undisclosed information. These mechanisms ensure that companies can safeguard their technological assets while operating across borders, lowering transaction costs and encouraging investment in research and development.
This convergence has made it easier for firms to scale AI solutions internationally. By protecting critical components of AI systems as trade secrets, companies can maintain competitive advantages while participating in global value chains. The study notes that such legal certainty is particularly important in knowledge-intensive sectors, where the economic value of innovation depends heavily on confidentiality.
Yet the harmonisation process is not neutral. By prioritising the protection of proprietary information, it creates a legal environment in which secrecy becomes the default condition. This has significant implications for how AI systems are governed, particularly when they are used in decision-making processes that directly affect individuals.
Opaque AI systems amplify digital vulnerability and limit accountability
While trade secret protections support innovation, the study finds that they also contribute to the growing opacity of AI systems. Many algorithmic models are shielded from scrutiny, making it difficult for regulators, researchers, and affected individuals to understand how decisions are made.
This opacity is not merely a technical issue but a structural one. The study introduces the concept of digital vulnerability to describe how individuals and groups are exposed to harm in environments shaped by opaque and asymmetrical technologies. Unlike traditional forms of vulnerability, which are often tied to stable socioeconomic conditions, digital vulnerability is situational and can affect anyone interacting with AI systems.
The study identifies three key dimensions of this vulnerability:
- Information asymmetry: Those who develop and control AI systems possess far greater knowledge than those affected by them.
- Dependency: Individuals increasingly rely on digital infrastructures to access essential services such as credit, employment, and healthcare.
- Limited contestability: Affected individuals lack effective mechanisms for challenging automated decisions, particularly when the underlying logic is protected as a trade secret.
The consequences are most severe for already disadvantaged groups. Individuals facing economic hardship, limited digital literacy, or systemic discrimination are more likely to be affected by opaque decision-making systems without having the tools or resources to challenge them. In such cases, trade secret protections can unintentionally reinforce existing inequalities by preventing transparency and accountability.
This form of vulnerability is often pathogenic, meaning it is produced or intensified by institutional and legal arrangements. By restricting access to critical information, trade secret regimes can contribute to conditions in which individuals are systematically excluded from meaningful participation in decisions that affect their lives.
Regulatory tensions emerge between secrecy and transparency demands
A key finding of the study is the growing tension between intellectual property protections and emerging demands for AI transparency and accountability. While trade secret regimes emphasise confidentiality, new governance frameworks are increasingly calling for explainability, documentation, and oversight of algorithmic systems.
Regulatory initiatives such as the European Union’s AI Act, the OECD AI Principles, and UNESCO’s recommendations on AI ethics reflect a global shift toward greater transparency. These frameworks recognise that trustworthy AI requires not only technical performance but also the ability to understand, audit, and challenge automated decisions.
However, the coexistence of these frameworks with strong trade secret protections creates a legal conflict. In many cases, requirements for transparency are limited by the need to protect proprietary information, resulting in partial or conditional access rather than full accountability. This tension is particularly evident in high-risk AI systems, where transparency is essential for safeguarding fundamental rights.
The study warns that digital trade agreements, which often prohibit governments from requiring access to source code or algorithms, may further restrict regulatory oversight. While these provisions are designed to protect innovation, they can also limit the ability of states to investigate and address potential harms caused by AI systems.
This dynamic creates a complex governance challenge. Efforts to harmonise intellectual property laws may inadvertently constrain the development of more protective regulatory frameworks, particularly in countries with limited economic or institutional capacity.
Call for vulnerability-sensitive governance to balance innovation and rights
In response to these challenges, the study advocates for a vulnerability-sensitive approach to the harmonisation of trade secret protection. Rather than treating intellectual property as an absolute priority, this approach emphasises the need to balance innovation with the protection of human rights and the reduction of digital inequality.
A key principle is the primacy of fundamental rights in situations where AI systems have significant social impact. This means ensuring that trade secret protections do not prevent access to information necessary for understanding, auditing, or challenging automated decisions.
The study also highlights the importance of creating exceptions to trade secret protections for regulatory oversight, independent research, and judicial processes. Such exceptions would allow authorities and researchers to access critical information under controlled conditions, enabling more effective monitoring and accountability.
Another priority is preserving regulatory space for countries, particularly in the Global South, to adopt stronger protections against AI-related harms. Global harmonisation should not limit the ability of states to address local challenges or protect vulnerable populations.
The research further calls for integrating ethical and legal considerations into the design of AI governance frameworks. This includes developing mechanisms for transparency, accountability, and public participation that reflect the diverse ways in which AI systems impact society.
- FIRST PUBLISHED IN:
- Devdiscourse