Strategic market openness: How AI giants use "free" models to dominate


CO-EDP, VisionRI | Updated: 17-09-2025 22:19 IST | Created: 17-09-2025 22:19 IST

A new working paper has raised sharp questions about how so-called open artificial intelligence (AI) models are reshaping control and governance across the AI value chain.

The research, titled Openness in AI and Downstream Governance: A Global Value Chain Approach and submitted on arXiv, was authored by Christopher Foster of the University of Manchester. It examines how firms’ decisions to release models under “open” labels are less about altruism and more about calculated strategies to consolidate influence in downstream markets.

How open is open?

The paper dissects the recent wave of open-weight releases by leading AI firms such as Meta with its LLaMA series, and major Chinese players including DeepSeek and Qwen. While the publicity surrounding these models focuses on openness and accessibility, the research makes clear that most do not align with traditional definitions of open source. Instead, they operate along a spectrum of access that includes partial openness, controlled conditions, and explicit restrictions.

Importantly, the majority of these models make weights publicly available while withholding the underlying training data, full documentation, or reproducibility details. The lack of transparency means that downstream developers remain dependent on the original firms for upgrades, optimization, and in some cases, permission to retrain or adapt. Licenses often include caps on scale, bans on derivative models, or commercial restrictions. These practices, the study argues, constitute “controlled openness” rather than genuine open source, producing a façade of accessibility while preserving the competitive leverage of frontier firms.

The analysis notes that this controlled access is not accidental. It serves both to increase adoption by small developers, governments, and companies, and to channel those users into ecosystems defined by the lead firms. The promise of openness reduces barriers to entry, but the conditions of use ensure that control remains centralized.

Why are AI firms embracing controlled openness?

The paper introduces the concept of “strategic market openness” to explain why companies spend billions to build frontier models only to release them at no cost. According to Foster, openness can serve as a competitive weapon. By releasing weights, firms can rapidly establish standards, gain developer loyalty, and steer workloads toward their preferred infrastructure. In cloud-dominated markets, the more developers adopt an open-weight model, the more likely they are to purchase compute, storage, and optimization services from the originating company.

This strategy aligns with patterns seen in other digital industries, where firms open selected layers of technology to build ecosystems while monetizing control points like infrastructure, platforms, or premium services. In AI, open-weight models act as Trojan horses. They encourage widespread use and experimentation, but their limitations push serious applications back into the orbit of the original developers.

The study highlights that firms also use openness to compete aggressively in fast-moving markets. By undercutting rivals through free releases, they can destabilize closed business models and force adoption of their own tools as de facto standards. For smaller firms, governments, or academic groups, the allure of free frontier weights is hard to resist. Yet the long-term effect may be greater dependency on a handful of global leaders.

What governance patterns emerge downstream?

Using global value chain analysis, the paper sets out a five-part typology of governance patterns that are beginning to shape downstream AI ecosystems. These patterns demonstrate how openness is being used to reconfigure, rather than dismantle, hierarchies of control.

The first is the hierarchical model, dominated by large incumbents who deliver closed AI products or services directly to clients, maintaining tight control. This is the traditional vendor relationship, with customers locked into specific solutions.

The second is the captive platform model, where access comes via cloud APIs or services. While developers gain functionality, they remain captive to platform rules, facing risks of lock-in and shifting pricing. Openness here is limited to access rather than autonomy.

The third is the open platform model, where infrastructure providers aggregate a wide array of models, including open weights, but still control the key chokepoints of compute and optimization. In this model, openness creates competition among models, but the infrastructure provider retains governance over the terms of use.

The fourth is the controlled openness model, which most closely reflects today’s open-weight releases. Here, firms release models with restrictions, encouraging adoption but maintaining leverage through licensing and technical limitations. This produces widespread diffusion but with hidden dependencies.

The fifth is the modular innovation model, which exists mainly in theory at present. It envisions genuinely open ecosystems where models, datasets, and tools can be freely combined, adapted, and redistributed, allowing decentralized governance. The paper suggests that while this model reflects the ideals of open source, it is rarely realized in AI.

Through this typology, Foster demonstrates that openness is not a neutral act. It is a design choice that shapes power, learning, and dependency across the AI value chain.

Why the findings matter for global AI development

The study carries significant implications for global governance and technological sovereignty. Many policymakers and industry leaders have welcomed open-weight models as pathways to democratize AI, foster local innovation, and reduce dependency on dominant Western or Chinese firms. Yet the analysis warns that without critical examination of licenses, data access, and infrastructure control, the promise of democratization may prove illusory.

For emerging economies, the risks are particularly acute. Reliance on partially open models may provide short-term access but lock governments and firms into ecosystems controlled from abroad. For advanced economies, the tension lies in balancing the benefits of rapid diffusion with the dangers of monopolistic consolidation.

  • FIRST PUBLISHED IN: Devdiscourse