Responsible AI is still theoretical: Industry fails on fairness and transparency

Artificial intelligence (AI) is transforming nearly every domain, from education and hiring to predictive policing and healthcare diagnostics, but a new global review warns that the ethical guardrails designed to govern its use are failing to evolve at the same pace.
In a comprehensive study titled “A Systematic Review of Responsible Artificial Intelligence Principles and Practice,” published in Applied System Innovation in 2025, researchers uncover a widening chasm between the high-minded principles of “responsible AI” and the fragmented, inconsistent ways those principles are being applied, or ignored, in real-world AI deployment.
Are current RAI frameworks enough to govern modern AI?
The review draws on a corpus of more than 70 peer-reviewed articles published between 2015 and 2023. The researchers find that while the concept of “responsible AI” is widely referenced in policy circles, a universally accepted operational definition remains elusive.
The authors stress that AI deployment has surged ahead of efforts to instill ethical oversight. Technologies driven by machine learning, deep learning, and large language models are now influencing decisions in public health, credit scoring, and academic testing. Yet, frameworks to guide their responsible use are often retrofitted or ignored altogether.
The review identifies seven major principles common to existing RAI discourse:
- Transparency
- Fairness
- Accountability
- Privacy
- Robustness
- Human-centric design
- Sustainability
However, the authors caution that while these principles are widely cited, they are inconsistently interpreted and rarely implemented in unified ways across AI life cycles.
They call this the “principle-to-practice” gap, where organizations formally commit to ethical standards but fail to translate them into technical design, deployment processes, or regulatory oversight.
What are the core barriers to implementing responsible AI?
A key contribution of the study lies in its diagnostic framework for understanding what hampers the integration of RAI. These barriers, the authors argue, fall into three structural gaps:
- Conceptual Gaps – The absence of shared definitions and measurement criteria leads to ambiguity in how AI systems are evaluated for responsibility.
- Practical Gaps – A lack of scalable tools and testing protocols to embed fairness, robustness, or explainability into actual AI pipelines.
- Regulatory Gaps – Fragmented or weak enforcement mechanisms, particularly in countries without strong digital governance institutions.
The authors further observe that even in regions with formal policy guidance, such as the European Union’s AI Act or the OECD’s AI Principles, there is insufficient clarity on how to monitor, audit, or penalize non-compliant AI behavior in commercial products.
The study also raises alarms over the opacity of algorithms in high-stakes settings, where AI tools are used to screen job candidates, predict criminal recidivism, or triage healthcare patients. In these cases, fairness and accountability are often compromised by proprietary “black-box” models.
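The review itself does not prescribe tooling, but the kind of fairness check it finds missing from most pipelines is simple to sketch. The illustrative Python snippet below, with data invented for the example, computes a demographic parity difference, the gap in favourable-decision rates between two groups, for a hypothetical candidate-screening model.

```python
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, group: np.ndarray) -> float:
    """Gap in favourable-decision rates between two groups (0.0 means parity)."""
    rate_a = decisions[group == 0].mean()  # share of group 0 receiving a favourable decision
    rate_b = decisions[group == 1].mean()  # share of group 1 receiving a favourable decision
    return abs(rate_a - rate_b)

# Invented screening decisions for ten candidates (1 = invited to interview)
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # invented protected-attribute labels

print(f"Demographic parity difference: {demographic_parity_difference(decisions, group):.2f}")
# Prints 0.40: group 0 is invited three times as often as group 1 in this toy data.
```

A real audit would track several such metrics over time and across subgroups; the point here is only that the check itself is cheap, while the organizational commitment to run it, and act on it, is what the review finds lacking.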
Where are responsible AI practices working and failing?
Despite the widespread challenges, the review identifies a few sectors that are actively experimenting with operationalizing RAI. In healthcare, for instance, researchers are embedding explainable AI models into diagnostic tools, enabling doctors to review the decision logic behind clinical recommendations. In education, adaptive learning systems are being modified to reduce bias in content delivery based on race, gender, or language.
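The review does not name a specific explainability technique, but the decision-logic reviews described in healthcare are often approximated with feature attributions. The following sketch, which assumes scikit-learn and uses a public toy dataset rather than any clinical tool from the study, ranks features by permutation importance so a reviewer can see which inputs drive a classifier's predictions.

```python
# Illustrative only: permutation importance as a simple form of model explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy? Larger drops mean more influence.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)

for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Attribution scores like these do not open the black box entirely, but they give clinicians and auditors a concrete starting point for the kind of review the study describes.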
However, even in these more progressive sectors, the authors point out that implementation tends to be experimental, isolated, and rarely scaled across organizations or countries. Cross-sectoral knowledge-sharing remains low, and few AI projects undergo longitudinal audits or performance checks once deployed.
The review calls attention to the need for institutional governance structures that oversee AI from development to retirement. The authors advocate for mandatory ethical reviews, algorithmic documentation protocols, and third-party audits, mirroring quality assurance procedures in pharmaceuticals and aviation.
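The algorithmic documentation the authors advocate resembles existing "model card" proposals. The sketch below is purely hypothetical, with invented field names and values, but indicates the kind of structured record a third-party auditor might expect to find alongside a deployed model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """Hypothetical documentation record for an audited AI system (fields are illustrative)."""
    name: str
    version: str
    intended_use: str
    training_data_summary: str
    known_limitations: List[str] = field(default_factory=list)
    fairness_metrics: dict = field(default_factory=dict)  # e.g. {"demographic_parity_diff": 0.04}
    last_audit_date: str = "not yet audited"

record = ModelRecord(
    name="triage-risk-scorer",
    version="1.2.0",
    intended_use="Decision support only; a clinician reviews every recommendation.",
    training_data_summary="Hospital admissions 2018-2022, single region; may not generalise.",
    known_limitations=["Under-represents patients under 18", "Not validated for rare conditions"],
)
print(record)
```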
They also stress the importance of multi-stakeholder involvement. Ethical AI, they argue, cannot be engineered in isolation by computer scientists or tech corporations. Civil society, regulators, human rights advocates, and marginalized communities must be included in shaping what “responsibility” actually looks like.
The article calls for foundational reform. For RAI to be more than just a buzzword, it must be built into the fabric of AI design, not bolted on afterward, the study concludes.
The paper urges the global AI community to move from principles to practice, from aspirational statements to enforceable standards. As AI continues to expand into sensitive and strategic domains, responsibility must be treated not as an afterthought, but as infrastructure. If adopted seriously, the study suggests, responsible AI frameworks could serve as the backbone of digital trust in an increasingly algorithmic world.
First published in: Devdiscourse