Why do financial institutions struggle to implement ethical AI?

The integration of artificial intelligence into financial services continues to reshape the industry, but it also raises pressing questions about ethics, accountability, and sustainability. In a newly published study, researchers shed light on the challenges financial institutions face when attempting to operationalize Responsible AI (RAI).
Published in AI (2025, 6, 169), the paper titled "Scratching the Surface of Responsible AI in Financial Services: A Qualitative Study on Non-Technical Challenges and the Role of Corporate Digital Responsibility" provides an in-depth analysis of the non-technical barriers preventing organizations from embedding ethical practices into their AI systems. The research also underscores how Corporate Digital Responsibility (CDR) can act as a bridge to close the gap between ethical principles and practical deployment.
Why are financial institutions struggling with responsible AI?
The study reveals that despite industry-wide recognition of the importance of ethical AI, the journey from theory to practice is fraught with obstacles. Through 15 semi-structured interviews with AI and governance experts across European financial institutions, the authors identified nine critical non-technical barriers.
Among these challenges, ambiguity in accountability emerged as a key concern, with organizations often unsure who should bear responsibility when AI systems fail or produce biased outcomes. Another major issue lies in unintended consequences, as many institutions are unprepared to anticipate and manage the ripple effects of AI adoption. Furthermore, the persistence of outdated business models and resistance to change hinder efforts to embed fairness, inclusivity, and transparency in AI practices.
The findings also point to tensions between innovation and governance, with companies struggling to strike a balance between rapid technological advancement and robust ethical oversight. Skills shortages, cultural inertia, and a lack of adequate training further complicate the deployment of Responsible AI frameworks. These factors, combined with operationalization challenges and budget constraints, create a complex landscape for organizations striving to align AI with ethical and sustainable practices.
How can corporate digital responsibility bridge the gap?
The authors argue that Corporate Digital Responsibility offers a structured pathway to integrate ethical principles into everyday AI governance. CDR, they note, is more than just a set of guidelines; it is a cultural shift that embeds human-centric values, environmental considerations, and transparency into an organization's digital strategy.
The research highlights that organizations embracing CDR frameworks are better positioned to navigate stakeholder trade-offs, mitigate sustainability risks, and build trust among customers and regulators. By aligning AI initiatives with broader corporate responsibility agendas, companies can move beyond compliance-driven approaches and foster long-term resilience.
In particular, the study emphasizes the importance of embedding CDR at the strategic level, ensuring that ethical considerations are not treated as afterthoughts but as integral components of innovation. This integration is crucial for mitigating unintended consequences and ensuring AI deployment aligns with societal expectations.
What does the future hold for responsible AI in finance?
Achieving Responsible AI in financial services requires more than technical fixes; it demands cultural transformation, cross-functional collaboration, and a rethinking of governance models, the study asserts.
Looking ahead, the study calls for stronger regulatory frameworks, clearer accountability mechanisms, and enhanced investment in skills development to close existing gaps. Financial institutions are urged to view Responsible AI not merely as a compliance issue, but as a strategic imperative that influences reputation, trust, and long-term sustainability.
FIRST PUBLISHED IN: Devdiscourse