Supreme audit institutions must embrace AI or risk obsolescence

As artificial intelligence (AI) accelerates digital transformation across public institutions, Supreme Audit Institutions (SAIs) face urgent pressure to adapt their tools, skills, and mandates. A newly published study in the journal World, titled "Artificial Intelligence and Public Sector Auditing: Challenges and Opportunities for Supreme Audit Institutions", warns that SAIs must swiftly develop AI literacy and infrastructure to remain effective in overseeing algorithmic governance and public finance.
The study, authored by researchers from the University of Granada, identifies both promising AI applications and systemic challenges threatening the accountability of AI-powered public services. Through a comparative, qualitative approach drawing on case studies and official audit frameworks, the paper provides a detailed roadmap for how SAIs can both use and regulate AI to uphold democratic accountability and fiscal transparency in the digital age.
What risks and opportunities does AI pose for public governance?
The research outlines the transformative impact of AI in public administration, calling it a fourth industrial revolution comparable to steam, electricity, and digital computing. Governments worldwide are integrating AI into decision-making, infrastructure maintenance, fraud detection, public safety, and service delivery. The study identifies five key opportunity areas: predictive analytics for policy, personalized citizen services, financial oversight, anti-corruption, and public safety enhancement.
However, these benefits come with significant risks. These include data privacy breaches, algorithmic bias, lack of transparency, and the exclusion of citizens from opaque decision-making processes. Many AI systems operate as "black boxes," leaving even developers unable to explain how certain decisions are made. This opacity raises serious governance concerns, especially in high-stakes areas like criminal justice, procurement, and taxation.
Furthermore, the report notes that AI can unintentionally amplify existing inequalities when trained on historical datasets embedded with social bias. Discriminatory outcomes could result in unequal access to services or unjust surveillance, with disproportionate effects on vulnerable populations.
How can SAIs use AI to strengthen their oversight capabilities?
The study outlines how AI can enhance the core audit functions of SAIs, including fraud detection, risk analysis, and compliance monitoring. SAIs in countries like Brazil, India, the Philippines, the U.S., and Nepal are already implementing AI-driven platforms to automate repetitive tasks, analyze massive datasets, and identify anomalies in public spending.
For instance, Brazil’s Tribunal de Contas da União uses an AI system named Alice to monitor public procurement, while India’s Comptroller and Auditor General has established a centralized data analytics office. Spain’s SAI has automated invoice verification in electoral audits, and Nepal employs optical character recognition for extracting audit data from documents.
The report categorizes useful AI tools based on machine learning (ML) techniques and their application in audit environments:
- Clustering algorithms to group public expenditures for comparative analysis.
- Anomaly detection to flag deviations from budgetary norms.
- Neural networks to process unstructured text and predict risks.
- Decision trees to classify transactions or flag potential fraud.
- Robotic Process Automation (RPA) to perform repetitive validation checks at scale.
The integration of these tools allows SAIs to move from sample-based auditing to full-population audits, boosting both efficiency and accuracy.
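To make the anomaly-detection idea concrete, the sketch below flags public-spending records whose amounts deviate sharply from the norm, using a simple z-score rule. The invoice figures and the threshold are invented for illustration; real SAI platforms such as those cited in the study use far richer models, but the principle of scanning the full population for outliers is the same.

```python
# Minimal sketch of anomaly detection on spending data: flag amounts
# more than `threshold` standard deviations from the mean.
# Data and threshold are illustrative, not drawn from the study.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts whose z-score exceeds `threshold`."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:          # all amounts identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Invoice amounts for one procurement category (hypothetical).
invoices = [1020, 980, 1100, 995, 1050, 9800, 1005]
print(flag_anomalies(invoices))  # the 9800 invoice stands out: [5]
```

Because the rule runs over every record rather than a sample, it mirrors the shift from sample-based to full-population auditing the study describes.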
Are SAIs ready to audit AI systems used in government?
The study emphasizes that while SAIs are beginning to use AI tools internally, they are less prepared to audit the AI systems used by public agencies. Auditing algorithms requires not only technical infrastructure but also legal, ethical, and organizational capabilities. These include understanding coding practices, maintaining cloud infrastructure, and ensuring explainability of algorithmic decisions.
To support this effort, several countries have developed auditing frameworks. The Netherlands published a report on auditing government algorithms, while Finland, Germany, and the UK co-authored a white paper guiding SAIs on evaluating machine learning models. The European Court of Auditors and the UK’s National Audit Office have issued critical reports on AI governance gaps in their respective governments.
Common barriers identified include insufficient technical training, lack of interdisciplinary expertise, inadequate data policies, and poor inter-institutional collaboration. A European survey cited in the study found that most SAIs lack technical expertise to audit AI systems, struggle with data governance, and face legal constraints around algorithm access.
The study recommends several strategic responses:
- Institutional AI strategy: Leadership must prioritize digital transformation, fund AI initiatives, and recruit data scientists.
- Data governance: SAIs need structured policies to ensure data quality, accessibility, and cybersecurity.
- Auditor training: New curricula should incorporate ethical AI, legal frameworks, and technical ML fundamentals.
- Collaboration: SAIs should form alliances with academia, other control bodies, and international peers to build shared knowledge.
- Algorithmic oversight: Develop audit protocols that scrutinize algorithm design, performance, fairness, and explainability.
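As one illustration of the kind of fairness check an algorithmic-oversight protocol might include, the sketch below computes a demographic-parity gap, the difference in approval rates between two groups, over a hypothetical log of automated decisions. The groups, decision log, and tolerance are invented assumptions; the study itself prescribes no specific metric.

```python
# Illustrative audit check on an algorithm's decision log: compare
# approval rates across two demographic groups (demographic parity).
# Groups, decisions, and the tolerance below are hypothetical.

def approval_rate(decisions):
    """Fraction of positive (approved) decisions in a log."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# 1 = benefit approved, 0 = denied (invented decision log).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

gap = parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")
if gap > 0.2:  # illustrative tolerance an auditor might set
    print("flag for manual review")
```

A real audit protocol would pair such a metric with scrutiny of the algorithm's design, training data, and explainability, as the recommendation above outlines.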
FIRST PUBLISHED IN: Devdiscourse