Deepfake risks drive global shift toward responsible AI governance

CO-EDP, VisionRI | Updated: 14-05-2025 09:23 IST | Created: 14-05-2025 09:23 IST
Representative Image. Credit: ChatGPT

In an AI-driven world, ensuring a human-centric, secure, and ethically aligned approach is no longer optional; it is a strategic imperative. A comprehensive study titled “Consolidating the Role of AI in the Economy and Society: Combating the Deepfake Phenomenon Through Strategic and Normative Approaches—The Case of Romania in the EU Context”, published in the journal Economies, examines the EU's evolving legal and institutional frameworks for artificial intelligence (AI), with a focus on deepfake prevention and Romania's national alignment with continental policy efforts.

The research focuses on an urgent policy question: how can AI be developed and deployed to maximize social and economic benefits while curbing ethical and security risks, particularly those posed by synthetic media like deepfakes?

The study outlines how AI is emerging as a transformative force across sectors such as healthcare, public administration, finance, and national security. However, it also presents unprecedented threats to privacy, trust, and democratic stability. The European Union has responded by developing a detailed roadmap for responsible AI innovation, culminating in the adoption of the AI Act, which entered into force on August 1, 2024, with progressive implementation scheduled through 2026.

This regulatory framework classifies AI systems into four risk levels: minimal, limited, high, and unacceptable. It prohibits certain practices entirely, such as real-time biometric surveillance in public spaces and AI-driven behavioral manipulation. High-risk systems are subject to strict obligations, including transparency, documentation, and human oversight, while general-purpose AI (GPAI) models face additional scrutiny depending on their societal impact.

Complementing the AI Act, the AI Pact was launched in November 2023 to encourage voluntary early compliance from developers and institutions. More than 550 organizations have joined the pact to align with EU standards ahead of mandatory enforcement.

How is Romania preparing for the AI era?

Against the backdrop of the EU’s push for coordinated governance, Romania has embarked on its own strategic transformation. The country adopted the National AI Strategy (NAIS) 2024–2027 in July 2024, marking a decisive step toward digital modernization and alignment with the AI Act and AI Pact.

Romania’s NAIS outlines six general objectives and 13 specific goals covering education, infrastructure, governance, and AI adoption. It promotes the development of national datasets, investment in testing and experimentation facilities, and public-private-academic partnerships. It also emphasizes ethical AI principles, human oversight, and equitable access to AI technologies.

An Interministerial Committee, composed of 34 institutions, has been tasked with monitoring the implementation of NAIS. The strategy supports applications of AI in cybersecurity, public health, finance, and education, while also preparing the labor force for a digital economy. A key ambition of the NAIS is to transform Romania into a regional leader in AI innovation and regulation.

However, challenges remain. Romania’s digital infrastructure needs significant upgrades, and there is a shortage of AI specialists. The government recognizes that long-term investment in education, R&D, and digital literacy is crucial to fully realize the benefits of AI deployment.

What is Romania doing about deepfakes?

The research spotlights deepfakes as a central regulatory concern. These AI-generated synthetic media forms, often indistinguishable from real videos or audio, are increasingly weaponized for political disinformation, fraud, and reputational damage. Although Romania has yet to experience widespread incidents, the study warns that proactive measures are urgently needed.

Romania has begun drafting dedicated legislation to regulate and mitigate the risks of deepfakes. A recent bill proposes mandatory labeling of synthetic content with clear on-screen or audio warnings. Failure to comply would result in administrative fines ranging from RON 10,000 to RON 200,000. The law also introduces specific definitions for deepfakes and prescribes monitoring roles for national cybersecurity agencies and media authorities.

In the proposed framework, all audiovisual content generated or altered using AI must carry visible disclaimers such as “This material contains imaginary scenarios”. Dissemination of unlabeled or malicious deepfake content would be treated as an offense, especially when it threatens national security or democratic processes.

The study emphasizes that this legislative effort mirrors initiatives in other EU countries and the United States. It aligns with broader EU mandates under the AI Act, which demands transparency for all synthetic content and places strict boundaries on AI systems used to manipulate public perception.

Balancing innovation and regulation in the AI era

The research concludes that while the potential for AI to drive economic growth and administrative efficiency is immense, the risks of misuse, especially through technologies like deepfakes, require robust legal safeguards and strategic foresight. Romania's national efforts, while still evolving, demonstrate a clear commitment to meeting EU standards for responsible AI governance.

Future recommendations include strengthening Romania’s AI governance infrastructure, expanding public awareness campaigns, and fostering international collaborations for technological and legal harmonization. The study also calls for continuous updates to the regulatory framework to keep pace with rapid technological changes.

In an AI-driven world, ensuring a human-centric, secure, and ethically aligned approach is no longer optional; it is a strategic imperative. Romania's progress, as detailed in the study, provides a valuable model for emerging digital economies across the EU.

FIRST PUBLISHED IN: Devdiscourse