Generative AI tools reshape propaganda, fact-checking and public manipulation

CO-EDP, VisionRI | Updated: 22-07-2025 14:31 IST | Created: 22-07-2025 14:31 IST

Generative artificial intelligence (genAI) is rapidly altering the disinformation landscape, fueling the production of synthetic content even as it equips defenders with new tools to counter it. In a new study, researchers warn that the transformative power of large language models, deepfake generators, and synthetic media tools is outpacing global governance and ethical frameworks.

The study, titled “Mapping the Impact of Generative AI on Disinformation: Insights from a Scoping Review” and published in Publications in 2025, offers a comprehensive review of 64 peer-reviewed papers from 2021 to 2024. It outlines how generative AI technologies have become central to the information disorder ecosystem, shaping the creation, spread, and potential containment of misinformation, fake news, and propaganda across digital platforms.

How is generative AI changing the nature of disinformation?

The research identifies six major domains where generative AI is transforming disinformation dynamics: political propaganda, scientific misinformation, journalism, media literacy, fact-checking, and synthetic content like deepfakes.

In political communication, AI tools are being weaponized to generate persuasive, highly tailored content at scale. Campaign messaging, fake political endorsements, and false news narratives can be created and disseminated within minutes using text-to-image and large language model (LLM) tools. This poses direct threats to electoral integrity, democratic discourse, and public trust in political institutions.

Scientific disinformation is another growing concern. The review flags AI-generated academic articles, data forgeries, and fake citations as major challenges, especially during global crises such as the COVID-19 pandemic. These tactics undermine scientific consensus, confuse the public, and contribute to the erosion of trust in health authorities and research bodies.

Deepfakes, hyper-realistic but fake audio-visual content, emerge as perhaps the most dangerous product of generative AI. Their ability to impersonate public figures, spread fabricated confessions, or manipulate video evidence raises fundamental questions about the evidentiary value of digital media. These tools are increasingly accessible, lowering the barrier for malicious use by state and non-state actors alike.

Despite these risks, the same technologies also show potential to counter disinformation. Generative AI is being used to automate parts of the fact-checking process, such as identifying claim inconsistencies, flagging source mismatches, and streamlining content verification at scale. In this dual capacity, AI acts as both the threat and the shield in the modern information war.
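To make the idea of automated claim verification concrete, the sketch below shows one simple way a verification pipeline can flag source mismatches: claims that have no sufficiently similar passage in the cited sources are escalated to human reviewers. This is a minimal, hypothetical illustration using TF-IDF similarity from scikit-learn, not a method described in the study; the claims, source passages, and threshold are invented for demonstration.

```python
# Minimal sketch of source-mismatch flagging: claims with no sufficiently
# similar passage among the cited sources are routed to human fact-checkers.
# The claims, source passages, and similarity threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_passages = [
    "The health ministry reported 1,200 new cases on Monday.",
    "Vaccination coverage reached 78 percent of adults by June.",
]
claims = [
    "Official figures show 1,200 new cases were reported on Monday.",
    "The ministry admitted the vaccine causes widespread infertility.",
]

# Fit one vocabulary over sources and claims so the vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(source_passages + claims)
source_vecs = matrix[: len(source_passages)]
claim_vecs = matrix[len(source_passages):]

SUPPORT_THRESHOLD = 0.35  # assumed cutoff; in practice tuned on labelled data

for claim, sims in zip(claims, cosine_similarity(claim_vecs, source_vecs)):
    best = sims.max()
    status = "plausibly supported" if best >= SUPPORT_THRESHOLD else "no matching source, escalate"
    print(f"{status:>28} | best similarity {best:.2f} | {claim}")
```

In a real workflow this lexical matching step would sit in front of stronger semantic checks and, ultimately, human judgment; it only narrows the queue, it does not settle the truth of a claim.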

Where do ethical and regulatory gaps persist?

The authors argue that governance is not keeping pace with innovation. While the EU AI Act, the U.S. Blueprint for an AI Bill of Rights, and platform-specific policies mark a step forward, these measures remain reactive and fragmented. The study notes that regulatory frameworks often lag the rapid evolution of generative models, leaving significant gaps in oversight, accountability, and enforcement.

One of the key cross-cutting issues highlighted in the review is the lack of transparency in how generative AI tools operate. Proprietary models developed by major tech firms rarely disclose training data, moderation policies, or bias-mitigation strategies. This opacity fuels misinformation, makes auditability difficult, and reduces public trust in content moderation.

The authors point to the need for harmonized global standards that define what constitutes harmful synthetic content, clarify liability for misuse, and require transparency by design in algorithmic development. They emphasize the importance of building institutional capacity for independent auditing and enforcement, particularly in democracies where electoral integrity is paramount.

There is also concern about ethical ambivalence. Many AI developers and platform providers present ethical commitments as voluntary or self-imposed, without binding accountability mechanisms. As a result, responsibility for mitigating AI-fueled disinformation is inconsistently distributed and often falls disproportionately on under-resourced civil society groups.

What are the opportunities for resilience and reform?

The study identifies several opportunities to build societal resilience against AI-enabled disinformation. Education and media literacy are positioned as long-term strategies. The integration of AI into personalized learning environments can help users develop critical thinking and discernment skills, provided the platforms themselves are transparent and grounded in factual content.

The review also highlights emerging best practices from newsrooms experimenting with AI-assisted journalism. These include tools that detect manipulated visuals, automate headline verification, and assess source credibility in real time. However, the authors caution that automation should not replace editorial judgment, as doing so risks replicating the biases embedded in training data.
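To illustrate what automated headline verification can look like in practice, the sketch below asks an off-the-shelf natural language inference model whether an article's lead actually supports its headline. It is a rough, hypothetical example: the model choice, hypothesis template, and cutoff are assumptions made here for demonstration, not the tools used by the newsrooms discussed in the review.

```python
# Rough sketch of headline verification: does the article body support the
# claim made in the headline? Uses a general-purpose NLI model via the
# Hugging Face zero-shot pipeline; model, template, and cutoff are assumed.
from transformers import pipeline

verifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

article_lead = (
    "The city council voted 7-2 on Tuesday to delay the transit project "
    "by one year while a new cost review is carried out."
)
headline = "Council cancels transit project permanently"

result = verifier(
    article_lead,
    candidate_labels=[headline],
    hypothesis_template="The article supports the headline: {}",
    multi_label=True,  # score the single hypothesis on its own
)
score = result["scores"][0]
verdict = "looks consistent" if score > 0.5 else "flag for editorial review"
print(f"support score {score:.2f} -> {verdict}")
```

Consistent with the authors' caution, a low score here would only surface the headline for an editor; the system does not decide whether the headline is false.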

Fact-checking organizations are already adopting generative AI to triage claims, trace their spread, and identify narrative patterns. With proper oversight, these tools can reduce the verification backlog during high-volume events like elections or public health emergencies. They offer a potential counterweight to the speed and volume of disinformation but require investment, cross-platform data access, and algorithmic explainability to function effectively.
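One small piece of such a triage workflow is grouping near-duplicate claims arriving from different platforms so that each underlying narrative is reviewed once. The sketch below clusters incoming claims with TF-IDF vectors and agglomerative clustering; every detail (the claims, vectorizer settings, and distance threshold) is a made-up example, not the pipeline of any specific fact-checking organization.

```python
# Illustrative sketch of claim triage: near-duplicate claims are grouped so
# fact-checkers review each narrative once instead of claim by claim.
# Claims, vectorizer settings, and the clustering threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

incoming_claims = [
    "Ballots in District 9 were found dumped in a river.",
    "Thousands of District 9 ballots discovered dumped in a river, officials silent.",
    "New study proves 5G towers spread the virus.",
    "Scientists confirm 5G radiation is spreading the virus, report says.",
    "River dumping of District 9 ballots confirmed by anonymous poll worker.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(incoming_claims).toarray()

# Merge claims whose average cosine distance falls below an assumed threshold.
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
)
labels = clusterer.fit_predict(vectors)

for cluster_id in sorted(set(labels)):
    members = [c for c, l in zip(incoming_claims, labels) if l == cluster_id]
    print(f"narrative cluster {cluster_id}: {len(members)} claim(s)")
    for claim in members:
        print(f"  - {claim}")
```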

Furthermore, the study calls for multi-stakeholder collaboration involving governments, tech companies, researchers, and civil society. Tackling disinformation in the generative AI era cannot rest on platform moderation alone. It demands a new social contract that upholds transparency, democratic accountability, and resilience-by-design in digital spaces.

The next phase of AI development will test not just technological ingenuity, but the global community’s ability to align innovation with ethical imperatives. Without swift, coordinated action, the disinformation arms race could become the defining threat to digital trust in the 21st century, the study asserts.

First published in: Devdiscourse