From opportunity to risk: How AI is redefining modern journalism

CO-EDP, VisionRI | Updated: 03-09-2025 10:11 IST | Created: 03-09-2025 10:11 IST
Representative Image. Credit: ChatGPT

The integration of artificial intelligence into newsrooms is heightening concerns over accuracy, ethics, and trust. A new study investigates how journalists perceive the disruptive impact of AI technologies on news production and disinformation.

The research, “Journalists’ Perceptions of Artificial Intelligence and Disinformation Risks,” published in Journalism and Media, surveyed 504 journalists across the Basque Country. It reveals a profession divided between optimism about AI’s potential to streamline workflows and deep unease over its role in amplifying false information, from deepfakes to synthetic news content.

AI and the growing threat of disinformation

The study highlights an overwhelming consensus among journalists that AI is accelerating the spread of disinformation. Nearly nine in ten respondents acknowledged that AI technologies are likely to significantly increase risks associated with false or manipulated content. Deepfakes, synthetic videos, and automatically generated news are seen as the most pressing challenges, capable of undermining audience trust and destabilizing already fragile media ecosystems.

Journalists working in print, radio, television, and digital platforms expressed concern, with the highest levels of alarm reported by those in traditional media sectors. While professionals in digital-native outlets exhibited slightly lower levels of anxiety, the majority still recognized the escalating risks posed by algorithmic content generation and automated misinformation.

Experience also shaped perceptions. Journalists with more than a decade in the field demonstrated a heightened awareness of AI’s risks, with over 90 percent identifying it as a serious threat. In contrast, younger journalists, many of whom are digital natives, tended to be less alarmed, reflecting their greater familiarity with technology and adaptive use of digital tools.

Experience and usage define perspectives

The research uncovers a clear link between familiarity with AI tools and how journalists assess their impact. Those with regular, hands-on experience using AI for tasks such as transcription, data sorting, or content drafting tended to view the technology with more balance, recognizing both its potential and its limitations.

Conversely, professionals with minimal exposure to AI tools were more likely to view them as a source of unchecked risk. This divide underscores the role of digital literacy and practical training in shaping perceptions within newsrooms.

Despite the differences in perspective, the core concerns remain consistent across demographics. Journalists cited difficulty detecting false content as their primary worry, followed by fears of inaccurate or incomplete data emerging from AI platforms. Other cited risks include the potential for criminal misuse of AI, such as fraud, identity theft, and the propagation of harmful biases based on the data used to train these systems.

Interestingly, younger journalists, particularly those in online-first newsrooms, demonstrated a greater openness to integrating AI into their workflows. Their optimism is tempered by a recognition that unchecked automation could erode editorial oversight and credibility if not properly regulated and monitored.

Balancing innovation with ethical safeguards

The study stresses that AI, while transformative, is not inherently negative. Many journalists acknowledged its ability to enhance efficiency, streamline workflows, and even assist in verifying information at scale. However, these benefits carry substantial responsibilities that require industry-wide strategies to mitigate risks.

Key recommendations from the study include the implementation of comprehensive AI training programs for journalists. Building technical and ethical literacy is seen as essential to ensuring that professionals can critically evaluate AI-generated outputs and detect errors or biases before publication.

The findings also highlight the urgent need for strong ethical and regulatory frameworks to govern the use of AI in journalism. Clear guidelines on transparency, data integrity, and editorial accountability are necessary to maintain trust with audiences and safeguard the credibility of journalism in an era of rapid technological change.

The authors argue for increased collaboration between experienced journalists and younger digital-native professionals. By fostering dialogue across generations, newsrooms can leverage diverse perspectives to develop balanced strategies that embrace innovation while maintaining rigorous editorial standards.

Implications for the future of journalism

For media organizations, the findings suggest that adopting AI tools without comprehensive oversight could exacerbate existing challenges related to misinformation, bias, and audience skepticism. Proactive investments in ethical oversight, robust verification systems, and staff training are therefore critical.

Policymakers also have a major role to play. Establishing consistent regulations to monitor AI use, particularly in content generation and distribution, will help safeguard journalistic integrity while fostering innovation in the industry. The research points to the importance of balancing the drive for technological advancement with the imperative to protect the public’s right to accurate, reliable information.

Educational institutions are also encouraged to integrate AI-focused modules into journalism curricula. Equipping future journalists with the skills to critically engage with AI tools will help ensure that the next generation enters the workforce prepared to navigate both the opportunities and risks posed by these technologies.

  • FIRST PUBLISHED IN:
  • Devdiscourse