AI can detect deepfakes and fake news while gauging their psychological impact

CO-EDP, VisionRI | Updated: 10-07-2025 18:11 IST | Created: 10-07-2025 18:11 IST
Representative Image. Credit: ChatGPT

A team of data scientists and researchers has developed a robust AI-based framework to detect and analyze the emotional impact of fake news and deepfakes, two of the most prevalent forms of digital misinformation. The study explores cutting-edge applications of machine learning and natural language processing in the fight against disinformation and manipulated media.

Titled "Leveraging data analytics for detection and impact evaluation of fake news and deepfakes in social networks," the research is published in Humanities and Social Sciences Communications. It presents a dual-pronged strategy: detecting fake content using advanced classification models and quantifying its emotional influence on the public. The findings reveal the power of data analytics to provide both technical and societal insights into the growing threat of online misinformation.

How accurate are machine learning models at detecting fake news and deepfakes?

The authors distinguish fake news from deepfakes based on their composition and dissemination strategies. Fake news consists of deliberately misleading or fabricated information, often disguised as credible reporting, while deepfakes are synthetically generated multimedia that imitate real people’s faces and voices with startling realism. Despite their differences, both are designed to manipulate public perception and spread rapidly through online platforms.

To counter these threats, the researchers tested a suite of machine learning models on large datasets of fake and real content. For text-based fake news detection, they used term frequency-inverse document frequency (TF-IDF) and n-gram bag-of-words methods to convert text into feature vectors, and then applied six supervised classifiers: XGBoost, Support Vector Classifier, Random Forest, Gradient Boosting, Logistic Regression, and Decision Tree. Among these, XGBoost emerged as the top performer, achieving nearly perfect accuracy across both vectorization techniques. The study highlights how feature extraction significantly influences classification success, with TF-IDF generally offering superior results compared to n-gram-based representations.
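As a rough illustration of this pipeline (not the paper's published code), the sketch below vectorizes a labeled corpus with TF-IDF and compares the six classifiers; the file name, column names, and hyperparameters are assumptions for demonstration, and LinearSVC stands in for the support vector classifier because it scales well to high-dimensional sparse text.

```python
# Minimal sketch of the text pipeline: TF-IDF features + six supervised
# classifiers. File name, columns, and hyperparameters are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier  # pip install xgboost

df = pd.read_csv("news.csv")  # assumes 'text' and 'label' (0 = real, 1 = fake)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# TF-IDF over unigrams and bigrams; a plain CountVectorizer would give the
# n-gram bag-of-words variant the study compares against.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "SVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "GradientBoosting": GradientBoostingClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
}
for name, model in models.items():
    model.fit(X_train_vec, y_train)
    acc = accuracy_score(y_test, model.predict(X_test_vec))
    print(f"{name}: {acc:.4f}")
```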

For deepfakes, the researchers turned to image classification using convolutional neural networks. They trained two architectures, ResNet50 and DenseNet121, on a large dataset containing real and AI-generated images. ResNet50 consistently outperformed DenseNet121 in both accuracy and training efficiency, reaching nearly 99% accuracy in distinguishing genuine from synthetic facial images. These results underscore the growing capability of neural network models to detect even the most visually convincing manipulations.
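A minimal sketch of how such a binary real-versus-fake classifier can be fine-tuned in PyTorch follows; the dataset layout, epoch count, and learning rate are illustrative assumptions, and DenseNet121 can be swapped in for ResNet50 to reproduce the paper's comparison.

```python
# Sketch: fine-tune a pretrained ResNet50 for real-vs-fake face images.
# Dataset layout and hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumes faces/real/*.jpg and faces/fake/*.jpg
train_set = datasets.ImageFolder("faces", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: real vs. fake

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # illustrative; the study trains far longer
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```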

What emotional toll does misinformation take on public trust?

The study examines how fake news affects readers emotionally. Using sentiment analysis tools such as VADER (Valence Aware Dictionary and sEntiment Reasoner) and NRCLex, the authors analyzed over 44,000 news articles to measure sentiment polarity and emotional content. The goal was to determine whether fake news systematically evokes stronger negative emotions compared to authentic reporting.
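Both tools are open-source Python libraries, so the scoring step can be sketched in a few lines; the sample sentence below is invented for illustration, and in the study this runs over the full 44,000-article corpus.

```python
# Sketch of per-article scoring with VADER and NRCLex.
# pip install vaderSentiment NRCLex
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from nrclex import NRCLex

article = "Officials warn of a terrifying new threat spreading panic nationwide."

# VADER: compound polarity in [-1, 1] plus neg/neu/pos proportions.
vader = SentimentIntensityAnalyzer()
print(vader.polarity_scores(article))

# NRCLex: counts over the eight NRC emotions (fear, anger, sadness,
# disgust, trust, joy, surprise, anticipation) plus positive/negative.
print(NRCLex(article).raw_emotion_scores)
```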

The findings were unequivocal. Fake news articles contained significantly higher levels of negative sentiment, including emotions such as fear, anger, sadness, and disgust. In contrast, real news articles were more likely to elicit trust, joy, and surprise. This emotional disparity suggests that misinformation not only distorts facts but also manipulates emotional responses, contributing to a climate of fear and mistrust.

The study also found that fake news tends to cluster around the negative end of the sentiment spectrum, especially below thresholds that indicate strong emotional negativity. These patterns are critical in understanding how fake content is engineered to exploit human psychology. Emotional manipulation makes misinformation more engaging, more likely to be shared, and ultimately more influential in shaping public opinion.
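As a hedged illustration of this thresholding idea, the snippet below computes the share of articles whose VADER compound score falls below a strong-negativity cutoff; the -0.5 threshold is an assumption for demonstration, not a value reported in the study.

```python
# Compare how often fake vs. real articles fall below a strong-negativity
# threshold on VADER's compound score. The -0.5 cutoff is an assumption.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def share_strongly_negative(texts, threshold=-0.5):
    vader = SentimentIntensityAnalyzer()
    scores = [vader.polarity_scores(t)["compound"] for t in texts]
    return sum(s < threshold for s in scores) / len(scores)

# fake_texts and real_texts would come from the labeled corpus:
# print(share_strongly_negative(fake_texts), share_strongly_negative(real_texts))
```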

This emotional dimension of misinformation adds urgency to the development of detection tools. While accurate classification is essential, evaluating the psychological effects of fake news helps design more holistic interventions to preserve social trust and democratic discourse.

What’s next in the fight against digital deception?

Despite the promising accuracy of detection models, the researchers acknowledge significant limitations and call for multi-faceted strategies moving forward. One critical challenge lies in the separation of detection techniques for fake news and deepfakes. Most current models operate independently on text and image data, even though misinformation frequently combines both formats for greater impact. The study advocates integrating multimodal learning architectures that can analyze and classify mixed-media content in real time.
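One plausible shape for such an architecture is late fusion: encode each modality separately, then classify on the concatenated embeddings. The sketch below is an illustrative assumption, not the study's design; the encoder choices and dimensions are placeholders.

```python
# Late-fusion sketch: separate image and text encoders, joint classifier.
# Encoders and dimensions are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalFakeDetector(nn.Module):
    def __init__(self, text_dim=768, fused_dim=256):
        super().__init__()
        # Image branch: ResNet50 trunk with the classification head removed.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Text branch assumes precomputed embeddings (e.g. from a
        # transformer); both modalities project into a shared space.
        self.image_proj = nn.Linear(2048, fused_dim)
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.classifier = nn.Linear(2 * fused_dim, 2)  # real vs. fake

    def forward(self, image, text_emb):
        img_feat = self.image_encoder(image).flatten(1)  # (B, 2048)
        fused = torch.cat(
            [self.image_proj(img_feat), self.text_proj(text_emb)], dim=1
        )
        return self.classifier(torch.relu(fused))

# Usage with dummy tensors:
model = MultimodalFakeDetector()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```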

The authors also highlight the potential of blockchain technology to trace the origin and dissemination of digital content, thereby improving source credibility. Another emerging solution involves graph neural networks (GNNs), which can analyze the network structure of content sharing to identify coordinated misinformation campaigns and suspicious propagation patterns.
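To make the GNN idea concrete, the sketch below trains a two-layer graph convolutional network (using PyTorch Geometric, one common GNN toolkit) to label accounts in a toy share graph as benign or coordinated; the graph, features, and labels are random placeholders, not real propagation data.

```python
# Two-layer GCN over a toy content-sharing graph.
# pip install torch torch_geometric
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Placeholder graph: 100 accounts, 16-dim behavioral features, random edges.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
y = torch.randint(0, 2, (100,))
data = Data(x=x, edge_index=edge_index, y=y)

class ShareGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 2)  # benign vs. coordinated

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = ShareGCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```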

In addition to technical advancements, the researchers stress the importance of non-technical interventions. Media literacy programs are essential for equipping users with critical thinking skills and a healthy skepticism toward online content. They also recommend closer cooperation among technology companies, academic institutions, and policymakers to establish clear guidelines for platform governance and algorithmic transparency.

To address emotional manipulation directly, the study calls for incorporating affective computing into misinformation detection. Models that can flag not only factual inconsistencies but also emotional manipulation may offer a more comprehensive defense against coordinated disinformation campaigns.
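One simple way to prototype this idea, offered here as an assumption rather than the study's method, is to append NRC emotion counts to the TF-IDF features from the earlier pipeline before classification:

```python
# Sketch: concatenate NRCLex emotion features with TF-IDF features so the
# classifier sees affect signals alongside lexical ones. An illustrative
# recipe, not the paper's implementation.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier
from nrclex import NRCLex

EMOTIONS = ["fear", "anger", "sadness", "disgust",
            "trust", "joy", "surprise", "anticipation"]

def emotion_features(texts):
    """One row of eight NRC emotion counts per document (raw counts;
    they could also be normalized by document length)."""
    rows = [[NRCLex(t).raw_emotion_scores.get(e, 0) for e in EMOTIONS]
            for t in texts]
    return csr_matrix(np.array(rows, dtype=float))

def build_features(texts, vectorizer, fit=False):
    """Stack sparse TF-IDF features with the affect features."""
    tfidf = vectorizer.fit_transform(texts) if fit else vectorizer.transform(texts)
    return hstack([tfidf, emotion_features(texts)])

# texts and labels would come from the labeled corpus used earlier:
# vec = TfidfVectorizer(max_features=50_000)
# clf = XGBClassifier(eval_metric="logloss")
# clf.fit(build_features(texts, vec, fit=True), labels)
```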

FIRST PUBLISHED IN: Devdiscourse