Biosignals like EDA and PPG could power the next generation of fake news detection

A novel study published on arXiv under the title “Truth and Trust: Fake News Detection via Biosignals” offers compelling evidence that physiological data, specifically electrodermal activity (EDA) and photoplethysmography (PPG), can be used to detect misinformation and assess the interplay between user belief and the objective truth of information.
Conducted by researchers from The Australian National University, Griffith University, and Curtin University, the research pioneers a human-centered, multimodal approach to misinformation detection, suggesting that how people physically react to content can serve as a marker for its veracity.
Can physiological signals accurately detect truth and falsehood?
The study centers on two primary classification tasks: determining whether physiological responses can distinguish objectively true from false statements, and whether they can model the more complex interaction of subjective belief and factual accuracy.
In a controlled laboratory setting, 28 participants evaluated climate-related claims while wearing non-invasive EmotiBit sensors that captured EDA and PPG data. Each claim was labeled both by its factual accuracy and by the participant’s belief response. This dual labeling enabled two classification objectives: binary veracity classification and a novel four-class belief-veracity classification.
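As a sketch of how such dual labels can be combined into a single four-class target (the class names below are illustrative, not the authors' exact scheme):

```python
# Sketch: combining a claim's ground-truth veracity with the participant's
# belief rating into a single four-class label. Label names are illustrative,
# not taken from the paper.

def belief_veracity_class(is_true: bool, believed: bool) -> str:
    """Map (objective truth, subjective belief) to one of four classes."""
    if is_true and believed:
        return "true_believed"
    if is_true and not believed:
        return "true_disbelieved"
    if not is_true and believed:
        return "false_believed"
    return "false_disbelieved"  # correctly disbelieving misinformation

# Example: a false claim the participant rated as untrue
print(belief_veracity_class(is_true=False, believed=False))  # false_disbelieved
```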
Key findings showed that EDA, which reflects sympathetic nervous system activity and emotional arousal, outperformed PPG in identifying the truth status of claims. With machine learning models, particularly K-Nearest Neighbors (KNN), the system achieved up to 65.73% accuracy in binary truth detection from EDA-derived features, suggesting that the body’s autonomic responses can reveal subtle cognitive judgments related to misinformation.
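A minimal sketch of that kind of pipeline, assuming a precomputed matrix of EDA-derived features and binary truth labels (scikit-learn is used here for illustration, and the placeholder data stands in for the study's actual features):

```python
# Sketch: binary true/false classification from EDA-derived features with KNN.
# X (n_samples x n_features) and y (0 = false claim, 1 = true claim) are assumed
# to be precomputed; feature scaling matters for distance-based models like KNN.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # placeholder EDA feature matrix
y = rng.integers(0, 2, size=200)      # placeholder veracity labels

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```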
However, performance declined significantly when attempting to classify the interaction between belief and truth. The best model for joint classification, KNN on EDA, reached only a 34% macro F1 score, with especially low success in identifying when participants correctly disbelieved false claims. This indicates that physiological responses alone may not provide enough signal to disambiguate such complex cognitive states.
How was the study structured to capture subtle cognitive-affective states?
Participants were exposed to randomized sequences of validated true and false statements about climate science. After a distraction task, they re-evaluated a subset of these claims, rating their truthfulness on a Likert scale. Their physiological responses were continuously recorded, synchronized with precise timestamps to isolate the signal corresponding to each claim assessment.
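Isolating the per-claim segments from a continuous recording might look like the following sketch, where the column names, time base, and window length are assumptions rather than the study's actual format:

```python
# Sketch: slicing a continuously recorded EDA stream into per-claim windows
# using event timestamps. Column names and the 4-second window are assumptions.
import pandas as pd

def extract_windows(signal: pd.DataFrame, events: pd.DataFrame,
                    window_s: float = 4.0) -> dict[int, pd.DataFrame]:
    """Return {claim_id: signal segment} for each claim onset timestamp.

    signal: columns ['t', 'eda'], with t in seconds since recording start.
    events: columns ['claim_id', 'onset'], with onset in the same time base.
    """
    windows = {}
    for _, ev in events.iterrows():
        mask = (signal["t"] >= ev["onset"]) & (signal["t"] < ev["onset"] + window_s)
        windows[int(ev["claim_id"])] = signal.loc[mask].reset_index(drop=True)
    return windows
```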
Researchers extracted handcrafted features from EDA and PPG signals, including statistical, spectral, and non-linear descriptors, then benchmarked four models: KNN, LightGBM, a neural network, and a convolutional neural network (CNN). Each model was evaluated separately on the two tasks. EDA consistently proved more reliable than PPG, particularly for the binary classification of truth versus falsehood.
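The descriptors below are a rough sketch of what such handcrafted EDA features can look like; they are common choices in the physiological-computing literature, not necessarily the exact set used in the paper, and the sampling rate is an assumption:

```python
# Sketch: a few statistical and spectral descriptors for one EDA window.
# Generic examples of handcrafted features, not the paper's exact feature set.
import numpy as np
from scipy.signal import welch
from scipy.stats import skew, kurtosis

def eda_features(x: np.ndarray, fs: float = 15.0) -> dict[str, float]:
    """Compute simple statistical and spectral features for an EDA segment."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 64))
    return {
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        "skew": float(skew(x)),
        "kurtosis": float(kurtosis(x)),
        "slope": float(np.polyfit(np.arange(len(x)), x, 1)[0]),
        "spectral_energy": float(np.sum(psd)),
        "dominant_freq": float(freqs[np.argmax(psd)]),
    }
```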
One striking observation was that more complex models like CNNs underperformed relative to simpler models. CNNs were particularly prone to overfitting and exhibited unstable behavior, such as predicting only the most frequent class while failing to detect minority ones. For instance, CNN models achieved near-zero recall in detecting disbelief in false information, arguably the most critical case for misinformation detection.
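The effect is easy to reproduce in miniature: a degenerate model that predicts only the majority class scores near-zero recall on the minority classes, which drags the macro F1 far below the raw accuracy (the labels and counts below are illustrative, not the study's data):

```python
# Sketch: why predicting only the most frequent class yields near-zero recall
# on minority classes and a low macro F1. Labels and counts are illustrative.
from sklearn.metrics import f1_score, recall_score

y_true = [0]*60 + [1]*20 + [2]*15 + [3]*5   # imbalanced four-class ground truth
y_pred = [0]*100                             # degenerate model: majority class only

print("per-class recall:", recall_score(y_true, y_pred, average=None, zero_division=0))
print("macro F1:", round(f1_score(y_true, y_pred, average="macro", zero_division=0), 3))
```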
This modeling behavior reveals a core challenge: while the nervous system does register some form of “truth signal,” the body’s response is less distinct when belief and truth diverge. Disbelieving misinformation does not consistently yield a detectable physiological pattern, unlike the arousal associated with unexpected or cognitively dissonant truths.
What does this mean for the future of misinformation detection?
The research presents both promise and caution. On one hand, physiological signals, especially EDA, offer a viable tool for content-agnostic misinformation detection, potentially enhancing systems where textual analysis or source metadata is unavailable or unreliable. This opens new avenues in fields such as cognitive monitoring, affective computing, and personalized media filtering.
However, the limited success in the joint belief-veracity classification highlights the difficulty of decoding internal states such as doubt, skepticism, or correction using biosignals alone. These constructs may require integration with additional modalities, such as eye-tracking, facial expression analysis, or linguistic context.
Moreover, the study raises ethical and technical considerations. While non-invasive sensors offer privacy advantages, broader deployment of physiological surveillance must account for user consent, cultural variability, and signal interpretation accuracy. The researchers stress the importance of safe and responsible innovation, advocating for multimodal systems that contextualize physiological cues within a wider framework of cognitive and behavioral data.
FIRST PUBLISHED IN: Devdiscourse