AI for Energy Security: Study Compares LSTM and FFNN in Solar Grid Anomaly Detection
Researchers from Noida International University and Dayananda Sagar University explored deep learning models for anomaly detection in solar power systems, comparing LSTM, FFNN, and Isolation Forest approaches. Their study found that the FFNN outperformed the LSTM in accuracy and recall, though LSTMs still hold promise for future smart grid cybersecurity solutions.

The study “Smart Grid Cybersecurity: Anomaly Detection in Solar Power Systems Using Deep Learning,” a joint effort of researchers from Noida International University and Dayananda Sagar University, delves into the rising security challenges faced by modern electricity networks. As renewable energy, especially solar, is woven deeper into power grids, the promise of sustainability collides with new vulnerabilities. Cyberattacks, equipment malfunctions, and operational disruptions are no longer abstract risks; they pose tangible threats to grid reliability. The researchers argue that artificial intelligence, particularly deep learning, can serve as a powerful guardrail, capable of detecting unusual behavior in real time and keeping the energy supply resilient.
Harnessing Deep Learning for Solar Anomalies
At the center of their investigation is the Long Short-Term Memory (LSTM) neural network, a deep learning model designed for time-series analysis. Solar power data, with its daily and seasonal cycles, provides exactly the type of patterns LSTMs excel at deciphering. The team used a dataset of nearly 60,000 solar and wind production records sourced from Kaggle, though wind values dominated at over 80 percent. Even so, the focus was firmly on solar anomalies, where natural dips in production, caused by nightfall or cloudy weather, must be carefully separated from genuine faults or attacks. Preprocessing played a crucial role in preparing the dataset: missing entries were imputed, outliers were managed using statistical techniques, and features were normalized to a uniform scale. Time-based attributes such as hour, day, and month were engineered to provide context, helping the model distinguish between predictable seasonal fluctuations and suspicious deviations that might jeopardize operations.
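The preprocessing pipeline described above can be sketched roughly as follows. This is an illustrative reconstruction, not the study’s published code: the column names, the interpolation method, and the 3-sigma clipping choice are all assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical schema; the Kaggle dataset's actual column names may differ.
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=8, freq="h"),
    "solar_mw": [0.0, np.nan, 1.2, 3.5, 4.1, 3.8, 3.9, 2.0],
})

# 1. Impute missing entries (here, linear interpolation along the time axis).
df["solar_mw"] = df["solar_mw"].interpolate()

# 2. Manage outliers with a simple statistical clip at mean ± 3 std.
mean, std = df["solar_mw"].mean(), df["solar_mw"].std()
df["solar_mw"] = df["solar_mw"].clip(mean - 3 * std, mean + 3 * std)

# 3. Normalize to a uniform [0, 1] scale (min-max scaling).
lo, hi = df["solar_mw"].min(), df["solar_mw"].max()
df["solar_norm"] = (df["solar_mw"] - lo) / (hi - lo)

# 4. Engineer time-based context features: hour, day, and month.
df["hour"] = df["timestamp"].dt.hour
df["day"] = df["timestamp"].dt.day
df["month"] = df["timestamp"].dt.month
```

With features like `hour` and `month` available, a model can learn that near-zero output at midnight in winter is routine, while the same reading at noon in summer is suspect.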
Building and Training the LSTM Model
The LSTM framework was carefully designed with 50 units and a ReLU activation function to capture complex nonlinear relationships. A dropout layer was added to reduce overfitting by disabling 20 percent of neurons during training. Data were divided into training, validation, and testing subsets, with 70 percent used to train the model. Each sequence spanned 24 hours, mimicking the natural cycle of solar energy production, and the model aimed to predict the subsequent time step. The Adam optimizer and mean squared error were chosen as the training tools, with k-fold cross-validation ensuring reliability across multiple partitions of the dataset. Anomalies were identified when the prediction error exceeded three standard deviations, a stringent threshold meant to filter out noise and focus on significant deviations.
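The 24-hour windowing and the 70-percent training split described above can be illustrated with a small NumPy sketch. The synthetic sine-wave series and the helper function are assumptions for demonstration; the study’s actual implementation is not published.

```python
import numpy as np

def make_sequences(series: np.ndarray, window: int = 24):
    """Slice a 1-D series into (sequence, target) pairs: each 24-step
    window is used to predict the value at the next time step."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i : i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# Stand-in for normalized solar output with a daily cycle.
series = np.sin(np.linspace(0, 20 * np.pi, 1000))

X, y = make_sequences(series, window=24)

# Chronological 70 / 15 / 15 split into train, validation, and test sets.
n = len(X)
i_train, i_val = int(0.70 * n), int(0.85 * n)
X_train, y_train = X[:i_train], y[:i_train]
X_val, y_val = X[i_train:i_val], y[i_train:i_val]
X_test, y_test = X[i_val:], y[i_val:]
```

Each row of `X_train` would then feed the 50-unit LSTM layer, with the Adam optimizer minimizing mean squared error against the corresponding entry of `y_train`.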
The results painted a mixed picture. The LSTM model achieved a mean squared error of 0.846 and a precision of nearly 79 percent, meaning the anomalies it did flag were usually genuine. However, its recall stood at only 33 percent, signaling that many real anomalies slipped through undetected. The imbalance between normal and abnormal data, the strict threshold, and the limited complexity of the architecture all contributed to this shortfall. Yet despite its flaws, the LSTM provided valuable insights and proved practical in simulated scenarios. For example, it detected sudden production drops linked to inverter malfunctions and flagged irregularities in cases of cyberattacks involving false data injections, allowing corrective action before damage spread across the grid.
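The three-standard-deviation rule mentioned earlier can be expressed in a few lines. This is a minimal sketch of one common formulation (error mean plus three error standard deviations); the study does not publish its exact thresholding code, and the synthetic data below is invented for illustration.

```python
import numpy as np

def flag_anomalies(actual: np.ndarray, predicted: np.ndarray, k: float = 3.0):
    """Flag points whose absolute prediction error exceeds the error
    mean by more than k standard deviations (the study used k = 3)."""
    errors = np.abs(actual - predicted)
    threshold = errors.mean() + k * errors.std()
    return errors > threshold

# Synthetic example: steady output with one sudden inverter-like drop.
actual = np.array([5.0] * 29 + [0.2])
predicted = np.full(30, 5.1)  # model expects normal production throughout
mask = flag_anomalies(actual, predicted)
```

The strictness of `k = 3` is a double-edged sword: it suppresses noise-driven false alarms (helping precision) but lets milder genuine anomalies pass unflagged, which is consistent with the low recall the researchers report.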
The Showdown: FFNN vs. LSTM vs. Isolation Forest
To measure effectiveness, the LSTM was compared with a Feedforward Neural Network (FFNN) and a traditional Isolation Forest algorithm. The FFNN emerged as the most impressive contender. It achieved an accuracy of 81.5 percent, precision of 77 percent, recall of nearly 59 percent, and an F1-score of 67 percent, with an area under the ROC curve of 0.83. By contrast, the LSTM trailed with an accuracy of 76 percent, recall of 33 percent, and an F1-score of 47 percent, while Isolation Forest lagged far behind, with accuracy below 68 percent and recall barely crossing 10 percent. Graphs of ROC curves confirmed these trends, underscoring the FFNN’s advantage. The researchers attributed its superior performance to its simpler structure, which handled noisy and variable data more effectively. However, they also noted that LSTMs retain long-term promise, particularly when trained on richer datasets with stronger temporal patterns. Hybrid approaches combining FFNN’s speed with LSTM’s sequential learning ability may ultimately deliver the best of both worlds.
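The reported F1-scores follow directly from the precision and recall figures above, since F1 is their harmonic mean. A quick check against the article’s numbers:

```python
def f1(precision: float, recall: float) -> float:
    """F1-score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported in the comparison, expressed as fractions.
ffnn_f1 = f1(0.77, 0.59)  # close to the reported 67 percent
lstm_f1 = f1(0.79, 0.33)  # close to the reported 47 percent
```

The harmonic mean punishes imbalance: the LSTM’s respectable 79 percent precision cannot compensate for its 33 percent recall, which is why its F1 trails the FFNN’s by 20 points.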
Challenges, Regulations, and the Road Ahead
Beyond the models themselves, the study emphasized the practical and regulatory hurdles of real-world deployment. Real-time anomaly detection demands significant computational resources, often requiring expensive hardware or cloud computing solutions. This raises concerns over data security, especially when sensitive grid data is processed off-site. Models must also be retrained regularly, as grid conditions and cyber threats evolve, making the process resource-intensive. Integrating advanced AI systems into legacy grid infrastructure presents additional complications, with middleware and APIs necessary to ensure smooth interoperability. Regulatory frameworks further complicate the picture, as compliance with standards like NERC CIP, IEC 61850, and ISO 27001 is mandatory. Moreover, deep learning’s “black box” reputation could slow adoption unless explainable AI techniques are integrated to boost transparency. The researchers recommend federated learning for preserving data sovereignty and adherence to standardized data formats to ease regulatory approval and integration.
The study concludes with cautious optimism. While FFNN outperformed LSTM across most benchmarks, the unique ability of LSTM networks to capture temporal dynamics keeps them relevant for future refinements. Improvements such as data augmentation to rebalance anomaly representation, threshold optimization for better sensitivity, and advanced architectures like bidirectional LSTMs or attention mechanisms are proposed. The broader message is clear: as renewable integration expands, AI-driven anomaly detection is not just a technical innovation but a necessity for maintaining cybersecurity and operational stability in modern power grids. By enabling the timely detection of faults, cyberattacks, and inefficiencies, such systems can protect both the sustainability and the reliability of tomorrow’s electricity networks.
- FIRST PUBLISHED IN: Devdiscourse