From resilience to adaptivity: How AI and humans must co-evolve for cyber defense

CO-EDP, VisionRI | Updated: 13-10-2025 09:08 IST | Created: 13-10-2025 09:08 IST

With industries worldwide moving toward full digital integration under Industry 5.0 and 6.0, cybersecurity has become both more complex and more human. A new study argues that traditional resilience strategies are no longer sufficient. Published in Information under the title “From Resilience to Cognitive Adaptivity: Redefining Human–AI Cybersecurity for Hard-to-Abate Industries in the Industry 5.0–6.0 Transition”, the paper introduces cognitive adaptivity as a next-generation model for managing cyber threats in high-risk, sustainability-driven sectors.

The authors focus on hard-to-abate industries: sectors that are essential to economic infrastructure but face high energy demands and complex supply chains, such as ceramics, steel, and heavy manufacturing. Their findings suggest that cybersecurity must evolve beyond reaction and recovery toward anticipation, learning, and co-evolution between humans and AI systems.

From cyber resilience to cognitive adaptivity

The study critiques the conventional notion of cyber resilience, which emphasizes recovering after an attack. In an Industry 5.0–6.0 context, where artificial intelligence collaborates directly with human operators, threats evolve too quickly for static defenses. The authors propose cognitive adaptivity, a framework designed to enable real-time learning, adaptive trust, and anticipatory defense.

Unlike resilience, cognitive adaptivity focuses on continuous behavioral learning. It combines machine learning, human cognition, and organizational processes to form a self-improving cybersecurity ecosystem. The framework’s four core dimensions are:

  • Human–AI Trust Dynamics: Building calibrated trust between workers and AI systems to prevent overreliance or skepticism.
  • Behavioral Evolution Mechanisms: Training teams to detect subtle anomalies and adapt response protocols dynamically.
  • Sustainability Constraints Integration: Aligning cybersecurity with environmental and governance metrics under frameworks like the Corporate Sustainability Reporting Directive (CSRD).
  • Systemic Antifragility Development: Encouraging organizations to grow stronger with each cyber event by embedding learning into system design.

Through this model, cybersecurity becomes a living process rather than a static defense posture.
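
To make the four dimensions more concrete, the sketch below shows one way an organization might track them as a simple self-assessment scorecard. The dimension names come from the paper, but the fields, scores, and equal weighting are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AdaptivityScorecard:
    """Illustrative self-assessment against the four cognitive-adaptivity dimensions.
    Scores are assumed to be normalized to 0.0-1.0; the paper does not prescribe a
    scoring scheme."""
    human_ai_trust: float              # calibrated trust between workers and AI systems
    behavioral_evolution: float        # ability to detect anomalies and adapt protocols
    sustainability_integration: float  # alignment with CSRD/ESG reporting metrics
    systemic_antifragility: float      # degree to which incidents feed back into design

    def overall(self) -> float:
        """Equal-weight average; real weightings would be organization-specific."""
        return (self.human_ai_trust
                + self.behavioral_evolution
                + self.sustainability_integration
                + self.systemic_antifragility) / 4.0

# Example with hypothetical scores
scorecard = AdaptivityScorecard(0.7, 0.5, 0.6, 0.4)
print(f"Overall adaptivity score: {scorecard.overall():.2f}")
```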

The ceramics case: Testing adaptivity in a hard-to-abate sector

To test this framework, the authors conducted a detailed case study in the ceramics value chain, chosen for its complex industrial processes and susceptibility to digital disruption. Using a mixed-methods approach that combined interviews, AI-assisted data analysis, and multi-layered cross-validation, the study mapped how Industry 5.0 integration changed both the nature of attacks and organizational responses.

Key findings revealed that while the number of cyber incidents declined in recent years, average costs per incident nearly doubled due to the sophistication of attacks exploiting human–AI collaboration gaps. Attackers increasingly target decision-making interfaces, where automated systems and human oversight intersect, making social engineering and AI manipulation critical vulnerabilities.

The research estimates that implementing cognitive adaptivity could reduce the cost of cyber incidents by up to 60 percent, mainly by improving early detection and preventing escalation. However, the transition demands structural investment in data governance, cross-sector cooperation, and employee retraining.

The ceramics sector also demonstrated how trust calibration, the ability to decide when to rely on AI and when to override it, is essential for maintaining both operational efficiency and safety. Adaptive cybersecurity therefore depends on explainable AI systems, continuous feedback loops, and transparent human oversight.
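
As a rough sketch of what trust calibration could look like in practice, the rule below routes an AI recommendation to a human operator whenever model confidence or explanation quality falls below a threshold. The thresholds, field names, and routing logic are hypothetical illustrations, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    action: str               # e.g. "isolate_host", "block_traffic"
    confidence: float         # model's self-reported confidence, 0.0-1.0
    explanation_score: float  # how well the decision can be explained, 0.0-1.0

def route_decision(rec: AIRecommendation,
                   confidence_threshold: float = 0.9,
                   explanation_threshold: float = 0.6) -> str:
    """Automate only when the AI is both confident and explainable;
    otherwise escalate to a human operator. Thresholds are illustrative."""
    if (rec.confidence >= confidence_threshold
            and rec.explanation_score >= explanation_threshold):
        return "automate"
    return "escalate_to_human"

# A confident but poorly explained recommendation still goes to a human
print(route_decision(AIRecommendation("isolate_host", 0.95, 0.4)))  # escalate_to_human
```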

The human–AI balance: Building sustainable cyber defense

The study positions cognitive adaptivity as a human-centered cybersecurity paradigm, one that strengthens cooperation between humans and machines instead of isolating them. As AI becomes more autonomous, human oversight shifts from direct control to strategic supervision, requiring new forms of training and decision-support.

The authors emphasize that cognitive adaptivity aligns cybersecurity with sustainability goals. By incorporating environmental, social, and governance (ESG) indicators into digital risk management, industries can report cyber resilience as part of their sustainability performance. This integration helps meet global standards such as the European Green Deal and the UN Sustainable Development Goals (SDGs), which increasingly demand transparent, data-driven governance.

A crucial feature of the model is systemic antifragility, the capacity to improve through exposure to stress. Instead of merely absorbing shocks, adaptive systems analyze them to enhance predictive models. This approach, the study argues, transforms cybersecurity from a cost center into a strategic advantage.
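
The antifragility idea, learning from each shock rather than merely absorbing it, can be sketched as a feedback loop in which every incident adjusts the detection model. The update rule and numbers below are an illustrative pattern, not the study's architecture.

```python
class AdaptiveDefense:
    """Toy feedback loop: each observed incident tightens detection sensitivity
    for that attack vector, so repeated stress improves future detection.
    The update steps are illustrative assumptions."""

    def __init__(self):
        self.sensitivity = {}  # attack vector -> accumulated threshold adjustment

    def handle_incident(self, vector: str, detected_early: bool) -> None:
        # Missed or late detections tighten the threshold more aggressively.
        step = 0.01 if detected_early else 0.05
        self.sensitivity[vector] = self.sensitivity.get(vector, 0.0) + step

    def threshold_for(self, vector: str, baseline: float = 0.8) -> float:
        # Lower threshold means more sensitive detection for vectors seen under stress.
        return max(0.5, baseline - self.sensitivity.get(vector, 0.0))

defense = AdaptiveDefense()
defense.handle_incident("phishing", detected_early=False)
defense.handle_incident("phishing", detected_early=True)
print(defense.threshold_for("phishing"))  # 0.74: more sensitive after repeated exposure
```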

The researchers also propose an Adaptivity Coefficient, a performance metric that measures how fast organizations learn and evolve relative to the speed of new threats. Industries with high adaptivity show faster response cycles, lower risk exposure, and greater long-term resilience.
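
The article does not reproduce the coefficient's formula; one plausible reading, sketched below with hypothetical inputs, is a ratio of organizational learning speed to threat evolution speed, where values above 1.0 would mean the organization adapts faster than the threats it faces.

```python
def adaptivity_coefficient(org_learning_rate: float,
                           threat_evolution_rate: float) -> float:
    """Hypothetical reading of the Adaptivity Coefficient: how quickly an
    organization improves (e.g. detection or response gains per quarter)
    relative to how quickly the threat landscape changes."""
    if threat_evolution_rate <= 0:
        raise ValueError("threat_evolution_rate must be positive")
    return org_learning_rate / threat_evolution_rate

# Hypothetical figures: detection improves 15% per quarter, threats evolve 10% per quarter
print(adaptivity_coefficient(0.15, 0.10))  # 1.5 -> adapting faster than the threat
```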

However, the study warns of overreliance on automation. If AI-driven systems fail or are compromised, human decision-making must remain agile enough to take control. The authors stress the importance of maintaining redundancy, fallback protocols, and situational awareness, ensuring that adaptive systems remain accountable to human ethics and regulatory compliance.

Toward Industry 6.0: Co-evolution, learning, and policy reform

Looking ahead, the study envisions Industry 6.0 as a cognitive ecosystem where machines, humans, and environmental systems co-evolve. In this context, cybersecurity must serve as both a technological and socio-organizational capability.

The transition requires a three-phase strategy:

  • Integration of human–AI learning loops into cybersecurity governance.
  • Cross-sector data sharing and standardization, enabling real-time threat intelligence exchange among industries.
  • Policy harmonization that links cybersecurity, sustainability, and industrial innovation within a unified regulatory framework.

The authors advocate for international cooperation to build adaptive cyber defense standards that are transparent, auditable, and sustainable. They caution that while digital resilience has been a buzzword in Industry 4.0 and 5.0, it is cognitive adaptivity that will define competitiveness in Industry 6.0.

FIRST PUBLISHED IN: Devdiscourse