Data centers powering AI face scrutiny over rising water and energy use
The rapid expansion of AI infrastructure is intensifying pressure on energy systems, with researchers calling for urgent adoption of sustainable design and operational practices.
A new study, titled “A Systematic Review of Green and Sustainable AI: Taxonomy, Metrics, Challenges, and Open Research Directions,” published in Sustainability, maps the evolving landscape of sustainable AI research. It identifies key technological strategies, environmental trade-offs, and policy gaps shaping the transition toward greener artificial intelligence systems.
The study finds that AI workloads accounted for nearly 47.4 gigawatts of global data center power demand by 2025, with projections indicating a sharp rise in the coming years.
AI’s environmental footprint expands as models scale
AI has entered a phase of rapid expansion, fueled by deep learning models, large datasets, and high-performance computing infrastructure. However, this growth comes with significant environmental consequences. Training modern AI systems requires massive computational resources, translating into high energy consumption, carbon emissions, and water use for cooling systems.
The study reveals that deep learning experiments can consume tens of gigawatt-hours of electricity, generating thousands of tons of carbon dioxide emissions. In some cases, modern AI models require up to 4600 times more power than traditional machine learning techniques, highlighting the scale of the challenge.
The research draws attention to often overlooked environmental factors, including water usage and electronic waste. Data centers rely heavily on water for cooling, while the rapid turnover of hardware contributes to growing e-waste concerns. These factors are rarely integrated into conventional AI performance evaluations, creating a gap in understanding the full environmental impact of AI systems.
The study also identifies a broader structural issue: much of the existing research remains fragmented, focusing on isolated aspects such as algorithm efficiency or hardware optimization without offering a unified framework. This fragmentation has limited the ability to develop comprehensive sustainability strategies that address the entire AI lifecycle.
Without coordinated efforts, the growth of AI could outpace sustainability gains, leading to increased pressure on global energy and resource systems, the study warns.
Four-pillar framework defines path to sustainable AI
To address these challenges, the study proposes a multi-dimensional taxonomy that organizes sustainable AI strategies into four key categories: algorithmic efficiency, hardware and system optimization, lifecycle and data-centric approaches, and operational and policy-level sustainability.
- Algorithmic efficiency: At the model level, this category focuses on reducing computational complexity and energy consumption through techniques such as pruning, quantization, and knowledge distillation. These methods enable AI systems to maintain performance while using fewer resources, offering one of the most immediate pathways to reducing environmental impact.
- Hardware and system-level optimization: This category complements algorithmic efficiency by aligning AI workloads with energy-efficient processors and architectures. Technologies such as edge computing and specialized hardware accelerators can significantly lower energy consumption by reducing reliance on centralized data centers and optimizing processing efficiency.
- Lifecycle and data-centric approaches: These extend the focus beyond training and deployment to the full lifecycle of AI systems, from hardware production to end-of-life disposal. This perspective enables a more comprehensive assessment of environmental impact, incorporating factors such as carbon emissions, resource extraction, and waste generation.
- Operational and policy-level sustainability: At the operational level, policy and governance frameworks play a critical role in ensuring accountability and scalability. The study highlights the importance of standardized reporting, carbon-aware scheduling, and the use of renewable energy in data centers to support sustainable AI deployment.
These four pillars are not independent but interconnected. Improvements in one area can amplify benefits in others, creating a reinforcing system that supports holistic sustainability. For example, more efficient algorithms can enhance the effectiveness of energy-efficient hardware, while lifecycle assessments provide the data needed to inform policy decisions.
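As a concrete illustration of the model-level techniques in the first pillar, the sketch below applies magnitude pruning and symmetric int8 quantization to a small weight matrix. It is a simplified toy example, not the study's methodology; the function names and sparsity setting are our own.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)          # ~half the weights become zero
q, scale = quantize_int8(pruned)                   # 1 byte per weight instead of 4
dequant = q.astype(np.float32) * scale             # approximate reconstruction
```

Both steps trade a small, bounded approximation error (at most half a quantization step per weight) for substantially lower memory traffic and compute, which is where the energy savings come from.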
Metrics gaps and policy challenges hinder progress
The study identifies significant gaps in measurement, reporting, and policy implementation that limit progress toward sustainable AI. One of the most critical challenges is the lack of standardized metrics. While some studies report energy consumption and carbon emissions, only a fraction consider water usage, e-waste, or full lifecycle impacts. This inconsistency makes it difficult to compare results across studies and assess the true environmental cost of AI systems.
The research also highlights a strong bias toward performance metrics such as accuracy and latency, often at the expense of sustainability considerations. This “performance-first” approach has historically driven the development of increasingly large and resource-intensive models, contributing to the environmental challenges now facing the industry.
Another key issue is the dependence of many sustainability metrics on specific hardware and regional conditions. Energy consumption measurements often vary based on the type of hardware used, while carbon emission estimates depend on local energy mixes. These factors complicate efforts to develop universal benchmarks and limit the generalizability of research findings.
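To see why local energy mixes matter, consider a back-of-the-envelope calculation: the same training job emits very different amounts of CO2 depending on where it runs. The grid intensities below are illustrative placeholders, not figures from the study.

```python
# Identical 1 MWh training job run on two hypothetical regional grids
energy_kwh = 1_000
grid_intensity = {"low_carbon_grid": 50, "coal_heavy_grid": 800}  # gCO2/kWh (illustrative)

emissions_kg = {region: energy_kwh * g / 1_000  # grams -> kilograms
                for region, g in grid_intensity.items()}
print(emissions_kg)  # {'low_carbon_grid': 50.0, 'coal_heavy_grid': 800.0}
```

A sixteen-fold gap from siting alone is why a single energy number, without the regional context, tells an incomplete story.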
At the policy level, the study notes that regulatory frameworks are still emerging. While initiatives such as the EU AI Act are beginning to address transparency and accountability, there is a lack of globally standardized requirements for reporting environmental impacts. This gap reduces transparency and hinders the adoption of best practices across the industry.
The study also points to a broader economic and social challenge. Developing sustainable AI technologies often requires significant investment, which may be beyond the reach of smaller organizations. Without targeted incentives and support, there is a risk that sustainability initiatives could widen the gap between large technology companies and smaller players.
Transition from ‘red AI’ to sustainable systems gains momentum
The evolution of AI research reflects a gradual shift from performance-driven development, often referred to as “red AI,” to a more balanced approach that prioritizes sustainability. The study traces this transition through several phases, beginning with a period of rapid growth focused on maximizing accuracy, followed by increasing awareness of environmental impacts, and culminating in the current emphasis on sustainable AI practices.
In recent years, regulatory pressures and industry initiatives have accelerated this shift. The rise of generative AI and large language models has intensified scrutiny of energy consumption and carbon emissions, prompting researchers and organizations to explore more efficient alternatives.
The study highlights the emergence of new technologies and strategies aimed at reducing the environmental footprint of AI. These include carbon-aware scheduling, which aligns computational tasks with periods of low carbon intensity, and water-efficient cooling systems designed to reduce resource consumption in data centers.
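A minimal sketch of carbon-aware scheduling, assuming an hourly carbon-intensity forecast is available: the scheduler simply picks the start time whose window has the lowest total forecast intensity. The forecast values and function name are invented for illustration.

```python
from datetime import datetime, timedelta

def best_start_time(forecast, job_hours):
    """Pick the start hour minimizing total forecast carbon intensity.

    forecast: hourly list of (datetime, gCO2_per_kWh) tuples.
    job_hours: job duration in whole hours.
    """
    best = None
    for i in range(len(forecast) - job_hours + 1):
        total = sum(g for _, g in forecast[i:i + job_hours])
        if best is None or total < best[1]:
            best = (forecast[i][0], total)
    return best[0]

now = datetime(2025, 1, 1)
forecast = [(now + timedelta(hours=h), g)
            for h, g in enumerate([420, 390, 210, 180, 200, 410])]
start = best_start_time(forecast, job_hours=2)  # chooses the 180+200 window
```

Production systems add constraints such as deadlines and capacity, but the core idea, shifting flexible workloads toward low-carbon hours, is exactly this.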
Advances in hardware and algorithm design are enabling more efficient AI systems. Techniques such as edge computing and federated learning reduce the need for centralized processing, lowering energy consumption and improving scalability.
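Federated learning's core aggregation step (federated averaging) can be sketched in a few lines: each client trains locally, and only its parameters, weighted by local dataset size, are combined centrally. The clients and numbers below are toy values for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its data share."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]  # two clients' parameters
sizes = [10, 30]                                        # local dataset sizes
global_model = fed_avg(clients, sizes)                  # -> array([2.5, 3.5])
```

Because raw data never leaves the clients, communication and centralized compute are reduced, which is the source of the energy and scalability gains the article describes.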
Despite these advances, the transition to sustainable AI is still in its early stages. Many of the most promising solutions have yet to be widely adopted, and significant challenges remain in scaling these approaches to meet the growing demand for AI technologies.
Global research surge reflects rising urgency
The study also documents a sharp increase in research activity related to green and sustainable AI, reflecting growing recognition of the issue’s importance. From just one publication in 2016, the number of studies has surged to 20 by 2025, indicating a rapid expansion of interest in the field.
This growth is driven by a combination of factors, including rising energy costs, increasing regulatory pressure, and growing awareness of AI’s environmental impact. The research is global in scope, with contributions from 22 countries, though the United States and China remain the leading contributors.
The findings suggest that sustainability is becoming a key criterion for evaluating AI systems, alongside traditional metrics such as performance and accuracy. This shift is likely to influence future research priorities and industry practices, driving the development of more efficient and environmentally responsible technologies.
The study identifies several promising areas for future research, including energy-constrained learning, distributed AI systems, and improved lifecycle assessment methods. These approaches have the potential to reduce the environmental impact of AI while maintaining its benefits for society.
- FIRST PUBLISHED IN: Devdiscourse

