AI’s carbon footprint threatens climate goals, researchers propose new sustainability tax

Artificial intelligence is increasingly recognized as both a driver of environmental strain and a tool for combating climate change. A new study by researchers from the University of Padua argues that AI development cannot be divorced from its environmental consequences and must be governed by ethical responsibility as much as by technical regulation.
The research, “AI and Climate: An Ethical Sustainability Framework for Balancing Risks and Responsibilities”, published in AI & Society, examines how AI contributes to climate risks while also offering solutions, and proposes a novel Sustainability Alignment Tax (SAT) to guide decision-making. The study calls for shifting AI governance from a narrow risk-based approach to a broader responsibility-driven model that places sustainability at its core.
How does AI contribute to the climate problem and its solutions?
The paper highlights the double-edged role of AI in the climate crisis. On one hand, advanced systems, particularly large generative models, require enormous amounts of computational power, consuming vast energy resources and producing significant carbon emissions. The competitive race to build ever-larger models amplifies this problem, with little transparency from developers about the true environmental cost.
On the other hand, AI technologies are already making critical contributions to climate action. They enable smart energy grids, predictive maintenance for renewable energy systems, optimized transport networks, precision agriculture, and advanced climate modeling. AI also supports environmental monitoring by tracking ecosystem health, predicting natural disasters, and simulating climate scenarios that improve emergency preparedness.
This duality, the authors stress, means AI can either deepen ecological damage or provide essential tools for resilience. The direction depends on whether governance frameworks incorporate sustainability into the core of AI policy and practice.
What tools exist to measure AI’s environmental impact?
The study reviews existing methodologies designed to track the carbon footprint of AI. Tools such as the Machine Learning Emissions Calculator, carbontracker, experiment-impact-tracker, and Software Carbon Intensity metrics are being used to estimate emissions from model training. These instruments represent progress but remain limited in scope: most focus on the training phase and rarely capture the environmental costs of daily deployment and large-scale usage.
Despite their shortcomings, such tools provide a foundation for greater accountability in the sector. They can help organizations assess trade-offs between system performance and sustainability, and highlight where optimization could reduce energy consumption without undermining functionality.
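The estimation logic behind tools like the Machine Learning Emissions Calculator can be sketched in a few lines: energy use is derived from hardware power draw and runtime, scaled by data-center overhead, and converted to emissions via the carbon intensity of the local grid. The sketch below is illustrative only; the function name and all default figures are our assumptions, not values from the paper or from any specific tool.

```python
# Minimal sketch of the estimation approach used by emissions calculators:
# emissions scale with hardware power draw, runtime, data-center overhead
# (PUE), and grid carbon intensity. All figures are illustrative assumptions.

def training_emissions_kg(
    gpu_power_watts: float,        # average draw per accelerator
    num_gpus: int,                 # accelerators used in parallel
    hours: float,                  # wall-clock training time
    pue: float = 1.5,              # power usage effectiveness of the data center
    grid_kg_per_kwh: float = 0.4,  # carbon intensity of the local grid
) -> float:
    """Estimate CO2-equivalent emissions (kg) for a training run."""
    energy_kwh = gpu_power_watts * num_gpus * hours / 1000.0
    return energy_kwh * pue * grid_kg_per_kwh

# Example: 8 GPUs at 300 W for 72 hours on a 0.4 kg CO2e/kWh grid.
print(round(training_emissions_kg(300, 8, 72), 1))  # 103.7 kg CO2e
```

Even this crude model makes the paper's point concrete: the same training run can have very different footprints depending on where and how efficiently it is executed, which is exactly the information that, the authors note, is rarely disclosed.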
The authors point out that industry leaders often underreport environmental impacts or treat them as secondary concerns, focusing instead on accuracy and speed. This imbalance, they argue, creates a governance gap that threatens to undermine both environmental commitments and public trust.
What is the Sustainability Alignment Tax and why does it matter?
The study proposes the Sustainability Alignment Tax (SAT), a conceptual framework to balance AI’s benefits with its socio-environmental costs. Adapted from the idea of an “alignment tax” in AI safety research, the SAT incorporates sustainability into decision-making, effectively requiring developers, companies, and policymakers to account for ecological impacts when pursuing technical gains.
The SAT asks a fundamental question: is it justifiable to prioritize marginal improvements in AI model performance if they result in disproportionate harm to the environment? By framing this as a “tax,” the model encourages decision-makers to treat sustainability trade-offs as costs that must be factored into the full lifecycle of AI technologies.
The framework offers practical guidance. In some cases, slightly reduced accuracy or increased development costs may be ethically preferable to environmentally harmful alternatives. The SAT therefore provides both an ethical compass and a practical tool for aligning AI innovation with sustainability goals.
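The paper presents the SAT conceptually rather than as a formula. As a toy illustration only (the weighting scheme below is our assumption, not the authors'), the trade-off can be made concrete by penalizing a model's performance score with a cost proportional to its emissions:

```python
# Toy illustration of a sustainability-adjusted comparison between model
# candidates. The scoring function and weights are hypothetical, for
# exposition only; the paper defines the SAT conceptually, not as a formula.

def sat_adjusted_score(accuracy: float, emissions_kg: float,
                       tax_per_tonne: float = 0.05) -> float:
    """Accuracy minus a 'sustainability tax' proportional to emissions."""
    return accuracy - tax_per_tonne * (emissions_kg / 1000.0)

# A marginally more accurate model with far higher emissions can lose
# once the environmental cost is priced in.
small = sat_adjusted_score(accuracy=0.91, emissions_kg=200)
large = sat_adjusted_score(accuracy=0.92, emissions_kg=5000)
print(small > large)  # True: the smaller model wins under this weighting
```

Under any such scheme, the choice of weight is itself an ethical judgment, which is why the authors frame the SAT as a governance question rather than a purely technical optimization.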
The authors argue that such responsibility-based governance is essential to complement existing risk-based approaches like those in the EU AI Act. While risk-based frameworks primarily categorize AI systems by their potential to cause harm, the SAT pushes further, embedding accountability for environmental and social impacts into every stage of AI development and deployment.
From risk management to responsibility governance
The authors state that managing AI’s environmental impact cannot be left solely to compliance measures. The responsibility-based approach they advocate requires developers, companies, and governments to actively assume accountability for the consequences of AI use. This involves integrating sustainability into design principles, regulatory policies, and corporate strategies.
The framework aligns with international environmental law and broader sustainability agendas, aiming to ensure that AI contributes positively to climate goals rather than undermining them. The authors stress that without such shifts, AI could become a paradoxical technology, advertised as a tool for sustainability while simultaneously driving environmental harm.
To sum up, AI must be developed and governed in a way that acknowledges its climate impact, maximizes its potential for positive change, and holds actors responsible for its costs. The Sustainability Alignment Tax represents a step toward embedding this balance, reframing AI not just as a technological challenge but as a matter of ethical responsibility and global sustainability.
A roadmap for aligning AI with climate goals
The research provides a roadmap for aligning AI with environmental and ethical responsibilities. Policymakers are urged to adopt frameworks that go beyond risk classification to include sustainability metrics. Developers should integrate emission tracking and optimization into the design and deployment of models. Companies must evaluate trade-offs transparently, ensuring that the pursuit of competitive advantage does not come at the expense of ecological stability.
By embedding sustainability into the fabric of AI governance, the study argues, it is possible to harness artificial intelligence as a driver of positive environmental change. Without this shift, however, AI risks becoming another accelerant in the climate crisis.
FIRST PUBLISHED IN: Devdiscourse