
Is AI governance repeating climate policy’s fatal mistakes?


By Philip Corsano



How private tech companies are capturing constitutional power while regulators sleep 

Summary

AI governance faces a critical inflection point. Drawing parallels with climate governance, this article shows how procedural sophistication can obscure real accountability and allow powerful actors to shape outcomes without democratic oversight. The absence of binding international treaties on AI echoes early failures in climate regulation. Customary international law — which helped fill legal gaps in climate litigation — offers a viable normative framework for governing AI. Key principles such as due diligence, the precautionary approach, and duties to prevent transboundary harm can and should apply to algorithmic systems that shape legal meaning and affect fundamental rights across borders. Without a commitment to these universal norms, AI governance may entrench inequality and undermine the rule of law itself.

The procedural legitimacy cycle 

Both climate and AI governance exhibit a six-stage cycle of procedural legitimation:

1. Clear scientific or technical consensus emerges
2. Policymakers construct elaborate frameworks
3. These procedures allow elite capture under the guise of oversight
4. Costs are externalised onto those least able to resist
5. Courts intervene, invoking treaty and customary international law
6. Private actors shift accountability back to state procedures

In climate governance, scientific consensus led to the Paris Agreement. Yet procedural compliance often obscured the lack of substantive progress. As Boston University’s 2024 research shows, fossil fuel infrastructure remains three times more concentrated in environmental justice communities. The 2024 ITLOS advisory opinion responded by invoking both UNCLOS treaty duties and customary law — including due diligence and precaution — as legally binding even in the absence of effective national enforcement.

AI governance mirrors these stages but accelerates through them. Stage 1 has been met: there is strong consensus on AI’s risks. Stage 2 is embodied in the EU AI Act, with its 113 articles and 180 recitals. But this regulation prioritises internal audits, risk categorisations, and documentation — without confronting the constitutional consequences of machine-based interpretation.

The UK Post Office scandal: A case study in algorithmic constitutionalism

Consider the UK’s Post Office Horizon scandal. Over 1,000 sub-postmasters were wrongfully prosecuted due to shortfalls recorded by a faulty computer system. An AI trained on court records from those cases would conclude that Horizon-based convictions reflected settled legal reasoning. From the model’s perspective, this is statistical truth.

But viewed systemically, the probability that over 1,000 individuals independently chose to steal in the same way is vanishingly small. Human judgment — absent from the original trials — would have recognised this as evidence of systemic error. Yet AI, trained on precedent alone, learns the wrong lesson.
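That base-rate intuition can be made concrete with a rough sketch. The figures below are purely illustrative assumptions (a hypothetical 1% rate of genuine theft and roughly 1,000 prosecutions), not data from the case record:

```python
import math

# Purely illustrative sketch of the base-rate argument.
# base_rate is an arbitrary assumption, NOT a figure from the Horizon case record.
base_rate = 0.01        # assumed chance that any one sub-postmaster genuinely steals
prosecutions = 1000     # approximate number of Horizon-based prosecutions

# If each conviction reflected an independent decision to steal,
# the joint probability would be base_rate ** prosecutions.
log10_joint = prosecutions * math.log10(base_rate)
print(f"joint probability ≈ 10^{log10_joint:.0f}")  # ≈ 10^-2000, effectively zero

# A model trained case by case never confronts this joint improbability:
# it only sees 1,000 individually "settled" convictions.
```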

This is algorithmic constitutionalism in action: the encoding of legal meaning in training data, immune to context, equity, or rebuttal. Systems built to “interpret” law now reify its failures. The risk is not merely bias, but the institutionalisation of error under the authority of technology.

From greenwashing to interpretive washing 

In climate governance, corporate actors engaged in greenwashing: publishing glossy disclosures while failing to cut emissions. In AI, the parallel is interpretive washing — providing transparency reports, bias audits, and model cards that claim neutrality, while reinforcing interpretations that disproportionately harm disadvantaged groups.

This happens because training data comes from prior systems already embedded with inequity. Legal filings, judicial decisions, and administrative procedures are presented as neutral, but encode systemic power dynamics. Transparency alone cannot fix this. It can even legitimise it.

Reclaiming legal sovereignty in the algorithmic era 

This article has argued that the failure of AI governance to anticipate and regulate the constitutional stakes of automated interpretation is repeating — at accelerated speed — the failures of international climate governance. But the comparison is not merely structural or procedural. The most critical lesson drawn from climate litigation, particularly the ITLOS Advisory Opinion, is that substantive justice cannot wait for procedural perfection. When binding treaties lag or are distorted by sovereign discretion and elite capture, the general principles of customary international law remain the last line of defence for vulnerable communities and the legitimacy of the international rule of law.

This point echoes longstanding jurisprudence of the International Court of Justice (ICJ). In Pulp Mills on the River Uruguay (Argentina v. Uruguay) [2010] ICJ Rep 14, the Court reinforced that states have an obligation under customary international law to conduct prior environmental impact assessments where there is a risk of transboundary harm. Though framed in environmental terms, the principle it enshrines — the obligation to assess and prevent foreseeable harm from state-regulated activities — is directly applicable to AI governance.

Likewise, the ICJ in Legality of the Threat or Use of Nuclear Weapons [1996] ICJ Rep 226 emphasised the precautionary principle in conditions of scientific uncertainty involving irreversible harm. Algorithmic interpretation systems, with their opaque logics and systemic reach, constitute such a risk. When applied to fundamental rights, unreviewable AI systems demand a similar standard of caution, restraint, and pre-emptive oversight.

Conclusion: The democratic imperative 

AI governance cannot afford the decades-long delays that characterised climate litigation. The harms are invisible until embedded. The systems become entrenched before they are understood.

Customary international law, long viewed as secondary, now provides the most durable basis for protecting the human right to contest and shape legal meaning. In a world without an AI treaty, it is customary law that reasserts that law must remain knowable, challengeable, and humane. 

Just as climate governance required new coalitions of law, science, and activism, so too must AI governance. This is not a technical arms race — it is a constitutional reckoning. We must: 

• Preserve zones of mandatory human interpretation in all legal and quasi-legal decisions;
• Require equity-based constitutional impact assessments for AI systems;
• Demand procedural rights for communities to contest algorithmic interpretations;
• Use customary law as a global accountability mechanism when national systems fail.

The window to preserve democratic control over legal interpretation is closing fast. But it has not yet closed. What is needed now is more than regulation — it is an alliance of jurists, technologists, civic institutions, and affected communities to reaffirm that legal meaning belongs to society, and that the rule of code must not displace the rule of law.



About the author

Philip Corsano-Leopizzi is a conflict resolution advisor and a qualified barrister with 30+ years of experience in climate, human rights, and corporate governance. A former diplomat in Russia, he has led major initiatives in energy, transport, and finance, and advised on UN SDG compliance with a focus on the Arctic and sustainable development. He specialises in mediating high-stakes disputes through integrated legal, economic, and human rights frameworks, and is committed to building coalitions for a just energy transition.
