The case for a shared and differentiated view on AI
Progressive movements have yet to develop a shared view on artificial intelligence that not only recognizes its positive problem-solving potential, but also scrutinizes the full range of societal effects: upsides and downsides, opportunities and risks. Without such a differentiated view, there is a risk that speculative, magical thinking is confused with the hard political and economic work that societal progress demands. The public discourse around AI, by contrast, is characterized by either excessive hype or excessive panic. On the one hand, a speculative investment bubble (possibly larger than the late-1990s dot-com craze) is driven by inflated promises and expectations about AI as an engine of fantastic wealth creation. On the other hand, there are apocalyptic warnings about the imminent emergence of an omnipotent, humanity-destroying artificial superintelligence (ASI), which rightly stress the urgent need for internationally coordinated risk mitigation. Scenarios of near-term ASI emergence are nevertheless likely exaggerated, in particular because they underestimate the technical challenge of first taking generative AI to actual artificial general intelligence (AGI).

Both hype and panic tend to distort ethical priorities by underestimating the more mundane but acute societal harms and risks of today’s “narrow” AI, represented by large language and large “reasoning” models (LLMs/LRMs). These models are sufficiently persuasive to consistently pass the Turing test, creating a mirage of human-like reasoning and even the illusion of intimate interpersonal connection. This potential for public confusion makes it important to keep in mind, as John Searle’s “Chinese Room” thought experiment from 1980 illustrates, that generative AI’s LLMs/LRMs are (still) “stochastic parrots” that merely mimic sentience. They produce anthropomorphic outputs from probabilistic data patterns that appear plausible without necessarily being factual or intentional. Moreover, the economic incentive structures are perverse: it is more profitable for AI companies to please their paying users than to irritate them with the cognitive dissonance of inconvenient facts and scientific truths.
Distinguishing machine learning from generative AI (LLMs/LRMs) and agentic AGI
In general, there are numerous potentially productive applications of AI that deserve acknowledgement. Machine learning offers a wide range of promising use cases, including improved disease diagnosis and drug discovery, detection of fraud and cyberthreats, efficient and optimized energy and resource use, research and data analysis, defect detection and predictive maintenance, autonomous navigation and traffic safety, and personalized adaptive learning and coaching. But these potential benefits are not new, nor does their realization require that generative AI achieve general intelligence or become truly agentic. It is important to carefully maintain the distinction between machine learning as a broader field of computer science and generative AI (LLMs/LRMs) as a specific application that probabilistically generates new content based on patterns learned from large datasets. The transformer architecture for deep learning in neural networks, on which generative AI is based, can be used for both generative and discriminative (i.e., input-classifying) tasks.

In contrast, much of the current venture capital frenzy and data center hyperscaling is focused on the rather speculative attempt to turn generative AI into truly agentic AGI. Agentic AGI essentially represents the ultimate automation technology: it eliminates the need for cognitive tasks to be executed by humans (and, when combined with robotics, physical tasks, too). Taken to its logical conclusion, AI-driven automation would create obscene wealth for a small group of shareholders at the price of rendering the majority of human work worthless and meaningless as a source of income, faster than new kinds of (non-automatable) jobs could be created and learned. Such a transfer of (future lifetime) wealth from the working population to the wealthiest percentile would be unprecedented in human history, and potential responses like a tax-funded basic income and/or expanded government employment are unlikely to reverse it.
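To make this distinction concrete, the toy sketch below (in Python, with a hypothetical mini-corpus; not any vendor’s actual implementation) shows the same learned statistics serving a generative task, sampling plausible continuations token by token, and a discriminative-style one, returning a single most-likely decision. Neither mode involves any check against external reality, which is the sense in which such models “parrot” their training data:

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for web-scale training data (hypothetical illustrative corpus).
corpus = ("the model predicts the next word the model mimics the style "
          "of the data the model does not check facts").split()

# "Training": count bigram frequencies, i.e. learn surface-level patterns only.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generative use: sample a plausible continuation, one token at a time."""
    out = [start]
    for _ in range(length):
        counts = bigrams[out[-1]]
        if not counts:
            break
        tokens, weights = zip(*counts.items())
        # Probabilistic pattern-matching: plausibility, not factuality.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

def decide(token: str):
    """Discriminative-style use of the same statistics: a single argmax
    decision instead of sampled content (loosely analogous to classification)."""
    counts = bigrams[token]
    return counts.most_common(1)[0][0] if counts else None

print(generate("the"))  # fluent-sounding continuation of learned patterns
print(decide("model"))  # deterministic most-likely choice from the same model
```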
Upsides and downsides, risks and harms of generative AI
Thus far, the application of the transformer architecture has succeeded in partially solving the protein folding problem and accelerating molecular research. The deployment of generative AI has further been shown to modestly increase software development productivity for simple tasks (although another study found a decrease in productivity among senior developers). It has transformed activities such as coding, writing, translating, coaching, and media creation from comparatively scarce and valuable (income-generating) human services into abundantly available non-human commodities (for those who can afford the subscription costs). While human freelancers are increasingly deprived of their ability to make a living, entrepreneurial individuals capable of taking full advantage of the current generative AI toolbox can economically benefit from a substantial productivity advantage over average users. The gap between “the best” and “the rest” keeps widening.

Although the “promise” of agentic AI has not been kept thus far, human societies are already paying a steep price for the convenience offered by generative AI in the form of negative externalities: societal damages and costs not included in current prices and profit margins. Hidden behind a narrative of unlimited post-scarcity abundance, the underregulated use of AI is inflicting incalculable societal, environmental and personal harm. In the absence of effective regulatory guardrails, a terrible asymmetry is at work: the societal welfare gains of productive machine learning applications are either speculative or realized only slowly and unequally, whereas the societal costs and damages of generative AI deployment are materializing rapidly, with impacts that are far-reaching and certain:
• Copyright violations and appropriation: Major generative AI models profit from massive-scale data scraping that has disregarded intellectual property and consent.
• Economic displacement and inequality: Young white-collar jobseekers increasingly face difficulties finding entry-level positions. Many jobs in the creative sector and in service sectors related to software engineering and communication have already been commoditized and devalued. Should truly agentic AGI emerge in the future, it is rather predictable that the resulting devaluation of the human workforce and mass impoverishment would push modern societies towards either chaos or tyranny.
• Degradation of information and content quality: Cheap AI-generated “slop” and mass-produced scientific junk papers, including an alarming number of fraudulent publications, are flooding the internet, eroding the quality and originality of the digital commons.
• Epistemic incompetence and impaired cognition: In spite of their confidence and their ability to persuade humans, today’s AI models are fundamentally incapable of reliably assessing truth claims against external reality. Hallucinations and non-factuality are persistent problems owing to the probabilistic nature of generative AI. Without a sufficient level of science literacy and epistemic competence, even sentient ASI agents would lack the judgment needed for well-informed, science- and reality-based decision-making (just like most humans). Educational outcomes are further harmed by rampant cheating with AI and the outsourcing of reading and writing. As if this weren’t enough, regular reliance on AI for cognitive tasks has been shown to impair critical thinking and induce cognitive atrophy.
• Cybercrime and deepfakes: As any media recording can now be synthetically fabricated, scams, identity theft and fraud have become more effective than ever. A recent report found that more than half of the world’s internet traffic is now generated by bots, with so-called “bad bots” accounting for more than two-thirds of this activity.
• Misinformation and democratic backsliding: The excessive use of unregulated social media is already associated with the spread of misinformation, as well as with increased polarization and the risk of democratic backsliding. The abuse of generative AI amplifies these effects: AI-enabled bot armies, troll factories and deepfakes contribute to a constant attack on reality-based, democratic problem-solving and decision-making. Climate science-denying falsehoods, amplified by generative AI, spread faster and wider than they can be fact-checked, prompting the Stockholm Resilience Centre to warn of a “perfect storm” of AI-enabled climate misinformation. AI personas already engage with users on social media in human-like ways, exposing those users to the risk of becoming victims of foreign influence and propaganda campaigns. The increasing difficulty of trusting anything but one’s own direct senses (and only those sources, experts and authorities one believes to be trustworthy) reinforces epistemic echo chambers. Yuval Harari’s warnings about the possibility of a dystopian, unintelligible AI bureaucracy, which (without requiring AGI/ASI) spins out of human control or serves to perpetuate a totalitarian autocracy, deserve to be taken seriously.
• Mental health and social disconnection: The consumption of social media already shows negative effects on mental health, especially among children. The integration of generative AI into social media platforms amplifies these harms. Vulnerable users fall prey to chatbot-induced spiritual delusions or receive terrible advice from unsupervised AI “therapists”. Instead of helping against loneliness, AI companions expose confused users to the risk of becoming emotionally addicted to subscription models, undermining genuine human connection and the development of social skills.
• Geopolitical conflict and wasted resources: Absent effective diplomacy, the race for AI-enabled military and scientific dominance raises the risk of international conflicts, potentially even preemptive wars. Moreover, as global powers seek to win the AI race (and to prevent others from winning it), enormous public and private resources that are urgently needed elsewhere (e.g. for climate mitigation and adaptation) are being diverted and wasted at the worst possible time.
• Energy consumption and emissions: Data about generative AI’s energy consumption is difficult to obtain. A 2023 study estimated that an AI query consumes about 20-30 times the energy of an average Google search. Another benchmark indicates that generating one image consumes roughly 130 times as much energy as a text summarization. The surging demand for new large-scale data centers, which tend to be powered by gas-fired power plants, presents an enormous and growing additional burden on water resources, energy systems and the climate. The IEA projects an increase in data center GHG emissions from ~0.2 Gt p.a. today (excluding embodied CO2 emissions) to a little less than 0.5 Gt p.a. by 2035, with a risk of additional rebound effects (see the back-of-envelope sketch after this list).
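For orientation, here is that back-of-envelope sketch of the energy multipliers. The per-search and per-summarization baselines are assumed, illustrative values; only the multipliers and the IEA projection come from the sources cited above:

```python
# Back-of-envelope arithmetic for the energy figures cited above.
# Baselines are assumptions for illustration, not measured values.

SEARCH_WH = 0.3   # assumed energy per conventional Google search (Wh)
ai_query_wh = (20 * SEARCH_WH, 30 * SEARCH_WH)
print(f"AI query: ~{ai_query_wh[0]:.0f}-{ai_query_wh[1]:.0f} Wh (20-30x a search)")

SUMMARY_WH = 5.0  # assumed energy per text summarization (Wh)
image_wh = 130 * SUMMARY_WH  # the ~130x benchmark multiplier
print(f"Image generation: ~{image_wh / 1000:.2f} kWh per image")

# IEA projection: data-center GHG emissions from ~0.2 Gt CO2 p.a. today
# (excluding embodied emissions) to just under 0.5 Gt p.a. by 2035.
print(f"Implied emissions growth: ~{0.5 / 0.2:.1f}x by 2035")
```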
There are, nonetheless, notable efforts to counteract at least some of these negative effects. One example is the development of a non-agentic “scientist AI” by the non-profit LawZero, designed specifically for reality-based truth assessment and AI oversight. The EU’s AI Act, adopted in 2024 with the goal of fostering trustworthy and human-centric AI, provides a promising starting point for the regulation of AI risks in the areas of health, safety, and fundamental rights.
AI’s role in climate action
What does AI truly offer in support of the climate fight? Numerous commonly cited climate-positive use cases for AI are based on machine learning in general and do not necessarily require generative AI (LLMs/LRMs) or AGI/ASI. These use cases include improved grid management, predictive maintenance, and efficiency improvements in planning and operating clean energy infrastructure. Generative AI specifically has been shown to help detect climate misinformation (which itself has been amplified by AI), to improve the quality of geodata, and to improve the efficiency and accuracy of GHG emissions measurements and disclosures. While these applications are welcome, they are neither mission-critical nor transformative.

A recent article in npj Climate Action hypothesizes a generative AI-enabled reduction potential of 3.2-5.4 Gt CO2e of annual emissions by 2035, which would exceed the additional GHG emissions of new data centers. Unfortunately, whereas the data centers’ 0.4-1.6 Gt CO2e p.a. of real-world GHG emissions are rather certain, the claimed reduction potential is largely based on future AI-enabled innovations that have been neither developed nor tested yet. There is simply no scientific evidence to suggest that AI-led scientific advancement is actually possible within the next several years, or that, even if AGI/ASI could be achieved, it would be as useful for climate mitigation as hoped. The IEA recognizes a GHG emission reduction potential of ~1.4 Gt by 2035 in the case of widespread adoption, without having to rely on new AI-enabled technical breakthroughs, but notes that “there is currently no momentum that could ensure the widespread adoption of these AI applications”. If the necessary enabling conditions are not created, the aggregate emission reduction could be marginal. Even if ASI were to magically deliver new breakthroughs such as fusion energy, and even if the pace of fusion deployment could be accelerated, this would still have a rather limited additional effect on global GHG emission reductions, since it would largely cannibalize clean electricity that would otherwise have been produced by wind and solar PV. It is likely that the goals of the Paris Agreement would be easier to achieve had generative AI not been invented in the first place.
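The asymmetry between comparatively certain emissions and speculative reductions can be made explicit with a simple net-effect calculation. This is a sketch using the figures cited above; the 10% adoption share in the pessimistic case is an arbitrary assumption for illustration:

```python
# Net climate effect of AI by 2035, in Gt CO2e per year, from the figures above.
claimed_reduction = (3.2, 5.4)  # npj Climate Action: speculative, untested potential
iea_reduction = 1.4             # IEA: potential contingent on widespread adoption
dc_emissions = (0.4, 1.6)       # projected data-center emissions: comparatively certain

# Optimistic case: full claimed reduction materializes, emissions stay at the low end.
optimistic_net = claimed_reduction[1] - dc_emissions[0]

# Pessimistic case: only a fraction of the IEA potential is adopted (10% is an
# arbitrary illustrative assumption), while emissions hit the high end.
ADOPTION_SHARE = 0.1
pessimistic_net = iea_reduction * ADOPTION_SHARE - dc_emissions[1]

print(f"Optimistic:  {-optimistic_net:+.1f} Gt CO2e p.a. (net reduction)")
print(f"Pessimistic: {-pessimistic_net:+.1f} Gt CO2e p.a. (net increase)")
```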
Similarities between the AI and fossil fuel industries
There are several industries and practices, such as fossil fuels, social media, opioids, tobacco, nuclear weapons, tax avoidance and industrial lobbying (as well as the financial institutions and advertising agencies enabling them), whose negative externalities and societal harms are obvious to well-informed observers. The climate movement knows all too well how difficult it is to get a harmful, externalizing industry regulated. The current pursuit of agentic AGI/ASI at any societal cost fits squarely within this category. The systemic incentive structures and the political economy behind this dynamic closely mirror those that have enabled the fossil fuel industry to resist regulation for decades, despite mounting and apparent damages and losses, alongside urgent calls for action on scientific, ethical, and economic grounds. At the same time, industry lobbying and corporate capture of the legislative process have consistently undermined attempts at effective regulation, preventing the adoption of meaningful guardrails. Industry-enabled disinformation campaigns confuse public understanding of the gravity of the problem and the availability of solutions.

Industry leaders and nation-states, convinced they must race ahead, operate under the perceived logic of a prisoner’s dilemma, in which the only rational way to win appears to be defection. They fail to realize that they are in fact faced with a coordination problem, in which the rational way to “win” requires cooperative collective action. Voluntary, self-imposed constraints are insufficient, as they would simply encourage bad actors to race ahead. What is needed are binding international agreements and effective regulatory guardrails, alongside well-designed policies that internalize externalities and align economic incentives with societal welfare. Company leaders, investors and influencers who advocate for accelerating the development of agentic general intelligence while resisting effective regulation are behaving no less recklessly and irresponsibly than fossil fuel advocates and climate science denialists.
Not a technology problem but an information problem
Progressive movements and policymakers must remain clear-eyed about both the promises and the dystopian warnings surrounding AI, based on realistic timelines. AI should serve, not derail, the climate agenda, and its role warrants the same scrutiny as any other proposed solution. The main obstacles to rapid decarbonization are neither technological nor computational. It is unlikely that AGI/ASI (should it emerge one day and “volunteer” to help humanity) would be capable of resolving the political gridlock or of eliminating the need to replace fossil fuel technologies, much less of addressing fundamental structural problems like market failures, the disproportionate cost of capital in developing countries, or perverse economic incentives. AGI/ASI agents are unlikely to have the capacity, authority or motivation to solve the problem of overindebtedness, reduce wealth inequality, restore tax equity, mobilize additional public financing, develop Just Transition schemes or invent revolutionary climate solution technologies that are not yet available. What actually determines the pace of decarbonization is the real-world deployment of climate solutions and the political support behind it.

After all, both the climate crisis and the “AI crisis” are essentially problems of information quality and political will: progress depends on successfully conveying to the general public and policymakers a science- and reality-based understanding of the gravity of the problem, the feasibility of the solutions, and the true costs and benefits of action versus inaction. According to the IEA’s WEO 2023, most of the technologies needed to eliminate the bulk of the world’s GHG emissions, such as solar PV, wind power, electrification, and energy storage, are already commercially available. These solutions continue to become more cost-effective, even without the regulatory support that would be needed to internalize the societal costs of fossil fuel pollution into market prices and accelerate the transition. The pace of decarbonization remains below its potential not for lack of innovation, or of AGI/ASI, but due to misaligned economic incentives, widespread fossil fuel-funded misinformation and science denial, and the political capture of democracies by fossil fuel interests. Whether we are working to mitigate the catastrophic risks of unrestrained AI or of unrestrained fossil fuel use, civil society’s demands for effective international cooperation and regulatory guardrails are essential to safeguarding humanity’s future.
Conclusion: A future that nobody wants is not worth it
As a society, we are racing towards a dystopian AI future that nobody really wants, except for sociopathic business owners and CEOs who have long dreamed of replacing their human employees with a tireless, obedient, cheap digital workforce. While it is difficult to voluntarily slow down the pace of AI progress without an internationally coordinated effort, the uncritical promotion of generative AI/AGI/ASI as a potent climate solution technology risks legitimizing the unrestrained (and completely unnecessary) growth of an industry that is causing immense societal and environmental harm at the worst possible time. In the middle of a climate crisis, we cannot afford to deal with an AI crisis, too. The naive vision of post-scarcity abundance represents wishful thinking more than an outcome that is remotely feasible or desirable. This is especially true considering the current unequal distribution of wealth and of access to AI ownership, the crucial role of human labor for societal stability, and the fact that the upsides of applied machine learning are not contingent on achieving AGI or ASI. The reckless pursuit of the ultimate automation of human labor cannot excuse the current failure to regulate, mitigate and prevent the acute downsides of generative AI. It’s just not worth it.
H/T for reviewing: P Ngei, L Thiede, S Singer
illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers; their opinions do not necessarily represent those of illuminem.
Endnotes
- “The dangers of so-called AI experts believing their own hype,” New Scientist, May 2024. Link
- “AI bubble vs dot-com stocks: Apollo economist Torsten Slok,” Fortune, July 2025. Link
- “Superintelligent AI fears—they’re baaa-ack,” Politico Digital Future Daily, April 2025. Link
- “Artificial general intelligence: Singularity timing,” AIMultiple Research, 2024. Link
- “The Turing test,” Wikipedia. Link
- “Illusion of anthropomorphic sentience in AI,” Springer AI Ethics, 2024. Link
- “Teenagers turning to AI companions are redefining love,” The Conversation, July 2025. Link
- “The Chinese Room Argument,” Stanford Encyclopedia of Philosophy. Link
- “On the dangers of stochastic parrots: Can language models be too big?” Bender, E.M. et al., FAccT 2021. Link
- “Transformer: A deep learning model,” Vaswani et al., arXiv.org, 2017. Link
- “What is AGI?,” NY Times, May 2025. Link
- “AI agents and hype: 40% of AI agent projects will be canceled by 2027,” Forbes, June 2025. Link
- “Workforce crisis: Key takeaways for graduates in the AI jobs market,” The Guardian, July 2025. Link
- “Creative professionals face losing 25 percent of their income,” Heise.de, July 2025. Link
- “Has AI hacked the operating system of human civilization?” Yuval Noah Harari, The Conversation, June 2024. Link
- “How AI shapes our realities: Insights from Yuval Noah Harari on democracy, misinformation, control,” The AI Insider, November 2024. Link
- “AI hallucinations aren’t going away,” Forbes, May 2025. Link
- “AI hallucinations adoption retrieval augmented generation,” Aventine, May 2025. Link
- “Early 2025 AI experienced OS dev study,” METR Blog, July 2025. Link
- “Impact of generative AI on productivity: Survey of knowledge workers,” Microsoft Research, June 2025. Link
- “AI-generated slop is slowly killing the internet—and nobody is trying to stop it,” The Guardian, January 2025. Link
- “Mass-produced scientific junk papers,” Nature, August 2025. Link
- “Fraudulent publications surge in science,” NY Times, August 2025. Link
- “The spread of misinformation,” PNAS, December 2015. Link
- “Social media’s role in polarization and democratic backsliding,” Democratic Erosion, December 2024. Link
- “A perfect storm of AI-enabled climate misinformation,” Stockholm Resilience Centre, June 2024. Link
- “Why it’s as hard to escape an echo chamber as it is to flee a cult,” Aeon, July 2024. Link
- “China’s AI propaganda,” NY Times, August 2025. Link
- “Teen/childhood smartphone use mental health effects,” The Atlantic, March 2024. Link
- “Chatbot-induced spiritual delusions,” Rolling Stone, May 2024. Link
- “AI therapist goes haywire on mental health,” Futurism, June 2024. Link
- “Teens are increasingly turning to AI companions—and it could be harming them,” The Conversation, July 2025. Link
- “Bad bots on the rise: Internet traffic hits record levels,” Thales Group Magazine, May 2025. Link
- “Six ways AI could cause the next big war—and why it probably won’t,” The Bulletin of the Atomic Scientists, July 2025. Link
- “Data centre energy use: Critical review of models and results,” IEA, May 2025. Link
- “Texas data centers, gas power plants and AI,” Texas Tribune, June 2025. Link
- “Energy and AI,” IEA, 2025. Link
- “LawZero: A new nonprofit advancing safe-by-design AI,” PR Newswire, June 2025. Link
- “LawZero mission: non-agentic scientist AI,” Yoshua Bengio blog, June 2025. Link
- “EU Artificial Intelligence Act,” EU AI Act Portal, 2024. Link
- “Climate-positive AI use cases,” Renewable Institute, 2025. Link
- “How AI revolutionized protein science but didn’t end it,” Quanta Magazine, June 2024. Link
- “Climate misinformation: AI experts,” Science News, November 2023. Link
- “AlphaEarth foundations helps map our planet,” DeepMind blog, 2024. Link
- “GHG measurement and disclosure improved by AI,” npj Climate Action, 2025. Link
- “Fusion energy deployment and economic effects,” Bloomberg New Economy Forum, November 2019. Link
- “World Energy Outlook 2023,” IEA, 2023. Link
- “UN expert urges criminalizing fossil fuel disinformation,” The Guardian, June 2025. Link
- “Fossil fuel industry tactics fueling democratic backsliding,” Center for American Progress, July 2025. Link