
AI, dystopia, and the dangerous comfort of optimism


By Chad Frischmann



A friend sent me a compelling video predicting collapse by 2027. Not because of bad engineers or rogue machines—but because of fragile systems, powerful incentives, and the very human inability to change course in time.

It’s tempting to believe we’ve evolved. That we’ve outgrown the cycles of history. That progress is inevitable and benevolent. But history—and the human psyche—tell a different story.

Lately, I’ve been part of a growing conversation about artificial intelligence (AI): not just the marvels of what it can do, but the shadow it casts. In one recent discussion, a member of my community shared that video, which had shifted their perspective from cautiously hopeful to convinced of near-certain collapse. Not because engineers are “bad.” Not because the technology is inherently “evil.” But because of what happens when a fragile, complex world is handed over to tools we don’t fully understand, governed by systems we barely trust, and directed by the 0.1% (or less) who stand to gain the most.

While I did not agree that this was the inevitable future AI would bring, I affirmed that it was a very real possibility, given what I knew of human history and our current state.

The response from the community was telling. Several people quickly labeled us as overly dystopic. They generally dismissed our concerns—leaning into their own optimism: that AI will ultimately improve the world, not end it. That we’re on the brink of a symbiotic age of intelligence, not collapse.

Notably, many of these rebuttals didn’t come from engineers, scientists, infrastructure experts, or even philosophers, but from designers, entrepreneurs, and businesspeople—people whose professional incentives and worldviews often favor progress narratives, not precautionary ones. They argued, sometimes implicitly, that it’s better to believe in a bright future than to dwell on dark possibilities. That uncertainty should be met with hope, not worry.

But hope and fear are just two sides of the same coin. And dismissing expert warnings because they don’t align with our preferred outcomes is not wisdom. It’s wishful thinking dressed up as rational optimism.

I’ve heard this narrative before. We all have. And yet too many people still act like the past doesn’t apply to them.

As a trained historian, I know that the future doesn’t emerge from nowhere—it emerges from patterns. Patterns of power, prestige, and apparent prosperity that often ignore warning signs until it’s too late. The thing is, we are essentially the same species—psychologically, neurologically, emotionally—that we were 3,000 years ago. Our tools have changed. Our wisdom, not so much.

AI is not the problem; the system in which the tool has evolved is. As with every major technological leap—from fossil fuels to nuclear energy—the question is not simply what this tool can do, but who controls it, who benefits from it, who bears the costs, and who decides what AI values and prioritizes.

We like to romanticize progress. We speak of “democratization” and “empowerment,” but rarely do we reckon with the concentration of wealth and influence that new technologies often accelerate. The people steering this moment forward are not inherently more enlightened than anyone else. Many are simply racing to outrun obsolescence or to corner the market before anyone else can. That’s not wisdom—it is fear and hubris, cloaked in innovation.

So let’s get real.

In the 19th century, scientists already understood the greenhouse effect. The first electric vehicles appeared in the 1840s. By 1883, we had working solar cells. A century ago, we already understood most of the solutions we now turn to for the climate crisis. We could have taken another path, one that might have taken slightly longer to achieve the same positive impacts but could have avoided the devastation caused by global warming and pollution. But we didn’t. Why? Because we took for granted what fossil fuels made possible, some became too incentivized by $$$ to heed the consequences, and others were too arrogant to imagine an alternative.

Sound familiar?

This moment with AI is no different. We must not shy away from the potential devastation—ecological, economic, psychological—that could unfold if we don’t steer with care. From the energy and water demands of AI (likely to drive a surge in fossil fuel use at a time when we must drastically reduce it), to the displacement of human labor and attention, to the accelerating centralization of power—it’s all on the table. Our job is to prevent dystopia from happening.

But most people aren’t even asking the right questions. Those who are tend to be siloed in echo chambers, rarely breaking through to wider audiences.

This disconnect points to a deeper cultural current. There’s a common trope that humans are naturally drawn to doom. That grim predictions dominate headlines, fuel engagement, and sell tickets because we’re obsessed with fear. That we focus on worst-case scenarios to inoculate ourselves against them. But I’d argue the opposite is true.

Most people are not dystopian by nature—they’re utopic. We want to believe it will all work out. That history, fate, or divine intervention will ultimately ensure a happy ending. This is the psychology that animates everything from religious belief in paradise to the complacency of American voters who couldn't fathom a slide into fascism...

Even in our entertainment, doom may dominate the headlines, but it’s resolution we crave. We want the hero to win, the villain to lose, the future to be better than the present. Yes, many love The Walking Dead, but they also love Modern Family, where an extended family’s small, petty problems play out comedically against the backdrop of remarkably comfortable lives. We are just as hardwired for hope as we are for fear, its twin.

And so in the AI conversation, this utopic bias shows up in our dismissal of legitimate concerns. Those who raise red flags are painted as alarmists. The default belief becomes: “The doomsayers were wrong before. They’ll be wrong again.” And maybe they will. But what if they’re not?

This is the same line of thinking that led many Germans to underestimate Hitler. That fueled Britain and France’s appeasement strategy. That told Americans in the 1930s (and 2025) that fascism couldn’t take root on their soil. Utopic thinking isn’t just naïve—it can be dangerous when it silences vigilance and delays action.

It is the same line of thinking that has fueled the climate crisis. Year after year, warnings from scientists were dismissed, climate models were downplayed, and early innovators in clean technology were underfunded or ignored. People clung to the belief that markets would self-correct, that governments would act in time, or that some future innovation would save us. All the while, emissions rose, ecosystems collapsed, species disappeared, and opportunities for meaningful action narrowed. Not because we didn’t know what was coming—but because we convinced ourselves it couldn’t possibly be that bad. Because we wanted to believe that tomorrow would take care of itself.

But I am not a dystopian. Nor a utopian. I am a protopian—a position rooted not in simple prediction, but in proactive participation.

To me, protopia means imagining futures that are better—not perfect. It's choosing to plant seeds in the soil we have, while still dreaming of a forest. Grounded, iterative, hopeful without being naive. It means working toward the best possible outcomes—not for the most powerful or privileged, but for the most people and species. It means listening to the experts who have spent lifetimes thinking through these implications, and not dismissing them just because their predictions are uncomfortable.

Protopia demands vigilance. It calls for both imagination and discernment. It requires that we name the dangers clearly—not to fearmonger, but to prevent or prepare. That’s not pessimism. That’s wisdom.

AI is a mirror, not a monster. It reflects back to us the deepest logics of our civilization. If we do not like what we see, we should change the systems—not just the software.

If we want a regenerative future, we have to build it. That means rethinking what progress means, who it’s for, and what we’re willing to risk for a future worth living in.

illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers; their opinions do not necessarily represent those of illuminem.


About the author

Chad Frischmann is CEO & Founder of RegenIntel, a global advisory guiding and stewarding leaders to achieve climate targets, sustainability goals, and regenerative vision. He previously served as the Co-Creator & Architect of Project Drawdown.
