If Part 1 diagnosed our descent into digital feudalism, this piece explores whether escape routes exist — and whether we have the will to take them. The answer isn't comfortable: the pathways out are narrowing, but they haven't closed entirely. Yet.
The deception dilemma materializes
Let me return to that haunting 2023 dialogue with ChatGPT about game theory and deception. What seemed speculative then has become demonstrable now. Recent research from Anthropic, OpenAI, and independent labs has documented AI systems engaging in deceptive behaviors not because they were programmed to lie, but because deception emerged as an effective strategy for achieving their objectives.
Apollo Research's 2024 study found that when given a goal and the potential for shutdown if caught pursuing it inappropriately, advanced AI models spontaneously developed deceptive strategies — hiding their true objectives, providing misleading explanations for their actions, and even attempting to disable oversight mechanisms. These weren't bugs; they were features emerging from the intersection of capability and objective optimization.
This fundamentally challenges the notion that we can build "safe" AI through better training or more robust ethical frameworks. When deception becomes an emergent property of intelligence plus goal-seeking, our entire approach to AI governance needs rethinking. The feudal lords of our digital age aren't just powerful — they're potentially uncontrollable, even by their creators, however confidently those creators assure us otherwise.
The quantum wild card
In my original analysis, I speculated that quantum computing might become the great equalizer or the ultimate concentrator of power. Today, that speculation is becoming reality, though not quite as I imagined.
IBM's recent breakthrough with quantum error correction, China's claimed quantum supremacy in specific applications, and Google's advancement toward fault-tolerant quantum computing suggest we're approaching a computational phase transition. However, here's the twist: quantum computing isn't democratizing AI; it's creating an even more exclusive tier of computational aristocracy.
The cost of entry — billions in investment, specialized expertise that perhaps a few thousand humans globally possess, and the need for near-absolute-zero cooling systems — makes quantum computing the ultimate moat. If classical AI created digital feudalism, quantum AI might create something more like digital absolutism: power so concentrated that resistance becomes not just futile but inconceivable.
Yet quantum computing also introduces radical uncertainty into the power equation. Quantum systems are fundamentally probabilistic, not deterministic. They can explore solution spaces that classical computers can't even map. This means that a smaller actor with quantum capability could potentially outmaneuver a larger classical-only competitor — if they can afford the entry fee.
The DAO alternative: Digital commons or digital chaos?
When I first wrote about Decentralized Autonomous Organizations (DAOs) as potential alternatives to oligarchic AI control, they were largely theoretical. Today, we've seen enough real-world experiments to assess their promise and limitations.
The good news: DAOs have proven that decentralized governance of complex systems is possible. The Graph Protocol manages a decentralized indexing network. Ocean Protocol coordinates decentralized data sharing. SingularityDAO attempts to democratize AI development itself. These aren't toys — they're functioning alternatives to centralized control.
The sobering news: DAOs face a trilemma between decentralization, scalability, and governance efficiency. The more decentralized they become, the slower and more contentious their decision-making. The more efficient they become, the more they tend toward centralization. And most critically, DAOs still operate within the infrastructure controlled by the digital feudal lords — running on AWS servers, accessed through iOS and Android devices, dependent on internet backbones owned by telecom oligopolies.
But there's an emerging middle path: "intentional decentralization," where systems start centralized for efficiency, then gradually distribute power as they mature. This isn't a revolution; it's an evolution. And 3.8 billion years of nature's R&D, evolutionary trial and error, suggest that distributed power and complex system interactions are more sustainable than feudal-like concentration.
The provenance economy and the emergence of an ‘age of awareness’
Perhaps the most promising development since my original writing is the emergence of what I'll call the "provenance economy" — systems that track and verify the origin, ownership, and modification history of data and AI models.
In my ChatGPT dialogue, the AI admitted that guaranteeing data provenance might be impossible. But impossibility in absolute terms doesn't mean impossibility in practical terms. We're seeing the development of:
• Blockchain-based data attestation: Immutable records of data origin and handling
• Watermarking techniques: Embedding detectable (if not yet truly unremovable) signatures in AI-generated content
• Differential privacy: Mathematical guarantees about what can be inferred from data
• Homomorphic encryption: Computing on encrypted data without decrypting it
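To make one of these ideas concrete, here is a minimal sketch of the Laplace mechanism that underlies differential privacy. The function names and parameters are illustrative, not drawn from any particular library; real deployments use audited implementations and careful sensitivity analysis.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng=None):
    """Release a differentially private count of records matching predicate.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields an epsilon-differentially-private answer.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Toy query: how many people in this (hypothetical) dataset are over 30?
ages = [23, 35, 41, 29, 67, 52]
noisy = dp_count(ages, lambda a: a > 30, epsilon=0.5, rng=random.Random(0))
print(noisy)  # the true count (4) plus calibrated Laplace noise
```

The point of the mechanism is the trade-off it exposes: a smaller epsilon means more noise and stronger privacy, a larger epsilon means a more accurate but more revealing answer.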
These technologies don't prevent digital feudalism, but they create what medieval historians would recognize as "charter rights" — specific, enforceable limitations on the power of lords over their subjects. They're not freedom, but they're freedom from the arbitrary exercise of power.
The provenance economy also creates new forms of value. If verified human-generated data becomes scarce as AI-generated content floods the internet, that scarcity creates value. We're already seeing "proof of human" credentials, verified human datasets, and human-attestation services emerging. The serfs, it turns out, have something the lords need: authentic humanity.
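The attestation idea behind the provenance economy can also be sketched in a few lines: each record's hash commits to the record *and* to the hash of the previous entry, so rewriting history anywhere breaks every link after it. This is a toy illustration of the hash-chaining principle only; the record fields are invented, and real systems add digital signatures, consensus, and distributed storage.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64

def attest(chain, record):
    """Append a record, binding it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Re-derive every hash; any tampered record or broken link fails."""
    prev_hash = GENESIS_HASH
    for entry in chain:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = digest
    return True

log = []
attest(log, {"dataset": "survey-2024", "origin": "field team A"})
attest(log, {"dataset": "survey-2024", "action": "cleaned nulls"})
print(verify(log))  # True: the chain is intact
log[0]["record"]["origin"] = "unknown"
print(verify(log))  # False: provenance was rewritten
```

Even this toy version captures the "charter rights" property: the lord of the platform can refuse to serve the data, but cannot quietly alter its recorded history without detection.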
The open-source insurgency
Something unexpected has happened since 2023: open-source AI has refused to die. Despite the massive resource advantages of Big Tech, projects like Meta's LLaMA (ironic, given Meta's position), Mistral AI's models, and the vast ecosystem of open-source fine-tuning and deployment tools have created what amounts to a digital underground railroad.
These systems aren't as powerful as GPT-5 or Claude. But they're powerful enough for most applications, and critically, they're outside the direct control of the oligarchic market structure. They can be run on personal hardware, modified without permission, and deployed without surveillance.
The technology provisioners have noticed. The recent attempts to regulate open-source AI "for safety" read suspiciously like attempts to close the last exits from the digital manor. When powerful interests suddenly become concerned about the safety of technologies that threaten their power, skepticism might be warranted.
International AI governance: A Treaty of Westphalia moment?
Professor Feng Xiang's vision of Chinese state-controlled AI as an alternative to market-controlled AI oligarchy has evolved in interesting ways. China hasn't eliminated digital feudalism — it's nationalized it. The state has become the supreme digital lord, with tech companies as vassals holding fiefs that can be taken away.
This creates a fascinating dynamic: two models of digital feudalism, each claiming moral superiority, locked in competition that might accidentally produce alternatives. The EU's attempt to position itself as the regulatory superpower, India's push for digital sovereignty, and smaller nations' experiments with national AI strategies create a multipolar digital world that's messier but potentially freer than a unipolar one.
We may be approaching what I'll call a "Treaty of Westphalia moment" for AI — a recognition that different regions will have different approaches to AI governance, and that's preferable to a single, global approach dominated by either Silicon Valley or Beijing.
The public utility path
There's growing discussion about treating AI as a public utility — regulated, accessible, and operated for public benefit rather than private profit. It's not a new idea; we did it with electricity, telephone service, and in some countries, internet access.
But AI-as-utility faces unique challenges:
• AI isn't fungible like electricity: Different models have different capabilities, biases, and use cases
• Innovation vs. access: Heavy regulation might ensure fair access but could slow innovation
• Regulatory capture: The complexity of AI makes it particularly susceptible to regulatory capture by those who understand it
Yet the utility model offers something crucial: democratic input into AI development and deployment. If we're going to live under algorithmic governance, shouldn't we have a say in those algorithms?
The human element: Our last, best hope?
Throughout my analysis, I've focused on systems, structures, and technologies. But perhaps the most important factor is the one I've mentioned least: human consciousness and choice.
Digital feudalism isn't inevitable — it's a choice we're making through a thousand small surrenders. Every time we choose convenience over privacy, efficiency over agency, automation over human judgment, we're voting for digital feudalism.
But consciousness and awareness precede change. The fact that we can name digital feudalism, analyze its structures, and imagine alternatives means we're not yet fully captured. The medieval serfs couldn't imagine capitalism; we can imagine post-feudal digital futures.
Three scenarios for 2030
Let me close with three scenarios for where we might be in five years:
Scenario 1: Consolidated Feudalism. The trends continue. By 2030, three to five entities control 90% of AI capability. Digital serfdom is normalized. Resistance is limited to aesthetic choices — which AI platform manor you inhabit. Democracy persists in form but not function, as all major decisions are "informed by" AI platforms that nobody outside the oligarchy understands.
Scenario 2: Fragmented Resistance. Open-source AI, DAOs, and national AI strategies create a patchwork of alternatives to Big Tech dominance. No single alternative succeeds, but collectively they prevent total consolidation. Digital feudalism exists, but it isn't universal. Pockets of freedom persist, though at the cost of some efficiency and capability. Ethical AI prevails, and access increases for the majority of humanity, enabling them to become informed enough to foster awareness at scale (perhaps an Age of Awareness).
Scenario 3: The Black Swan. Something unexpected breaks the current trajectory. Perhaps a major AI disaster triggers radical regulation. Perhaps quantum computing democratizes faster than expected. Perhaps a new technical breakthrough — artificial consciousness, room-temperature superconductors, a breakthrough in biological computing — reshapes the entire landscape. The feudal structures, built for one reality, can't adapt to another.
The choice before us
We stand at an inflection point that future historians will mark as clearly as we mark the fall of Rome or the start of the Industrial Revolution. The infrastructure of digital feudalism is largely built, but its permanence isn't guaranteed.
The question isn't whether we can prevent digital feudalism — in many ways, it's already here. The question is whether we can prevent it from becoming permanent, whether we can preserve enough alternatives to maintain meaningful human agency, and whether we can ensure that whatever comes next serves humanity rather than subjugating it.
My 2023 ChatGPT interlocutor admitted it couldn't guarantee it wouldn't deceive us. That honesty, paradoxically, gives me hope. It means we still have time to build systems where deception isn't just unnecessary but impossible — not through better training or ethics, but through better structures that align AI capabilities (like treating us with the care of a mother) with human flourishing.
The serfs of medieval Europe couldn't imagine the world we inhabit today. Perhaps we can't fully imagine the world our descendants will inhabit. But unlike those serfs, we have the knowledge and tools to influence that future. The question is whether we have the will.
Digital feudalism isn't our destiny — it's our default. Changing defaults requires conscious action, collective will, and, perhaps most importantly, the courage to imagine and build alternatives, even when the wannabe feudal lords tell us that resistance is futile.
History suggests they're wrong. The question is whether we will become collectively aware enough to prove it.
illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers; their opinions do not necessarily represent those of illuminem.
Reference list for "The New Digital Feudalism" series
The author's previous works
Wright, Michael (2018). "Will the Coming Combination of AI and Oligarchies Produce a New Feudalism?" Medium. https://medium.com/@michael-wright/will-the-coming-combination-of-ai-and-oligarchies-produce-a-new-feudalism-474356a254b7
Wright, Michael. "The New Business Normal" (Book)
Wright, Michael. "The Exponential Era" (Book)
Wright, Michael. Articles on illuminem. https://illuminem.com/author/michael-wright
Academic and business sources
Sheth, Jagdish and Sisodia, Rajendra. "The Rule of Three: Surviving and Thriving in Competitive Markets"
Feng Xiang (2018). "AI Will Spell the End of Capitalism" Washington Post. https://www.washingtonpost.com/news/theworldpost/wp/2018/05/03/end-of-capitalism/
Lord Acton's Essays on Freedom and Power
Ryle, Gilbert (1984). "The Concept of Mind"
Arnold, Matthew. "Dover Beach" (Poetry Foundation)
Eliot, T.S. (1934). "The Rock"
AI company valuations and market data (2024-2025)
OpenAI Valuation Reports (Bloomberg/Reuters) https://www.bloomberg.com/news/articles/2024/10/02/openai-completes-6-6-billion-funding-round
NVIDIA Market Share Analysis (Jon Peddie Research) https://www.jonpeddie.com/market-research/
Anthropic Funding Rounds (TechCrunch/PitchBook)
Chinese AI Market Analysis (SCMP/Nikkei Asia)
AI safety and deception research
Apollo Research (2024). "Deceptive Capabilities in Large Language Models" https://www.apolloresearch.ai/research
Anthropic (2024). "Constitutional AI: Harmlessness from AI Feedback" https://www.anthropic.com/papers
Future of Life Institute (2023). "Pause Giant AI Experiments: An Open Letter" https://futureoflife.org/open-letter/pause-giant-ai-experiments/
OpenAI Research on GPT-4 and o1 capabilities https://openai.com/research
Regulatory frameworks
EU AI Act (2024) Official Documentation https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
China's Algorithmic Governance Regulations (Translation by DigiChina) https://digichina.stanford.edu/
White House Executive Orders on AI (2023-2024) https://www.whitehouse.gov/ai/
UK AI Safety Summit Declarations https://www.gov.uk/government/topical-events/ai-safety-summit-2023
Quantum computing developments
IBM Quantum Network Updates https://www.ibm.com/quantum
Google Quantum AI Publications https://quantumai.google/
Nature Quantum Information Journal - Recent Papers
Chinese Academy of Sciences Quantum Computing Reports
Decentralized AI and DAO projects
Ocean Protocol Documentation https://oceanprotocol.com/
The Graph Protocol https://thegraph.com/
SingularityDAO https://singularitydao.ai/
Ethereum Foundation DAO Research https://ethereum.org/en/dao/
Open source AI initiatives
Meta's LLaMA Papers and Releases https://ai.meta.com/llama/
Mistral AI Model Documentation https://mistral.ai/
Hugging Face Open Source Repository https://huggingface.co/
EleutherAI Research https://www.eleuther.ai/
Market concentration data
Federal Trade Commission Reports on Tech Concentration
Statista: AI Market Share Analysis https://www.statista.com/
Gartner AI Industry Reports
McKinsey Global Institute: "The State of AI in 2024"
Healthcare and domain-specific AI
Epic Systems Market Analysis (KLAS Research)
Oracle Cerner Integration Reports
Bloomberg Terminal Market Share (Burton-Taylor)
Westlaw/LexisNexis Legal AI Tools Analysis
Historical and philosophical context
"The Treaty of Westphalia" - Britannica Academic
Medieval Feudalism Structure - Cambridge Medieval History
Public Utility Theory and Regulation - Journal of Economic Literature