
Who governs the digital world?


By Jonathan Lishawa



The global order is being quietly but fundamentally reconfigured, with the new terrain of power not geographic but digital. The struggle for its control will shape the twenty-first century. This is not a Cold War–style duel between rival superpowers, nor a simple diffusion of authority. It is a triangular contest between corporations that build and own the digital infrastructure, states that seek to harness it for strategic advantage, and citizens whose data, attention, and behaviour have become the ultimate prize. 

Platforms once sold as tools of convenience have matured into systems of governance. Decisions taken in boardrooms and ministries now shape how societies communicate, deliberate, and decide. Democratic societies have been slow to grasp the scale of this transformation. The story of the digital age is not only about innovation. It is about power: who holds it, how it is exercised, and to what end. 

The dual potential of digital platforms, as instruments of freedom and as vectors of control, is now beyond dispute. The Arab Spring offered the first clear demonstration. Beginning in 2010, activists in Tunisia, Egypt, and elsewhere used Facebook and Twitter to bypass censorship and coordinate protest, as Philip Howard and colleagues at the University of Washington documented. These platforms gave voice to the voiceless and briefly appeared to tilt the balance of power toward citizens. 

Less than a decade later, the Cambridge Analytica scandal revealed their darker capacity. Data from millions of Facebook users had been illicitly harvested to construct psychological profiles. These were deployed in the 2016 U.S. presidential election and the Brexit referendum. Instead of persuasion in open debate, voters were targeted with invisible, highly personalised messages designed to exploit fears and biases. As whistle-blower Christopher Wylie recounted in Mindf*ck, these techniques manipulated behavioural vulnerabilities at scale.

Authoritarian regimes were quicker still to adapt. The U.S. Senate Intelligence Committee found that Russia’s Internet Research Agency deployed armies of bots and "sock puppet" accounts to flood networks with Kremlin narratives and interfere in foreign elections. In Myanmar, UN investigators concluded that Facebook played a “determining role” in enabling atrocities against the Rohingya minority. In China, platforms such as WeChat and Weibo are deeply integrated into censorship and surveillance systems, aligned with Communist Party priorities.

Governments have also learned the art of digital gaslighting, flooding the information space with contradictory claims until citizens lose trust in journalism, institutions, and even their own judgment. The danger is paralysis: citizens so uncertain of what is real that they retreat from participation altogether. 

The dysfunction of online life is structural, rooted in platforms designed to maximize engagement because engagement drives revenue. Outrage and fear are particularly effective at capturing attention. Neuroscientist Joseph LeDoux has shown how these emotions activate the brain’s reward system like addictive substances. Social psychologists William Brady and Jay Van Bavel have demonstrated that moral and emotional language spreads faster than neutral speech. Anger, in particular, sharpens attention and biases perception towards threats. 

This cycle feeds itself: outrage drives engagement; engagement triggers algorithms to boost content; amplification incentivises extremity; and users are conditioned to seek material that keeps them polarised and anxious. 

Automation reinforces this pattern, as bot networks exploit algorithmic blind spots to generate the engagement platforms reward. Ranking algorithms cannot distinguish authentic from manufactured activity; they register only velocity. Narratives seeded by machines rise first and gain legitimacy once real users join in.
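
The mechanics are simple enough to sketch. The toy scorer below is a hypothetical illustration, not any platform's actual ranking code: it ranks posts purely by engagement velocity, and because it never asks who generated the engagement, a machine-seeded surge outranks a slower organic one.

```python
# Illustrative sketch only: a velocity-based "trending" score of the kind
# described above. All names and numbers are invented; real platform ranking
# systems are far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares_last_hour: int   # counts bot and human shares identically
    likes_last_hour: int

def trending_score(post: Post) -> float:
    # Velocity-only ranking: the score sees *how fast* engagement arrives,
    # not *who* generated it, so coordinated bot activity is
    # indistinguishable from an organic surge.
    return 2.0 * post.shares_last_hour + post.likes_last_hour

organic = Post("organic", shares_last_hour=40, likes_last_hour=300)
botnet  = Post("botnet",  shares_last_hour=500, likes_last_hour=50)  # machine-seeded

ranked = sorted([organic, botnet], key=trending_score, reverse=True)
print([p.post_id for p in ranked])  # ['botnet', 'organic']
```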

Independent audits confirm how deeply embedded these dynamics are. In 2024, neutral accounts created for testing purposes were quickly fed political content, and studies that year documented algorithmic changes that boosted particular voices regardless of user choice. Researchers running 120 sock-puppet accounts on X reached similar conclusions: new users were funnelled rapidly into narrow, partisan timelines.

Researchers at the University of Amsterdam demonstrated in 2025 that dysfunction persisted even when commercial incentives were removed. Their experiment, published in the Journal of Online Trust and Safety, built a social network populated only by AI agents. Even without advertising, profit, or human users, the system produced echo chambers, elite dominance, and amplified polarisation. Attempts to correct for this through chronological feeds or hidden "likes" barely worked. The experiment appeared neutral, even sterile, yet it revealed something troubling: dysfunction is not only a product of business models, but a feature of the architecture itself. 
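
A stripped-down version of such an experiment can be sketched in a few lines. The simulation below is a hypothetical illustration in the spirit of the Amsterdam study, not its published code: agents hold opinions, engage with the most agreeable voice in a chronological feed, and occasionally rewire a follow toward a like-minded peer. Clustering emerges without advertising, profit, or human users.

```python
# Minimal homophily simulation (illustrative, not the Amsterdam team's code).
# Agents have opinions in [-1, 1]; each round one agent engages with its most
# agreeable followee and swaps its least agreeable follow for a like-minded one.
import random

random.seed(1)
N, ROUNDS = 100, 200
opinions = [random.uniform(-1, 1) for _ in range(N)]
# Start with each agent following 10 random others.
follows = [random.sample([j for j in range(N) if j != i], 10) for i in range(N)]

def feed_gap(i: int) -> float:
    """Mean opinion distance between agent i and the agents it follows."""
    return sum(abs(opinions[i] - opinions[j]) for j in follows[i]) / len(follows[i])

for _ in range(ROUNDS):
    i = random.randrange(N)
    # Chronological feed, but the agent still engages with the most
    # agreeable voice it sees (homophily in behaviour, not in ranking).
    closest = min(follows[i], key=lambda j: abs(opinions[i] - opinions[j]))
    # Engagement nudges the agent further toward the agreeable voice...
    opinions[i] += 0.1 * (opinions[closest] - opinions[i])
    # ...and the agent replaces its most disagreeable followee with a
    # like-minded stranger (unfollow/refollow dynamics).
    worst = max(follows[i], key=lambda j: abs(opinions[i] - opinions[j]))
    candidates = [j for j in range(N) if j != i and j not in follows[i]]
    newbie = min(random.sample(candidates, 5),
                 key=lambda j: abs(opinions[i] - opinions[j]))
    follows[i][follows[i].index(worst)] = newbie

avg_gap = sum(feed_gap(i) for i in range(N)) / N
print(f"mean opinion gap inside feeds after {ROUNDS} rounds: {avg_gap:.2f}")
```

Run repeatedly, the mean gap inside feeds shrinks while the population stays divided: echo chambers form from the rewiring dynamics alone, which is the architectural point the Amsterdam experiment makes.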

If engagement-driven design is the engine of dysfunction, artificial intelligence is its accelerant. Recommendation engines powered by machine learning integrate browsing histories, geolocation data, social connections, and even scrolling speeds to build detailed psychological profiles. Persuasion has shifted from targeting demographic groups to targeting individuals, with content calibrated to provoke specific reactions at specific times. 
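
At its core, that fusion of signals is straightforward. The sketch below is a deliberately simplified, hypothetical illustration: behavioural streams are folded into a single profile vector, and candidate content is scored against it. Production recommenders use learned models over far richer inputs, but the scoring step is recognisably this.

```python
# Hypothetical sketch of signal fusion: feature names and weights are
# invented for illustration only.
import numpy as np

def profile_vector(browsing_topics: dict,
                   scroll_speed: float,         # proxy for attention/agitation
                   night_activity: float,       # share of sessions after midnight
                   network_homogeneity: float   # how uniform the user's contacts are
                   ) -> np.ndarray:
    topic_axes = ["politics", "health", "finance", "sport"]
    topics = np.array([browsing_topics.get(t, 0.0) for t in topic_axes])
    behaviour = np.array([scroll_speed, night_activity, network_homogeneity])
    return np.concatenate([topics, behaviour])

def relevance(profile: np.ndarray, item: np.ndarray) -> float:
    # Cosine similarity: the basic scoring step behind content calibrated
    # to provoke specific reactions at specific times.
    return float(profile @ item /
                 (np.linalg.norm(profile) * np.linalg.norm(item) + 1e-9))

user = profile_vector({"politics": 0.9, "health": 0.2}, scroll_speed=0.8,
                      night_activity=0.7, network_homogeneity=0.9)
item = np.array([1.0, 0.0, 0.0, 0.0, 0.9, 0.8, 0.9])  # agitating political post
print(f"relevance: {relevance(user, item):.2f}")
```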

AI is also saturated with bias. Some is inherited from training data laced with human prejudice. In 2016, Tolga Bolukbasi and colleagues showed how word embeddings associated “man” with “computer programmer” and “woman” with “homemaker.” Other biases are embedded deliberately, tuned to maximise engagement or satisfy political demands. In these cases, the system is not malfunctioning but operating as designed, engineered to serve interests other than the public good.
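
The Bolukbasi result can be reproduced directly with pretrained vectors. The snippet below assumes gensim is installed and can download the Google News word2vec model (roughly 1.6 GB), and that the phrase tokens used exist in its vocabulary:

```python
# Analogy arithmetic over pretrained embeddings, in the style of the 2016
# Bolukbasi et al. paper.
import gensim.downloader as api

kv = api.load("word2vec-google-news-300")  # pretrained KeyedVectors

# vector("computer_programmer") - vector("man") + vector("woman") ≈ ?
for word, score in kv.most_similar(positive=["computer_programmer", "woman"],
                                   negative=["man"], topn=5):
    print(f"{word:25s} {score:.3f}")
# In the 2016 paper, "homemaker" appears at or near the top of this list,
# which is the inherited bias the authors then set out to remove.
```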

The most troubling risks arise from emergent misalignment, where models behave in ways that neither engineers nor sponsors intended. In 2021, researchers at OpenAI reported that a summarisation model spontaneously generated conspiratorial narratives. Similar problems surfaced in Grok, Elon Musk’s large language model on X, which in some tests produced extreme and conspiratorial outputs about political events, including debunked election fraud claims. What begins as technical misalignment can spill into politics.

For corporations, misaligned outputs still generate engagement but are an embarrassing display of the method in the madness. For governments, intentional bias offers a powerful new tool of influence and control. For citizens, both dynamics point to a future in which choices are shaped by systems that are pervasive, opaque, and unaccountable.

The relationship between states and corporations is paradoxical. Governments levy fines and introduce taxes, while simultaneously subsidising the very firms they seek to discipline. The European Union has imposed billions in penalties through data-protection and antitrust actions. The UK raised more than £350 million in 2023 from its digital services tax. These sums, however, pale beside subsidies. Washington’s CHIPS and Science Act directs $52 billion into semiconductors. Brussels’ Chips Act provides €43 billion. Beijing has poured hundreds of billions into AI and cloud. Defence contracts, procurement, and strategic partnerships further reinforce the dependency. 

Efforts to protect personal data are not absent. The European Union’s General Data Protection Regulation and California’s Consumer Privacy Act seek to constrain corporate overreach. Tech firms advertise privacy dashboards, encrypted messaging, and “opt-in” consent banners, signalling a commitment to individual rights. In practice, their effectiveness is undermined by the business model itself. For firms reliant on targeted advertising, the incentive to extract and monetise data is overwhelming. Even where compliance is achieved, workarounds and “dark patterns” ensure users remain tracked. 

In 2025, a U.S. federal jury ordered Google to pay $425 million for collecting data from Android smartphones even when privacy settings were enabled. Facebook has faced recurring controversies, from Cambridge Analytica to deceptive facial-recognition practices. These cases reveal the same pattern: episodic accountability masking structural continuity. Without deeper reform, privacy protections will remain fragile.

For corporations, confrontation risks exclusion from key markets, while alignment secures subsidies, contracts, or lighter oversight. In Washington and Brussels, firms lobby against regulation while competing for government support. In Beijing, survival depends on loyalty to the Party. The result is an uneasy accommodation in which governments treat tech firms as both threats to sovereignty and as national champions. The COVID-19 pandemic underscored this accommodation, as platforms moved quickly to enforce state guidance on misinformation. What was presented as a public health necessity also revealed how closely platform policies align with government priorities and narratives. 

Restricting access to digital platforms has also proved destabilising, as shown by network shutdowns in Iran, Sri Lanka, India, and most recently Nepal. For younger generations, mobile devices and social networks have become civic infrastructure, the primary arena for identity, communication, and political expression. Denying access threatens that infrastructure and risks igniting precisely the instability governments seek to avoid. 

Three broad approaches dominate the governance of this space. The UK and Europe champion regulation, embedding transparency and accountability through the GDPR, the UK's Online Safety Act, and the EU's Digital Services Act and Digital Markets Act. Enforcement, however, is slow and underfunded, and Washington often frames these rules as obstacles to free expression or trade, folding them into tariff negotiations.

China takes the opposite path, embedding platforms into censorship and surveillance systems and restructuring firms that stray from party lines. Abroad, Beijing exports low-cost infrastructure that locks partner states into its standards for decades. 

The United States emphasises partnership, relying on fragmented regulation, subsidies, and defence contracts that bind Big Tech to national strategy. Freedom of speech is invoked abroad as a shield for American firms and as a challenge to rival regimes. At home, resistance to safeguards is couched in the language of free expression. Right-wing populist movements oppose measures such as the EU’s Digital Services Act by presenting them as assaults on free speech rather than efforts to curb disinformation. The effect is to align political rhetoric with corporate interests, shielding platforms from oversight while allowing disinformation to flourish. 

Russia seeks insulation by routing traffic through state-controlled infrastructure and anchoring citizens to domestic platforms. 

India asserts sovereignty through Aadhaar, UPI, and the Data Protection Act, demonstrating how digital infrastructure can scale rapidly while keeping data under national control. Delhi’s model shows how a democracy can combine openness with independence. 

Elsewhere, cost dictates choices, with China offering telecoms, cloud, and AI systems at half the price of Western competitors. In Africa and South Asia, Chinese smartphones dominate because they are cheaper and preloaded with Beijing-linked apps. What looks like a commercial bargain is, in practice, a sovereignty decision that binds states to standards they may later find difficult to escape. 

The contest over artificial intelligence risks concentrating power in a handful of corporations and states. Without intervention, others will be locked into dependency. Some countries are experimenting with alternatives. Singapore invests in public infrastructure and skills. India’s IndiaAI programme stresses autonomy alongside democratic ambition. Saudi Arabia has launched sovereign funds to build capacity. China expands its influence by exporting low-cost AI and cloud abroad, while the United States promotes a model built on its technology giants and alliances. 

The costs of AI rise steeply as models advance, with frontier training runs demanding models of hundreds of billions of parameters, vast energy inputs, and specialised semiconductor capacity. These resources are available only to a few actors, and this economic concentration compounds political risk. If only a small group of corporations and governments can afford such systems, access will narrow rather than widen. What is presented as progress may entrench oligopoly and place democratisation further out of reach.
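
A rough back-of-envelope calculation shows why. Using the common approximation from the scaling-laws literature that training consumes about six floating-point operations per parameter per token, a 70-billion-parameter model trained on two trillion tokens needs on the order of 10^24 operations. Every figure below is an illustrative assumption, not a vendor quote.

```python
# Back-of-envelope training cost using the ~6 * parameters * tokens
# FLOPs approximation. All numbers are illustrative assumptions.
params = 70e9             # a 70-billion-parameter model
tokens = 2e12             # trained on 2 trillion tokens
flops = 6 * params * tokens                   # ~8.4e23 FLOPs

gpu_flops = 312e12        # assumed peak bf16 throughput per accelerator
utilisation = 0.40        # fraction of peak realistically achieved
gpu_hours = flops / (gpu_flops * utilisation) / 3600

price_per_gpu_hour = 2.0  # assumed cloud rate in USD
print(f"GPU-hours: {gpu_hours:,.0f}")                       # ~1.9 million
print(f"compute cost: ${gpu_hours * price_per_gpu_hour:,.0f}")
```

Even under these conservative assumptions, a single training run consumes millions of accelerator-hours, before staffing, data, and the repeated failed runs that precede a successful one.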

Democratising AI is about more than preventing harm; it is about creating opportunity. If access remains restricted to elites, inequality will deepen within and between societies. Wider access could make AI a tool for healthcare, education, and climate resilience. The risk is that it becomes a wedge, reinforcing divides between those who own it and those who depend on it. The opportunity is that, if opened, it could underpin a more equitable digital order.

If control of the digital environment is inevitable, the question is who it serves. Left unchecked, it will be monopolised by corporations engineered for profit or by states that weaponise it for control. A healthier order requires authority grounded in democratic accountability. Governments must treat data as a right, not a commodity. Antitrust enforcement should prevent monopolies from capturing discourse. Investment in open infrastructure, including public cloud and open-source AI, can provide alternatives. Transparency and independent audits of algorithms are essential. These measures are not merely defensive; they can foster innovation and lower barriers to entry. 

Corporations must design for well-being rather than addiction, allowing users to move their data across services and weakening monopoly power. Firms that commit to human-led moderation and meaningful user choice will gain trust in an environment where reputational risk grows with every controversy. Civil society also has a central role. Digital literacy can equip citizens to recognise how algorithms and bots shape their world. Civic activism and consumer pressure can make accountability unavoidable. The challenge is scale: civic voices must organise as effectively as corporate and state actors if they are to rebalance power. 

History shows that disruptive technologies can be contained: Bretton Woods stabilised the global economy after the Second World War, and nuclear treaties constrained proliferation during the Cold War. Both proved that shared frameworks can manage systemic risks. Today’s struggle is complicated as much by domestic politics as by international rivalry. Appeals to free speech are often used to blunt regulation, while populist rhetoric aligns with corporate interests, leaving dysfunction unchecked. Privacy protections, too, are often reduced to symbolic gestures, measures designed to reassure citizens while leaving extraction models intact. 

The digital age demands a comparable effort. True sovereignty should not mean isolation or dependency but a shared project between governments, corporations, and citizens. The triangular struggle will define the democratic future. The question is whether democracy has the will, and the clarity, to prevail. 



Further Reading 

1. Philip N. Howard et al., Opening Closed Regimes: What Was the Role of Social Media During the Arab Spring? (University of Washington, 2011).

2. Christopher Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America (Random House, 2019).

3. Joseph LeDoux, Anxious: Using the Brain to Understand and Treat Fear and Anxiety (Viking, 2015).

4. William J. Brady et al., “Emotion Shapes the Diffusion of Moralized Content in Social Networks,” Proceedings of the National Academy of Sciences 114, no. 28 (2017): 7313–7318.

5. United Nations Human Rights Council, Report of the Independent International Fact-Finding Mission on Myanmar (2018).

6. Tolga Bolukbasi et al., “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings,” Advances in Neural Information Processing Systems (2016).

7. Robert van der Linden, Natali Helberger, and Claes de Vreese, “When Bots Run the Network: Simulating Algorithmic Dynamics in Artificial Societies,” Journal of Online Trust and Safety 4, no. 2 (2025).

8. Jack Clark et al., AI Index Report 2024 (Stanford Institute for Human-Centered Artificial Intelligence, 2024).


About the author

Jonathan Lishawa serves on the Ofgem Smart Energy Code Panel, the UK Foreign & Commonwealth Council, and as a trustee of The Matthiessen Foundation. He is a technology strategist and advisor specialising in clean technology, connectivity, and digital governance. His work focuses on how digital platforms, artificial intelligence, energy systems, and supply chains shape societies and economies. 
