The player piano trap
Artificial intelligence, automation, and robotics are redefining the logic of production. They promise efficiency but risk eroding the wage base that sustains demand. Kurt Vonnegut’s Player Piano (1952) imagined an America where machines produced flawlessly while people were left idle. Production succeeded; society did not. The system perfected output while hollowing out purpose. That parable may soon shift from fiction to forecast.
Across advanced economies, factories, data centres, and logistics systems are expanding faster than consumption. A modern AI accelerator can draw as much power as a small neighbourhood, while a median wage buys only a fraction of that compute capacity¹. If production continues to rise while incomes stagnate, efficiency gains will concentrate in capital while labour’s share of output shrinks. The circulation of demand weakens, and the supply system grows heavier while the consumption system remains human and wage dependent.
This imbalance is already visible in early form. China illustrates the trajectory: following the 2021 property correction, capital moved into electric vehicles, batteries, semiconductors, and renewable energy². Made in China 2025 and Dual Circulation encouraged capacity building over consumption, treating supply as security. By 2025, household consumption still accounted for under 40 per cent of GDP, compared with roughly 70 per cent in the United States and above 50 per cent in much of Europe³. Firms, facing oversupply, cut prices to protect share; margins eroded and productivity gains risked turning deflationary. Beijing labelled the dynamic neijuan – involution – the point at which effort and investment rise while returns decline.
Western economies could follow a similar path. The US CHIPS and Science Act and the EU Chips Act channel public money into fabrication and packaging plants justified geopolitically rather than by confirmed demand⁴. Hyperscalers are investing hundreds of billions in data centres and AI clusters on the assumption that markets will absorb the output. Each factory, foundry, and server farm appears rational in isolation, yet together they risk constructing a system that oversupplies itself while underpaying its consumers.
The danger is structural, not cyclical. When the instruments of production – fabrication plants, data halls, robotic lines – are funded by debt and subsidies but generate little new household income, resilience can turn into overcapacity. Supply becomes capital-heavy and energy-intensive; demand remains wage-light and fragile. Unless policy reconnects income and output, growth could become self-defeating.
A system can scale output faster than wages for a few years, but not indefinitely. Without redistribution through wages, taxation, or participation, efficiency becomes fragility. The paradox of the Player Piano economy is that it could succeed in the technical sense – more output, fewer errors – yet fail in the civic one: producing without people.
The cost of compute
Most employment now lies in services, the very sectors where automation is advancing fastest. Customer support, logistics, finance back offices, content workflows, and clinical documentation are becoming early laboratories for AI substitution. Productivity rises while payrolls stagnate, loosening the wage anchor that keeps the centre of the economy stable. If this trend accelerates, AI could hollow out the income base that sustains demand long before new industries emerge to replace it.
Each generation of AI compounds this risk. Chip and server production already consumes more electricity and ultra-pure water per wafer, while operation grows more energy-intensive as models scale, context windows lengthen, and inference remains continuously active⁵. The newest systems are multimodal and agentic: they not only respond but plan, call tools, execute code, and critique results. Every step generates tokens – the basic unit of AI work – text, code, or images processed or produced.
Earlier models such as GPT-3 handled roughly one thousand tokens per query. Current frontier systems manage ten to twenty times that amount, and next-generation architectures could reach hundreds of thousands of tokens per task⁵. Because energy use scales directly with token volume, even dramatic chip-efficiency gains cannot offset the growth in total power demand.
If present trajectories continue, the cost of computation could begin to rival the cost of the labour it displaces. A 100,000-GPU deployment built from NVIDIA GB200 NVL72 racks consumes about 270 MW at the wall, or roughly 2.4 TWh per year⁵. Configured for forthcoming Rubin Ultra-class systems, that figure could approach 4.4 TWh⁵. At an average industrial tariff of $0.12 per kWh, that implies roughly $280 million a year in electricity for the GB200 configuration and more than half a billion for the Rubin-class one, before cooling, networking, or maintenance. Global data-centre electricity demand could reach one trillion kWh by 2030, up from about 415 TWh in 2023⁵.
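The conversion from wall power to annual energy and cost can be sketched in a few lines. This is a back-of-envelope illustration using the figures quoted above (270 MW, 4.4 TWh, $0.12 per kWh), which are the article's assumptions rather than vendor-verified specifications; the function names are hypothetical.

```python
# Back-of-envelope sketch of the data-centre energy arithmetic above.
# All inputs are illustrative assumptions from the text, not verified specs.

HOURS_PER_YEAR = 8760

def annual_twh(wall_power_mw: float) -> float:
    """Convert continuous wall draw (MW) into annual energy (TWh)."""
    return wall_power_mw * HOURS_PER_YEAR / 1_000_000

def annual_cost_usd(energy_twh: float, tariff_per_kwh: float = 0.12) -> float:
    """Electricity cost only: before cooling, networking, or maintenance."""
    return energy_twh * 1e9 * tariff_per_kwh

gb200_twh = annual_twh(270)   # ~2.4 TWh for a 270 MW deployment
rubin_twh = 4.4               # assumed Rubin Ultra-class figure from the text

print(f"GB200-class: {gb200_twh:.1f} TWh, ${annual_cost_usd(gb200_twh)/1e6:.0f}M/yr")
print(f"Rubin-class: {rubin_twh:.1f} TWh, ${annual_cost_usd(rubin_twh)/1e6:.0f}M/yr")
```

Running the numbers shows why the half-billion-dollar figure attaches to the larger configuration: 270 MW of continuous draw works out to about $284 million a year at the assumed tariff, while 4.4 TWh clears $500 million.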
The economic consequences follow. Vendors pass rising energy and infrastructure costs into subscription tiers and enterprise licences. Services that were once free now charge monthly fees; enterprise seats climb into triple digits per user per month. Elasticity then constrains diffusion: households cancel first, small firms postpone upgrades, and large firms ration access to high-value staff. Doubling a plan from $60 to $120 per user doubles the annual cost of a 2,500-seat rollout – enough to halt adoption. Pricing thus becomes the gatekeeper of participation.
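The seat-pricing arithmetic in the rollout example is simple but worth making explicit, since it is the mechanism by which per-user price rises halt enterprise adoption. The seat count and price points below are the article's hypothetical figures.

```python
# Illustrative enterprise-rollout cost, using the article's hypothetical
# example: 2,500 seats, plan price doubling from $60 to $120 per user/month.

def annual_rollout_cost(seats: int, price_per_user_month: float) -> float:
    """Total annual licence cost for a flat per-seat subscription."""
    return seats * price_per_user_month * 12

before = annual_rollout_cost(2500, 60)    # $1.8M per year
after = annual_rollout_cost(2500, 120)    # $3.6M per year
print(f"Incremental cost of the price rise: ${after - before:,.0f}/yr")
```

An extra $1.8 million a year for the same seats, with no new capability attached, is exactly the kind of line item that gets a rollout frozen at budget review.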
If energy and infrastructure pressures persist, access to AI could stratify along income lines. High-value users will retain premium access, while households and small businesses will confront rising costs and falling inclusion. The same technology designed to democratise intelligence could instead amplify inequality.
Governments are beginning to see the tension. Regulators in Ireland and the UK have warned that hyperscale loads may strain grid stability and water availability⁶. In North America, utilities are negotiating multi-gigawatt interconnections for AI campuses, reordering substation queues, and raising tariffs to fund capacity expansion. The physical footprint of digital intelligence is now visible in power and water policy.
If AI continues along this path, its limiting factor may not be intelligence but infrastructure – the capacity of societies, grids, and ecosystems to sustain it. The future of progress will depend less on model size than on energy discipline, and less on speed than on who can afford a seat at the table.
Strategic dependence and the Global South
The geography of AI is beginning to consolidate into a few powerful industrial and digital blocs. Each model, fabrication plant, and data centre depends on long supply chains of metals, minerals, hydrocarbons, and water. Semiconductor manufacturing alone consumes millions of litres of ultra-pure water daily, while hyperscale facilities rely on continuous electricity and cooling. As these dependencies deepen, cloud and AI ecosystems are evolving into geopolitical infrastructure. Nations may soon find themselves not just clients but tenants within digital empires.
Platform and hardware concentration reinforce this dependence. Control of AI infrastructure is consolidating across three layers: hardware vendors such as NVIDIA, AMD, and Intel, and their manufacturing partners TSMC, Samsung, and ASML; cloud and compute providers including AWS, Microsoft Azure, Google Cloud, Alibaba Cloud, and Huawei Cloud; and AI software and service vendors such as OpenAI, Anthropic, Google DeepMind, and Baidu. Each layer defines its own architectures, software toolchains, and APIs — from chip instruction sets and CUDA frameworks to model endpoints and developer platforms. Together, these vertically integrated ecosystems are technically efficient but strategically restrictive.
A further divide is emerging between AI development philosophies. Western firms are converging on closed, subscription-based models whose access and pricing are controlled through proprietary APIs and licensing. China, by contrast, is pursuing a hybrid path that favours open-weight, open-data models to encourage local adaptation and reduce dependency on foreign intellectual property. Unfortunately, openness does not equal independence. Even open models rely on advanced hardware, data centre capacity, and continuous power, tying them back to the same global supply chains. For emerging and developing economies, lower-cost open-source systems may prove more accessible, but their efficiency and energy intensity will ultimately determine whether they enable self-reliance or entrench new dependencies.
Software and models built on these proprietary foundations are costly to migrate, binding nations and firms to specific hardware–software stacks that shape not only commerce but also governance and security. As these digital borders harden, control of computation — from silicon to software — may begin to determine political autonomy as much as control of energy or territory.
The Global South faces sharper exposure. Many developing economies built their growth on labour-intensive exports and outsourced services that translated global demand into domestic employment. As AI automates manufacturing, voice, document, and financial workflows, that bridge is narrowing⁷. The same hubs that rose through business-process outsourcing — from Manila to Nairobi — could see the demand that once fuelled their prosperity repatriated to automated systems in wealthier markets, and their routes to social mobility greatly diminished.
This technological shift can also enable a form of leapfrogging. AI-driven diagnostics, adaptive education platforms, predictive agriculture, and microgrid management tools offer ways to extend essential services without the legacy cost of traditional infrastructure⁸. A rural clinic equipped with diagnostic models can provide urban-quality care; predictive software can balance renewable microgrids where national transmission networks are still underdeveloped. Whether this transition strengthens autonomy or dependency will depend on who owns and governs the data and compute resources that underpin it.
Without deliberate policy, a new extractive order could emerge. When speech, health, and imagery data from the Global South are used to train models headquartered and monetised in the North, value again flows outward⁹. Analysts warn that such data colonialism risks replicating the resource asymmetries of the industrial era unless local capacity, data sovereignty, and benefit-sharing are protected⁹.
Several policy levers can alter this trajectory. Regional data-centre cooperatives, backed by development banks and climate finance, can anchor local sovereignty while providing affordable access to shared AI infrastructure¹⁰. Development strategies should evolve from roads and ports to digital public goods — open datasets, civic models, and local-language AI built under public-interest licences. Investment in digital education, public cloud infrastructure, and transparent data governance would allow the Global South to convert AI into a force for inclusion rather than dependency.
The Global South’s advantage lies in agility. Younger populations, fewer legacy systems, and flexible institutions make adaptation faster and cheaper. If digital capacity and ownership are shared fairly, automation can become a catalyst for equitable growth and proof that a participatory AI economy remains possible.
Structural and financial fragility
AI and automation rely on a tightly coupled industrial ecosystem. Semiconductors, batteries, advanced materials, and renewable infrastructure share overlapping supply chains that depend on copper, lithium, nickel, rare earth elements, and reliable grid access. When one node accelerates, it exerts pressure on the others. A shortage of high-purity silicon or power electronics slows both chip fabrication and clean-energy deployment. As supply chains lengthen and concentration rises, local shocks can transmit globally through cost, delay, and scarcity.
This interdependence links the digital economy to physical infrastructure in ways that traditional policy models rarely capture. Natural gas and oil remain essential for semiconductor manufacturing, chemical feedstocks, and shipping. Electricity demand is rising faster than renewable build-out in several regions, tightening margins for industrial users and pushing utilities to prioritise hyperscale customers. Water use for chipmaking and data-centre cooling competes with residential supply, especially in drought-prone regions. When resources tighten, the state often intervenes to secure domestic production or restrict exports. This process can turn commercial competition into strategic confrontation.
The financial structure of AI investment adds another layer of risk. A circular capital model is forming in which the same investors hold overlapping positions in chip design, cloud infrastructure, and software platforms. Valuations rise as each layer of the stack purchases capacity from the next, inflating asset prices without generating equivalent consumer income¹¹. When profitability depends on continued expansion rather than realised demand, the system becomes vulnerable to correction.
If current trajectories persist, capital could decouple from productive income streams. Revenues increasingly derive from cross-subsidised or speculative reinvestment rather than broad-based consumption. In this circular structure, hardware producers rely on hyperscalers’ orders; hyperscalers rely on software demand; and software revenues rely on enterprises projecting future productivity gains rather than realised savings. When any layer slows, the contraction propagates through the others.
The result is a form of systemic exposure that resembles the housing and energy markets before 2008, but distributed across data centres, fabs, and compute markets. Analysts estimate that the combined interlocking exposure among the leading hyperscalers, foundries, and AI software firms already extends into hundreds of billions of dollars¹¹. Because much of this investment is debt-funded or subsidy-backed, a downturn could leave governments with stranded assets and underused capacity.
Western industrial policy compounds the risk. Subsidies and tax credits encourage domestic capacity expansion as a hedge against geopolitical uncertainty. In aggregate, these programmes create a synchronised build-out of similar assets across economies that share the same demand ceiling. When demand underperforms, capacity remains idle while public guarantees remain active. Governments absorb the downside, privatising gains and socialising risk.
Mitigating this requires both transparency and coordination. Disclosure standards for utilisation, embodied carbon, and water use can align industrial incentives with environmental and fiscal prudence. Linking subsidy eligibility to capacity utilisation and demand evidence would prevent overbuild and strengthen public accountability. Cross-border reporting under the OECD or WTO could reduce the incentive for capital flight by establishing common standards for investment disclosure¹².
The structural fragility of the current model lies in its asymmetry. Supply is global, capital is mobile, but demand remains national and wage-based. Unless the income side of the equation is reinforced, the system risks constructing a vast, energy-intensive infrastructure that few can afford to use.
Reconnecting income
The risk of a Player Piano economy can be reduced if policy reconnects productivity growth with purchasing power. The goal is not to restrain automation but to ensure its social foundation. A phased policy approach can rebuild the link between efficiency and inclusion while keeping adoption broad.
A central component is a capital-based levy on automation equipment that directly substitutes for labour. This would apply to robots, automated tools, and AI accelerators used primarily for in-house services rather than for creating new products or markets. Verification would rely on anonymised payroll and asset data, tracking employment trends within sites over time. If a facility’s payroll declined while its automation asset base increased, the proportion of displaced labour could be estimated. The levy would then apply to the depreciable value of that qualifying equipment.
For illustration, a fulfilment centre installing £50 million of automation that reduces payroll by 15 per cent could pay a 2 per cent levy, raising £1 million annually for regional retraining and wage-insurance schemes. If a mid-sized economy purchased £20 billion of eligible equipment per year and 60 per cent met the displacement test, a 2 per cent levy could raise around £240 million annually for several years, scaling to low billions as new cohorts are added¹³.
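The levy mechanics described above reduce to a simple rule: equipment is eligible, payroll fell, so a rate applies to depreciable value. The sketch below encodes that rule under the article's hypothetical parameters (a 2 per cent rate and the worked figures); the threshold and function are illustrative, not a description of any enacted policy.

```python
# Hedged sketch of the illustrative automation-levy arithmetic above.
# Rates, thresholds, and figures follow the article's hypothetical example.

def levy_due(equipment_value: float,
             payroll_change_pct: float,
             levy_rate: float = 0.02) -> float:
    """Levy on depreciable automation assets at sites where payroll fell.
    Sites whose payroll held steady or grew (augmenting automation) are
    exempt, matching the exemption logic described in the text."""
    if payroll_change_pct < 0.0:
        return equipment_value * levy_rate
    return 0.0

# Single fulfilment centre: £50M of automation, payroll down 15%.
print(f"Site levy: £{levy_due(50e6, -0.15):,.0f}")       # £1,000,000

# Economy-wide: £20B of eligible purchases, 60% meeting the displacement test.
print(f"Annual revenue: £{20e9 * 0.60 * 0.02:,.0f}")     # £240,000,000
```

The design choice worth noting is that the levy keys off the combination of asset growth and payroll decline at the same site, not off automation purchases alone, which is what keeps augmenting investment outside its scope.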
To maintain investment incentives, the levy should exclude automation that demonstrably augments labour or creates high-skill maintenance, programming, or safety roles. Firms would qualify for partial or full exemptions by proving that new roles were added at equivalent wage levels. The policy could begin with voluntary reporting under an international framework, moving to a binding levy once consistent data exist. This phased rollout would limit initial resistance and allow refinement before full adoption.
International coordination is essential. Without alignment, firms could shift automation assets to lower-tax jurisdictions or reclassify equipment as software services to avoid the levy. A harmonised reporting standard through the OECD or WTO could define eligible equipment, proof of displacement, and criteria for exemptions. This would create a level field while discouraging jurisdictional shopping.
Revenue from the levy should be directed exclusively towards the demand side. Temporary wage insurance can support workers during retraining or redeployment, while vouchers tied to accredited programmes can fund transition into roles in operations, maintenance, metrology, or grid management. Local energy or transport credits can offset the increased resource costs that automation may impose on communities. These mechanisms would distribute gains from capital investment across the wider economy, sustaining consumption and political consent.
A complementary measure would be a utility-style access tier for foundational AI. Publicly supported providers would guarantee a baseline level of inference and fine-tuning capacity at fair, published prices. Non-discriminatory access for small firms, researchers, and public services would convert idle or underused capacity into steady utilisation. Tariffs linked to local energy and water costs would encourage efficient operation, while premium and proprietary tiers would remain outside the scheme to preserve commercial incentives.
Public procurement can also reinforce resource discipline. Governments that contract AI services for healthcare, education, or transport can require metered, per-use pricing aligned with verified energy and water consumption. Vendors unable to meet reporting thresholds would need to reprice or improve efficiency. This would normalise transparency and prevent hidden cross-subsidies while accelerating cost reduction for private buyers.
Grid and water management should evolve from subsidy to accountability. Large interconnections for data centres and automated plants can be tied to co-investment in local energy storage or behind-the-meter renewables. Tax credits and planning permissions could depend on demonstrated utilisation rates, with clawbacks for extended underperformance. Water pricing can include scarcity multipliers and incentives for recycled or non-potable sources, with revenues used to reduce residential bills or fund conservation.
Each of these tools operates on the same principle: converting the benefits of automation into sustained purchasing power without constraining innovation. By pricing displacement and resource use transparently, governments can stabilise demand while keeping markets open. The purpose is not to tax progress but to ensure that progress remains socially affordable.
Competition, ownership, and new demand
Reconnecting income and innovation also depends on structural reforms to market concentration and ownership. A system in which a few firms control the majority of compute, models, and data cannot sustain inclusive growth. Competition policy, public investment, and new demand creation form the longer-term levers for economic balance.
Competition policy should evolve to reflect the realities of AI ecosystems. Traditional antitrust frameworks focus on price and consumer welfare, but the decisive variable in AI is access. Mandatory open API standards for foundational models and data infrastructure can reduce switching costs and prevent technical lock-in. Transparency requirements on model architecture, training data provenance, and interoperability can prevent dominant firms from closing downstream markets. Mergers that integrate chip design, cloud hosting, and model operation should face stricter scrutiny to prevent vertical consolidation¹⁴.
Sovereign wealth funds and public investment agencies can reinforce economic resilience by taking minority equity positions in critical AI infrastructure. This approach would allow citizens to share directly in the financial returns generated by automation. By capturing a portion of the upside from compute, data, and platform profits, governments can offset the fiscal costs of subsidies while expanding public ownership of strategic assets¹⁵. Funds can be structured to avoid interference in management while ensuring that a share of long-term value flows back to taxpayers.
Broader ownership also supports political legitimacy. When citizens see tangible benefits from technological change, social resistance diminishes, and diffusion accelerates. This form of public participation can stabilise the political foundations of industrial policy while retaining the efficiency of market allocation.
The rebalancing of income also requires new demand sources that resist automation. AI will displace certain forms of routine work but can create markets for inherently human services. Hyper-personalised healthcare, bespoke education, advanced tutoring, and creative design services combine machine efficiency with human interpretation and empathy. These activities are difficult to automate because they rely on complex social interaction, emotional intelligence, and unstructured problem-solving.
As automation expands, time itself becomes a valuable resource. A gradual rise in leisure, flexible work, and lifelong education can generate new consumption patterns. The “leisure dividend” – where time saved through automation converts into demand for travel, cultural experience, and community participation – may offset part of the demand loss from reduced employment. Policies that support shorter working weeks or portable benefits could accelerate this transition while maintaining income stability¹⁶.
AI can also generate indirect demand through cost reduction. Automation in healthcare, education, and transport can lower essential living costs, effectively increasing real purchasing power even when wages stagnate. If properly governed, these productivity gains could function as a quiet redistribution, improving welfare without direct transfer.
The aim is to use competition, ownership, and human-centred innovation to transform automation from a narrow capital multiplier into a broad welfare multiplier. Market openness, distributed returns, and new sectors for creative and relational work can make technological progress both inclusive and durable.
The test of participation
The balance between automation and inclusion will determine whether the next productivity wave strengthens prosperity or fragments it. Efficiency is valuable only when it serves participation. A capital-intensive production system that suppresses wage income also undermines the consumption it relies upon. When technology amplifies this asymmetry, politics eventually reflects it.
The twentieth century demonstrated that broad participation in productivity gains stabilised both economies and democracies. The present challenge is to renew that social contract in an era where machines learn faster than labour markets can adjust. If automation proceeds without a mechanism to share returns, the resulting economic polarisation could fuel disillusionment, populism, and withdrawal from civic trust. The loss would be not only economic but institutional¹⁷.
The proposed policy framework – the capital-based levy, the fair-access utility tier, transparent infrastructure governance, and public ownership stakes – is designed to preserve that trust. Each element links innovation to inclusion and efficiency to equity. These are not instruments of control but of insurance, designed to keep participation viable as automation accelerates.
Phased implementation can make such reforms politically feasible. Beginning with voluntary reporting and international coordination before introducing binding levies and open-access rules allows systems to mature without abrupt disruption. Transparency in utilisation and public-benefit reporting builds credibility, while shared international standards reduce the scope for evasion or regulatory competition¹⁸.
The political reality is that vested interests will resist these shifts. Major technology firms possess significant lobbying power, cross-border mobility, and the capacity to frame debates around innovation and competitiveness. Addressing this requires democratic persistence and coalitions across governments, civil society, and labour organisations. The public case must rest on sustainability, fairness, and long-term resilience, not on protectionism or nostalgia.
Participation also requires imagination. The future of work will not simply replicate past forms of employment. Human capability can move toward coordination, care, interpretation, and creativity – areas that combine empathy and context with technological fluency. Policies that support lifelong education, vocational transition, and shared digital infrastructure can convert uncertainty into opportunity.
The choice is not between automation and employment, but between concentrated and distributed prosperity. Nations that integrate compute, communication, and energy as public infrastructure will be better placed to diffuse capability widely and maintain social stability¹⁹. Where access remains narrow, technology may accelerate fragmentation instead of progress.
Vonnegut’s warning endures because it was moral as well as economic. He depicted a world that had perfected production but forgotten people. A Player Piano economy becomes real when capacity substitutes for consent and control replaces participation. The test for governments, firms, and citizens is whether they can restore that balance before abundance loses its meaning²⁰.
If policy reconnects productivity with purpose and innovation with inclusion, the machines can play beautifully – not in isolation, but as part of a shared composition in which every citizen still has a seat and a stake.
illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers; their opinions do not necessarily represent those of illuminem.
References
¹ International Energy Agency (IEA), Electricity 2024: Analysis and Forecast to 2026, Paris, 2024.
² McKinsey Global Institute, China’s Next Chapter: From Property to Productivity, Shanghai, 2024.
³ National Bureau of Statistics of China (NBS), Statistical Communiqué on National Economic and Social Development 2025, Beijing, 2025.
⁴ European Commission, EU Chips Act Implementation Progress Report, Brussels, 2025.
⁵ International Energy Agency (IEA), Data Centres and Data Transmission Networks: Analysis and Outlook to 2030, Paris, 2025.
⁶ UK Department for Energy Security and Net Zero, Electricity Networks Strategic Plan 2024–2030, London, 2024.
⁷ Organisation for Economic Co-operation and Development (OECD), Global Value Chains and the Future of Services Trade, Paris, 2025.
⁸ United Nations Development Programme (UNDP), AI for Inclusion: Policy Pathways for the Global South, New York, 2025.
⁹ Couldry, N. and Mejias, U. (2023), Data Colonialism: Reclaiming Our Data Futures, Oxford University Press, Oxford.
¹⁰ World Bank, Digital Development Overview: Building Resilient Data Economies in Africa and Asia, Washington D.C., 2025.
¹¹ International Monetary Fund (IMF), Tech Finance and Systemic Risk: Assessing AI and Semiconductor Exposure, Washington D.C., 2025.
¹² Organisation for Economic Co-operation and Development (OECD), AI Markets and Industrial Subsidies: Transparency and Coordination Framework, Paris, 2024.
¹³ International Labour Organization (ILO), The Future of Work 2025: Skills, Transitions, and Digital Labour Markets, Geneva, 2025.
¹⁴ UK Competition and Markets Authority (CMA), Foundational Models and Digital Market Power: Interim Report, London, 2025.
¹⁵ Sovereign Wealth Fund Institute (SWFI), Public Investment Strategies for Strategic Technology Infrastructure, Geneva, 2024.
¹⁶ World Economic Forum (WEF), Rebalancing Growth: The New Demand Economy, Davos, 2025.
¹⁷ Organisation for Economic Co-operation and Development (OECD), Technology, Inequality, and Participation: Policy Toolkit for Inclusive Automation, Paris, 2025.
¹⁸ International Energy Agency (IEA), AI and Energy Demand: Policy Scenarios for a Digital Economy, Paris, 2025.
¹⁹ United Nations Conference on Trade and Development (UNCTAD), Digital Sovereignty and the Future of Global Trade, Geneva, 2025.
²⁰ von der Leyen, U., State of the Union Address: A Competitive and Inclusive Digital Europe, European Parliament, Strasbourg, 2025.