Understanding the true future of artificial intelligence is not just an intellectual exercise; it is a strategic imperative in a turbulent moment characterised by global upheavals spanning geopolitics, economies, sustainability, and crucial resource pivots such as oil and water. As power dynamics shift, trade rivalries intensify, and climate change demands immediate adaptation, the development and application of AI will significantly shape national resilience, economic competitiveness, and our collective capacity to confront existential threats. This post seeks to dispel the widespread hyperbole about AI, especially the obsession with "scale," and to offer a balanced viewpoint on the technology's actual possibilities and drawbacks.
In light of these global shifts, this article provides a critical analysis of how AI is being shaped, who stands to benefit, and why a human-centric, sustainable approach to its development is crucial. It does this by drawing on insights from both the technologically advanced West and the quickly industrialising East.
The scale conundrum: A necessity or an overindulgence?
The dominant narrative in the development of artificial intelligence fervently promotes "scale": increasingly massive models, enormous datasets, and unprecedented compute power. Large Language Models (LLMs), which contain billions or even trillions of parameters and exhibit what many refer to as "emergent abilities," have revolutionised expectations. Greater generalisation, improved performance across a variety of tasks, and the ability to solve previously intractable problems are all anticipated outcomes of this unrelenting quest for gigantism. It is an enticing image of omniscient intelligence, and it is fuelling an unparalleled global arms race in model architecture and compute infrastructure.
As an advocate of sustainable growth and green computing, I must, however, wonder if this exclusive emphasis on sheer scale is really the most effective, just, or even wise course of action. This quest has an extremely high price tag:
• Astronomical computing expenses: The enormous financial commitment required to train and operate these models is largely accessible only to well-funded nations and tech firms.
• Colossal energy consumption and carbon footprint: The energy needed just to train AI models can be comparable to what small nations use annually. Global sustainability goals are directly at odds with this, which presents a serious environmental dilemma.
• Concentration of power: The exorbitant expenses concentrate the development and management of AI in the hands of a small number of powerful organisations, potentially leading to monopolies and escalating digital inequality.
• Black-box issues: Even their creators cannot understand the decision-making processes of many complex models, which remain opaque. There are significant issues with this lack of interpretability and transparency, especially in delicate applications like healthcare, banking, or the legal system.
• Bias amplification: These algorithms, which are trained on large, frequently unfiltered datasets that represent historical prejudices, societal injustices, and human frailties, inevitably pick up on and magnify these biases, thereby sustaining and even institutionalising discrimination.
The allure of bigger, faster, more powerful AI models is undeniable, but we must ask if this insatiable hunger for scale is truly leading us to smarter, more sustainable, and more equitable intelligence, or simply to larger, more opaque energy guzzlers.
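To make the energy concern concrete, a back-of-envelope estimate helps. Every figure below is an illustrative assumption for the sake of the arithmetic, not a measurement of any particular model or data centre:

```python
# Back-of-envelope estimate of training energy and emissions.
# All inputs are assumed, illustrative values.
gpus = 10_000              # accelerators used for the training run (assumed)
power_kw = 0.7             # average draw per accelerator in kW (assumed)
hours = 90 * 24            # a hypothetical 90-day training run
pue = 1.2                  # data-centre power usage effectiveness (assumed)
grid_kg_co2_per_kwh = 0.4  # grid carbon intensity in kgCO2/kWh (assumed)

energy_kwh = gpus * power_kw * hours * pue
energy_mwh = energy_kwh / 1000
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"{energy_mwh:,.0f} MWh, {co2_tonnes:,.0f} tonnes CO2")
```

Under these assumptions the single run consumes on the order of eighteen thousand MWh, roughly the annual electricity use of several thousand European households, which is the scale of consumption that the "small nations" comparison above alludes to.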
According to developmental economics, the "pure scale" strategy inherently raises the entry hurdle for developing countries. Large sums of money, sophisticated infrastructure, and specialised personnel are required, all of which are sometimes lacking in emerging nations. This has the potential to increase the technology divide and turn sophisticated AI into a luxury rather than a means of achieving broad social progress.
Table 1: Scale vs. alternatives - a spectrum of AI development
| Feature | Pure scale approach (e.g., GPT-4) | Alternative approaches (e.g., SLMs, neuromorphic, symbolic) | Implications for developmental economics |
|---|---|---|---|
| Model size | Billions to trillions of parameters | Millions to billions (or radically different architectures) | Exclusionary: high barrier to entry for smaller economies. |
| Data needs | Petabytes of diverse, unfiltered data | Smaller, domain-specific, curated datasets; data-efficient learning | Inclusive: localized data can be leveraged more effectively. |
| Compute needs | Massive, cutting-edge GPU clusters | Efficient, specialized hardware (e.g., neuromorphic chips), CPUs | Barrier to entry: requires immense capital investment. |
| Energy footprint | Extremely high | Significantly lower | Sustainability challenge: exacerbates energy poverty/inequality. |
| Interpretability | Low (black box) | Higher (especially symbolic AI) | Trust deficit: limits adoption in critical sectors (e.g., governance). |
| Cost | Extremely high (training & inference) | Significantly lower | Accessibility: more affordable for developing nations. |
| Primary goal | Generalization, emergent abilities, AGI pursuit | Efficiency, specialization, domain expertise, sustainability | Relevance: tailored solutions for specific local challenges. |
The table underscores the inherent unsustainability and exclusivity of an AI paradigm solely focused on scale. This has catalysed interest in alternative approaches:
• Small language models (SLMs): SLMs offer advantages in cost-effectiveness, speed, edge deployment (operating on local devices), enhanced security (less data leakage), and the capacity to be trained on smaller, domain-specific datasets for tailored, precise solutions. They also demonstrate significant capabilities with a remarkably reduced number of parameters (e.g., Microsoft's Phi models).
• Neuromorphic computing: This innovative method co-locates memory and computation to replicate the energy efficiency of the human brain. It offers advances in real-time, low-power AI, which is essential for sustainable edge AI, by utilising spiking neural networks (SNNs).
• Symbolic AI and hybrid neuro-symbolic systems: Rule-based inference, logic, and knowledge representation are the main focuses of these methods. For credibility in crucial fields like healthcare and finance, they provide the interpretability and reasoning skills that big neural networks frequently lack. The revival is most visible in hybrid systems that integrate deep learning's pattern recognition with symbolic reasoning.
• Data-efficient AI: By lowering the enormous amounts of data needed for training, methods like few-shot learning and active learning seek to make AI more approachable and resource-friendly.
In line with the requirements and capabilities of a larger global society, these options mark a hopeful transition towards AI that is more effective, specialised, and accessible.
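To give a flavour of why neuromorphic designs can be so frugal, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of many spiking neural networks. This is a toy illustration with assumed parameter values, not any vendor's neuromorphic API: the key property is that the neuron produces output events only when its membrane potential crosses a threshold, so downstream activity scales with spikes rather than with every clock tick.

```python
def simulate_lif(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of input currents.

    The membrane potential v leaks towards zero with time constant tau while
    integrating the input; whenever v crosses v_thresh a spike (1) is emitted
    and v resets. All parameter values here are illustrative.
    """
    v = v_reset
    spikes = []
    for current in inputs:
        v += dt * (-v / tau + current)   # leak term plus input integration
        if v >= v_thresh:
            spikes.append(1)             # event: the neuron fires
            v = v_reset
        else:
            spikes.append(0)             # no event: nothing for downstream units to do
    return spikes
```

Because computation (and hence energy) is spent only on spike events, a mostly quiet network does almost no work, which is the intuition behind the low-power claims made for SNN hardware, in contrast to dense matrix multiplications that run at full cost on every input.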
East Meets West: Divergent paths to AI dominance
Due to different economic, political, and cultural demands, the Western and Asian hemispheres' divergent philosophical and strategic stances are increasingly influencing the global AI scene.
Table 2: Western vs. Asian AI development philosophy
| Feature | Western approach (e.g., USA, EU) | Asian approach (e.g., China, Japan, South Korea) | Developmental economics lens |
|---|---|---|---|
| Core focus | Foundational models, AGI pursuit, research freedom | Application-specific AI, industrial integration, national strategy | West: global solutions, often without local context. East: targeted solutions, immediate impact. |
| Driving force | Venture capital, academic research, private sector innovation | Strong government support, state-led investment, industrial policy | West: market-driven. East: state-driven, potential for top-down efficiency. |
| Data strategy | Diverse public/private datasets, often large and generalized | Large population data, industry-specific data, government-led data initiatives | West: data privacy often paramount. East: data utility often prioritised for national goals. |
| Regulatory stance | Comprehensive, ethics-driven (EU AI Act); "light touch" (US, UK) | Agile, "soft-law," innovation-first (Japan); strong state guidance (China) | West: focus on rights/risk. East: focus on practical deployment/growth. |
| Open source | Strong emphasis and community, driving collaboration | Growing engagement (e.g., China's DeepSeek), often pragmatic | West: democratizing access. East: strategic tool for national competitiveness. |
| Talent pool | Top-tier researchers, strong academic heritage | Rapidly growing, large talent pool (especially China) | West: niche expertise. East: mass-scale training, rapid skill deployment. |
The desire for universal, foundational models is a defining feature of the Western approach, which is exemplified by the US and Europe. Businesses like OpenAI, Google, and Meta make significant investments in developing general-purpose AI systems that can comprehend and produce content in a variety of fields. This approach is supported by a thriving ecosystem of venture capital, a culture of academic freedom that encourages speculative research, frequently with the distant goal of artificial general intelligence (AGI), and a strong emphasis on open-source collaboration, even though it frequently involves proprietary underlying technologies.
Europe, in particular, places a high priority on "responsible AI," creating extensive legal frameworks such as the EU AI Act to guarantee ethical development and application that reflects deeply ingrained social norms on human rights and privacy. The US still prioritises trustworthiness and encourages private sector innovation, notwithstanding its preference for a more relaxed regulatory approach.
On the other hand, the Asian approach is noticeably more application-centric and pragmatic, especially when spearheaded by China, Japan, and South Korea. China's state-led programs leverage its manufacturing expertise for robots, smart cities, and integrating AI into conventional areas like healthcare and agriculture. These projects pour enormous investment (e.g., an estimated $100 billion AI investment) into AI for industrial transformation. Rapid deployment is prioritised over broad regulatory principles at first, with a concentration on immediate economic growth and national security. A strategy that emphasises experimentation and agility to speed up market adoption is exemplified by Japan's "Society 5.0" vision and its "innovation-first" "soft-law" approach. In order to foster a robust talent pool and support national innovation and defence, South Korea also prioritises AI.
Although worries about data privacy and government supervision may surface, this regional approach frequently benefits from large population statistics and a willingness to deeply integrate AI into enterprises and public services.
The race for innovation and time to market: Who gains?
The Western approach, with its open-source culture, robust venture capital funding for speculative ventures, and academic freedom, may be more conducive to ground-breaking theoretical advancements and the investigation of AI's ultimate frontiers when it comes to pure innovation and fundamental research. Even if the goal of AGI is ambitious, it has the potential to push the limits of fundamental research and provide findings that could not have immediate commercial implications but could set the stage for future innovations. Emphasising open-source ecosystems can promote broader collaboration, democratise access, and speed up the discovery of vulnerabilities and novel applications through collective intelligence.
However, the Asian model, especially in nations like China, seems to offer a major advantage for applied innovation and time to market. Large-scale industrial integration and quick deployment are made possible by their robust government support, which is frequently accompanied by sizable subsidies and national regulations. The emphasis on useful, industry-specific solutions ensures that AI technologies are created with the demands of the market in mind, resulting in quicker iterations and practical application. The sheer volume of skilled individuals graduating from AI-related programs in China, along with possibly laxer initial regulatory requirements (in contrast to the EU), results in faster product cycles and broader acceptance.
"As the West promotes the lofty ideals of artificial intelligence, the East is frequently occupied with implementing AI and creating practical solutions that affect millions of people every day." According to a global technology transfer analyst, "this practical approach, supported by strategic national directives, often cuts through bureaucratic red tape and accelerates market penetration."
The Asian approach offers a more flexible roadmap for developing economies since it places a strong emphasis on application-specific, problem-solving AI and frequently makes use of government-led infrastructure and data efforts. It illustrates how AI can be used to achieve concrete development goals, such as raising industrial efficiency, increasing agricultural productivity, or improving public services, rather than being limited to very abstract technological marvels seen only in cutting-edge research labs. Emerging nations' developmental priorities are more closely aligned with this emphasis on quick, practical impact.
AI: Not a silver bullet, but an augmentation of humanity
Recognising a basic fact is essential: artificial intelligence is not and never will be a panacea for all of humanity's intricate issues. AI is still only a tool, a clever by-product of human creativity, despite the widespread hype and utopian tales. Since it is fundamentally human-made, it naturally reflects our prejudices, anxieties, goals, and, in fact, all the positive and negative characteristics that make up the human condition.
Humans were not created with perfection. However, a defining feature of our evolution is our quest for perfection. The perception of AI's progress must be significantly influenced by this basic reality about our frail nature. AI systems will always pick up on and magnify human errors if they are taught on data that reflects historical biases, societal injustices, or poor human judgment. This is a significant reflection of the AI's history and the data it uses, not a failure of the AI per se.
"AI serves as a mirror reflecting mankind. In the end, what we see mirrored back — the genius, the prejudices, the capacity for both good and harm — is a reflection of who we are and the information we provide it." In conversations regarding ethical AI, I've frequently stated that "to expect perfection from AI is to ignore the inherent imperfections of its creators."
Consequently, a significant shift in perspective on the advancement of AI is necessary. Instead of seeing AI as a self-sufficient, perfect system that will "solve" every problem, we need to see it as a potent extension of human potential. By processing information more quickly, spotting patterns we might overlook, and automating repetitive chores, technology can aid us in reaching new heights in our own evolutionary journey and free up human intellect for higher-order thinking, creativity, and empathy. The focus should shift from creating "perfect" AI to creating "responsible" AI that complements human intelligence while recognising its intrinsic limitations and our common goal of progress.
This reframing is essential to guaranteeing that AI development stays consistent with human values and advances the more general objectives of sustainable and equitable development.
Heading towards another bubble? A crystal ball glimpse
There are unsettling similarities between the present fervour surrounding AI and past tech bubbles, particularly the dot-com boom of the late 1990s. We are seeing a fierce focus on technical metrics and user acquisition over core business models, venture capital highly concentrated in a few well-hyped areas, and sky-high valuations for AI start-ups, frequently predicated on future promise rather than current profitability. The explosive growth of firms like NVIDIA, whose market value has surged on unparalleled demand for its chips, is crucial to AI's development but may also be viewed as a prime example of this concentrated, speculative zeal.
Table 3: Signs of a potential AI bubble
| Characteristic | Dot-com bubble (late 1990s) | Current AI boom (2020s) | Risk for sustainable development |
|---|---|---|---|
| Valuations | Astronomical, often for unprofitable companies | Sky-high, especially for foundational model and chip companies | Misallocation of capital away from truly impactful, sustainable AI. |
| Investment focus | Internet connectivity, e-commerce | Foundational models, LLMs, AI infrastructure (chips) | Overemphasis on 'general' AI, neglecting domain-specific, local needs. |
| Market narrative | "Internet changes everything," "new economy" | "AI changes everything," "AGI is near," "new industrial revolution" | Unrealistic expectations, leading to 'AI washing' and greenwashing. |
| Concentration | Capital flowed into a few well-known internet firms | Capital highly concentrated in a few dominant AI players (e.g., OpenAI, Anthropic, NVIDIA) | Monopolization, hindering diverse innovation and equitable access. |
| Public hype | Intense media frenzy, retail investor speculation | Widespread media coverage, celebrity endorsements, rapid adoption by early adopters | Creates 'fear of missing out,' leading to unwise investments. |
Despite the obvious similarities, there is one important difference: today's top AI businesses possess genuinely transformative technology with practical applications that are already producing substantial value, in contrast to many dot-com ventures that lacked sustainable business models. Search engines, healthcare diagnostics, logistics, the creative industries, and countless other sectors have already incorporated AI extensively, demonstrating its ability to meet real business demands and yield measurable productivity benefits. Demand for artificial intelligence is high because its benefits have been demonstrated.
The "bubble" risk, however, is in the assessment of AI's immediate potential rather than its inherent usefulness. The market may be inflating the value of companies beyond their long-term sustainable growth paths, or it may be too optimistic about how quickly and easily profits will materialise. Instead of a full-blown, systemic bubble that is poised to pop like the one that occurred in 2000, we are probably in an "AI boom" with isolated instances of speculative overvaluation. However, as technology advances and investment becomes more sensible, it is very likely that some segments may experience a correction or a "trough of disillusionment" (according to Gartner's Hype Cycle). Legislators and prudent investors will distinguish between true, sustainable innovation and speculative zeal.
Crystal ball: Convergence or divergence?
Future developments in AI are expected to follow both divergent and convergent routes, influenced by the intricate interactions of economic pressures, geopolitical strategy, and technological advancement.
Divergence: We expect regulatory frameworks to continue to differ, especially between the EU's comprehensive, rights-based approach and the more flexible, innovation-focused models common in parts of Asia. This could produce a fragmented global AI environment in which models and applications created under one legal regime may not function, or be accepted, in another. Furthermore, strategic competition between countries, particularly the US and China, will probably foster distinct, potentially incompatible AI ecosystems motivated by concerns about technological sovereignty and national security. From a developmental economics perspective, this divergence might worsen digital inequalities, leaving some countries further behind as access to cutting-edge AI comes to depend on geopolitical relationships and particular technology stacks.
Convergence: Ironically, strong forces will propel convergence as well. Common standards and interoperability will be pushed for by the intrinsically global nature of data and talent, the universal appeal of effective and potent AI tools, and the growing awareness of common global challenges (such as pandemics, climate change, and sustainable development, for which AI is an essential tool). The efficiency advantages of smaller, specialised AI models will become too strong to ignore due to mounting economic and environmental pressures, possibly moving the emphasis from a pure scale race to "AI efficiency" and "sustainable AI," concepts I fervently support through my work with the Green Computing Foundation.
As best practices are shared and modified, hybrid approaches that combine the finest aspects of Asian (application, quick deployment) and Western (generalisation, study depth) strategies are likely to develop. Global norms and responsible governance will also be under more pressure as the world struggles with the ethical implications of AI. This might help close certain regulatory gaps and promote a more cohesive approach to AI ethics and safety.
From a sustainability and developmental economics perspective, a hopeful convergence would see the world moving towards:
• Decentralized and accessible AI: Fostering local entrepreneurship and innovation in underdeveloped nations as opposed to consolidating power in exclusionary, resource-intensive, hyper-scale models.
• AI for good: Giving priority to applications that directly meet the Sustainable Development Goals (SDGs) of the UN, such as precision agriculture, equitable healthcare delivery, climate modelling, and renewable energy optimisation.
• Responsible innovation: By including moral principles, equity, openness, and responsibility from the very beginning of the design process, AI will help all facets of society, not just those with greater financial or technological resources.
Conclusion: The human imperative for an intelligent future
It is imperative that the discourse surrounding artificial intelligence progresses beyond the mere quest for computational power. We are at a turning point in history when the decisions we make now will affect not just the technology environment of the future but also the fundamental structure of our civilisations. AI is a profound mirror of who we are, neither an omnipotent deity to be worshipped nor an alien intelligence to be feared. Its shortcomings are our shortcomings, its biases are our biases, and its capacity for good is evidence of our shared ambition.
The real test of our progress will not be the size of our AI models, but rather how well we incorporate our quest for perfection, a trait that is specific to humans and involves constant improvement, into their development and implementation. This entails realising that while AI, like its human creators, is always flawed, it also has a vast potential for growth and beneficial influence. Instead of giving in to the delusion that AI is a cold, unquestionable answer to all of our problems, might we use it as a potent augmentation and a cooperative partner in our pursuit of a more just, sustainable, and affluent world? Can we make sure that the drive for technological progress is in line with the need for both planetary and human well-being?
Our common dedication to purpose-driven innovation, international cooperation, and a shared humanistic vision for the intelligent future hold the key to the solution, not just server farms or silicon valleys. It involves creating AI with compassion, implementing it responsibly, and overseeing it with vision. My message as a thought leader in sustainability is clear: AI's future lies not just in what it can accomplish, but also in what it can do for us, led by our values and in support of a genuinely sustainable global evolution.
illuminem Voices is a democratic space presenting the thoughts and opinions of leading Sustainability & Energy writers, their opinions do not necessarily represent those of illuminem.