A(G)I, The Great Divergence, and Technological Lock-In
The Great Divergence
Proselytising boosters of AI and AGI often speak in civilisational terms. If we don’t maximally pursue the development of artificial intelligence, so claim American boosters, China will. And if we don’t maximally pursue the development of artificial intelligence, so claim Chinese boosters, America will. With civilisational victory comes civilisational expansion; the spread of the values these (supposed) intelligences embody. With loss comes subsumption; the death of the values these (supposed) intelligences oppose.
This framing bears a remarkable resemblance to the Great Divergence: the study of the technological and institutional bifurcation between primarily western nations — like the United Kingdom — and China in the eighteenth and nineteenth centuries. The technological, fiscal, and military might of the latter was superseded by the former’s industrialising prowess — culminating in a series of humiliating defeats that figuratively and literally picked the Chinese empire apart.
Explanations for the Great Divergence are manifold, typically the preserve of economic historians like Daron Acemoglu, Simon Johnson, and James Robinson — who jointly won the Nobel Prize in Economic Sciences last year in recognition of their research highlighting the importance of social institutions in economies’ developmental trajectories. The conclusions of Mark Elvin, an environmental historian who specialised in the history of China, pointed elsewhere: to the idea of technological lock-in.
Technological lock-in is a multivalent concept. Its applications stretch from QWERTY — the almost-universal layout of Latin-script keyboards — to the internal combustion engine; concerning itself with how the initial shape technological adoption takes predetermines and often constricts later innovation. In the Elvinian sense, technological lock-in has a greater relation to power and politics than it does to technology in itself. Elvin’s reasoning for the Great Divergence placed emphasis on post-Song and mid-Qing dynasty resource-intensive hydrological management systems.1
These systems, partly centring on Hangzhou Bay — which sits between Hangzhou, Shanghai, and Ningbo on the eastern coast of China — allocated and concentrated massive amounts of capital and labour to create and subsequently maintain hydrological infrastructure projects. This infrastructure altered the region’s — and empire’s — waterways to such an extent that new lifeways developed therearound. A network of constructed dams, dykes, and seawalls allowed for the conversion of more land to higher-yielding agriculture, as well as a more efficient water-based transport system. Both of these factors resulted in urban areas’ later expansion, too.
Crucially, the prosperity that this network enabled predetermined its continued maintenance. Expanded agricultural production and its accompanying developments legitimised political power — it was consequently the role of the politically powerful to ensure that converted land continued to be protected; to remain converted. Hydrological systems were thereby self-justifying. Regional and imperial prosperity depended thereon, so the capital and labour required for maintenance naturally followed.
This resulted in Elvinian technological lock-in: the resources required to maintain economic prosperity and political legitimacy were bound up, shut away from alternative (often technological) interventions that may have avoided the Divergence that later came. Investments in hydrological infrastructure, say, precluded greater investment in the Chinese empire’s naval fleet, or in the empire’s exploitation of coal resources. The opportunity cost was immense. Worse, it is not as if all potentialities were on the cards at all times — it is that other potentialities were not conceived of given the default direction capital and labour took. There was no reason to innovate. There was in fact a disincentive to innovate, as any rerouting of resources would undermine politico-economic power bases.

Pre-Technological Lock-In
AI boosters would likely point to Elvin’s thesis as a lesson to learn from; as a reason to accelerate the adoption and development of AI and AGI respectively. This is misguided. One could just as easily conclude that AI boosters are today’s hydrological engineers, and the leaders of big tech — as well as the leaders of nations and regions that seek to win this supposed race we are observing — those who draw power from the systems they themselves set out.2 The current rush of capital into data centre development, the funnelling of VC cash into AI-centred startups, and the outsize role AI-centric investment is playing in US (and global) economic growth should arouse our suspicions — when boosters draw their legitimacy and power from the continued success of whatever it is they boost, they will by default exaggerate to get their way.
This comparison has some rough edges. In the first instance, the difference between lock-in and the definite bubble we are at present experiencing has to be set out: bubbles don’t typically relate to the power of progenitors. They instead relate almost entirely to their financial wellbeing; to their wealth. Progenitors boost precisely because it is in their financial interest to do so. The bubbles of times past — whether Dot-com of this millennium, South Sea of the eighteenth century, or the tulip mania of the seventeenth — did not usually exhibit the potentially transformative hold artificial intelligence could have over knowledge, productive capacities, and the outputs of entire economies. Further to this, today’s progenitors are not advocating for the maintenance of a fixed technological state. They are instead advocating for a pre-technological state; for a specific (continually upward) trajectory.
Yet this pre-technological lock-in still exhibits characteristics of an actual technological lock-in: the inextricability of resource allocation and power; the (inefficient) concentration of capital and labour; and the opportunity cost, or rather the path dependency, of the technological future being envisioned. The $3tn being pumped into data centre infrastructure, and the hundreds of billions more that will relatedly be deployed to guarantee those data centres’ electricity supplies, is a fathomably unfathomable amount of capital that is not being allocated elsewhere. The ~$800bn needed annually until 2030 to electrify the world’s grid systems, for one, is hardly being met: as of the end of last year, only around a third of it was flowing.
To put that into perspective, $1.6tn of the $3tn being pumped into data centres by 2028 consists of GPU purchases — primarily from Nvidia — alone. If those purchases were redirected into grid electrification, targets would be met overnight. And that is not considering the hundreds of billions of VC dollars being funnelled into AI startups with increasingly detached valuations that rest on their eventual incestuous acquisition by big tech — who are themselves reinforcing each other’s lofty valuations by getting overly cosy with one another. Those dollars could and should instead be directed towards the hard work being done in decarbonisation and nature recovery; missions that are likely to deliver more consistent and stable planetary returns, and foster a climatic-ecological humanity-wide path-dependent divergence away from a less-than-stable climato- and bio-sphere.3
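To make the scale of this trade-off concrete, the figures quoted above can be run through a rough back-of-envelope calculation — a sketch only, using the essay’s own numbers ($800bn annual grid-electrification need, a third of it currently flowing, $1.6tn of GPU purchases) rather than any independent estimates:

```python
# Back-of-envelope arithmetic using the figures quoted in the text.
needed_per_year = 800e9        # ~$800bn needed annually for grid electrification
flowing = needed_per_year / 3  # "a third of this ... is all that's flowing"
shortfall = needed_per_year - flowing

gpu_spend = 1.6e12             # GPU purchases within the ~$3tn data centre build-out

# How many years of the annual electrification shortfall the GPU
# spend alone could notionally cover if redirected
years_covered = gpu_spend / shortfall

print(f"Annual shortfall: ${shortfall / 1e9:.0f}bn")
print(f"GPU spend alone covers ~{years_covered:.1f} years of that shortfall")
```

On these numbers, the GPU line item alone would close the electrification gap for roughly three years — which is the sense in which targets could be “met overnight”.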
Consequences
Rather than the need for power dictating technology, as it did in hydrological post-Song China, the need for technology is now dictating power. The desire to come out on top, both within the tech industry and between states and supposed civilisations, is dictating the relentless allocation of capital to potentially fruitless ends. In this sense, the pursuit of AI and AGI can be seen as a resource-intensive cementing of control, promising to bestow the ‘victor’ with imaginably unimaginable power.4
This may come at a cost. Much like China’s emphasis on hydrological infrastructure paid dividends in its early construction — partly enabling its relative dominance until the dawn of industrialisation — AI may well prove initially, or at some point, at least a little valuable. Its use cases are clear; its revolutionary potential, less so.5 Yet by obsessive-compulsively pursuing a singular technology and, at great opportunity cost, part-forgoing alternative investments, humanity as a whole as well as its individual states and ‘civilisations’ could be setting themselves up for an impoverished path-dependent future. And most importantly — whether that future is impoverished or not — we are further building up and consecrating systems of power that embed the interests of technology and its progenitors.
That power will in all likelihood be heightened as artificial intelligence embeds itself in military processes. Whether applied in coordinated, autonomous, drone-based conflict or in direct cyberwarfare, the potential to fall behind adversaries — or rather the fear of falling behind adversaries — is acute. Fear will drive adoption — and drive the non-adoption of alternative technologies, or of military pathways that avoid direct confrontation.
We are, though, reassuringly still pre-lock-in. Elvin’s diagnosis does not yet apply: capital in our economies is being (mis)allocated not to maintain a status quo but to attempt to build one. We can still avoid true lock-in. One can envision what this might look like: trillions spent on GPUs that depreciate by a third a year and require continual renewal and replacement. If the widespread deployment of AI and AI-derived technologies were to come about, and if it were to (part-)replace human labour, a future wherein entire economies become virtual vassals of GPU developers and manufacturers is possible — beholden to the likes of Nvidia and AMD. What might eventually be trillions per year spent on renewal and replacement alone would suck capital away from other, more noble, and perhaps higher-yielding alternatives. Instead of dams, dykes, and levees locking us in, it might well be CUDA.
That is an unhappy future, and one hopes that the current cycle of technological investment turns out to be more of a bubble than a lock-in. If the latter, our developmental trajectory — and our very wellbeing — may be unduly and degradingly affected. Regardless, what is equally worrying is the lock-in of time, attention, and rhetoric we presently see — and are perhaps ourselves guilty of. The current cycle of capital allocation is matched by a corresponding intellectual allocation; the spending of our finite intellectual energy investing in and discussing the development of artificial intelligence. We are speaking less about other potentialities; other matters that may matter more than those we are presently obsessing over. When facing existential challenges other than those which AI-doomers propound, the oxygen-, energy-, and capital-hungry requirements of our current technological arc may prove dangerous.
To clarify for those with some knowledge of Mark Elvin’s work, this explicitly does not relate to his high-level equilibrium trap theory, which separately argues that China’s divergence can be pinned on its balance of demand and supply — its cheap and abundant labour as well as its efficient trade networks.
Although, to be clear, I am here incongruously jamming a multi-hundred-year timescale into (what might be) a mere decade.
This is not a serious proposition, in that the allocation of capital does not operate in this manner. It is regardless worth framing things like this, if simply to illustrate that the investment required for climatic and ecological action can be realised if the corresponding will to do so is — particularly given about half of that $3tn will come from credit. It is additionally worth considering that the risk profile of an investment in a deeply-depreciating infrastructural asset like a data centre, or in an essentially speculative startup, is higher than that of an investment in a climate startup with a proven end-user and market, or in renewables themselves. While the latter’s expected returns may well be lower, their lower risk profile could balance the delta between AI- and climate-centric investments out.
Assigning full credit to my friend Sam Rigg for this framing: the technologically ‘victorious’ party in the previous divergence forced drug-induced subservience in the form of the Opium Wars on the other. In today’s world, perhaps the ‘victor’ will impose AI-facilitated consumptive slop on the ‘loser’.
This is not to discredit AI’s relevance and already-realised — as well as potential — impacts. Nor is it to diminish its role in shaping the solutions to our climatic-ecological and other crises — I do not want to paint a picture of mutual exclusivity. I simply wish to state its overstatedness.
