Global Compute and the Stories We Tell in the Near Future

The numbers behind the AI boom get thrown around so casually — billions here, gigawatts there — that they've stopped meaning much. So I wanted to find a way to actually see the scale of what's being built, and more importantly, how fast it's accelerating.

The animated chart below tracks data center capacity across four major regions from 2020 through 2031. The horizontal axis shows how much of that capacity is devoted to AI workloads. The vertical axis shows total installed power. And the size of each bubble tells you how much of that is AI-dedicated compute — the raw physical infrastructure behind every chatbot, every image generator, every autonomous system you're going to encounter in the next decade.

Hit Play. Watch what happens around 2025. Then watch the US bubble between 2027 and 2031.

What you're looking at isn't just infrastructure investment. It's the material substrate of a civilizational shift — one that's arriving considerably faster than most people's mental models of the future. The story I've written below tries to say something honest about what that actually means for ordinary people, in ordinary lives, in the next five to ten years.

Global data center capacity & AI share — animated (2020–2031)

Bubble size = AI-dedicated capacity (GW)  ·  X = AI share of workloads  ·  Y = total installed capacity  ·  2026–2031 are forward projections

Regions shown: United States · Europe · China · India. Slider begins at 2020.

Dashed lines show full 2020–2031 trajectory per region. Drag the slider to any year. Hover bubbles for exact values.

Sources: IEA, JLL 2026 Global Data Center Outlook, Bain & Co., McKinsey & Co., Goldman Sachs, S&P Global / 451 Research, IEEFA / Takshashila Institution. Data are approximate midpoints of wide forecast ranges. Post-2025 values are projections.

When the future arrives faster than the story we're telling about it

There's a particular kind of cognitive vertigo that hits when you realize a technology you've been treating as aspirational is, in fact, already here — just unevenly distributed, as Gibson famously put it. We are living inside that vertigo right now with artificial intelligence, and the chart above is part of why.

The bubbles growing on that chart are more than statistics; they are the concrete footprint of a shift that will reorganize daily life for ordinary people in ways that most public discourse has barely begun to register. Not in 2050. Not after some imagined singularity. In the next five to ten years, in ordinary households, on ordinary roads, in schools and clinics and elder care facilities and living rooms.

Let's talk about what that actually means, domain by domain.

The road

The autonomous vehicle story has been told so badly for so long — full of overpromised timelines and crashed demos — that many people have written it off as permanently five years away. That instinct is now badly miscalibrated. The reason early robotaxis underperformed wasn't fundamentally about sensors or maps. It was about compute: the models weren't large enough, weren't trained on enough edge cases, weren't able to reason about the full ambiguous complexity of a human street. That constraint is dissolving fast.

Waymo currently operates commercial robotaxi fleets in San Francisco, Los Angeles, Phoenix, and Austin, completing more than 150,000 paid trips per week as of early 2026. The system isn't just driving — it's navigating city streets without a human anywhere in the loop. Chinese competitors (Baidu's Apollo Go, WeRide, Pony.ai) are operating at comparable scale in cities such as Wuhan, Beijing, and Guangzhou. And the fact that the bottleneck is shifting from compute to regulation and public trust doesn't slow the rollout; it often accelerates it, because those are tractable social problems in ways that physics problems aren't.

For the average person, the downstream effects of full autonomy are harder to grasp than the vehicle itself. When driving is no longer a task that requires a human, the car becomes a room. Travel time becomes usable time. For older adults who can no longer drive safely, it's a restoration of independence that is genuinely life-altering — the difference between aging in place with full social participation versus slow, grinding isolation. For the rural poor who currently have no viable transit options, it's access — to work, to medical care, to the social infrastructure of a larger world.

It will also, with brutal efficiency, eliminate millions of driving jobs. That reckoning has barely begun.

The body in the house

The robotics story is more surprising than the vehicle story, because it's arriving from a direction most people weren't watching. For decades, the robot was imagined as a stiff, pre-programmed machine that could only do one thing — weld a chassis, move a pallet along a fixed path. What compute and foundation models are doing to robotics is roughly what they did to language: they're making the systems generalize.

Boston Dynamics, Figure, 1X, Apptronik, Agility Robotics — these companies are building humanoid robots that can learn household tasks from demonstration, recover from falls, and operate in environments they've never seen before. What makes this hard is the setting itself: a human home is a radically unstructured environment full of objects the robot has never encountered, tasks that weren't pre-programmed, and moments requiring improvised judgment. Until recently, that barrier was impassable. The models being trained on the compute infrastructure in that chart are beginning to pass through it.

For aging populations specifically, the implications are enormous. The shortage of human caregivers for the elderly is already a crisis in the US, Europe, and Japan, and it will become catastrophic as boomer cohorts move through the high-need years of late life. A robotic assistant that can help someone rise from a chair, prepare a meal, provide medication reminders, and keep ambient company isn't a replacement for human connection — but it is a plausible answer to the logistical problem of care at scale that no amount of workforce training can solve quickly enough.

This technology will not arrive as a luxury appliance everyone can afford. The equity dimensions are profound. Who gets the robot? Who still does the human caretaking — and under what conditions, for what wages, with what legal protections?

The classroom and the living room

Perhaps the most immediate and least-understood shift is already underway: the personalization of learning and cognitive assistance. For generations, the educational system has operated at the pace and style of the median student in a room of thirty, because that was the only feasible delivery mechanism. That constraint is now structurally dissolved.

An AI tutor with sufficient compute behind it can adapt in real time to a student's conceptual gaps, move at their pace, explain the same concept twelve different ways, never lose patience, and identify learning differences that a tired teacher with too many students simply cannot track. Bloom's famous 1984 "2 Sigma Problem" reported that students given one-to-one human tutoring performed about two standard deviations better than students in conventional classrooms; later replications have found smaller but still substantial effects. We may now have a path to delivering something like that at population scale.
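To make "two sigma" concrete in percentile terms — a rough illustration, assuming normally distributed test scores, which is a simplification:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Under a normal-scores assumption, a two-sigma improvement moves an
# average (50th-percentile) student to roughly the 98th percentile
# of the original classroom distribution.
percentile = normal_cdf(2.0) * 100
print(f"two-sigma student lands near the {percentile:.1f}th percentile")
# → two-sigma student lands near the 97.7th percentile
```

That percentile jump, not the raw effect size, is what makes Bloom's finding so striking.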

For adults, the shift is less about formal education and more about what we might call cognitive companionship — an AI that knows your history, your goals, your work, your way of thinking, and can help you navigate complex decisions, research unfamiliar territory, draft, edit, remember, plan. The boundary between tool and collaborator is blurring in ways that are philosophically interesting and practically consequential. What does expertise mean when anyone can have an expert available? What does authorship mean? What does memory mean?

For older adults in particular — people managing complex medication regimens, navigating bureaucratic systems increasingly designed for the digitally fluent, trying to stay cognitively engaged — a genuinely useful AI assistant is not a trivial convenience. It's a meaningful extension of functional capacity.

The border problem

Compute crosses borders in ways that physical infrastructure doesn't. A US-based model can serve a user in Mumbai or Lagos or São Paulo almost as easily as one in Minneapolis. This creates a dynamic that is both democratizing and troubling. Democratizing because the gap between what a wealthy American professional can access and what an Indian student can access is closing faster than almost any previous technology gap in history. Troubling because the values embedded in these systems — what they will and won't do, whose norms they reflect, what languages and cultural contexts they model well — are being determined by a very small number of actors, mostly in Northern California.

China, to its credit and for its own purposes, is building its own stack. Europe is attempting to regulate rather than build. India is somewhere between aspiration and chaos. The result, in practice, is that most of the world's population will interact with AI systems that were fundamentally shaped by a particular cultural and commercial context — and that matters in ways we're only beginning to think clearly about.

What people aren't grasping

The failure of imagination here isn't stupidity — it's that the changes are compound rather than linear, and human cognition is deeply resistant to compound projections. We understand "a little better" easily. We don't instinctively understand "ten doublings."

The compute in that chart isn't just growing — it's growing while simultaneously becoming more efficient, while models are getting better at doing more with less, while inference is shifting from centralized data centers to edge devices that can run locally. The combination of more raw compute plus better architectures plus local deployment plus multi-modal capability (these systems can now see, hear, and speak, not just read and write) is producing something qualitatively different from what existed even three years ago.

The science fiction frame is actually useful here, not as prediction but as preparation. Le Guin, Octavia Butler, Kim Stanley Robinson — the serious speculative fiction writers were never really predicting the gadgets. They were asking: when the material conditions of life change this dramatically, what happens to human relationships, to power, to identity, to the stories people tell about themselves? Those questions are now engineering questions, not just literary ones. The people building this infrastructure are making choices — about who it serves, at what cost, with what safeguards — that will shape the answers.

Most ordinary people have no seat at that table. One of the things worth doing, for anyone who works in the space where narrative and community and democratic participation intersect, is helping people develop enough of a map that they can at least ask the right questions about what's being built in their name.

The bubbles on that chart are growing whether or not anyone is watching. The question is whether we can make the watching — and the conversation it enables — matter.