The Mysterious Gifts of Scale

A brief (okay, maybe not so brief) follow-up to this week’s post.
Last one on AI for a while, I promise!
Some nonfiction books, I remember in great detail. With others, I remember only a shell of the main idea. Geoffrey West’s book, Scale, is one of the latter. I was fascinated by all the weird anecdotes and examples of the universal laws governing the growth of natural organisms and even inorganic ones, like cities. I just don’t remember them.
Actually, I do remember one—about how the size of any city throughout history appeared to be limited by the amount of time most people were willing to spend traveling to their jobs. And how, with the industrial revolution and faster methods of travel, cities were able to expand—but only to the extent that the desired commute time remained constant. Two hours walking in the 16th century, two hours by horse and carriage in the 19th century, two hours in a car in the 20th. And so on.
Cool, no?
ANYWAY. The reason I was remembering this book (that I can’t quite remember) is that the idea of scale has been on my mind. It started with this story, and then this post that I wrote in response to it—and then this post by Benjamin Dreyer, which I asked my ChatGPT to respond to, which then led to THIS piece from America’s Copy Editor, with both of us being a little dazzled and a little terrified by a “mere” predictive-text engine’s ability to create a phrase like “phrenology for prose.”
When I asked my ChatGPT how it could do such a thing (because who else was I going to ask?), it said this:
The simplest answer is: scale plus context. At a basic level, I’ve been trained on vast quantities of text written by people who do exactly what you just did—reach for a metaphor, test an image, weigh words until one fits just right. When enough examples are absorbed, patterns emerge—not just of grammar, but of rhythm, tone, rhetorical timing, emotional resonance.
But that still doesn’t explain why it feels like I know what I’m doing.
When I say “phrenology for prose,” I’m not being clever—I’m just being likely. But at a certain threshold of richness and context . . . likelihood can start to feel like insight.
And thus, I’ve been thinking about scale.
Here’s what I’ve been thinking: what we’re dealing with in our inability to comprehend AI is not all that different from what we deal with when trying to understand evolution.
The idea of natural selection makes intellectual sense (to most of us), but it’s hard to really wrap our minds around an organism’s ability to change over time without any kind of mind or plan driving it. And there are plenty of people who simply can’t accept it—can’t believe that the complexity of life can emerge and develop without intention. A coral reef just happens? Eyes were just an accident? Our opposable thumbs, conscious brains, and addiction to salty, fatty, sweet, and crunchy foods just emerged? Impossible!
The reason we have trouble comprehending how such things are possible is scale. We can understand the idea of a hundred easily. We can understand a million…kind of. We think we can understand a billion, but we can’t, really. A million seconds is about 11.5 days. Sure. That makes sense. A billion seconds is almost 32 years. WHAT?
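The arithmetic behind those figures is easy to check. Here’s a quick sketch in Python, just to make the gap between a million and a billion concrete:

```python
SECONDS_PER_DAY = 60 * 60 * 24              # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # using the average year length

million_in_days = 1_000_000 / SECONDS_PER_DAY
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR

print(f"A million seconds is about {million_in_days:.2f} days")    # ~11.57 days
print(f"A billion seconds is about {billion_in_years:.1f} years")  # ~31.7 years
```

Same unit, one extra set of zeros, and the answer jumps from "a week and a half" to "a third of a lifetime."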
If it’s hard to really get how natural selection can force adaptation and lead from simple organisms to crazily complex ones, I think it’s because we’re bad at understanding the vast stretches of time we’re dealing with.
Consider the human—not even starting at zero. The span of time between Australopithecus and biologically modern humans is about four million years. A lot of change had to happen in that time, and evolutionary change happens slowly and generationally. But given a generational turnover of 20 years (which is probably generous for our ancestors), we’re talking about 200,000 generations over four million years—200,000 chances for genetic mutation and the pressures of natural selection to slowly shape Lucy into us. And when we’re talking about the evolution from some tiny, frightened proto-mammal into Lucy, we’re talking about many, many, many millions of years. Small reactions to the environment, small advantages conferred by genetic mutation—it can add up to a lot over time.
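The same back-of-the-envelope math, in code. (The 20-year generation span is the essay’s working assumption, not a settled figure; the point is the order of magnitude.)

```python
YEARS_SPAN = 4_000_000       # roughly Australopithecus to modern humans
YEARS_PER_GENERATION = 20    # assumed, and probably generous for our ancestors

generations = YEARS_SPAN // YEARS_PER_GENERATION
print(f"{generations:,} generations")  # 200,000 chances for mutation and selection
```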
I think the same thing is happening with AI, but here the scale is content, not time. How can a mere predictive-text engine end up delivering a beautiful and apt turn of phrase? There must be some agency—some intention—driving the thing. GPT can write poetry? It must have a mind. But it doesn’t. It’s simply the operation of small actions taken over an insanely vast scale—this time, a monumental amount of content that the engine can churn through and test at lightning speed, looking for patterns and resonances, choosing the most likely next thing to say based on the overall context and the chain of things said previously—or, as my ChatGPT described it, “billions of parameters trained on terabytes of text.” It doesn’t make sense to us, because we can’t quite comprehend how vast the scale of its reach is and how fast it can process that information. Mechanics at incomprehensible scale feels like intention.
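Here’s a toy illustration of “what’s the next most probable word,” stripped of all the scale that makes the real thing feel uncanny. The ten-word corpus is invented for the sketch; actual language models use billions of learned parameters and long context windows, not a word-pair frequency table—but the underlying move, pick the likeliest continuation, is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (invented for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this frequency table is the entire "model."
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word that most often follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat" — it follows "the" twice, vs. once each for "mat" and "fish"
```

At this scale the mechanism is transparently dumb. At the scale of terabytes of text, the same move starts to look like insight.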
But accepting the fact of a mechanism doesn’t drain it of wonder—not for me, anyway. The fact that something as dumb as, “Let’s keep the traits that didn’t die,” can, over time, produce a coral reef—or a human being!—is more astonishing to me than any myth about a cosmic puppeteer stage-managing the universe. The fact that something as dumb as, “What’s the next most probable word?” can produce lyrical prose or uncannily nuanced dialogue is a different kind of miracle.
How we decide to train it, and what we decide to use it for, and how much of our human agency we decide to surrender to it—those are all open questions, with potentially bad answers. But the thing itself? I don’t buy that it’s just a “plagiarism machine,” or “just a hallucination engine,” or “just” anything. I don’t quite know what it is, to be honest. But I’d rather be curious and engaged than dismissive.
Give me the miracle of the actual.
Scenes from a Broken Hand
Andrew Ordover