The hopes and fears of runaway artificial intelligence share a common assumption: that intelligence can run away at all. But it isn’t clear to me that this is so. It’s just as likely that there are a finite number of things to know and an infinite number of things that are unknowable.
Physics has uncovered ways the universe seems to cap how much information can be gleaned from a system. These limits don’t appear to depend on the quality of our measuring instruments; they are a feature of the cosmos itself. The speed of objects through space has a ceiling that may not be exceeded. Entropy has no antidote. We can’t even prove, from within mathematics, that the field is internally consistent.
When computers blew past humans in the game of chess, they didn’t keep zooming into the stratosphere. Instead, the top engines leveled off toward an asymptote somewhere above an Elo of 3,500, much as the strongest human players have plateaued just shy of 2,900. We seem to conflate the speed with which computers reach our level, and then exceed it, with the idea that there is no height they won’t climb. We do not yet know this.
I think it’s much more likely that computers will hit a limit similar to ours, but at some multiple beyond us. Finding that limit will be a fantastic discovery, whether you believe, like me, that AI is a necessary good for curing other ills, or you fear it will cause more harm than good.
“We have artificial minds that do basic computations millions of times faster than we can. But that’s still not enough, because as the number of facts rises, the number of meaningful interactions between them rises faster, forming a mountain of possibilities so steep that even something much faster than us still can’t work through it all. The machines can make it farther up the curve than we can, to be sure, but they’re only about a third smarter than the smartest of us.”
“If it’s so much faster, that means each second is a long time for it to think,” Arlin said. “If you put me in a room for a million years, I could solve a lot of problems.”
“Given a million years you could go through a lot with a small set of facts. But given a large set of facts, the permutations of all of them, their causes and effects, their associations... the number of possibilities explodes rapidly as the fact set grows. It’s a combinatoric explosion. Considering the interactions of ten facts, possibilities, or events is much more than twice as hard as considering the interactions of five things. A mind with machine memory, incredibly fine senses, the ability to think about a hundred things at once, incredibly fast net connections, and everything else a large AI has, is confronted with millions of facts every microsecond it’s alive. It has to wonder whether the third microbe from the left on the rightmost ceiling tile on the last row has anything to do with the murder of Mr. Mustard.”
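The arithmetic behind that explosion is easy to check. Here is a minimal sketch (my illustration, not part of the story) counting pairwise interactions and possible orderings as the fact set grows:

```python
from math import comb, factorial

# Pairwise interactions among n facts grow as C(n, 2);
# possible cause-and-effect orderings grow as n!.
for n in (5, 10, 20):
    print(f"{n:>2} facts: {comb(n, 2):>3} pairwise interactions, "
          f"{factorial(n):>26,} orderings")
```

Ten facts carry 4.5 times the pairwise interactions of five, and over thirty thousand times the orderings; doubling the fact set far more than doubles the work, which is exactly the speaker’s point.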
“Colonel Mustard. But we discard useless facts like that,” Arlin pressed.
“It may be useless, or it may be the only remaining microbe of the disease that killed him. But yes, you’re right, part of intelligence is about figuring out which facts to examine and which ones to discard. The only way to control that explosion is by aggressively culling facts that aren’t important. Terrans do that all the time since we can only handle a few ideas at once. But care is needed: discard one fact necessary to solve the problem and you’re stuck. And if an AI culls its fact inventory all the way down until it’s aware of only the things a human is aware of, then it’s only as smart as a sharp-minded human, though somewhat faster. It threw away all the extra facts that could have made it godlike. Somewhere in all those facts are chains that could be used to make amazing deductions, but the power it takes to analyze them rises exponentially with the number of facts.”
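A toy sketch of that culling tradeoff (the relevance scores below are invented stand-ins, not anything the story specifies): filtering facts shrinks the search space dramatically, but a discarded fact cannot be recovered.

```python
from itertools import combinations

def chains_to_check(facts, k=3):
    """Count the size-k fact combinations a reasoner must examine."""
    return sum(1 for _ in combinations(facts, k))

facts = list(range(40))                  # forty observed facts
print(chains_to_check(facts))            # 9,880 triples to weigh

# Aggressive culling: keep only facts scoring above a threshold.
# (The scores are arbitrary stand-ins for a relevance judgment.)
relevance = {f: (f * 7919) % 100 for f in facts}
kept = [f for f in facts if relevance[f] > 80]
print(len(kept), chains_to_check(kept))  # 7 facts, 35 triples left --
                                         # tractable, unless a culled fact
                                         # was the one that mattered
```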
“I guess I believe you. I find it hard to grasp intuitively,” Arlin said.
“I can offer a more intuitive explanation, at the cost of over-abstracting. Take a five-year-old kid. When he considers a brand new problem, he sees it as black or white. He examines it from fewer angles, and he has a smaller grasp of the consequences. When an adult considers a new problem, she juggles more facts than a kid can. But does the answer always come more easily? No, sometimes you become aware of more and more of the what-ifs and the tradeoffs. Now remember, I said a new problem, so you aren’t supposed to make use of canned answers kids don’t know yet. Sometimes the more you know, the more confused you get. It all seemed so simple when you were a kid. Now you know enough to know you’re partly guessing all the time. Are you a hundred times smarter than a kid? Not really. You pushed farther up the curve until the weight of a bunch of facts, consequences, and unknowns overwhelmed your ability to push farther. You considered all sorts of things the kid never even thought of, and all it got you was a swarm of what-ifs you can’t really tie down. You may have achieved a key insight the kid couldn’t see, but it wasn’t easy. Now consider: kids are low on the curve, adults farther up, and a genius way up there, but it’s getting steeper and steeper. Doubling the power of a genius’s mind no longer gets you twice as far; it only gets you a little farther up the rising curve.”
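To put rough numbers on that steepening curve (my framing, using the common assumption that exhaustively weighing n facts costs on the order of 2^n steps): raw speed buys surprisingly little, because a speedup of s only adds log2(s) facts to what can be fully considered.

```python
import math

# If weighing every subset of n facts costs ~2**n steps, then a
# machine s times faster handles only log2(s) additional facts,
# since 2**(n + log2(s)) == s * 2**n.
for speedup in (2, 1_000, 1_000_000):
    print(f"{speedup:>9,}x faster -> {math.log2(speedup):4.1f} more facts")
```

A machine a million times faster works through only about twenty more facts, which is the dialogue in miniature: vastly faster, yet only modestly farther up the curve.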