Kindle Notes & Highlights
Read between March 13, 2024 - July 30, 2025
“… your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of our inferences are likely to be false. They are likely to be white noise.”
Overfitting, as Silver points out, gives false confidence on existing data, but quickly shatters this illusion when new data arrive—and don’t fit the model or theory.
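Silver’s statistical point is easy to make concrete. Below is a minimal sketch in Python (not from the book; the sine-wave “process,” sample sizes, and polynomial degrees are all invented for illustration): a flexible model can match its training data almost perfectly, yet the illusion shatters the moment new data arrive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying process (a sine wave,
# chosen arbitrarily for illustration).
def sample(n):
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

x_train, y_train = sample(15)   # the data we have
x_new, y_new = sample(200)      # the data that arrives later

for degree in (3, 10):
    # Least-squares polynomial fit; higher degree means a larger
    # "appetite for hypotheses" relative to the strength of the data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, new-data MSE {new_mse:.3f}")
```

The degree-10 fit reports near-zero error on the fifteen points it has seen and much larger error on fresh samples: false confidence on existing data, shattered by new data.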
Bowman went on to explain that earthquake prediction is limited because a theoretical understanding of what is happening under the Earth’s surface along fault lines is lacking.
Theory in science, one might say, can never really be eliminated. One irony of the inevitability myth here is that theory is necessary not only for genuine science, but also for making good on dreams of general intelligence in AI. Modern confusions and mythology have the tail wagging the dog.
the high-level neocortex-inspired theories of intelligence that animate much of the vision of Data Brain projects reproducing the human mind in silico are hopelessly general and unusable. The theories themselves are of very little use
Where humans fail to illuminate a complicated domain with testable theory, machine learning and big data supposedly can step in and render traditional concerns about finding robust theories otiose.
This suggests that the weaknesses of such theories leave visionary neuroscientists like Markram undeterred in large part because of the prevalent belief that the march of Big Data AI toward general intelligence and beyond will fill in the details later. Rather than challenge the science that is occurring in neuroscience today,
We have seen that Big Data AI is not well-suited for theory emergence.
On the contrary, without existing theories, Big Data AI falls victim to overfitting, saturation, and blindness from data-inductive methods generally.
“I cannot think of major new conceptual advances that have come from such big science efforts in the past.”
Neuroscientists also worried that the Human Brain Project did not set out to test any specific hypothesis or collection of hypotheses about the brain.8
The neuroscientists pointed out in the petition that more detailed simulations of the brain don’t inevitably lead to better understanding.
what the goal is. What does it mean to understand the human mind? When will we be satisfied? This is much, much more ambitious.”
statement of mythology about AI—came in 2019, when Scientific American (no enemy of future ideas about science) and The Atlantic both published searching accounts of what went wrong.11 As one scientist put it, “We have brains in skulls. Now we have them in computers. What have we learned?”12
This faith is not novel science but simply bad science,
Where general intelligence is supposed to be emerging from AI and its applications to scientific research, the role of scientists is noticeably downplayed.
remarked recently that innovations seem to be drying up, not accelerating.
Tech startups once dreamed of the next big idea to woo investors in the Valley, but now have exit strategies that almost universally aim for acquisitions by big tech companies like Google and Facebook, who have a lock on innovation anyway, since Big Data AI always works better for whoever owns the most data. The fix is in.
“The general statistical effect of an anti-intellectual policy would be to encourage the existence of fewer intellectuals and fewer ideas.”5 Such anti-intellectual policies are so clearly evident in modern data-centric treatments of science that the threat is now impossible to ignore.
The culture has become, as Wiener worried, sanguinely anti-intellectual and even antihuman.
The connection here to the myth is unavoidable, as mythology about the coming of superintelligent machines replacing humans makes concern over anti-intellectual and anti-human bias irrelevant. The very point of the myth is that anti-humanism is the future; it’s baked into the march of existing technology.
It’s difficult to imagine a cultural meme that is more directly corrosive to future flourishing and, paradoxically, more directly inimical to the very invention or discover...
Wiener pointed out that the economics of corporate profit make investment in a genuine culture of ideas difficult, since early bets on ideas are all in essence bad, as their full value becomes apparent only downstream.
The culture is squeezing profits out of low-hanging fruit, while continuing to spin AI mythology, a strategy guaranteed to lead to disillusionment without an inflow of radical conceptual innovation.
but what’s unforgivable is the deliberate attempt to reduce personhood, as Lanier puts it—disparaging and taking away the importance and value of the human mind itself. Such a strategy is fantastically self-defeating and stupid.
“However, the use of the human mind for evolving really new thoughts is a new phenomenon each time. To expect to obtain new ideas of real significance by the multiplication of low-grade human activity and by the fortuitous rearrangement of existing ideas without the leadership of a first-rate mind in the selection of these ideas is another form of the fallacy of the monkeys and the typewriter,
To sum up: there is no way for current AI to “evolve” general intelligence in the first place, absent a fundamental discovery. Simply saying “we’re getting there” is scientifically and conceptually bankrupt, and further fans the flames of antihuman and anti-intellectual forces interested in (seemingly) controlling and predicting outcomes for, among other reasons, maximizing short-term profit by skewing discussion toward inevitability.
very few leaders in the current culture are actually pursuing an agenda where human ingenuity can thrive.
Perhaps we could start with a frank acknowledgement that deep learning is a dead end, as is data-centric AI in general, no matter how many advertising dollars it might help bring into big tech’s coffers.
no one has the slightest clue how to build an artificial general intelligence.
The dream remains mythological precisely because, in actual science, it has never been even remotely understood.
The specter of a purely technocratic society where science, which once supplied us with radical revolutionary discoveries and inventions, now plays the role of lab-coated technician tweaking knobs on the “giant brains” of supercomputers,
Yet a derangement of culture, spread in large part by the myth (and the rise of ubiquitous computation), keeps alive the possibility that freeing ourselves of modern technology myths might spur progress by causing reinvestment in human insight, innovation, and ideas.
linear and inevitable march to artificial general intelligence (and beyond).
In other words, systems that don’t understand but still perform have become a concern.
advances in visual object recognition until, somewhere out on the long tail of unanticipated consequences and therefore not included in the training data, your vehicle happily rams a passenger bus as it takes care to miss a pylon. (This happened.)
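The long-tail failure mode can be sketched in a few lines of Python (invented data; the “pylon”/“clear road” labels are purely illustrative, not the actual perception system): an inductive model is often most confident precisely where its training data says nothing at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two training classes occupying a narrow region of feature space.
pylon = rng.normal([0.0, 0.0], 0.3, (100, 2))
clear = rng.normal([2.0, 0.0], 0.3, (100, 2))
X = np.vstack([pylon, clear])
y = np.array([0] * 100 + [1] * 100)

# A tiny logistic-regression classifier trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def confidence(x):
    # Probability assigned to the model's preferred class.
    p = 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))
    return max(p, 1.0 - p)

print(confidence(np.array([1.0, 0.1])))     # between the classes: unsure
print(confidence(np.array([40.0, -35.0])))  # far out on the long tail: ~100% "certain"
```

Nothing in the model marks the second input as unlike anything it was trained on; the arithmetic simply extrapolates, with total confidence.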
Thus the limits of inductive AI lacking genuine understanding are increasingly pushed into AI discussion, because we are rushing machines that have no understanding into service in important areas of human life.
eager to keep increasing the dominion of AI technologies in every possible area of life.
But the problem arises not, as Russell suggests, because AI systems are getting so smart so quickly, but rather because we’ve rushed them into positions of authority in so many areas of human society, and their inherent limitations—which they’ve always had—now matter.
toward practical concerns about ceding real authority to AI—to, let’s face it, mindless machines—will eventually result in a renewed appreciation for human intelligence and value.
how we can best use increasingly powerful idiots savants to further our own objectives, including in the pursuit of scientific progress.
AI has this role to play, at least, but a mythology about a coming superintelligence should be placed in the category of scientific unknowns. If we wish to pursue a scientific mystery directly, we must at any rate invest in a culture that encourages intellectual ideas—we will need them, if any path to artificial general intelligence is possible at all.
If we’re not out of ideas, then we must do the hard and deliberate work of reinvesting in a culture of invention and human flourishing. For we will need our own general intelligence to find paths to the future, and a future better than the past.