The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of our inferences are likely to be false. They are likely to be white noise.”
Overfitting, as Silver points out, gives false confidence on existing data, but quickly shatters this illusion when new data arrive—and don’t fit the model or theory.
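Silver's point can be made concrete with a small numerical sketch (illustrative only, not from the book): a highly flexible model matches existing noisy data almost perfectly, yet performs worse than a simple model when new data arrive from the same process.

```python
# Sketch of overfitting: a flexible model fits the noise in the training
# data and so looks confident there, but the illusion shatters on new data.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Noisy observations of a simple underlying trend y = x."""
    x = rng.uniform(-1, 1, n)
    return x, x + rng.normal(0, 0.2, n)

x_train, y_train = sample(10)    # the "existing data"
x_test, y_test = sample(200)     # the "new data" that arrive later

def mse(deg):
    """Train/test mean squared error for a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, deg)
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return err(x_train, y_train), err(x_test, y_test)

simple_train, simple_test = mse(1)      # matches the data's real structure
flexible_train, flexible_test = mse(9)  # enough free parameters to fit noise

# The flexible model wins on existing data (lower training error) but its
# test error balloons -- false confidence, then shattered illusion.
print(simple_train, simple_test, flexible_train, flexible_test)
```

The degree-9 polynomial has as many coefficients as there are training points, so it can interpolate the noise exactly; that is the "growing appetite for hypotheses" outpacing the statistical strength of the data.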
Bowman went on to explain that earthquake prediction is limited because a theoretical understanding of what is happening under the Earth’s surface along fault lines is lacking.
Theory in science, one might say, can never really be eliminated. One irony of the inevitability myth here is that theory is necessary not only for genuine science, but also for making good on dreams of general intelligence in AI. Modern confusions and mythology have the tail wagging the dog.
the high-level, neocortex-inspired theories of intelligence that animate much of the vision of Data Brain projects reproducing the human mind in silico are hopelessly general and unusable. The theories themselves are of very little use.
Where humans fail to illuminate a complicated domain with testable theory, machine learning and big data supposedly can step in and render traditional concerns about finding robust theories otiose.
This suggests that the weaknesses of such theories leave visionary neuroscientists like Markram undeterred in large part because of the prevalent belief that the march of Big Data AI toward general intelligence and beyond will fill in the details later. Rather than challenge the science that is occurring in neuroscience today,
We have seen that Big Data AI is not well-suited for theory emergence.
On the contrary, without existing theories, Big Data AI falls victim to overfitting, saturation, and blindness from data-inductive methods generally.
“I cannot think of major new conceptual advances that have come from such big science efforts in the past.”
Neuroscientists also worried that the Human Brain Project did not set out to test any specific hypothesis or collection of hypotheses about the brain.8
The neuroscientists pointed out in the petition that more detailed simulations of the brain don’t inevitably lead to better understanding.
what the goal is. What does it mean to understand the human mind? When will we be satisfied? This is much, much more ambitious.”
statement of mythology about AI—came in 2019, when Scientific American (no enemy of future ideas about science) and The Atlantic both published searching accounts of what went wrong.11 As one scientist put it, “We have brains in skulls. Now we have them in computers. What have we learned?”12
This faith is not novel science but simply bad science,
general intelligence is supposed to be emerging from AI and its applications to scientific research, it’s noticeably downplayed in the roles of scientists.
remarked recently that innovations seem to be drying up, not accelerating.
Tech startups once dreamed of the next big idea to woo investors in the Valley, but now have exit strategies that almost universally aim
for acquisitions by big tech companies like Google and Facebook, who have a lock on innovation anyway, since Big Data AI always works better for whoever owns the most data. The fix is in.
“The general statistical effect of an anti-intellectual policy would be to encourage the existence of fewer intellectuals and fewer ideas.”5 Such anti-intellectual policies are so clearly evident in modern data-centric treatments of science that the threat is now impossible to ignore. Wiener
The culture has become, as Wiener worried, sanguinely anti-intellectual and even antihuman.
The connection here to the myth is unavoidable, as mythology about the coming of superintelligent machines replacing humans makes concern over anti-intellectual and anti-human bias irrelevant. The very point of the myth is that anti-humanism is the future; it’s baked into the march of existing technology.
It’s difficult to imagine a cultural meme that is more directly corrosive to future flourishing and, paradoxically, more directly inimical to the very invention or discover...
Wiener pointed out that the economics of corporate profit make investment in a genuine culture of ideas difficult, since early bets on ideas are all in essence bad, as their full value becomes apparent only downstream.
The culture is squeezing profits out of low-hanging fruit, while continuing to spin AI mythology, a strategy guaranteed to lead to disillusionment without an inflow of radical conceptual innovation.
but what’s unforgivable is the deliberate attempt to reduce personhood, as Lanier puts it—disparaging and taking away the importance and value of the human mind itself. Such a strategy is fantastically self-defeating and stupid.
“However, the use of the human mind for evolving really new thoughts is a new phenomenon each time. To expect to obtain new ideas of real significance by the multiplication of low-grade human activity and by the fortuitous rearrangement of existing ideas without the leadership of a first-rate mind in the selection of these ideas is another form of the fallacy of the monkeys and the typewriter,
To sum up: there is no way for current AI to “evolve” general intelligence in the first place, absent a fundamental discovery. Simply saying “we’re getting there” is scientifically and conceptually bankrupt, and further fans the flames of antihuman and anti-intellectual forces interested in (seemingly) controlling and predicting outcomes for, among other reasons, maximizing short-term profit by skewing discussion toward inevitability.
very few leaders in the current culture are actually pursuing an agenda where human ingenuity can thrive.
Perhaps we could start with a frank acknowledgement that deep learning is a dead end, as is data-centric AI in general, no matter how many advertising dollars it might help bring into big tech’s coffers.
no one has the slightest clue how to build an artificial general intelligence.
The dream remains mythological precisely because, in actual science, it has never been even remotely understood.
The specter of a purely technocratic society where science, which once supplied us with radical revolutionary discoveries and inventions, now plays the role of lab-coated technician tweaking knobs on the “giant brains” of supercomputers,
Yet a derangement of culture spread in large part by the myth (and the rise of ubiquitous computation) keeps alive the possibility that freeing ourselves of modern technology myths might spur progress by causing reinvestment in human insight, innovation, and ideas.
linear and inevitable march to artificial general intelligence (and beyond).
In other words, systems that don’t understand but still perform have become a concern.
advances in visual object recognition until, somewhere out on the long tail of unanticipated consequences and therefore not included in the training data, your vehicle happily rams a passenger bus as it takes care to miss a pylon. (This happened.)
Thus the limits of inductive AI lacking genuine understanding are increasingly pushed into AI discussion, because we are rushing machines that have no understanding into service in important areas of human life.
eager to keep increasing the dominion of AI technologies in every possible area of life.
But the problem arises not, as Russell suggests, because AI systems are getting so smart so quickly, but rather because we’ve rushed them into positions of authority in so many areas of human society, and their inherent limitations—which they’ve always had—now matter.
toward practical concerns about ceding real authority to AI—to, let’s face it, mindless machines—will eventually result in a renewed appreciation for human intelligence and value.
how we can best use increasingly powerful idiots savants to further our own objectives, including in the pursuit of scientific progress.
AI has this role to play, at least, but a mythology about a coming superintelligence should be placed in the category of scientific unknowns. If we wish to pursue a scientific mystery directly, we must at any rate invest in a culture that encourages intellectual ideas—we will need them, if any path to artificial general intelligence is possible at all.
we’re not out of ideas, then we must do the hard and deliberate work of reinvesting in a culture of invention and human flourishing. For we will need our own general intelligence to find paths to the future, and a future better than the past.