Smart Until It's Dumb: Why artificial intelligence keeps making epic mistakes⁠—and why the AI bubble will burst
The frenzy around AI makes it very hard to answer those difficult questions because we can’t tell how much of the progress in AI is real and how much of it is exaggeration or even fantasy.
For example, a company that is regarded as an “AI start-up” seems to attract 15% to 50% more funding than other types of tech start-ups, so companies overuse the word “AI” to garner investors’ attention.
A venture capital firm studied 2,830 start-ups in Europe that were classified as AI companies and found that 40% of them weren’t using AI in a way that was material to the business.
Machine learning is not carried out by giving all the data we have to the machine and letting it learn anything it wants. As we’ve seen, giving the computer that much freedom would be highly ineffective for building useful software.
One of the problems of deep learning is its lack of explainability. Deep learning models are big and configured automatically, making it hard to know how they come up with their outputs.
“Historically, AI researchers have not had a strong record of being able to predict the rate of advances in their own field or the shape that such advances would take.”
There is no clear pathway yet toward artificial general intelligence. We might be positively surprised in the near future, but it is conceivable that, even if possible in principle, it will not happen anytime soon.
But merely counting errors conceals some of the greatest challenges of current AI, like the fact that it sometimes makes unexpected goofs that humans don’t usually make or that it sometimes gets completely confused by uncommon yet simple “cow-on-the-beach” situations.
Unless something radically new is discovered, you’ll likely observe a similar pattern when autonomous vehicles are announced around the world: they will either require constant human supervision or operate in a restricted and controlled environment.
That’s entrepreneurship (and engineering) upside down! In an ideal world, one has a problem first and finds the best solution for it afterward. However, in the world of AI, very often people first bring the solution to the table—AI—without even knowing what they’ll do with it.
They also kept purchasing expensive subscriptions to external AI software that locked them into long-term contracts. The real problem was that this company had created an official cart-before-horse team.
There must be an explanation for this frenzy, a reason why everyone tries to push AI everywhere. I think one of the main reasons is that saying you’re working on AI is a great way of raising funding from private investors and the government with a low level of accountability.
Very often, businesspeople genuinely believe—or are led to believe—that AI is a silver bullet that can solve any problem. In the most extreme case, they may even think AI can do the impossible.
The experiment was botched. It was like a vaccine trial where the placebo is given to smokers and the true vaccine to non-smokers, thus invalidating any conclusions about the vaccine’s effectiveness for improving people’s health.
One may think that at this point, when the weaknesses of the product were known, the team leaders would have scaled back. However, they kept hiring more people onto the project team. I have seen this several times—an AI team keeps asking for more budget and more employees, even when things aren’t working. They fear that, if they do otherwise, the rest of the organization will realize that things aren’t going as well as promised, putting the future of the team in jeopardy. So, in my experience, the explosive growth of AI teams does not end when people notice the AI isn’t as good as expected.
When it comes to preventing problems of dishonesty about AI’s performance (Stage 4), my best guess is that this comes down to fostering the right work culture.
Suppose we repeat this process, replacing all John’s neurons one by one with equivalent microchips, until the whole brain is an electronic circuit. John’s new brain is exactly equivalent in functionality to his original one. His behavior is exactly the same as before, as his entire brain’s computer program hasn’t changed.
According to the computational theory of the mind, John would still experience consciousness due to the actions of billions of Chinese people who keep calculating the outputs of running his brain’s computer program while talking to one another and taking notes. Some people consider this an obvious observation and aren’t baffled by the idea. For others, this is pure sci-fi and defeats our common notion of consciousness and the sense of self.
While these scenarios seem far-fetched, they are a natural consequence of the computational theory of the mind. So, the proponents of the theory should be ready to study them and suggest solutions to these ethical dilemmas. On this matter, Aaronson says that declaring that the mind is a computer program to “escape all that philosophical crazy-talk” is ironically backward, as we end up landing on a swamp of philosophical perplexities rather than dodging them.
But some people have observed that if running a computer program is what creates consciousness, then we could claim anything running a computer program to be sentient.
If Google’s AI is conscious, then why isn’t a thermostat conscious? After all, it also runs a computer program. David Chalmers argued that thermostats are likely to be conscious, although “we will likely be unable to sympathetically imagine these experiences any better than a blind person can imagine sight, or than a human can imagine what it is like to be a bat.”
The question of whether and how consciousness emerges from computer programs remains unsolved. If Google’s AI is conscious, then why wouldn’t a thermostat also be conscious? And if Google’s AI isn’t conscious, then why are our brains conscious?
This debate is important for analyzing the future of AI. The idea that our minds are computers is the ultimate argument supporting that artificial general intelligence is possible; all it takes is to scan a brain and use it as a template to build an equivalent computer.
If you reject the computational theory of the mind because of its strange implications for consciousness, free will, or something else, then we lose that strong argument in favor of the feasibility of artificial general intelligence. If our minds are something more than just large carbon-based implementations of ordinary computers, then we can’t guarantee that a computer will ever be able to exactly imitate what a brain does and thus attain our level of intelligence at every task.
But now we face another painful question—if the mind isn’t a computer, then what is it? What could possibly be happening physically inside a brain that a computer would not be able to imitate? For that to be the case, there must be something that a computer cannot do that a brain can. Let’s explore four possibilities.
Calculations show that, for general relativity to hold up, the missing stuff, called dark matter and dark energy, must amount to a whopping 95% of all matter and energy in the universe. So, there are two options here: either general relativity is wrong or current physics only understands 5% of the stuff out there while the rest is made up of something we don’t know. Neither option is flattering for physics.
But as we’ve seen in this chapter, that “if” question hasn’t been answered yet. It is not a universally accepted truth that brains are computers, that artificial general intelligence is possible or that these highly intelligent computers would be conscious. The “if” question has stimulated discussion for years across multiple fields, including biology, philosophy and physics. I can’t provide my own answer to “if” because, as seen in this chapter, it’s quite complicated. I do have an answer to the “when” question though: not anytime soon.
Current AI also struggles in new or uncommon situations that were not present in the data it learned from, but the ability to deal with unprecedented situations is paramount for safety. So, the real fear with AI is that overly optimistic people will think it is infallible and try to deploy it in unsafe ways.
I’m not sure to what extent this slowdown will impact the economy, but, as companies have already poured billions into AI projects, it will probably come at a price.
My biggest fear at the moment is that some people may get caught up in the hype and not acknowledge AI’s limitations. I fear that they may consider AI infallible and convince everyone else of that belief, and because of that, people may end up using machine learning for critical tasks it is not prepared to perform safely. Maybe they’ll manage to have fully autonomous buses roam busy streets. However, while current AI is a good pretender, sooner or later, it ends up making surprisingly silly—and potentially devastating—mistakes.