Kindle Notes & Highlights
Read between November 10 and November 17, 2024
In 1960, economist Herbert A. Simon famously said, “Machines will be capable, within twenty years, of doing any work that a man can do.”
[…] of writing down thousands of rules manually would be a sure way to build truly intelligent machines. So, domain experts were hired to try to distill their thought processes into numerous rules. This idea, called expert systems, drove the AI hype in the ’80s. One of the big issues with this approach was that it was very difficult and impractical—teaching a machine to perform any task required a lot of rules. Also, experts had a hard time describing their intuitive reasoning as a set of inflexible, robotic rules, and different experts often disagreed.
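A rough sketch (not from the book) of what that rule-writing approach looks like, with a handful of hypothetical diagnostic rules in Python; every case the experts did not anticipate demands yet another hand-written rule, which is why the approach scaled so poorly:

    # Expert-system style of the '80s: knowledge hand-coded as if/then rules.
    # The rules and symptoms below are hypothetical and deliberately simplistic.

    def diagnose(symptoms):
        """Return a diagnosis by matching hand-written rules, or give up."""
        if "fever" in symptoms and "cough" in symptoms:
            return "flu"
        if "sneezing" in symptoms and "itchy eyes" in symptoms:
            return "allergy"
        if "headache" in symptoms and "nausea" in symptoms:
            return "migraine"
        # Every unanticipated case needs another rule added by hand...
        return "unknown"

    print(diagnose({"fever", "cough"}))        # flu
    print(diagnose({"fever", "sore throat"}))  # unknown -- add more rules?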
Consider the following statistic: Between 2001 and 2009, the yearly consumption of cheese in the U.S. was highly correlated with the number of people who died by becoming entangled in their bed sheets.18 This is a fact; the numbers prove it. However, the phenomena are probably not causally related—it was just a coincidence. But if we only trust the numbers, we may be led to believe that cheese consumption and bed sheet strangulation are related phenomena.
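A minimal sketch of the statistical point, using made-up numbers (not the actual cheese or bed-sheet figures): two unrelated yearly series that both happen to drift upward will show a Pearson correlation close to 1.

    # Illustrative only: the values below are invented, not the real statistics.
    import numpy as np

    cheese_per_capita = np.array([29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 32.7, 32.8])
    bedsheet_deaths   = np.array([327, 456, 509, 497, 596, 573, 661, 741, 809])

    r = np.corrcoef(cheese_per_capita, bedsheet_deaths)[0, 1]
    print(f"Pearson correlation: {r:.2f}")  # close to 1, yet no causal link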
To avoid adding nonsensical rules, we must limit the set of things the computer can learn from the data—we must help the computer learn. So, data scientists constantly rely on human knowledge—or assumptions—about the task to restrict the computer’s freedom to learn and point it in the right direction.
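A minimal sketch of that idea, assuming scikit-learn is available (the data and model choices are illustrative): the human assumption "the relationship is roughly linear" restricts what the machine can learn and keeps it from fitting nonsense.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 8).reshape(-1, 1)
    y = 2 * x.ravel() + rng.normal(0, 0.1, 8)   # roughly linear data plus noise

    constrained = LinearRegression().fit(x, y)  # strong human assumption: linear
    flexible = make_pipeline(PolynomialFeatures(7), LinearRegression()).fit(x, y)  # almost no assumption

    x_new = np.array([[1.2]])                   # slightly outside the training data
    print(constrained.predict(x_new))           # sensible extrapolation, close to 2.4
    print(flexible.predict(x_new))              # the unconstrained fit can swing far off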
So, if you snoop into the daily life of a data scientist, you’ll see that the bulk of the job involves building a tailor-made machine learning solution for the task at hand, leveraging prior knowledge and assumptions to help the machine learn. Machine learning is not carried out by giving all the data we have to the machine and letting it learn anything it wants. As we’ve seen, giving the computer that much freedom would be highly ineffective for building useful software. As we’ll discuss later on, this holds true even with the most advanced AI built to date.
However, reinforcement learning doesn’t escape the need for the computer’s actions to be governed by human assumptions. The data scientist must explicitly program which actions the machine is allowed to try out.
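A toy sketch of that point (hypothetical grid-world, not from the book): the list of allowed actions is written by a human before any learning happens.

    import random

    ACTIONS = ["up", "down", "left", "right"]   # hand-picked by the data scientist

    def step(state, action):
        """Move on a 5x5 grid; reward 1.0 for reaching the corner (4, 4)."""
        x, y = state
        if action == "up":    y = min(y + 1, 4)
        if action == "down":  y = max(y - 1, 0)
        if action == "left":  x = max(x - 1, 0)
        if action == "right": x = min(x + 1, 4)
        reward = 1.0 if (x, y) == (4, 4) else 0.0
        return (x, y), reward

    state = (0, 0)
    for _ in range(20):                         # purely random exploration
        action = random.choice(ACTIONS)         # only the human-approved actions
        state, reward = step(state, action)
    print(state, reward)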
The most common type of machine learning, known as supervised learning, requires the data to be labeled, meaning that each sample must be tagged with the correct answer.
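A minimal sketch of what "labeled" means in practice, with hypothetical spam data and scikit-learn: every training sample is paired with the correct answer.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["win a free prize now", "meeting at 10am tomorrow",
             "free prize, claim now", "lunch tomorrow?"]
    labels = ["spam", "not spam", "spam", "not spam"]   # the labels are the supervision

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    model = MultinomialNB().fit(X, labels)

    print(model.predict(vectorizer.transform(["claim your free prize"])))  # likely 'spam'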
Its successor, ChatGPT, is a model designed specifically to excel at conversation, but this required supervised learning—humans were asked to manually label thousands of input phrases with their expected answers and to rank different answers by quality, and this data was fed into the model to refine it.
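A hypothetical sketch of the two kinds of human labels described here (not OpenAI's actual data format or pipeline): written answers for supervised fine-tuning, and rankings of alternative answers by quality.

    # 1) Supervised fine-tuning: input phrase -> answer written by a human.
    demonstrations = [
        {"prompt": "Explain photosynthesis to a 10-year-old.",
         "answer": "Plants use sunlight to turn water and air into food..."},
    ]

    # 2) Ranking: humans order several candidate answers from best to worst;
    #    these preferences are then used to refine the model.
    rankings = [
        {"prompt": "What causes tides?",
         "answers_best_to_worst": [
             "Tides are caused mainly by the Moon's gravity pulling on the oceans.",
             "The Moon does something to the water.",
             "Tides happen because fish swim in circles.",
         ]},
    ]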
Overall, this model looks like a funnel: an image enters the funnel and is progressively filtered and shrunk. The final output is a tiny image that is bright if a cat (or whatever object you’re looking for) is detected and dark if it is not detected. This funnel is known as a convolutional neural network, or CNN, and has become the bread and butter of deep learning for image categorization.
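A minimal sketch of that funnel in PyTorch (layer sizes are illustrative, not from the book): the image shrinks as it is filtered, ending in a single score that is high if a cat is detected.

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # filter the 64x64 RGB image
        nn.ReLU(),
        nn.MaxPool2d(2),                             # shrink: 64x64 -> 32x32
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                             # shrink: 32x32 -> 16x16
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),                  # one "cat or not" score
        nn.Sigmoid(),                                # bright (near 1) vs dark (near 0)
    )

    image = torch.rand(1, 3, 64, 64)                 # a random stand-in image
    print(cnn(image))                                # value between 0 and 1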
But perhaps one of the most worrying aspects of deep learning is that it can be easily fooled. For example, it is possible to make minor changes to an image, imperceptible to the human eye, that fool the model into giving a completely different output.
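One standard way to produce such a change is the fast gradient sign method (the book does not name a specific technique); a toy sketch with an untrained stand-in model, just to show the mechanics:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy classifier
    image = torch.rand(1, 3, 32, 32, requires_grad=True)
    true_label = torch.tensor([0])

    # Nudge each pixel slightly in the direction that increases the loss.
    loss = nn.CrossEntropyLoss()(model(image), true_label)
    loss.backward()
    epsilon = 0.01                                   # a change too small to notice
    adversarial = image + epsilon * image.grad.sign()

    print(model(image).argmax(dim=1))                # original prediction
    print(model(adversarial).argmax(dim=1))          # can flip once epsilon is large enough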
So, while deep learning has made important strides, it would be naïve to think it’s infallible. And, while one may argue that humans are not infallible either, we don’t often mistake a turtle for a rifle or a soap dispenser for an ostrich. That deep learning can be tricked so easily has come as an unpleasant surprise to many. We may even wonder: is current AI truly smart, or just a good pretender?
Machines learn statistical regularities in how people have done the job before, but this doesn’t give them the broad knowledge of the world required to truly excel at the tasks. So, machines may dupe us for a while because they’re good pretenders, but at some point, they end up making silly mistakes a human would never make. At that point, it becomes clear that they are functioning without a model of the world as we know it.
Machine learning, as it stands now, needs to analyze many samples to learn effectively. Neuroscientist Stanislas Dehaene wrote, “The state of the art in machine learning involves running millions, even billions, of training attempts on computers. […] In this contest, the infant brain wins hands down: babies do not need more than one or two repetitions to learn a new word.”
That’s one of the most striking differences between human learning and current machine learning—humans can learn effectively from very limited data. When a parent points out a butterfly to a toddler and says, “That’s a butterfly,” the toddler may learn the word right away.
To reach AGI, computers would have to match human performance in the most challenging tasks, including language comprehension. As we’ve seen throughout this chapter, machine learning, which is currently the highest-performing type of AI, does not accomplish that. So, the missing piece to reach AGI is not just some practical limitation, say, that computers aren’t fast enough or that we don’t have enough data. Faster computers or more data might be necessary, but they wouldn’t be enough. In order to reach AGI, someone would need to discover a new, unprecedented methodology, since machine […]
As companies kept trying to build self-driving cars through the 2010s and into the 2020s, the shortcomings of AI started to become evident. Waymo, one of the foremost companies in the industry, soon realized that current AI cannot cope well with out-of-the-ordinary situations. So, they started training their AI models with examples of atypical actors in the road environment, including “a construction worker waist deep in a manhole, someone in a horse costume, [and] a man standing on the corner spinning an arrow-shaped sign.”50
Elon Musk seems equally disappointed of late. After moving the goalposts several times, he finally acknowledged, in 2021, “Generalized self-driving is a hard problem, as it requires solving a large part of real-world AI. I didn’t expect it to be so hard, but the difficulty is obvious in retrospect.”53
This process requires engineers to start by understanding and clearly formulating the client’s problem without introducing any biases about the solution. For instance, a problem statement like “Our start-up needs to build an AI system to do X” isn’t formulated in the best way, as it mixes the problem with the solution. A better formulation would be “Our clients are struggling with X” (and we should make an effort to truly understand X and describe it in detail). This way, we avoid putting the cart before the horse (Stage 1 from the beginning of this chapter).
As if quantum theory didn’t already have enough problems, there is another elephant in the room. Quantum theory leaves one remarkable interaction of nature completely unexplained: gravity. Since gravity is rather weak at the scale of particles, it doesn’t affect the results of experiments in the lab. A comprehensive theory of particles should account for gravity, but the current best theory doesn’t. Instead, gravity is explained by a separate theory, Einstein’s general theory of relativity. This theory is, unfortunately, incompatible with quantum theory; quantum theory does not account for the […]
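A back-of-the-envelope check (not from the book) of how weak gravity is at the particle scale: the ratio of the gravitational to the electric attraction between an electron and a proton, using standard constants, is

    \[
      \frac{F_{\mathrm{grav}}}{F_{\mathrm{elec}}}
        = \frac{G\, m_e m_p}{k_e\, e^2}
        = \frac{(6.67\times 10^{-11})(9.11\times 10^{-31})(1.67\times 10^{-27})}
               {(8.99\times 10^{9})(1.60\times 10^{-19})^{2}}
        \approx 4\times 10^{-40},
    \]

far too small to register in laboratory particle experiments.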
We can see a stellar instance of this view in the book Human Compatible by Stuart Russell. In this book, Russell mentions a group of researchers who announced that they don’t think human-level AI is possible. To this, he comments, “This is the first time serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible… It’s as if a group of leading cancer biologists announced that they had been fooling us all along: they’ve always known that there will never be a cure for cancer.”89 This view implies that you can only conduct serious AI research if you […]
Should I be afraid of AI? Probably not in the way a lot of people seem to be afraid, worrying about robots gone rogue, machine-led mass extinction, skyrocketing unemployment, etc. But you might need to be concerned about the consequences of humans being overly optimistic and putting too much trust in the capabilities of AI.
Current AI also struggles in new or uncommon situations that were not present in the data it learned from, but the ability to deal with unprecedented situations is paramount for safety. So, the real fear with AI is that overly optimistic people will think it is infallible and try to deploy it in unsafe ways.
My biggest fear at the moment is that some people may get caught up in the hype and not acknowledge AI’s limitations. I fear that they may consider AI infallible and convince everyone else of that belief, and because of that, people may end up using machine learning for critical tasks it is not prepared to perform safely. Maybe they’ll manage to have fully autonomous buses roam busy streets. However, while current AI is a good pretender, sooner or later, it ends up making surprisingly silly—and potentially devastating—mistakes.