A Thousand Brains: A New Theory of Intelligence
3%
comrades comes before even your own life.” The conflict between the old reptilian and the new mammalian brain furnishes the answer to such riddles as “Why does pain have to be so damn painful?” What, after all, is pain for? Pain is a proxy for death. It is a warning to the brain, “Don’t do that again: don’t tease a snake, pick up a hot ember, jump from a great height. This time it only hurt; next time it might kill you.” But now a designing engineer might say, what we need here is the equivalent of a painless flag in the brain. When the flag shoots up, don’t repeat whatever you just did. But …
9%
An animal doesn’t need a neocortex to live a complex life. A crocodile’s brain is roughly equivalent to our brain, but without a proper neocortex. A crocodile has sophisticated behaviors, cares for its young, and knows how to navigate its environment. Most people would say a crocodile has some level of intelligence, but nothing close to human intelligence. The neocortex and the older parts of the brain are connected via nerve fibers; therefore, we cannot think of them as completely separate organs. They are more like roommates, with separate agendas and personalities, but who need to cooperate …
9%
The neocortex is surprisingly different. Although it occupies almost three-quarters of the brain’s volume and is responsible for a myriad of cognitive functions, it has no visually obvious divisions. The folds and creases are needed to fit the neocortex into the skull, similar to what you would see if you forced a napkin into a large wine glass. If you ignore the folds and creases, then the neocortex looks like one large sheet of cells, with no obvious divisions.
12%
What Mountcastle says in these first three sentences is that the brain grew large over evolutionary time by adding new brain parts on top of old brain parts. The older parts control more primitive behaviors while the newer parts create more sophisticated ones. Hopefully this sounds familiar, as I discussed this idea in the previous chapter. However, Mountcastle goes on to say that while much of the brain got bigger by adding new parts on top of old parts, that is not how the neocortex grew to occupy 70 percent of our brain. The neocortex got big by making many copies of the same thing: a basic …
13%
So, what was Mountcastle’s proposal for the location of the cortical algorithm? He said that the fundamental unit of the neocortex, the unit of intelligence, was a “cortical column.” Seen from the surface of the neocortex, a cortical column occupies about one square millimeter. It extends through the entire 2.5 mm thickness, giving it a volume of 2.5 cubic millimeters. By this definition, there are roughly 150,000 cortical columns stacked side by side in a human neocortex. You can imagine a cortical column as a little piece of thin spaghetti. A human neocortex is like 150,000 short …
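As a quick back-of-the-envelope check on these numbers (not from the book; just the arithmetic implied by the quoted figures), a short Python sketch:

# Back-of-the-envelope check of the column figures quoted above: a 1 mm^2
# footprint, 2.5 mm thickness, and roughly 150,000 columns. Everything below
# is simple arithmetic on those numbers, not data from the book.
column_footprint_mm2 = 1.0      # surface area each column occupies
cortical_thickness_mm = 2.5     # depth of the neocortical sheet
num_columns = 150_000           # rough count of columns in a human neocortex

column_volume_mm3 = column_footprint_mm2 * cortical_thickness_mm     # 2.5 mm^3 per column
sheet_area_cm2 = num_columns * column_footprint_mm2 / 100.0          # 100 mm^2 per cm^2

print(f"volume per column: {column_volume_mm3} mm^3")        # 2.5 mm^3
print(f"implied sheet area: {sheet_area_cm2:,.0f} cm^2")      # 1,500 cm^2, about 39 cm x 39 cm

The implied area, roughly 1,500 square centimeters, is consistent with the “large napkin” comparison in the next highlight.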
13%
Let’s review. The neocortex is a sheet of tissue about the size of a large napkin. It is divided into dozens of regions that do different things. Each region is divided into thousands of columns. Each column is composed of several hundred hairlike minicolumns, which consist of a little over one hundred cells each. Mountcastle proposed that, throughout the neocortex, columns and minicolumns perform the same function: implementing a fundamental algorithm that is responsible for every aspect of perception and intelligence.
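Taking the hierarchy above literally, here is a rough tally in Python. The per-column and per-minicolumn counts are placeholders standing in for “several hundred” and “a little over one hundred”; they are assumptions, not figures from the book.

# Rough tally of the hierarchy described above. The ~500 and ~110 are
# placeholder values for "several hundred" minicolumns per column and
# "a little over one hundred" cells per minicolumn -- assumptions, not
# numbers taken from the book.
num_columns = 150_000
minicolumns_per_column = 500     # assumed stand-in for "several hundred"
cells_per_minicolumn = 110       # assumed stand-in for "a little over one hundred"

total_minicolumns = num_columns * minicolumns_per_column
total_cells = total_minicolumns * cells_per_minicolumn

print(f"{total_minicolumns:,} minicolumns")   # 75,000,000
print(f"{total_cells:,} cells")               # 8,250,000,000

With these placeholder values the tally lands in the billions of cells, at least the right order of magnitude for the neocortex.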
26%
In 1971, scientist John O’Keefe and his student Jonathan Dostrovsky placed a wire into a rat’s brain. The wire recorded the spiking activity of a single neuron in the hippocampus. The wire went up toward the ceiling so they could record the activity of the cell as the rat moved and explored its environment, which was typically a big box on a table. They discovered what are now called place cells: neurons that fire every time the rat is in a particular location in a particular environment. A place cell is like a “you are here” marker on a map. As the rat moves, different place cells become …
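A toy Python sketch of the “you are here” idea described above. The cell names, locations, and firing radius are invented for illustration; this models only the concept of a place field, not the actual recording setup.

import math

# Toy place fields: each cell is tuned to one spot in the box (coordinates in
# arbitrary units). Names and positions are illustrative only.
place_fields = {
    "cell_A": (0.2, 0.3),
    "cell_B": (0.8, 0.1),
    "cell_C": (0.5, 0.9),
}

def active_cells(rat_position, radius=0.15):
    """Return the cells whose preferred location the rat is currently near."""
    x, y = rat_position
    return [name for name, (cx, cy) in place_fields.items()
            if math.hypot(x - cx, y - cy) < radius]

print(active_cells((0.25, 0.35)))   # ['cell_A'] -- the "you are here" marker for this spot
print(active_cells((0.50, 0.50)))   # [] -- no cell in this toy map is tuned to this spot

As the simulated rat moves, different cells become active, mirroring the last sentence of the highlight.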
35%
There are two modest-size regions of the neocortex that are said to be responsible for language. Wernicke’s area is thought to be responsible for language comprehension, and Broca’s area is thought to be responsible for language production. This is a bit of a simplification. First, there is disagreement over the exact location and extent of these regions. Second, the functions of Wernicke’s and Broca’s areas are not neatly differentiated into comprehension and production; they overlap a bit. Finally, and this should be obvious, language can’t be isolated to two small regions of the neocortex. …
36%
According to linguists, one of the defining attributes of language is its nested structure. For example, sentences are composed of phrases, phrases are composed of words, and words are composed of letters. Recursion, the ability to repeatedly apply a rule, is another defining attribute. Recursion allows sentences to be constructed with almost unlimited complexity. For example, the simple sentence “Tom asked for more tea” can be extended to “Tom, who works at the auto shop, asked for more tea,” which can be extended to “Tom, who works at the auto shop, the one by the thrift store, asked for …
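The “repeatedly apply a rule” idea can be made concrete with a few lines of Python. The clauses come from the example in the highlight; the function itself is just an illustration of recursion, not anything from the book.

def nest(noun_phrase, clauses):
    """Repeatedly apply one rule: attach the next modifying clause, then recurse."""
    if not clauses:
        return noun_phrase
    return nest(f"{noun_phrase}, {clauses[0]}", clauses[1:])

subject = nest("Tom", ["who works at the auto shop", "the one by the thrift store"])
print(f"{subject}, asked for more tea")
# Tom, who works at the auto shop, the one by the thrift store, asked for more tea

Each pass applies the same rule to the output of the previous pass, which is why sentences can in principle be extended without limit.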
47%
The long-term goal of AI research is to create machines that exhibit human-like intelligence—machines that can rapidly learn new tasks, see analogies between different tasks, and flexibly solve new problems. This goal is called “artificial general intelligence,” or AGI, to distinguish it from today’s limited AI.
53%
I recently attended a panel discussion titled “Being Human in the Age of Intelligent Machines.” At one point during the evening, a philosophy professor from Yale said that if a machine ever became conscious, then we would probably be morally obligated to not turn it off. The implication was that if something is conscious, even a machine, then it has moral rights, so turning it off is equivalent to murder. Wow! Imagine being sent to prison for unplugging a computer. Should we be concerned about this?
86%
Wiki Earth
Letting a distant civilization know that we once existed is an important first goal. But to me, the most important thing about humans is our knowledge. We are the only species on Earth that possesses knowledge of the universe and how it works. Knowledge is rare, and we should attempt to preserve it.