How to Create a Mind: The Secret of Human Thought Revealed
39%
The Markov models used in speech recognition code the likelihood that specific patterns of sound are found in each phoneme, how the phonemes influence one another, and the likely orders of phonemes. The system can also include probability networks on higher levels of language structure, such as the order of words, the inclusion of phrases, and so on up the hierarchy of language.
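The idea can be pictured with a toy Markov model; the phonemes, sound labels, and probabilities below are all invented for illustration and have nothing to do with the actual models described:

```python
# A toy Markov model over phonemes (all names and numbers invented):
# transition probabilities capture likely phoneme orderings, and emission
# probabilities capture which acoustic patterns each phoneme tends to produce.

transitions = {            # P(next phoneme | current phoneme)
    "k":  {"ae": 0.9, "t": 0.1},
    "ae": {"t": 0.9, "k": 0.1},
    "t":  {"ae": 0.9, "k": 0.1},
}
emissions = {              # P(observed sound pattern | phoneme)
    "k":  {"burst": 0.8, "hiss": 0.2},
    "ae": {"voiced": 0.9, "hiss": 0.1},
    "t":  {"burst": 0.6, "hiss": 0.4},
}

def sequence_likelihood(phonemes, sounds):
    """Joint likelihood of a phoneme sequence producing the observed sounds."""
    p = 1.0
    for i, (phoneme, sound) in enumerate(zip(phonemes, sounds)):
        p *= emissions[phoneme].get(sound, 0.0)
        if i + 1 < len(phonemes):
            p *= transitions[phoneme].get(phonemes[i + 1], 0.0)
    return p

# A "cat"-like phoneme order scores far higher than a scrambled one:
print(sequence_likelihood(["k", "ae", "t"], ["burst", "voiced", "burst"]))
print(sequence_likelihood(["t", "k", "ae"], ["burst", "burst", "voiced"]))
```

A real recognizer works with many more states, learned probabilities, and hierarchical levels (words, phrases), but the scoring principle is the same: multiply emission and transition likelihoods along the hypothesized sequence.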
39%
Whereas our previous speech recognition systems incorporated specific rules about phoneme structures and sequences explicitly coded by human linguists, the new HHMM-based system was not explicitly told that there are forty-four phonemes in English, the sequences of vectors that were likely for each phoneme, or what phoneme sequences were more likely than others. We let the system discover these “rules” for itself from thousands of hours of transcribed human speech data. The advantage of this approach over hand-coded rules is that the models develop probabilistic rules of which human experts …
39%
How do we set the many parameters that control a pattern recognition system’s functioning? These could include the number of vectors that we allow in the vector quantization step, the initial topology of hierarchical states (before the training phase of the hidden Markov model process prunes them back), the recognition threshold at each level of the hierarchy, the parameters that control the handling of the size parameters, and many others. We can establish these based on our intuition, but the results will be far from optimal. We call these parameters “God parameters” because they are set …
39%
We used what are called genetic or evolutionary algorithms (GAs), which include simulated sexual reproduction and mutations.
39%
we determine a way to code possible solutions to a given problem. If the problem is optimizing the design parameters for a circuit, then we define a list of all of the parameters (with a specific number of bits assigned to each parameter) that characterize the circuit. This list is regarded as the genetic code in the genetic algorithm. Then we randomly generate thousands or more genetic codes. Each such genetic code (which represents one set of design parameters) is considered a simulated “solution” organism. Now we evaluate each simulated organism in a simulated environment by using a defined …
40%
At the end of each generation we determine how much the designs have improved (that is, we compute the average improvement in the evaluation function over all the surviving organisms). When the degree of improvement in the evaluation of the design creatures from one generation to the next becomes very small, we stop this iterative cycle and use the best design(s) in the last generation.
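The procedure described in the last few highlights can be sketched as a bare-bones genetic algorithm. The bit-counting fitness function and every constant here are placeholders standing in for the circuit-design evaluation the text describes:

```python
import random

# A minimal genetic algorithm over bit-string "genetic codes": evaluate,
# keep survivors, breed via simulated sexual reproduction and mutation,
# and stop when generation-to-generation improvement becomes very small.

GENOME_BITS = 32
POP_SIZE = 200

def fitness(genome):
    return sum(genome)          # placeholder for the evaluation function

def crossover(a, b):            # simulated sexual reproduction
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):  # simulated mutation
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(min_improvement=1e-3, max_generations=500):
    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]
    prev_avg = 0.0
    for _ in range(max_generations):
        # Keep the better half of the population as survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Refill the population with mutated offspring of survivor pairs.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
        # Stop once the average survivor fitness barely improves.
        avg = sum(fitness(g) for g in survivors) / len(survivors)
        if avg - prev_avg < min_improvement:
            break
        prev_avg = avg
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Because survivors carry over unchanged, the average survivor fitness never decreases, so the stopping test reliably detects the plateau the passage describes.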
40%
The key to a genetic algorithm is that the human designers don’t directly program a solution; rather, we let one emerge through an iterative process of simulated competition and improvement. Biological evolution is smart but slow, so to enhance its intelligence we greatly speed up its ponderous pace. The computer is fast enough to simulate many generations in a matter of hours or days, and we’ve occasionally had them run for as long as weeks to simulate hundreds of thousands of generations. But we have to go through this iterative process only once; as soon as we have let this simulated …
40%
We then experimented with introducing a series of small variations in the overall system. For example, we would make perturbations (minor random changes) to the input. Another such change was to have adjacent Markov models “leak” into one another by causing the results of one Markov model to influence models that are “nearby.” Although we did not realize it at the time, the sorts of adjustments we were experimenting with are very similar to the types of modifications that occur in biological cortical structures. At first, such changes hurt performance (as measured by accuracy of recognition). …
41%
So how can we tell whether a particular design feature of the biological neocortex is a vital innovation introduced by biological evolution—that is, one that is instrumental to our level of intelligence—or merely an artifact that the design of the system is now dependent on but could have evolved without? We can answer that question simply by running simulated evolution with and without these particular variations to the details of the design (for example, with and without connection cross talk). We can even do so with biological evolution if we’re examining the evolution of a colony of …
41%
conceit
43%
Almost every product we touch is originally designed in a collaboration between human and artificial intelligence and then built in automated factories.
43%
Turing felt that all of human intelligence was embodied and represented in language, and that no machine could pass a Turing test through simple language tricks. Although the Turing test is a game involving written language, Turing believed that the only way that a computer could pass it would be for it to actually possess the equivalent of human-level intelligence.
43%
That the general public is now having conversations in natural spoken language with their handheld computers marks a new era. It is typical that people dismiss the significance of a first-generation technology because of its limitations. A few years later, when the technology does work well, people still dismiss its importance because, well, it’s no longer new. That being said, Siri works impressively for a first-generation product, and it is clear that this category of product is only going to get better.
44%
Over the past decade two major insights have deeply influenced the natural-language-understanding field. The first has to do with hierarchies. Although the Google approach started with association of flat word sequences from one language to another, the inherent hierarchical nature of language has inevitably crept into its operation. Systems that methodically incorporate hierarchical learning (such as hierarchical hidden Markov models) provided significantly better performance. However, such systems are not quite as automatic to build. Just as humans need to learn approximately one conceptual …
44%
This is also how Siri and Dragon Go! work—using rules for the most common and reliable phenomena and then learning the “tail” of the language in the hands of real users. When the Cyc team realized that they had reached a ceiling of performance based on hand-coded rules, they too adopted this approach. Hand-coded rules provide two essential functions. First, they offer adequate initial accuracy, so that a trial system can be placed into widespread usage, where it will improve automatically. Second, they provide a solid basis for the lower levels of the conceptual hierarchy so that the automated …
45%
One might think that less commonly shared professional knowledge, such as that in the medical field, would be more difficult to master than the general-purpose “common” knowledge that is required to play Jeopardy! Actually, the opposite is the case: Professional knowledge tends to be more highly organized, structured, and less ambiguous than its commonsense counterpart, so it is highly amenable to accurate natural-language understanding using these techniques.
46%
It is amusing and ironic when observers criticize Watson for just doing statistical analysis of language as opposed to possessing the “true” understanding of language that humans have. Hierarchical statistical analysis is exactly what the human brain is doing when it is resolving multiple hypotheses based on statistical inference (and indeed at every level of the neocortical hierarchy). Both Watson and the human brain learn and respond based on a similar approach to hierarchical understanding. In many respects Watson’s knowledge is far more extensive than a human’s; no human can claim to have …
46%
An ideal combination for a robustly intelligent system would be to combine hierarchical intelligence based on the PRTM (which I contend is how the human brain works) with precise codification of scientific knowledge and data. That essentially describes a human with a computer. We will enhance both poles of intelligence in the years ahead. With regard to our biological intelligence, although our neocortex has significant plasticity, its basic architecture is limited by its physical constraints. Putting additional neocortex into our foreheads was an important evolutionary innovation, but we …
47%
Let’s use the observations I have discussed above to begin building a brain. We will start by building a pattern recognizer that meets the necessary attributes. Next we’ll make as many copies of the recognizer as we have memory and computational resources to support. Each recognizer computes the probability that its pattern has been recognized.
47%
Recognition of the pattern sends an active signal up the simulated axon of this pattern recognizer. This axon is in turn connected to one or more pattern recognizers at the next higher conceptual level, each of which accepts this pattern as one of its inputs. Each pattern recognizer also sends signals down to pattern recognizers at lower conceptual levels whenever most of a pattern has been recognized, indicating that the rest of the pattern is “expected.”
47%
The pattern recognizers are responsible for “wiring” themselves to other pattern recognizers up and down the conceptual hierarchy.
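The scheme in the last few highlights can be sketched as a toy recognizer network. The thresholds and the letters-to-word example are invented for illustration, and a real implementation would differ substantially:

```python
# Each recognizer fires "up" the hierarchy when enough of its inputs are
# active, and signals "expected" down to the recognizers of its still-missing
# inputs once most of the pattern is present.

class PatternRecognizer:
    def __init__(self, name, threshold=0.75):
        self.name = name
        self.threshold = threshold  # fraction of inputs needed to recognize
        self.inputs = {}            # lower-level recognizer -> seen yet?
        self.parents = []           # recognizers one conceptual level up
        self.active = False         # has this pattern been recognized?
        self.expected = False       # primed by a downward signal

    def wire_child(self, child):
        """Recognizers wire themselves to others up and down the hierarchy."""
        self.inputs[child] = False
        child.parents.append(self)

    def fire(self):
        self.active = True
        for parent in self.parents:      # active signal up the simulated axon
            parent.signal(self)

    def signal(self, child=None):
        if child is None:                # leaf: pattern observed directly
            self.fire()
            return
        self.inputs[child] = True
        seen, total = sum(self.inputs.values()), len(self.inputs)
        if seen >= self.threshold * total:
            self.fire()
        elif seen >= 0.5 * total:        # most of the pattern is present:
            for c, done in self.inputs.items():
                if not done:
                    c.expected = True    # tell lower levels "expected next"

word = PatternRecognizer("apple")
letters = [PatternRecognizer(ch) for ch in "apple"]
for letter in letters:
    word.wire_child(letter)

for letter in letters[:3]:               # three of five letters observed
    letter.signal()
print(word.active, letters[4].expected)  # word not yet active; rest expected
```

Signaling one more letter pushes the word recognizer over its threshold, at which point it would propagate its own recognition upward in a deeper hierarchy.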
48%
I would also provide a critical thinking module, which would perform a continual background scan of all of the existing patterns, reviewing their compatibility with the other patterns (ideas) in this software neocortex. We have no such facility in our biological brains, which is why people can hold completely inconsistent thoughts with equanimity. Upon identifying an inconsistent idea, the digital module would begin a search for a resolution, including its own cortical structures as well as all of the vast literature available to it. A resolution might simply mean determining that one of the …
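One toy way to picture such a background scan, with a pattern representation that is invented here and far simpler than anything brain-like:

```python
# A toy "critical thinking module": patterns are stored as (proposition,
# belief) pairs, and a background scan flags propositions about which the
# system holds contradictory beliefs, queuing them for resolution.

patterns = [
    ("the meeting is at 3 p.m.", True),
    ("ice is cold", True),
    ("the meeting is at 3 p.m.", False),   # contradicts the first pattern
]

def scan_for_inconsistencies(patterns):
    seen = {}
    conflicts = []
    for proposition, belief in patterns:
        if proposition in seen and seen[proposition] != belief:
            conflicts.append(proposition)   # flag for resolution
        seen.setdefault(proposition, belief)
    return conflicts

print(scan_for_inconsistencies(patterns))
```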
48%
Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate them.
48%
More interestingly, we could give our new brain a more ambitious goal, such as contributing to a better world. A goal along these lines, of course, raises a lot of questions: Better for whom? Better in what way? For biological humans? For all conscious beings? If that is the case, who or what is conscious? As nonbiological brains become as capable as biological ones of effecting changes in the world—indeed, ultimately far more capable than unenhanced biological ones—we will need to consider their moral education.
49%
“Computers are not word processors.” It is true that a computer and a word processor exist at different conceptual levels, but a computer can become a word processor if it is running word processing software and not otherwise. Similarly, a computer can become a brain if it is running brain software. That is what researchers including myself are attempting to do. The question, then, is whether or not we can find an algorithm that would turn a computer into an entity that is equivalent to a human brain. A computer, after all, can run any algorithm that we might define because of its innate …
50%
redundancy.
50%
Simply repeating information is the easiest way to achieve arbitrarily high accuracy rates from low-accuracy channels, but it is not the most efficient approach.
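A quick illustration of that trade-off, assuming a binary symmetric channel; the 20% flip probability and repetition count of 15 are invented for illustration:

```python
import random
from collections import Counter

# Repetition over a noisy channel: send each bit n times and take a majority
# vote. Raising n drives the error rate toward zero, at the cost of an n-fold
# drop in throughput -- arbitrarily reliable, but inefficient.

def noisy_channel(bit, flip_prob=0.2):
    """A channel that flips the transmitted bit with probability flip_prob."""
    return bit ^ (random.random() < flip_prob)

def send_with_repetition(bit, n=15, flip_prob=0.2):
    received = [noisy_channel(bit, flip_prob) for _ in range(n)]
    return Counter(received).most_common(1)[0][0]   # majority vote

random.seed(0)
trials = 1000
correct = sum(send_with_repetition(1) == 1 for _ in range(trials))
print(correct / trials)   # accuracy close to 1.0 despite the 20% channel
```

More efficient error-correcting codes achieve comparable reliability with far less redundancy, which is the inefficiency the passage alludes to.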
50%
“Strong” interpretations of the Church-Turing thesis propose an essential equivalence between what a human can think or know and what is computable by a machine. The basic idea is that the human brain is likewise subject to natural law, and thus its information-processing ability cannot exceed that of a machine (and therefore of a Turing machine).
50%
Turing reports another unexpected discovery: that of unsolvable problems. These are problems that are well defined and have unique answers that can be shown to exist, yet that we can also prove can never be computed by any Turing machine—that is to say, by any machine. This reversed the nineteenth-century dogma that any problem that could be defined would ultimately be solved. Turing showed that there are as many unsolvable problems as solvable ones.
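The classic example is the halting problem, and Turing's diagonal argument for it can be sketched in code; the `halts` function here is a hypothetical stand-in for the decision procedure he proved cannot exist:

```python
# Suppose some halts(f) could correctly decide whether calling f() eventually
# halts. Then the program below could be constructed, and halts(contrarian)
# would be wrong either way -- a contradiction, so no total, correct halts()
# can exist.

def halts(f):
    """Pretend oracle: claimed to return True iff f() halts."""
    raise NotImplementedError("provably impossible to implement in general")

def contrarian():
    if halts(contrarian):   # if the oracle says we halt...
        while True:         # ...loop forever, refuting it;
            pass
    # ...and if it says we loop, return immediately, refuting it again.
```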
51%
By 1943, an engineering team influenced by Turing had completed what is arguably the first computer, Colossus, which enabled the Allies to continue decoding high-level German messages encrypted by the Lorenz cipher machine, an even more sophisticated system than Enigma.
52%
we remember only a very small fraction of our thoughts and experiences, and even these memories are not stored as bit patterns at a low level (such as a video image), but rather as sequences of higher-level patterns.
52%
the brain’s remarkable powers come from all its 100 billion neurons being able to process information simultaneously. As I have noted, the visual cortex makes sophisticated visual judgments in only three or four neural cycles. There is considerable plasticity in the brain, which enables us to learn. But there is far greater plasticity in a computer, which can completely restructure its methods by changing its software. Thus, in that respect, a computer will be able to emulate the brain, but the converse is not the case.
54%
Another restriction of the human neocortex is that there is no process that eliminates or even reviews contradictory ideas, which accounts for why human thinking is often massively inconsistent. We have a weak mechanism to address this called critical thinking, but this skill is not practiced nearly as often as it should be. In a software-based neocortex, we can build in a process that reveals inconsistencies for further review.
54%
Is a baby conscious? A dog? They’re not very good at describing their own thinking process. There are people who believe that babies and dogs are not conscious beings precisely because they cannot explain themselves. How about the computer known as Watson? It can be put into a mode where it actually does explain how it came up with a given answer. Because it contains a model of its own thinking, is Watson therefore conscious whereas the baby and the dog are not?
55%
My own view, which is perhaps a subschool of panprotopsychism, is that consciousness is an emergent property of a complex physical system. In this view a dog is also conscious but somewhat less than a human. An ant has some level of consciousness, too, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant. By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a …
56%
So with regard to consciousness, what exactly is the question again? It is this: Who or what is conscious? I refer to “mind” in the title of this book rather than “brain” because a mind is a brain that is conscious.
57%
quantum computing in the brain has never been demonstrated; human mental performance can be satisfactorily explained by classical computing methods; and in any event nothing bars us from applying quantum computing in computers.
57%
There are certain types of problems for which quantum computing would show superior capabilities to classical computing—for example, the cracking of encryption codes through the factoring of large numbers. However, unassisted human thinking has proven to be terrible at solving them, and cannot match even classical computers in this area, which suggests that the brain is not demonstrating any quantum computing capabilities. Moreover, even if such a phenomenon as quantum computing in the brain did exist, it would not necessarily be linked to consciousness.
57%
Individual philosophical assumptions about the nature and source of consciousness underlie disagreements on issues ranging from animal rights to abortion, and will result in even more contentious future conflicts over machine rights. My objective prediction is that machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia.
58%
There is a conceptual gap between science, which stands for objective measurement and the conclusions we can draw thereby, and consciousness, which is a synonym for subjective experience.
58%
Because a great deal of our moral and legal system is based on protecting the existence of and preventing the unnecessary suffering of conscious entities, in order to make responsible judgments we need to answer the question as to who is conscious.
59%
if the entity has no interest in communicating with me, and I don’t have sufficient access to its actions and decision making to be moved by the beauty of its internal processes, does that mean that it is not conscious? I need to conclude that entities that do not succeed in convincing me of their emotional reactions, or that don’t care to try, are not necessarily not conscious. It would be difficult to recognize another conscious entity without establishing some level of empathetic communication, but that judgment reflects my own limitations more than it does the entity under consideration.
59%
What Are We Conscious Of?
59%
Imagine that you are driving in the left lane of a highway. Now close your eyes, grab an imagined steering wheel, and make the movements to change lanes to the lane to your right.
59%
you turned the steering wheel to the right for a brief period. Then you straightened it out again. Job done.
59%
you got it wrong. Turning the wheel to the right and then straightening it out causes the car to head in a direction that is diagonal to its original direction. It will cross the lane to the right, as you intended, but it will keep going to the right indefinitely until it zooms off the road. What you needed to do as your car crossed the lane to the right was to then turn the wheel to the left, just as far as you had turned it to the right, and then straighten it out again.
59%
you’ve done this maneuver thousands of times. Are you not conscious when you do this?
59%
you have clearly mastered this skill. Yet you are not conscious of what you did, however many times you’ve accomplished this task.
60%
particles figure that if no one is bothering to look at them, they don’t need to decide where they are.
60%
The so-called collapse of the wave function, this view holds, is not a collapse at all. The wave function actually never goes away. It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the …