The Pattern on the Stone Quotes

The Pattern on the Stone: The Simple Ideas that Make Computers Work by William Daniel Hillis
821 ratings, 4.06 average rating, 94 reviews
“It is certainly conceivable, as at least one well-known physicist has speculated (to hoots from most of his colleagues), that the human brain takes advantage of quantum mechanical effects. Yet there is no evidence whatsoever that this is the case. Certainly, the physics of a neuron depends on quantum mechanics, just as the physics of a transistor does, but there is no evidence that neural processing takes place at the quantum mechanical level as opposed to the classical level; that is, there is no evidence that quantum mechanics is necessary to explain human thought. As far as we know, all the relevant computational properties of a neuron can be simulated on a conventional computer. If this is indeed the case, then it is also possible to simulate a network of tens of billions of such neurons, which means, in turn, that the brain can be simulated on a universal machine. Even if it turns out that the brain takes advantage of quantum computation, we will probably learn how to build devices that take advantage of the same effects—in which case it will still be possible to simulate the human brain with a machine.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“I etch a pattern of geometric shapes onto a stone. To the uninitiated, the shapes look mysterious and complex, but I know that when arranged correctly they will give the stone a special power, enabling it to respond to incantations in a language no human being has ever spoken. ...Yet my work involves no witchcraft. The stone is a wafer of silicon, and the incantations are software. The patterns etched on the chip and the programs that instruct the computer may look complicated and mysterious, but they are generated according to a few basic principles that are easily explained.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“In a search space like that of the traveling salesman problem, where nearby points are likely to have similar scores, it is usually better to use a procedure that searches a path through the space by traveling from point to nearby point. Just as the best method for finding a peak in a hilly landscape is to walk uphill, the equivalent heuristic is to choose the best of nearby solutions found in the search space. In the traveling salesman problem, for example, the computer might vary the best-known solution by exchanging the order of two of the cities in the itinerary. If this variation leads to a more efficient tour, then it is accepted as a superior solution (a step uphill); otherwise, it is rejected and a new variation is tried. This method of search will wander through the space, always traveling in an uphill direction, until it reaches the top of a hill. At this point, the solution cannot be improved by exchanging any pair of cities. The weakness of this method, which is called hill climbing, is that although you thereby reach the top of a hill, it is not necessarily the highest hill in the landscape. Hill climbing is a heuristic, not an algorithm.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
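The hill-climbing procedure described above translates almost line for line into code. Below is a minimal sketch in Python; the city coordinates and helper names are invented for illustration, and, true to the passage, the result is only a local optimum, not necessarily the highest hill.

```python
import random

# Five made-up cities with (x, y) coordinates.
cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 2)}

def tour_length(tour):
    """Total round-trip distance of a tour (a list of city names)."""
    total = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def hill_climb(tour, tries=1000):
    """Vary the best-known tour by swapping two cities; keep improvements."""
    best = list(tour)
    for _ in range(tries):
        i, j = random.sample(range(len(best)), 2)
        candidate = list(best)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if tour_length(candidate) < tour_length(best):  # a step uphill
            best = candidate
    return best  # a hilltop: no single swap improves it

best = hill_climb(list(cities))
print(best, round(tour_length(best), 2))
```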
“Generally, the reason that a search space is large is because the possibilities are produced by forming combinations of simpler elements—the individual moves in chess, the city-to-city hops in the traveling salesman problem. This combining of elements leads to a combinatorial explosion of possibilities—an explosion that grows exponentially with the number of elements being combined. Since the possibilities are built from combinations of elements, there is a sense of distance in the space; combinations that share common elements are “closer” than the combinations that do not.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
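The explosion is easy to see numerically. For the traveling salesman problem, the number of distinct round trips through n cities is (n - 1)!/2, fixing the starting city and ignoring direction; the formula is a standard gloss, not the book's. A quick sketch:

```python
import math

# Distinct round-trip tours through n cities: (n - 1)! / 2.
for n in (5, 10, 15, 20):
    print(f"{n} cities: {math.factorial(n - 1) // 2:,} distinct tours")
```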
“A rule that tends to give the right answer, but is not guaranteed to, is called a heuristic. It is often more practical to use a heuristic than an algorithm: for instance, there are many effective heuristics for the traveling salesman problem—procedures that will provide an almost optimal route very quickly. In fact these heuristics usually do find the best route, although they are not absolutely guaranteed to do so. A real-life traveling salesman would presumably be happier with a good, fast heuristic than with a slow algorithm.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
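One classic fast heuristic of this kind, though the passage does not name it, is nearest neighbor: always hop to the closest unvisited city. A minimal sketch, with invented city data:

```python
cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 2)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def nearest_neighbor(start="A"):
    """Greedy tour: fast and usually good, but not guaranteed optimal."""
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbor())
```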
“The theoretical limitations of computers provide no useful dividing line between human beings and machines. As far as we know, the brain is a kind of computer, and thought is just a complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes nothing away from the wonder or value of human thought. The statement that thought is a complex computation is like the statement sometimes made by biologists that life is a complex chemical reaction: both statements are true, and yet they still may be seen as incomplete. They identify the correct components, but they ignore the mystery. To me, life and thought are both made all the more wonderful by the realization that they emerge from simple, understandable parts. I do not feel diminished by my kinship to Turing’s machine.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“Although one is hard pressed to come up with specific examples of noncomputable problems, one can easily prove that most of the possible mathematical functions are noncomputable. This is because any program can be specified in a finite number of bits, whereas specifying a function usually requires an infinite number of bits, so there are a lot more functions than programs.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
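The counting argument can be made precise with Cantor's diagonalization; the following gloss is mine, not the book's:

```latex
\[
  |\text{programs}| \le |\{0,1\}^{*}| = \aleph_0,
  \qquad
  |\{\, f : \mathbb{N} \to \{0,1\} \,\}| = 2^{\aleph_0} > \aleph_0 .
\]
% Diagonalization: list the computable functions f_1, f_2, ... and define
% g(n) = 1 - f_n(n). Then g differs from every f_n at input n, so g is a
% well-defined function that no program computes.
```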
“The halting problem, which was dreamed up by Alan Turing, is chiefly important as an example of a noncomputable problem, and most noncomputable problems that do come up in practice are similar to or equivalent to it. But a computer’s inability to solve the halting problem is not a weakness of the computer, because the halting problem is inherently unsolvable. There is no machine that can be constructed that can solve the halting problem.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“A rare example of a well-defined, useful, but noncomputable problem is the halting problem. Imagine that I want to write a computer program that will examine another computer program and determine whether or not that program will eventually stop. If the program being examined has no loops or recursive subroutine calls, it is bound to finish eventually, but if it does have such constructs the program may well go on forever. It turns out that there is no algorithm for examining a program and determining whether or not it is fatally infected with an endless loop. Moreover, it’s not that no one has yet discovered such an algorithm; rather, no such algorithm is possible. The halting problem is noncomputable.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
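Turing's proof that no such algorithm can exist fits in a few lines. The sketch below assumes, for contradiction, a hypothetical oracle `halts`; the names and structure are mine, but the diagonal argument is the standard one:

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually stops.
    No such function can actually be written; it stands in for the
    assumed halting tester."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # examining itself.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    return "done"         # predicted to loop forever, so halt at once

# Does paradox(paradox) halt? If halts says yes, paradox loops forever;
# if halts says no, paradox halts immediately. Either answer is wrong,
# so no correct halts() can exist: the halting problem is noncomputable.
```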
“there are also flawlessly defined computational problems that are impossible to solve. Such problems are called noncomputable”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“In computers, chaotic systems—systems whose outcomes depend sensitively on the initial conditions—are the norm.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“A roulette wheel is an example of what physicists call a chaotic system—a system in which a small change in the initial conditions (the throw, the mass of the ball, the diameter of the wheel, and so forth) can produce a large change in the state to which the system evolves (the resulting number). This notion of a chaotic system helps explain how a deterministic set of interactions can produce unpredictable results. In a computer, there are simpler ways to produce a pseudorandom sequence than simulating a roulette wheel,”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
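A roulette wheel takes real physics to simulate, but the sensitivity Hillis describes shows up in far simpler systems. Below is a standard stand-in not taken from the book, the logistic map: two starting points differing by one part in a billion soon disagree completely.

```python
# Logistic map x -> 4x(1 - x): a textbook chaotic system.
x, y = 0.4, 0.4 + 1e-9          # initial conditions differ by 0.000000001
for _ in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
print(abs(x - y))               # the tiny difference has grown to order 1
```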
“Like the roulette wheel, a computer can produce a sequence of numbers that is random in the same sense. In fact, using a mathematical model, the computer could simulate the physics of the roulette wheel and throw a simulated ball at a slightly different angle each time in order to produce each number in the sequence. Even if the angles at which the computer throws the simulated ball follow a consistent pattern, the simulated dynamics of the wheel would transform these tiny differences into what amounts to an unpredictable sequence of numbers. Such a sequence of numbers is called a pseudorandom sequence, because it only appears random to an observer who does not know how it was computed. The sequence produced by a pseudorandom number generator can pass all normal statistical tests of randomness.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
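One of the "simpler ways" the previous quote alludes to is a linear congruential generator, a classic pseudorandom recipe (the constants below are the well-known Numerical Recipes ones, not from the book). The sequence is fully deterministic, yet appears random to anyone who does not know the rule:

```python
def pseudorandom(seed, count):
    """Linear congruential generator: state -> (a*state + c) mod 2**32."""
    state = seed
    for _ in range(count):
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32      # scale into [0, 1)

print(list(pseudorandom(seed=42, count=5)))
# Rerunning with the same seed reproduces the same "random" numbers.
```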
“One might suppose that analog computers would be more powerful, since they can represent a continuum of values, whereas digital computers can represent data only as discrete numbers. However, this apparent advantage disappears if we take a closer look. A true continuum is unrealizable in the physical world. The problem with analog computers is that their signals can achieve only a limited degree of accuracy.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“What is amazing to me is not so much Turing’s imaginary construct but his hypothesis that there is only one type of universal computing machine. As far as we know, no device built in the physical universe can have any more computational power than a Turing machine. To put it more precisely, any computation that can be performed by any physical computing device can be performed by any universal computer, as long as the latter has sufficient time and memory. This is a remarkable statement, suggesting as it does that a universal computer with the proper programming should be able to simulate the function of a human brain.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“One consequence of this principle of universality is that the only important difference in power between two computers is their speed and the size of their memory. Computers may differ in the kinds of input and output devices connected to them, but these so-called peripherals are not essential characteristics of a computer, any more than its size or its cost or the color of its case. In terms of what they are able to do, all computers (and all other types of universal computing devices) are fundamentally identical.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“The work performed by the computer is specified by a program, which is written in a programming language. This language is converted to sequences of machine-language instructions by interpreters or compilers, via a predefined set of subroutines called the operating system. The instructions, which are stored in the memory of the computer, define the operations to be performed on data, which are also stored in the computer’s memory. A finite-state machine fetches and executes these instructions. The instructions as well as the data are represented by patterns of bits. Both the finite-state machine and the memory are built of storage registers and Boolean logic blocks, and the latter are based on simple logical functions, such as And, Or, and Invert. These logical functions are implemented by switches, which are set up either in series or in parallel, and these switches control a physical substance, such as water or electricity, which is used to send one of two possible signals from one switch to another: 1 or 0. This is the hierarchy of abstraction that makes computers work.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
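The bottom layers of this hierarchy can be sketched directly: And, Or, and Invert as functions on bits, composed one level up into a half adder. The three primitives are the book's; the adder example is an invented illustration.

```python
def And(a, b): return a & b
def Or(a, b):  return a | b
def Invert(a): return 1 - a

def Xor(a, b):
    # Built from the three primitives alone.
    return And(Or(a, b), Invert(And(a, b)))

def half_adder(a, b):
    """Add two bits, returning (sum, carry): one rung up the hierarchy."""
    return Xor(a, b), And(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```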
“Some of this process of looking things up and finding the corresponding sequences of machine language can be done before the program is executed. This saves time, because if the program is going to be executed more than once, there’s no point in looking up the same things over and over again. When most of the work of conversion is done beforehand, the translation process is called compilation, and the program that performs the compilation is called a compiler. If most of the work is done while the program is being executed, then the process is called interpretation, and the program is called an interpreter. There is no hard and fast line between the two.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
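A toy illustration of the distinction, with an invented two-operation mini-language: the interpreter repeats the lookup on every run, while the compiler does it once, ahead of time.

```python
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def interpret(program, x):
    for op_name, arg in program:   # the lookup happens on every execution
        x = OPS[op_name](x, arg)
    return x

def compile_program(program):
    steps = [(OPS[op_name], arg) for op_name, arg in program]  # looked up once
    def run(x):
        for op, arg in steps:      # execution skips the lookup entirely
            x = op(x, arg)
        return x
    return run

program = [("add", 3), ("mul", 2)]
print(interpret(program, 5))           # 16, translated as it runs
print(compile_program(program)(5))     # 16, translated beforehand
```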
“Some subroutines are so useful that they are always loaded into the computer. This set of subroutines is called the operating system. Useful operating-system subroutines include those that write or read characters typed on the keyboard, or that draw lines on the screen, or otherwise interact with the user. The computer’s operating system determines most of the look and feel of the interface to the user. It also governs the interface between the computer and whatever program is being run, since the operating system’s subroutines provide the program with a set of operations that are richer and more complex than the machine-language instructions.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“Normally, the computer stores return addresses in a group of sequential locations known as a stack. The most recent return address is stored at the “top of the stack.” The memory stack works just like a stack of dinner plates: items are always added or removed from the top—a last-in, first-out storage system that works perfectly for storing the return addresses of nested subroutines, because a subroutine is never finished until all of its nested subroutines are finished.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
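The dinner-plate discipline in miniature, with made-up return addresses: the deepest call is always the first to return.

```python
stack = []

stack.append(100)   # main program calls subroutine A from address 100
stack.append(200)   #   A calls B from address 200
stack.append(300)   #     B calls C from address 300

print(stack.pop())  # 300: C finishes first, execution resumes inside B
print(stack.pop())  # 200: then B finishes, resuming inside A
print(stack.pop())  # 100: A finishes last, back in the main program
```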
“Subroutines allow sequences of instructions to be used over and over from many places within the program. In effect, the subroutine-calling convention allows the programmer to define new instructions by using sequences of other instructions. The program accesses a subroutine by using a Jump instruction to load the program counter with the address of the subroutine; but before doing so, the computer saves the previous contents of the program counter in a special memory location. At the end of the subroutine, another instruction reads this return address and jumps back to the location from which the subroutine was called.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
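The convention described here, sketched with a program counter and one special location for the return address (the addresses and names are invented for illustration):

```python
memory = {"return_address": None}
pc = 10                              # program counter: the Jump is at address 10

# Calling the subroutine at address 50:
memory["return_address"] = pc + 1    # save where to resume afterward
pc = 50                              # jump into the subroutine

# ...the subroutine body runs...

# The subroutine's final instruction:
pc = memory["return_address"]        # jump back to the caller
print(pc)                            # 11: the instruction after the call
```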
“beginning as many times as is necessary. This operation is called a loop, and we saw an example of it in the description of programming in Logo.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“The control instructions determine the address of the next instruction to be fetched; this address is stored in a special register called the program counter.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“Other processing instructions combine data among the memory registers. There are also instructions to perform Boolean functions—And, Or, or Invert—on the patterns of bits in the registers.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
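In miniature, with invented 8-bit register contents; each Boolean function is applied to every bit position at once:

```python
a, b = 0b11001100, 0b10101010
print(format(a & b, "08b"))        # And:    10001000
print(format(a | b, "08b"))        # Or:     11101110
print(format(~a & 0xFF, "08b"))    # Invert: 00110011
```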
“There are two basic types of instructions in most computers: processing instructions and control instructions. The processing instructions move data to and from the memory and combine them to perform arithmetic and logical functions. The addresses of the memory locations, or registers, are specified by the processing instructions”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“The finite-state machine repeatedly executes the following sequence of operations: (1) read an instruction from the memory, (2) execute the operation specified by that instruction, and (3) calculate the address of the next instruction. The sequence of states necessary to do this is built into the Boolean logic of the machine, and the instructions themselves are specific patterns of bits—patterns that cause the finite-state machine to perform various operations on the data in the memory.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
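This three-step cycle is small enough to simulate directly. The instruction set below (LOAD, ADD, JUMP_IF_NONZERO, HALT) is invented for illustration; the conditional jump is a control instruction of the kind described earlier, and it forms the loop as the program counts down from 3.

```python
memory = [
    ("LOAD", 3),                 # address 0: accumulator = 3
    ("ADD", -1),                 # address 1: accumulator -= 1
    ("JUMP_IF_NONZERO", 1),      # address 2: loop back to address 1
    ("HALT", None),              # address 3: stop
]
accumulator, pc, running = 0, 0, True
while running:
    op, arg = memory[pc]         # (1) read an instruction from memory
    pc += 1                      # (3) default next address...
    if op == "LOAD":             # (2) execute the specified operation
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "JUMP_IF_NONZERO":
        if accumulator != 0:
            pc = arg             # ...unless a control instruction overrides it
    elif op == "HALT":
        running = False
print(accumulator)               # 0: the loop ran until the count reached zero
```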
“Some of the words stored in the memory represent data to be operated upon, like numbers and letters. Others represent instructions that tell the machine what sequence of operations to perform. The instructions are stored in machine language, which, as noted, is much simpler than a typical programming language. Machine language is interpreted directly by the finite-state machine. In the type of computer we will describe, each instruction in machine language is stored in a single word of memory, and a sequence of instructions is stored in a block of sequentially numbered memory locations.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“Each register in the memory has a different address—a pattern of bits by means of which you can access it—so registers are referred to as locations in memory. The memory contains Boolean logic blocks, which decode the address and select the location for reading or writing. If data are to be written at this memory location, these logic blocks store the new data into the addressed register. If the register is to be read, the logic blocks steer the data from the addressed register to the memory’s output, which is connected to the input of the finite-state machine.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
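A decoder of this kind, for a toy memory of four one-word registers, can be built from And and Invert alone (the register contents are invented):

```python
def Invert(a): return 1 - a
def And(a, b): return a & b

registers = [5, 7, 11, 13]      # four "locations," addressed by two bits

def select(a1, a0):
    """One select line per register; exactly one line is 1 per address."""
    return [And(Invert(a1), Invert(a0)),   # address 00
            And(Invert(a1), a0),           # address 01
            And(a1, Invert(a0)),           # address 10
            And(a1, a0)]                   # address 11

def read(a1, a0):
    # Steer the addressed register to the output; unselected lines
    # contribute zero, so a plain sum does the steering.
    return sum(sel * reg for sel, reg in zip(select(a1, a0), registers))

print(read(1, 0))   # 11: the contents of location 10 (binary), i.e. 2
```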
“A computer is just a special type of finite-state machine connected to a memory. The computer’s memory—in effect, an array of cubbyholes for storing data—is built of registers, like the registers that hold the states of finite-state machines. Each register holds a pattern of bits called a word, which can be read (or written) by the finite-state machine. The number of bits in a word varies from computer to computer, but in a modern microprocessor (as I write this) it is usually eight, sixteen, or thirty-two bits. (Word sizes will probably grow with improvement in technology.) A typical memory will have millions or even billions of these registers, each holding a single word. Only one of the registers in the memory is accessed at a time—that is, only the data in one of the memory registers will be read or written on each cycle of the finite-state machine.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
“The most important advantage of an object-oriented programming language is that the objects—for instance, various objects in a video game—can be specified independently and then combined to create new programs. Writing a new object-oriented program sometimes feels a bit like throwing a bunch of animals into a cage and watching what happens. The behavior of the program emerges, as a result of the interactions of the programmed objects. For this reason, as well as the fact that object-oriented languages are relatively new, you might think twice about one for writing a safety-critical system that flies an airplane.”
William Daniel Hillis, The Pattern on the Stone: The Simple Ideas that Make Computers Work
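The "animals in a cage" feel can be seen in a few lines; the classes below are invented, and the point is only that the population's behavior emerges from independently specified objects rather than from any central plan.

```python
class Rabbit:
    def act(self, cage):
        cage.append(Rabbit())            # rabbits multiply

class Fox:
    def act(self, cage):
        for other in cage:               # foxes eat the first rabbit found
            if isinstance(other, Rabbit):
                cage.remove(other)
                break

cage = [Rabbit(), Rabbit(), Fox()]
for step in range(3):                    # throw them in and watch
    for animal in list(cage):
        animal.act(cage)
print(len(cage), "animals:", {type(a).__name__ for a in cage})
```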
