Kindle Notes & Highlights
Read between April 6 and April 11, 2025
False beliefs will mean you will not participate significantly in the inevitable and extensive computerization of your organization and society generally. In many senses the computer revolution has only begun!
The hard AI people claim man is only a machine and nothing else, and hence anything humans can do in the intellectual area can be copied by a machine. As noted above, most readers, when shown some result from a machine, automatically believe it cannot be the human trait that was claimed. Two questions immediately arise. One, is this fair? Two, how sure are you that you are not just a collection of molecules in a radiant energy field, and hence the whole world is merely molecule bouncing against molecule? If you believe in other (unnamed, mysterious) forces, how do they affect the motion of the
…
This is the type of AI that I am interested in—what can the human and machine do together, and not in the competition which can arise. Of course robots will displace many humans doing routine jobs. In a very real sense, machines can best do routine jobs, thus freeing humans for more humane jobs. Unfortunately, many humans at present are not equipped to compete with machines—they are unable to do much more than routine jobs. There is a widespread belief (hope?) that humans can compete, once they are given proper training. However, I have long publicly doubted you could take many coal miners and
…
Let me repeat myself: artificial intelligence is not a subject you can afford to ignore; your attitude will put you in the front or the rear of the applications of machines in your field, but also may lead you into a really great fiasco!
The situation with respect to computers and thought is awkward. We would like to believe, and at the same time not believe, machines can “think.” We want to believe because machines could then help us so much in our mental world; we want to not believe to preserve our feeling of self-importance. The machines can defeat us in so many ways—speed, accuracy, reliability, cost, rapidity of control, freedom from boredom, bandwidth in and out, ease of forgetting old and learning new things, hostile environments, and personnel problems—that we would like to feel superior in some way to them; they are,
…
I suggest you pause and have two discussions with yourself on the topic “can machines think?” and review why it is important to come to your own evaluation of what machines can and cannot do in your future.
You could begin your discussion with my observation that whichever position you adopt there is the other side, and I do not care what you believe so long as you have good reasons and can explain them clearly. That is my task, to make you think on this awkward topic, and not to give any answers.
Another level of objection to the use of computers is in the area of experts. People are sure the machine can never compete, ignoring all the advantages the machines have (see end of Chapter 1). These are: economics, speed, accuracy, reliability, rapidity of control, freedom from boredom, bandwidth in and out, ease of retraining, hostile environments, and personnel problems. They always seem to cling to their supposed superiority rather than try to find places where machines can improve matters! It is difficult to get people to look at machines as good things to use whenever they will work;
…
It is the abstraction from details that gives the breadth of application.
Again, all we suppose is there is such a source, and we are going to encode it for transmission. The encoder is broken into two parts. The first half is called the source encoding, which, as its name implies, is adapted to the source, various sources having possibly different kinds of encodings. The second half of the encoding process is called channel encoding, and it is adapted to the channel over which the encoded symbols are to be sent. Thus the second half of the encoding process is tuned to the channel. In this fashion, with the common interface, we can have a wide variety of sources encoded
…
We begin by assuming the coded symbols we use are of variable length, much like the classical Morse code of dots and dashes, where the common letters are short and the rare ones are long. This produces an efficiency in the code, but it should be noted Morse code is a ternary code, not binary, since there are spaces as well as dots and dashes. If all the code symbols are of the same length we will call it a block code.
There are several things to note. First, the decoding is a straightforward process in which each digit is examined only once. Second, in practice you usually include a symbol which is an exit from the decoding process and is needed at the end of the message. Failure to allow for an escape symbol is a common error in the design of codes. You may, of course, never expect to exit from a decoding mode, in which case the exit symbol is not needed.
The next topic is instantaneously decodable codes. To see what this is, consider the above code with the digits reversed end for end. Now consider receiving 011111…111. The only way you can decode this is to start at the final end and group by threes until you see how many 1s are left to go with the first 0. Only then can you decode the first symbol. Yes, it is uniquely decodable, but not instantaneously! You have to wait until you get to the end of the message before you can start the decoding process! It will turn out (McMillan’s theorem) that instantaneous decodability costs nothing in
…
Comma codes are codes where each symbol is a string of 1s followed by a 0, except the last symbol, which is all 1s. As a special case we have the Kraft sum K = Σ 2^(−l_i) = 1, and we have exactly met the condition. It is easy to see the general comma code meets the Kraft inequality with exact equality. If the Kraft sum is less than 1, then there is excess signaling capacity, since another symbol could be included, or some existing one shortened, and thus the average code length would be less. Note that if the Kraft inequality is met, that does not mean the code is uniquely decodable, only that there exists a code
…
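The exact-equality claim is easy to check numerically. For an n-symbol comma code the lengths are 1, 2, …, n−1, plus a final all-1s symbol also of length n−1:

```python
# Kraft sum K = sum(2**-l_i) for the n-symbol comma code.
def kraft_sum(lengths):
    return sum(2.0 ** -l for l in lengths)

def comma_code_lengths(n):
    # symbols 0, 10, 110, ..., plus the last symbol of all 1s (no trailing 0)
    return list(range(1, n)) + [n - 1]

for n in (2, 5, 10):
    assert kraft_sum(comma_code_lengths(n)) == 1.0   # exact equality
```

The powers of two are exactly representable in floating point, so the equality test is safe here.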
First, we want the average length L of the message sent to be as small as we can make it (to save the use of facilities). Second, it must be a statistical theory, since we cannot know the messages which are to be sent, but we can know some of the statistics by using past messages plus the assumption the future will probably be like the past. For the simplest theory, which is all we can discuss here, we will need the probabilities of the individual symbols occurring in a message. How to get these is not part of the theory, but can be obtained by inspection of past experience, or imaginative
…
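The average-length criterion can be illustrated in a few lines; the probabilities and code lengths below are my own illustrative choices, not from the text.

```python
# Average code length L = sum(p_i * l_i). With skewed symbol probabilities,
# a variable-length code beats a fixed-length (block) code.
probs   = [0.5, 0.25, 0.125, 0.125]  # assumed symbol probabilities
var_len = [1, 2, 3, 3]               # prefix code: 0, 10, 110, 111
blk_len = [2, 2, 2, 2]               # block code:  00, 01, 10, 11

L_var = sum(p * l for p, l in zip(probs, var_len))   # 1.75 bits/symbol
L_blk = sum(p * l for p, l in zip(probs, blk_len))   # 2.00 bits/symbol
```

Giving the common symbols the short codewords is what produces the saving, exactly as in the Morse-code observation above.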
Having done all we are going to do about source encoding (though we have by no means exhausted the topic), we turn to channel encoding, where the noise is modeled. The channel, by supposition, has noise, meaning some of the bits are changed in transmission (or storage). What can we do? Detection of a single error is easy. To a block of n – 1 bits we attach an nth bit, which is set so that the total n bits have an even number of 1s (an odd number if you prefer, but we will stick to an even number in the theory). It is called an even (odd) parity check, or more simply a parity check. Thus if all
…
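The parity-check construction just described can be sketched directly:

```python
# Even parity: append an n-th bit so the block of n bits has an even number
# of 1s; any single flipped bit then makes the count odd and is detected.
def add_parity(bits):
    return bits + [sum(bits) % 2]

def check_parity(block):
    return sum(block) % 2 == 0    # True -> no error detected

block = add_parity([1, 0, 1, 1, 0, 0, 1])
assert check_parity(block)
block[3] ^= 1                     # a single bit flipped in transit
assert not check_parity(block)
```

Note that two flipped bits restore even parity, which is why this scheme only *detects* single errors, as the retransmission discussion below makes clear.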
When you find a single error you can ask for a retransmission and expect to get it right the second time, and if not then on the third time, etc. However, if the message in storage is wrong, then you will call for retransmissions until another error occurs, and you will probably have two errors which will pass undetected in this scheme of single error detection. Hence the use of repeated retransmission should depend on the expected nature of the error.
Going out of time sequence, but still in idea sequence, I was once asked by AT&T how to code things when humans were using an alphabet of 26 letters, ten decimal digits, plus a “space.” This is typical of inventory naming, parts naming, and many other namings of things, including the naming of buildings. I knew from telephone dialing error data, as well as long experience in hand computing, that humans have a strong tendency to interchange adjacent digits (a 67 is apt to become a 76), as well as change isolated ones (usually doubling the wrong digit, for example a 556 is likely to emerge as
…
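A weighted check over this 37-symbol alphabet can be sketched as follows. The alphabet ordering, the sample part name, and placing the check symbol at the end are my own assumptions for illustration; the key point is that 37 is prime, so both a single wrong symbol and an interchange of two adjacent symbols change the weighted sum by a nonzero amount mod 37.

```python
# Weighted check over 26 letters + 10 digits + space (37 symbols, 37 prime).
# Each symbol is weighted by its position; the check symbol is chosen so the
# weighted sum is 0 mod 37.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "
VAL = {c: i for i, c in enumerate(ALPHABET)}

def with_check(name):
    s = sum(i * VAL[c] for i, c in enumerate(name, start=1))
    w = len(name) + 1                     # weight of the check position
    v = (-s * pow(w, -1, 37)) % 37        # solve s + w*v = 0 (mod 37)
    return name + ALPHABET[v]

def is_valid(name):
    return sum(i * VAL[c] for i, c in enumerate(name, start=1)) % 37 == 0

coded = with_check("PART 67")
assert is_valid(coded)
swapped = coded[:4] + coded[5] + coded[4] + coded[6:]   # adjacent interchange
assert not is_valid(swapped)
```

An interchange of positions k and k+1 changes the sum by exactly the difference of the two symbol values, which cannot be 0 mod 37 unless the symbols were equal—so the check catches precisely the human errors described above.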
Indeed, you see such a code on your books these days with their ISBNs. It is the same code except they use only ten decimal digits, and, ten not being a prime number, they had to introduce an eleventh symbol, labeled X, which might at times arise in the parity check—indeed, about every eleventh book you have will have an X for the parity check number as the final symbol of its ISBN. The dashes are merely for decorative effect and are not used in the code at all. Check it for yourself on your textbooks. Many other large organizations could use such codes to good effect, if they wanted to make
…
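The ISBN-10 check can be verified in a few lines using the standard convention: weight the ten digits 10, 9, …, 1, require the weighted sum to be divisible by 11, and let X stand for the value 10 in the check position.

```python
# ISBN-10 validity check: weighted sum over weights 10..1 must be 0 mod 11.
# X represents the value 10 (11 is prime, but the symbols are decimal).
def isbn10_is_valid(isbn):
    digits = [10 if c == "X" else int(c) for c in isbn.replace("-", "")]
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

assert isbn10_is_valid("0-306-40615-2")    # a valid ISBN-10
assert not isbn10_is_valid("0-306-40615-3")
```

As the text says, the dashes carry no information; the check simply strips them.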
There are two subject matters in this chapter: the first is the ostensible topic, error-correcting codes, and the other is how the process of discovery sometimes goes—you all know I am the official discoverer of the Hamming error-correcting codes. Thus I am presumably in a position to describe how they were found. But you should beware of any reports of this kind. It is true at that time I was already very interested in the process of discovery, believing in many cases the method of discovery is more important than what is discovered. I knew enough not to think about the process when doing
…
Working calmly will let you elaborate and extend things, but the breakthroughs generally come only after great frustration and emotional involvement. The calm, cool, uninvolved researcher seldom makes really great new steps.
It pays to know more than just what is needed at the moment!
A single error correcting plus double error detecting code is often a good balance.
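One standard construction of such a code is a Hamming (7,4) code extended with an overall parity bit; the sketch below uses the usual position-numbering convention (parity bits at positions 1, 2, 4) and is my own minimal illustration, not code from the text.

```python
# SEC-DED sketch: Hamming (7,4) plus an overall parity bit. A nonzero
# syndrome with odd overall parity locates a single error; a nonzero
# syndrome with even overall parity signals an uncorrectable double error.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4              # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4              # covers positions 4, 5, 6, 7
    cw = [p1, p2, d1, p3, d2, d3, d4]
    return cw + [sum(cw) % 2]      # overall even-parity bit

def diagnose(cw):
    s = (cw[0] ^ cw[2] ^ cw[4] ^ cw[6]) \
      + (cw[1] ^ cw[2] ^ cw[5] ^ cw[6]) * 2 \
      + (cw[3] ^ cw[4] ^ cw[5] ^ cw[6]) * 4   # syndrome = error position
    odd = sum(cw) % 2 == 1
    if s == 0:
        return "ok" if not odd else "error in overall parity bit"
    return f"single error at position {s}" if odd else "double error detected"

cw = encode(1, 0, 1, 1)
cw[4] ^= 1                          # flip position 5
assert diagnose(cw) == "single error at position 5"
cw[1] ^= 1                          # a second flip
assert diagnose(cw) == "double error detected"
```

The extra parity bit is what buys the "double error detecting" half of the balance: without it, two errors would be miscorrected as if they were a single error elsewhere.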
We see by proper code design we can build a system from unreliable parts and get a much more reliable machine, and we see just how much we must pay in equipment, though we have not examined the cost in speed of computing if we make a computer with that level of error correction built into it. But I have previously stressed the other gain, namely field maintenance, and I want to mention it again and again. The more elaborate the equipment is, and we are obviously going in that direction, the more field maintenance is vital, and error-correcting codes not only mean in the field the equipment
…
I have carefully told you a good deal of what I faced at each stage in discovering the error-correcting codes, and what I did. I did it for two reasons. First, I wanted to be honest with you and show you how easy, if you will follow Pasteur’s rule that “luck favors the prepared mind,” it is to succeed by merely preparing yourself to succeed. Yes, there were elements of luck in the discovery; but there were many other people in much the same situation, and they did not do it! Why me? Luck, to be sure, but also I was preparing myself by trying to understand what was going on—more than the other
…
You establish in yourself the style of doing great things, and then when opportunity comes you almost automatically respond with greatness in your actions. You have trained yourself to think and act in the proper ways. There is one nasty thing to be mentioned, however: what it takes to be great in one age is not what is required in the next one. Thus, in preparing yourself for future greatness (and the possibility of greatness is more common and easier to achieve than you think, since it is not common to recognize greatness when it happens under one’s nose), you have to think of the nature
…
There is the famous story by Eddington about some people who went fishing in the sea with a net. Upon examining the size of the fish they had caught, they decided there was a minimum size to the fish in the sea! Their conclusion arose from the tool used and not from reality.
First, I never went to the office of my vice president, W.O. Baker; we only met in passing in the halls and we usually stopped to talk for a few, very few, minutes. One time, around 1973–1974, when I met him in a hall, I said to him that when I came to Bell Telephone Laboratories in 1946 I had noticed the Laboratories were gradually passing from relay to electronic central offices, but a large number of people would not convert to oscilloscopes and the newer electronic technology, and they were moved to a different location to get them out of the way. To him they represented a serious economic
…
Learning a new subject is something you will have to do many times in your career if you are to be a leader and not be left behind as a follower by newer developments.
Thus there are three good reasons for the Fourier functions: (1) time invariance, (2) linearity, and (3) the reconstruction of the original function from the equally spaced samples is simple and easy to understand.
Therefore we are going to analyze the signals in terms of the Fourier functions, and I need not discuss with electrical engineers why we usually use the complex exponentials as the frequencies instead of the real trigonometric functions. We have a linear operation, and when we put a signal (a stream of numbers) into the filter, then out comes another stream of numbers. It is natural, if not from your linear algebra course then from other things such as a course in differential equations, to ask what functions go in and come out exactly the same except for scale. Well, as noted above, they are the
…
It is another example of why you need to know the fundamentals very well; the fancy parts then follow easily and you can do things that they never told you about.
We will first discuss nonrecursive filters, whose purpose is to pass some frequencies and stop others. The problem first arose in the telephone company when they had the idea that if one voice message had all its frequencies moved up (modulated) to beyond the range of another, then the two signals could be added and sent over the same wires, and at the other end filtered out and separated, and the higher one reduced (demodulated) back to its original frequencies. This shifting is simply multiplying by a sinusoidal function, and selecting one band (single-sideband modulation) of the two
…
Least squares says we should minimize the sum of the squares of the differences between the data and the points on the line,
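Minimizing that sum of squares over a line y = a + b·x has the familiar closed-form solution from the normal equations; the data points below are my own illustrative choice.

```python
# Least-squares fit of y = a + b*x: minimize sum (y_i - a - b*x_i)^2.
# Setting the partial derivatives with respect to a and b to zero gives
# the normal equations solved in closed form here.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # data lying exactly on y = 1 + 2x
```

With exactly collinear data the residuals are zero, so the fit recovers the line exactly.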
We now ask what will come out if we put in a pure eigenfunction. We know that because the equations are linear they should give the eigenfunction back, but multiplied by the eigenvalue corresponding to the eigenfunction’s frequency, the transfer function value at that frequency.
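This eigenfunction property is easy to verify numerically for a nonrecursive (FIR) filter; the particular filter coefficients and frequency below are my own illustrative choices.

```python
import cmath

# FIR filter y[n] = sum_k c[k] * x[n-k]. Feeding in the complex exponential
# x[n] = exp(i*w*n) returns the same exponential scaled by the transfer
# function H(w) = sum_k c[k] * exp(-i*w*k), the eigenvalue at frequency w.
c = [0.25, 0.5, 0.25]     # a simple smoothing filter (illustrative)
w = 1.0                   # any frequency, in radians per sample

def x(n):
    return cmath.exp(1j * w * n)

H = sum(ck * cmath.exp(-1j * w * k) for k, ck in enumerate(c))

for n in range(3, 10):
    y = sum(ck * x(n - k) for k, ck in enumerate(c))
    assert abs(y - H * x(n)) < 1e-12    # output = eigenvalue * input
```

The same exponential comes out, multiplied by H(w): exactly the transfer-function statement in the text.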

