Kindle Notes & Highlights
Read between September 17 - November 5, 2025
You know you must run the mile if the athletics course is to be of benefit to you—hence you must think carefully about what you hear or read in this book if it is to be effective in changing you—which must obviously be the purpose of any course. Again, you will get out of this course only as much as you put in, and if you put in little effort beyond sitting in the class or reading the book, then it is simply a waste of your time. You must also mull things over, compare what I say with your own experiences, talk with others, and make some of the points part of your way of doing things.
Having done the calculation you are much more likely to retain the results in your mind. Furthermore, such calculations keep the ability to model situations fresh and ready for more important applications as they arise. Thus I recommend when you hear quantitative remarks such as the above, you turn to a quick modeling to see if you believe what is being said, especially when given in the public media like the press and TV.
How are you to recognize “fundamentals”? One test is they have lasted a long time. Another test is from the fundamentals all the rest of the field can be derived by using the standard methods in the field.
In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it. Of course, you seldom, if ever, see either pure state.
In a lifetime of many, many independent choices, small and large, a career with a vision will get you a distance proportional to n, while no vision will get you only a distance proportional to √n. In a sense, the main difference between those who go far and those who do not is some people have a vision and the others do not and therefore can only react to the current events as they happen.
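The √n is random-walk arithmetic: n independent steps in random directions give a net displacement on the order of √n, while n aligned steps cover distance n. A minimal simulation sketch (a one-dimensional walk, purely illustrative):

```python
import math
import random

def drift_distance(n):
    # n independent random +/-1 choices: a simple one-dimensional random walk.
    return abs(sum(random.choice((-1, 1)) for _ in range(n)))

n, trials = 10_000, 500
mean_drift = sum(drift_distance(n) for _ in range(trials)) / trials

print("with a vision:", n)  # n aligned steps cover distance n
print(f"drifting (mean of {trials} walks): {mean_drift:.0f}")
# Expected mean |displacement| of the walk is sqrt(2n/pi), i.e. order sqrt(n).
print(f"sqrt(2n/pi) = {math.sqrt(2 * n / math.pi):.0f}")
```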
from observation I have seen the accuracy of the vision matters less than you might suppose, getting anywhere is better than drifting, there are potentially many paths to greatness for you, and just which path you go on, so long as it takes you to greatness, is none of my business.
No vision, not much of a future.
For many years I devoted about 10% of my time (Friday afternoons) to trying to understand what would happen in the future of computing, both as a scientific tool and as a shaper of the social world of work and play. In forming your plan for your future you need to distinguish three different questions: What is possible? What is likely to happen? What is desirable to have happen?
What will the situation be in 2020? As a guess I would say less than 25% of the people in the civilian workforce will be handling things; the rest will be handling information in some form or other.
It has rarely proved practical to produce exactly the same product by machines as we produced by hand. Indeed, one of the major items in the conversion from hand to machine production is the imaginative redesign of an equivalent product. Thus in thinking of mechanizing a large organization, it won't work if you try to keep things in detail exactly the same; rather, there must be a larger give and take if there is to be a significant success.
The Buddha told his disciples, “Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense.” I say the same to you—you must assume the responsibility for what you believe.
From a chart drawn up long ago by Los Alamos (LANL), using the data of the fastest current computer on the market at a given time, they found an equation for the number of operations per second, and it fit the data fairly well. Here time begins at 1943. In 1987 the extrapolated value predicted (by about 20 years!) was about 3 × 10⁸ and was on target. The limiting asymptote is 3.585 × 10⁹ for the von Neumann-type computer with a single processor.
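The highlight drops the fitted formula itself, so it is not reproduced here. Purely as an illustration of a saturating curve with the stated asymptote passing through the stated 1987 value, one can sketch a logistic of the same general shape; the growth rate k below is an assumed parameter, not Hamming's:

```python
import math

A = 3.585e9                  # stated limiting asymptote (ops/sec)
t1, n1 = 1987 - 1943, 3e8    # stated 1987 value; time begins at 1943
k = 0.3                      # assumed growth rate (illustrative only)

# Logistic curve N(t) = A / (1 + exp(-k (t - t0))); pin t0 to the 1987 point.
t0 = t1 + math.log(A / n1 - 1) / k

def ops_per_second(year):
    t = year - 1943
    return A / (1 + math.exp(-k * (t - t0)))

for year in (1960, 1987, 2000, 2020):
    print(year, f"{ops_per_second(year):.3g}")   # flattens toward 3.585e9
```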
most users seem too busy to think or observe how bad things are and how much the computer could do to make things significantly easier and cheaper. To see the obvious it often takes an outsider, or else someone like me who is thoughtful and wonders what he is doing and why it is all necessary.
While each has some merit I have faith in only one, which is almost never mentioned—think before you write the program, it might be called. Before you start, think carefully about the whole thing, including what will be your acceptance test that it is right, as well as how later field maintenance will be done. Getting it right the first time is much better than fixing it up later!
Many studies have shown programmers differ in productivity, from worst to best, by much more than a factor of ten. From this I long ago concluded the best policy is to pay your good programmers very well but regularly fire the poorer ones—if you can get away with it! One way is, of course, to hire them on contract rather than as regularly employed people, but that is increasingly against the law, which seems to want to guarantee even the worst have some employment. In practice you may actually be better off to pay the worst to stay home and not get in the way of the more capable (and I am …
I began, at any lecture I attended anywhere, to pay attention not only to what was said, but to the style in which it was said, and whether it was an effective or a non-effective talk.
It is first necessary to prove beyond any doubt the new thing, device, method, or whatever it is, can cope with heroic tasks before it can get into the system to do the more routine, and, in the long run, more useful tasks. Any innovation is always up against such a barrier, so do not get discouraged when you find your new idea is stoutly, and perhaps foolishly, resisted.
In such a rapidly changing field as computer software, if the payoff is not in the near future then it is doubtful it will ever pay off.
As you go on in your careers you should examine the applications which succeed and those which fail; try to learn how to distinguish between them; try to understand the situations which produce successes and those which almost guarantee failure.
Science has traditionally appealed to experimental evidence and not idle words, and so far science seems to have been more effective than philosophy in improving our way of life.
You must struggle with your own beliefs if you are to make any progress in understanding the possibilities and limitations of computers in the intellectual area.
Some people conclude from this that if we build a big enough machine, then automatically it will be able to think!
This is the type of AI that I am interested in—what can the human and machine do together, and not in the competition which can arise.
in the past the path to better programs has been mainly through the detailed examination of possible moves projected forward many steps rather than by understanding how humans play chess. The computers are now examining millions of board positions per second, while humans typically examine maybe 50 to 100 at most before making a move—so
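The brute-force style of search described here is, in essence, depth-limited minimax with alpha-beta pruning. A minimal sketch, where the game interface (moves, apply, undo, score) is a hypothetical stand-in, not any particular chess program:

```python
import math

def search(game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Depth-limited minimax with alpha-beta pruning over a hypothetical game."""
    if depth == 0 or not game.moves():
        return game.score()              # static evaluation of the position
    best = -math.inf if maximizing else math.inf
    for move in game.moves():
        game.apply(move)
        value = search(game, depth - 1, alpha, beta, not maximizing)
        game.undo(move)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, value)
        else:
            best = min(best, value)
            beta = min(beta, value)
        if beta <= alpha:                # prune: the opponent avoids this line
            break
    return best
```

Even with pruning, the position count grows exponentially with depth, which is why machines examine millions of positions where a human examines dozens.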
A teacher has to believe that if only the right words were said then the student would have to understand. And you behave similarly when raising a child. Yet the feeling of having free will is deep in us and we are reluctant to give it up for ourselves—but we are often willing to deny it to others! As another example of the tacit belief in the lack of free will in others, consider that when there is a high rate of crime in some neighborhood of a city, many people believe the way to cure it is to change the environment—hence the people will have to change and the crime rate will go down!
These are dumb examples; there is an obvious component of believing that the other party wants (at some level) the same thing that you want, so your wills are aligned but there is some other obstacle which you wish to remove.
In the words of the old song, “It ain’t what you do, it’s the way that you do it.” In the area of thinking, maybe we have confused what is done with the way it is done, and this may be the source of much of our confusion in AI.
Whatever your opinion is, what evidence would make you accept you are wrong?
Consider that in thinking, it may be the way something is done rather than what is done which determines whether it occurs or not. AI has traditionally stuck to the “what is done” and seldom considered the “how it is done.”
I think from my perspective, the most important component may be the evident physical determinism of the machine. Given an LLM with fixed weights, a fixed set of queries, server architecture, sampling seed, etc., it must respond the same way every single time. There is no free will coming from beyond/above purely physical sources. It seems that the important part for “thinking,” or more specifically “consciousness” (which are frequently conflated; are these the same?), arises not only from the output but from the *experience*, and the experience occurs in the part “beyond” the physical, i.e., the will or the soul. Since there are purely physical explanations for the machine’s output, and the soul or consciousness is not an emergent property of physical systems, an AI can never be conscious (at least as they are currently developed).
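The determinism premise can at least be made concrete. A minimal sketch, assuming greedy (argmax) decoding and a toy stand-in for the model (the matrix W below is an assumption, not a real LLM): with fixed weights and a fixed input there is no source of variation, so repeated runs must agree.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.asarray(rng.normal(size=(32, 32)))   # fixed "weights" (toy stand-in)

def generate(prompt_ids, steps=5):
    """Greedy decoding: at each step pick the argmax token; no randomness."""
    ids = list(prompt_ids)
    for _ in range(steps):
        x = np.zeros(32)
        x[ids[-1]] = 1.0
        logits = W @ x                      # deterministic forward pass
        ids.append(int(np.argmax(logits)))
    return ids

a = generate([3, 7])
b = generate([3, 7])
assert a == b                               # same weights + same input -> same output
print(a)
```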
I do not care what you believe so long as you have good reasons and can explain them clearly.
It is the combination of man and machine which is important, and not the supposed conflict which arises from their all too human egos.
you the reader should take your own opinions and try first to express them clearly, and then examine them with counterarguments, back and forth, until you are fairly clear as to what you believe and why you believe it.
in ten dimensions the inner sphere reaches outside the surrounding cube!
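The setup behind this highlight is Hamming's n-dimensional geometry example: unit spheres sit at the 2^n corners of a cube of side 4 (centers at (±1, …, ±1)), and the inner sphere at the origin touching them all has radius √n − 1, while the cube's faces are at distance 2. A quick check:

```python
import math

# Inner sphere radius is sqrt(n) - 1; the cube's faces sit at distance 2.
for n in (2, 3, 4, 9, 10):
    r = math.sqrt(n) - 1
    where = "outside" if r > 2 else "inside"
    print(f"n = {n:2d}: inner radius = {r:.3f}  ({where} the cube)")
```

At n = 9 the inner sphere just touches the faces; at n = 10 its radius is √10 − 1 ≈ 2.162 > 2, so it reaches outside the surrounding cube.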
We have learned to “tune” the words we use to fit the person on the receiving end; we to some extent select according to what we think is the channel noise,
I have repeatedly indicated I believe the future will be increasingly concerned with information in the form of symbols and less concerned with material things, hence the theory of encoding (representing) information in convenient codes is a nontrivial topic.
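As a concrete instance of representing information in convenient codes (and of coping with the channel noise mentioned in the previous highlight), here is a minimal sketch of the Hamming(7,4) code: four data bits get three parity bits, and any single flipped bit can be located and corrected. The bit layout follows the standard parity-bits-at-powers-of-two convention.

```python
def hamming74_encode(d):
    """d: four data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and fix a single-bit error via the parity-check syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # binary position of the bad bit (0 = none)
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # flip one bit "in transit"
assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])
```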
Notice first this essential step happened only because there was a great deal of emotional stress on me at the moment, and this is characteristic of most great discoveries. Working calmly will let you elaborate and extend things, but the breakthroughs generally come only after great frustration and emotional involvement.
This is a common, endlessly made mistake; people always want to think that something new is just like the past—they like to be comfortable in their minds as well as their bodies—and hence they prevent themselves from making any significant contribution to the new field being created under their noses. Not everything which is said to be new really is new, and it is hard to decide in some cases when something is new, yet the all too common reaction of “it’s nothing new” is stupid. When something is claimed to be new, do not be too hasty to think it is just the past slightly improved—it may be a …
In short, I saw more clearly what “windows” were, and was slowly led to a closer examination of their possibilities.
Cooperation is essential in these days of complex projects; the day of the individual worker is dying fast.
How did Kaiser find the formulas? To some extent by trial and error. He first assumed he had a single discontinuity, and he ran a large number of cases on a computer to see both the rise time ΔF and the ripple height δ. With a fair amount of thinking, plus a touch of genius,
I asked him how he got the exponent 0.4. He replied he tried 0.5 and it was too large, and 0.4, being the next natural choice, seemed to fit very well. It is a good example of using what one knows plus the computer as an experimental tool, even in theoretical research, to get very useful results.
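That exponent survives in the standard Kaiser design formula for the window parameter β, namely β = 0.5842(A − 21)^0.4 + 0.07886(A − 21) for stopband attenuations 21 < A < 50 dB. A sketch of the rule, checked against SciPy's packaged version; the attenuation, transition width, and cutoff below are arbitrary example values:

```python
import numpy as np
from scipy import signal

A = 40.0   # desired stopband attenuation in dB (example value)

# Kaiser's empirical formula for beta in the 21 < A < 50 range: note the 0.4 exponent.
beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)

# SciPy packages the same rule, plus the filter-length estimate.
width = 0.1                                  # transition width, fraction of Nyquist
numtaps, beta_scipy = signal.kaiserord(A, width)
assert np.isclose(beta, beta_scipy)

# Design a low-pass FIR filter with the resulting Kaiser window.
taps = signal.firwin(numtaps, cutoff=0.3, window=("kaiser", beta))
print(numtaps, round(beta, 4))
```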
Moral: when you know something cannot be done, also remember the essential reason why, so later, when the circumstances have changed, you will not say, “It can’t be done.” Think of my error! How much more stupid can anyone be? Fortunately for my ego, it is a common mistake (and I have done it more than once), but due to my goof on the FFT I am very sensitive to it now. I also note when others do it—which is all too often! Please remember the story of how stupid I was and what I missed, and do not make that mistake yourself. When you decide something is not possible, don’t say at a later date …
If you will only ask yourself, “Is what I am being told really true?,” it is amazing how much you can find is, or borders on, being false, even in a well-developed field!
Here you see a simple example of what happens all too often. The experts were told something in class when they were students first learning things, and at the time they did not question it. It becomes an accepted fact, which they repeat and never really examine to see if what they are saying is true or not, especially in their current situation.
Moral: to the extent you can choose, work on problems you think will be important.
My contribution? Mainly, first identifying the problem, next getting the right people together, then monitoring Kaiser to keep him straight on the fact that filtering need not have exclusively to do with time signals, and finally, reminding them of what they knew from statistics (or should have known and probably did not). It seems to me from my experience that this role is increasingly needed as people get to be more highly specialized and narrower and narrower in their knowledge.
In closing, if you do not, now and then, doubt accepted rules, it is unlikely you will be a leader into new areas; if you doubt too much you will be paralyzed and will do nothing. When to doubt, when to examine the basics, when to think for yourself, and when to go on and accept things as they are is a matter of style, and I can give no simple formula on how to decide. You must learn from your own study of life.
But that is why I am suspicious, to this day, of getting too many solutions and not doing enough very careful thinking about what you have seen. Volume output seems to me to be a poor substitute for acquiring an intimate feeling for the situation being simulated.

