Kindle Notes & Highlights
Read between June 17 - June 19, 2017
It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work).
Almost all animals have brains. The neuron was one of the earliest adaptations when animals branched off from other organisms. Even animals that don’t have fully structured brains have nervous systems, networks of neurons that work together to process information.
Just as people don’t think only associatively (as Pavlov thought we do), people do not reason via logical deduction. We reason by causal analysis.
Symbol-processing AI of this type had some minor successes, such as programs that could play a good game of chess or advise doctors on diagnoses, but nothing like the superintelligent computing machines that early researchers had dreamed about. The beginning of the end came when philosopher John Haugeland, a pioneer in the philosophy of artificial intelligence, dismissively dubbed this project Good Old-Fashioned Artificial Intelligence (GOFAI).
In a community of knowledge, what matters more than having knowledge is having access to knowledge.
Because we live inside a hive mind, relying heavily on others and the environment to store our knowledge, most of what is in our heads is quite superficial.
In each of these examples, our brain treats the tool we’re using as if it were part of our body. So there is nothing unnatural about technology. On the contrary, using technology is one of the key things that make us human.
One consequence of these developments is that we are starting to treat our technology more and more like people, like full participants in the community of knowledge.
As a rule, strong feelings about issues do not emerge from deep understanding.
So why do politicians and interest groups so often take a sacred values position rather than thinking through the causal consequences of various policies? The most obvious answer is obfuscation: The policy preference that will earn them votes or money is not what a consequentialist analysis dictates, so they avoid the consequentialist analysis. The other answer is that thinking through the consequences of a policy is hard—very hard. It’s much easier just to hide one’s ignorance in a veil of platitudes about sacred values. It’s an old politician’s ploy. The secret that people who are practiced…