A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains
10%
“language . . . and reason, are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder.”
10%
But the market, like evolution, rewards three things above all: things that are cheap, things that work, and things that are simple enough to be discovered in the first place.
22%
Dopamine is not a signal for reward but for reinforcement. As Sutton found, reinforcement and reward must be decoupled for reinforcement learning to work. To solve the temporal credit assignment problem, brains must reinforce behaviors based on changes in predicted future rewards, not actual rewards.
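The idea of reinforcing on *changes* in predicted future reward is the core of Sutton's temporal-difference (TD) learning. A minimal sketch (function and variable names are illustrative, not from the book): the update signal is not the reward itself but the TD error, the gap between what just happened plus the new prediction and the old prediction.

```python
def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """Update the value estimate V[state] using the TD error.

    delta = reward + gamma * V[next_state] - V[state]

    A positive delta ("better than predicted") reinforces; a negative
    delta ("worse than predicted") punishes -- even when the reward
    itself is zero. This is the sense in which reinforcement is
    decoupled from reward.
    """
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta
```

Note that a fully predicted reward produces `delta == 0` and so drives no further learning, which is exactly how the temporal credit assignment problem gets solved: credit flows backward to whichever earlier state improved the prediction.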
24%
In our metaphor, the basal ganglian student initially learns solely from the hypothalamic judge, but over time learns to judge itself, knowing when it makes a mistake before the hypothalamus gives any feedback. This is why dopamine neurons initially respond when rewards are delivered, but over time shift their activation toward predictive cues. This is also why receiving a reward that you knew you were going to receive doesn’t trigger dopamine release; predictions from the basal ganglia cancel out the excitement from the hypothalamus.
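This shift of the dopamine response from the reward to the predictive cue falls directly out of TD learning. A toy simulation (my illustration, not the book's; all names are hypothetical) of a trial with an unexpected cue, a delay, then a reward: early on, the prediction error spikes at reward delivery; after training, it spikes at the cue and the fully predicted reward produces no error at all.

```python
alpha, gamma = 0.5, 1.0
V = {"cue": 0.0, "delay": 0.0}  # learned value predictions per state

def run_episode(V):
    """One trial: unexpected cue -> delay -> reward (1.0) -> end.

    Returns (cue_response, reward_error):
      cue_response: TD error at cue onset. The cue arrives at a random
                    time, so the baseline prediction is fixed at 0 and
                    the response is just the learned value of the cue.
      reward_error: TD error when the reward is actually delivered.
    """
    cue_response = gamma * V["cue"] - 0.0        # surprise at cue onset
    d1 = 0.0 + gamma * V["delay"] - V["cue"]     # cue -> delay, no reward
    V["cue"] += alpha * d1
    d2 = 1.0 + 0.0 - V["delay"]                  # delay -> end, reward 1.0
    V["delay"] += alpha * d2
    return cue_response, d2

first = run_episode(V)        # (0.0, 1.0): all error at the reward
for _ in range(100):
    last = run_episode(V)     # (~1.0, ~0.0): all error at the cue
```

The predictions from `V` cancel the "excitement" at delivery time, mirroring how the basal ganglia's predictions cancel the hypothalamic signal once a reward is fully expected.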
29%
Gambling and social feeds work by hacking into our five-hundred-million-year-old preference for surprise, producing a maladaptive edge case that evolution has not had time to account for.
73%
In the human brain, language is the window to our inner simulation. Language is the interface to our mental world. And language is built on the foundation of our ability to model and reason about the minds of others—to infer what they mean and figure out exactly which words will produce the desired simulation in their mind. I think most would agree that the humanlike artificial intelligences we will one day create will not be LLMs; language models will be merely a window to something richer that lies beneath.