Kindle Notes & Highlights
Read between December 27, 2021 - January 5, 2022
how people working in groups can coordinate their individual areas of expertise (a process called “transactive memory”), and how groups can work together to produce results that exceed their members’ individual contributions (a phenomenon known as “collective intelligence”).
thought happens not only inside the skull but out in the world, too; it’s an act of continuous assembly and reassembly that draws on resources external to the brain. For another: the kinds of materials available to “think with” affect the nature and quality of the thought that can be produced. And last: the capacity to think well—that is, to be intelligent—is not a fixed property of the individual but rather a shifting state that is dependent on access to extra-neural resources and the knowledge of how to use them.
Fortunately, we are also able to collect and store the volumes of information we encounter on a non-conscious basis. As we proceed through each day, we are continuously apprehending and storing regularities in our experience, tagging them for future reference. Through this information-gathering and pattern-identifying process, we come to know things—but we’re typically not able to articulate the content of such knowledge or to ascertain just how we came to know it. This trove of data remains mostly under the surface of consciousness, and that’s usually a good thing. Its submerged status …
“Nonconscious information acquisition,” as Lewicki calls it, along with the ensuing application of such information, is happening in our lives all the time. As we navigate a new situation, we’re scrolling through our mental archive of stored patterns from the past, checking for ones that apply to our current circumstances. We’re not aware that these searches are under way; as Lewicki observes, “The human cognitive system is not equipped to handle such tasks on the consciously controlled level.” He adds, “Our conscious thinking needs to rely on notes and flowcharts and lists of ‘if-then’ …
Research shows that the simple act of giving a name to what we’re feeling has a profound effect on the nervous system, immediately dialing down the body’s stress response.
The affect-labeling group showed steep declines in heart rate and skin conductance compared to the control group, whose levels of physiological arousal remained high. Brain-scanning studies offer further evidence of the calming effect of affect labeling: simply naming what is felt reduces activity in the amygdala, the brain structure involved in processing fear and other strong emotions. Meanwhile, thinking in a more involved way about feelings and the experiences that evoked them actually produces greater activity in the amygdala.
the body can be more rational than the brain.
People who are more interoceptively aware can interact more intimately and more skillfully with the emotions that interoceptive sensations help construct.
But first—understanding the relationship between interoception and emotion requires correcting a basic misconception most of us hold about how feelings come about. The story we’re used to telling goes like this: on the basis of what’s happening to us, the brain determines the appropriate emotion (happy, sad, scared), then directs the body to act accordingly (smile, cry, scream). In fact, the causal arrow points in the opposite direction. The body produces sensations, the body initiates actions—and only then does the mind assemble these pieces of evidence into the entity we call an emotion.
Because our hearts beat, because our lungs expand, because our muscles stretch and our organs rumble—and because all these sensations, unique to us, have carried on without interruption since the day of our birth—we know what it is to be one continuous self, to be ourselves and no other. Interoception, says Craig, is nothing less than “the feeling of being alive.”
Our culture conditions us to see mind and body as separate—and so we separate, in turn, our periods of thinking from our bouts of exercise. Consider
Students who practice connecting numbers with movements in this way later demonstrate more mathematics knowledge and skill.
Yet the fact is that—very unlike computers—humans solve problems most effectively by imagining themselves into a given scenario, a project that is made easier if the human in question has had a previous physical encounter on which to base her mental projections. Providing students with such physical encounters was the purpose behind a study designed by Sian Beilock, inspired by the work with hockey players we read about earlier. In collaboration with Susan Fischer, an associate professor
As education professor Dor Abrahamson puts it, “Learning is moving in new ways.”
“No scientist thinks in equations,” Einstein once claimed. Rather, he remarked, the elements of his own thought were “visual” and even “muscular” in nature.
Barbara McClintock, whose work on the chromosomes of corn plants earned her a Nobel Prize, recalled how it felt for her to examine the chromosomes through a microscope: “When I was really working with them I wasn’t outside, I was down there. I was part of the system. I was right down there with them, and everything got big. I even was able to see the internal parts of the chromosomes—actually everything was there. It surprised me because I actually felt as if I were right down there and these were my friends.”
Adopting a first-person perspective doesn’t mean we become limited by it; indeed, using the movements of our own bodies to explore a given phenomenon seems to promote the ability to alternate between viewing it from an internal perspective and from an external one, an oscillation that produces a deeper level of understanding.
Where language is discrete and linear—one word following another—gesture is impressionistic and holistic, conveying an immediate sense of how things look and feel and move.
Using gesture in this way can confer an enormous advantage in the start-up world, one group of researchers notes, since “entrepreneurs are operating on the boundary of what is real and what is yet to happen.”
From watching Heath’s tapes, it’s easy to conclude that most of our conversations are carried on with our hands, the words we speak a mere afterthought.
Such sequences suggest the startling notion that our hands “know” what we’re going to say before our conscious minds do, and in fact this is often the case.
When, today, we turn to nature when we’re stressed or burned out—when we take a walk through the woods or gaze out at the ocean’s rolling waves—we are engaging in what one researcher calls “environmental self-regulation,” a process of psychological renewal that our brains cannot accomplish on their own.
Just as our brains did not evolve to react with equanimity to speeding cars and wailing sirens, neither did they evolve to read, or to perform advanced math, or to carry out any of the highly abstract and complex tasks we ask of ourselves every day.
APART FROM OFFERING shelter from the elements, the most critical function of a built interior is simply to give us a quiet place to think. Such protected space is necessary because thinking—at least of the kind the modern world expects of us—doesn’t come naturally to the human animal.
The privacy afforded by walls represented a truly revolutionary extension of the mind, maintains John Locke, professor of linguistics at Lehman College of the City University of New York. “Our distant ancestors could see each other at all times, which kept them safe but also imposed a huge cognitive cost,” he notes. “When residential walls were erected, they eliminated the need to look around every few seconds to see what others were doing.” The result, he says, was that “a human vigil, one beginning with ancestors that we share with apes, was reduced to manageable proportions, freeing up many …
And they carved out a space of undisturbed quiet—space that made deep, fresh thinking possible. For one of the sixteenth century’s most original thinkers, Michel de Montaigne, the study became a central metaphor for the freedom of thought he prized.
While our sense of self may feel stable and solid, it is in fact quite fluid, dependent on external structure for its shape.
As the psychologist Mihaly Csikszentmihalyi has written, we keep certain objects in view because “they tell us things about ourselves that we need to hear in order to keep our selves from falling apart.”
Taylor specifically prohibited workers from bringing their personal effects into the factories he redesigned for maximum speed and minimum waste. Stripped of their individuality, he insisted, employees would function as perfectly efficient cogs in the industrial machine.
Likewise, don’t envision the mind telling the pencil what to do; instead, allow a conversation to develop between eye and hand, the action of one informing the other. Finally, we ought to postpone judgment as long as possible, such that this give-and-take between perception and action can proceed without becoming inhibited by preconceived notions or by critical self-doubt.
Feynman wasn’t (just) being crotchety. He was defending a view of the act of creation that would be codified four decades later in Andy Clark’s theory of the extended mind.
The naturalists of the late nineteenth century described imitation as the habit of children, women, and “savages,” and held up original expression as the preserve of European men. Innovation climbed to the top of the cultural value system, while imitation sank to an unaccustomed low.
Social memories are encoded in a distinct region of the brain. What’s more, we remember social information more accurately, a phenomenon that psychologists call the “social encoding advantage.” If findings like this feel unexpected, that’s because our culture largely excludes social interaction from the realm of the intellect. Social exchanges with others might be enjoyable or entertaining, this attitude holds, but they’re no more than a diversion, what we do around the edges of school or work. Serious thinking, real thinking, is done on one’s own, sequestered from others.
The way technologies like fMRI are applied is a product of our brainbound orientation; it has not seemed odd or unusual to examine the individual brain on its own, unconnected to others.
Using these tools, researchers have found persuasive evidence for what is known as the “interactive brain hypothesis”: the premise that when people interact socially, their brains engage different neural and cognitive processes than when those same people are thinking or acting on their own.
A related finding emerged when scientists, again employing fNIRS, compared the brain scans of people playing poker with a human partner to those of people playing the same game with a computer. The areas of the brain involved in generating a “theory of mind”—inferring the mental state of another individual—were active in competing with a human but dormant in matching wits with a machine.
Evaluations of the program show that students who engage in tutoring earn higher grades, attend school more consistently, and stay enrolled at higher rates than similar students who do not participate. Such outcomes may be due, in part, to the experience of what psychologists call “productive agency”: the sense that one’s own actions are affecting another person in a beneficial way.
But because the brainbound approach to cognition regards information as information, no matter how it is encountered, the social element of thinking is often sacrificed in the name of efficiency and convenience.
When we reason alone, inside our own heads, we will be dangerously vulnerable to confirmation bias—constructing the strongest case for our own point of view, and fooling ourselves in the process. Of course, in our brainbound culture, thinking alone is how thinking is usually done, with predictably disappointing results. Mercier and Sperber urge a different approach: arguing together, with the aim of arriving jointly at something close to the truth.
Study author Diana Arya, an assistant professor of education at the University of California, Santa Barbara, wanted to see whether the difference in presentation would produce a difference in learning. It did. Students understood the material more thoroughly, and remembered it more accurately, when it was given to them in the form of a story—in particular, a story that captured the human motives and choices that lay behind the creation of what is now well-established knowledge.
“I use the knowledge management system all the time,” he assured Myers—but not in the way the company’s leaders intended: “I just scroll down to the bottom of the entry to see who wrote it, and then I call them on the phone.” What this individual is seeking is richly contextualized information, full of detail and nuance; what he’s looking for, in short, is a story.
Hutchins used his time aboard the ship to study a phenomenon he calls “socially distributed cognition,” or the way people think with the minds of others. His aim, he later wrote, was to “move the boundaries of the cognitive unit of analysis out beyond the skin of the individual person and treat the navigation team as a cognitive and computational system.”
Too often, however, we’re not alert to such instances of collective thought. Our culture and our institutions tend to fixate on the individual—on his uniqueness, his distinctiveness, his independence from others. In business and education, in public and private life, we emphasize individual competition over joint cooperation.
Complex ideas are “only accessible to crowds after having assumed a very simple shape,” asserted Le Bon in The Crowd: A Study of the Popular Mind, first published in 1895. “It is especially when we are dealing with somewhat lofty philosophic or scientific ideas that we see how far-reaching are the modifications they require in order to lower them to the level of the intelligence of crowds,” he wrote. McDougall sounded a similar note in The Group Mind: A Sketch of the Principles of Collective Psychology, published in 1920.
Le Bon conjectured about a “magnetic influence” at work within crowds. McDougall mused about the possibility of “telepathic communication.” Even the psychoanalyst Carl Jung got into the act, advancing the notion of a shared “genetic ectoplasm” that bound a group of people as one. Ultimately the entire field collapsed under its own imprecision and incoherence. The notion of a group mind “slipped ignominiously into the history of social psychology,” writes one observer. It was “banished from the realm of respectable scientific discourse,” notes another. Social scientists took as their near …
Emerging research even points to the existence of “neural synchrony”—the intriguing finding that when a group of individuals are thinking well together, their patterns of brain activity come to resemble one another’s. Though we may imagine ourselves as separate beings, our minds and bodies have many ways of bridging the gaps.
Such gaze-following is made easier by the fact that people have visible whites of the eyes. Humans are the only primates so outfitted, an exceptional status that has led scientists to propose the “cooperative eye hypothesis”—the theory that our eyes evolved to support cooperative social interactions. “Our eyes see, but they are also meant to be seen,” notes science writer Ker Than.
In this way, our mental models of the world remain in sync with those of the people around us.
The omnipresence of our digital devices can make it difficult to ensure that shared learning takes place, even among students gathered in a single classroom.
These investigations are rooted in a more general truth, says Aronson, one that applies to every one of us. Intelligence is not “a fixed lump of something that’s in our heads,” he explains. Rather, “it’s a transaction”: a fluid interaction among our brains, our bodies, our spaces, and our relationships.

