Kindle Notes & Highlights
Read between June 28 – August 23, 2025
The regions involved in musical semantics—associating a tonal sequence with meaning—appear to be in the back portions of the temporal lobe on both sides, near Wernicke’s area.
The most parsimonious explanation is that music and language do, in fact, share some common neural resources, and yet have independent pathways as well. The close proximity of music and speech processing in the frontal and temporal lobes, and their partial overlap, suggests that those neural circuits that become recruited for music and language may start out life undifferentiated.
Experience and normal development then differentiate the functions of what began as very similar neuronal populations. Consider that at a very early age, babies are thought to be synesthetic, to be unable to differentiate the input from the different senses, and to experience life and the world as a sort of psychedelic union of everything sensory. Babies may see the number five as red, taste cheddar cheeses in D-flat, and smell roses in triangles.
Because the hemoglobin of the blood is slightly magnetic, changes in the flow of blood can be traced with a machine that tracks changes in magnetic properties. This is what a magnetic resonance imaging (MRI) machine is: a giant electromagnet that produces a report showing differences in magnetic properties, which in turn can tell us where, at any given point in time, the blood is flowing in the body.
Because neurons need oxygen to survive, and the blood carries oxygenated hemoglobin, we can trace the flow of blood in the brain too. We make the assumption that neurons that are actively firing will need more oxygen than neurons that are at rest, and so those regions of the brain that are involved in a particular cognitive task will be just those regions with the most blood flow at a given point in time.
The problem, however, is that the temporal resolution of fMRI isn’t particularly good because of the amount of time it takes for blood to become redistributed in the brain—known as hemodynamic lag. But others had already studied the “when” of musical syntax/musical structure processing; we wanted to know the “where,” and in particular whether the “where” involved areas already known to be dedicated to speech. We found exactly what we predicted. Listening to music and attending to its syntactic features—its structure—activated a particular region of the frontal cortex on the left side called pars
[…]
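The hemodynamic lag described above can be illustrated with a toy simulation: convolving a hypothetical train of neural events with a gamma-shaped hemodynamic response function shows why the BOLD signal peaks several seconds after the neural activity that produced it. The function shape and all parameter values below are illustrative assumptions, not figures from the text.

```python
import math

def hrf(t, peak=6.0, shape=6.0):
    """Simplified gamma-shaped hemodynamic response function.
    Peaks roughly `peak` seconds after a neural event (illustrative values)."""
    if t <= 0:
        return 0.0
    scale = peak / (shape - 1)  # places the mode of the gamma at `peak`
    return (t ** (shape - 1) * math.exp(-t / scale)) / (math.gamma(shape) * scale ** shape)

# Hypothetical neural events (e.g., responses to musical onsets), in seconds
events = [0.0, 2.0, 4.0]

def bold(t):
    """BOLD-like signal: the sum of one lagged response per neural event."""
    return sum(hrf(t - e) for e in events)

times = [i * 0.5 for i in range(41)]  # sample 0 to 20 s
signal = [bold(t) for t in times]
peak_time = times[signal.index(max(signal))]
print(f"Last neural event at 4.0 s; BOLD signal peaks near {peak_time:.1f} s")
```

The point of the sketch: even though the last neural event is at 4 seconds, the measured signal keeps rising for several seconds afterward, which is exactly why fMRI's temporal resolution is poor.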
Most astonishing was that the left-hemisphere regions that we found were active in tracking musical structure were the very same ones that are active when deaf people are communicating by sign language. This suggested that what we had identified in the brain wasn’t a region that simply processed whether a chord sequence was sensible, or whether a spoken sentence was sensible. We were now looking at a region that responded to sight—to the visual organization of words conveyed through American Sign Language. We found evidence for the existence of a brain region that processes structure in
[…]
memory extracts an abstract generalization for later use.
voice, independent of the actual words. This could contradict the record-keeping theory by showing that it is only the abstract properties of the voice that are encoded in memory, rather than the specific details. But we might argue that timbre is a property of sounds that is separable from other attributes; we can hold on to our “record-keeping” theory of memory by saying that we are encoding specific timbre values in memory and still explain why we can recognize the sound of a clarinet, even if it is playing a song we’ve never heard before.
The British philosopher Alan Watts, author of The Wisdom of Insecurity, put it this way: If you want to study a river, you don’t take out a bucketful of water and stare at it on the shore. A river is not its water, and by taking the water out of the river, you lose the essential quality of river, which is its motion, its activity, its flow. Rosch felt that scientists had disrupted the flow of categories by studying them in such artificial ways. This, incidentally, is the same problem with a lot of the research that has been done in the neuroscience of music for the past decade: Too many
[…]
One clue is often the echo, or reverberation, used on the voice. Elvis Presley and Gene Vincent had a very distinctive “slap-back” echo, in which you hear a sort of instant repeat of the syllable the vocalist just sang. You hear it on “Be-Bop-A-Lula” by Gene Vincent and by Ricky Nelson, on “Heartbreak Hotel” by Elvis, and on “Instant Karma” by John Lennon. Then there is the rich, warm echo made by a large tiled room on recordings by the Everly Brothers, such as “Cathy’s Clown” and “Wake Up Little Susie.” There are many distinctive elements in the overall timbre of these records that we
[…]
Prototype theory has a close connection to the constructivist theory of memory, in that details of individual cases are discarded, and the gist or abstract generalization is stored—both in the sense of what is being stored as a memory trace, and what is being stored as the central memory of the category.
First, when the category is broad and category members differ widely, how can there be a prototype? Think, for example, of the category “tool.” What is the prototype for it? Or for the category “furniture”? What is the prototypical song by a female pop artist?
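The prototype mechanism described above—discarding individual cases and storing a central tendency—can be sketched as a toy computation: form each category's prototype as the mean of its members' feature vectors, then classify a new item by which prototype it is nearest to. The feature dimensions and category names here are my own illustrative inventions, not from the text.

```python
def prototype(examples):
    """Central tendency of a category: the mean of its members' features."""
    n = len(examples)
    dims = len(examples[0])
    return [sum(e[d] for e in examples) / n for d in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(item, categories):
    """Assign an item to the category whose prototype is nearest."""
    protos = {name: prototype(members) for name, members in categories.items()}
    return min(protos, key=lambda name: distance(item, protos[name]))

# Toy feature vectors, e.g. (tempo, loudness) on arbitrary 0-1 scales
categories = {
    "lullaby": [(0.2, 0.1), (0.3, 0.2), (0.25, 0.15)],
    "anthem":  [(0.8, 0.9), (0.7, 0.8), (0.9, 0.85)],
}
print(classify((0.3, 0.25), categories))  # nearest to the lullaby prototype
```

The objection in the passage maps directly onto this sketch: for a diffuse category like “tool” or “furniture,” the members' feature vectors are so scattered that their mean is not close to any actual member, and the prototype stops being informative.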
A maxim of memory theory is that unique cues are the most effective at bringing up memories; the more items or contexts a particular cue is associated with, the less effective it will be at bringing up a particular memory. This is why, although certain songs may be associated with certain times of your life, they are not very effective cues for retrieving memories from those times if the songs have continued to play all along and you’re accustomed to hearing them—as often happens with classic rock stations or the classical radio stations that rely on a somewhat limited repertoire of “popular”
[…]
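The maxim above—that a cue loses retrieval power as it becomes associated with more items—is sometimes modeled by dividing a cue's fixed activation among everything it points to (a "fan effect"). A minimal sketch under that assumption, with invented song names:

```python
def cue_strength(cue, associations):
    """Toy cue-overload model: a cue's strength for any one memory
    is its total activation (1.0) divided among all its associates."""
    fan = len(associations.get(cue, []))
    return 0.0 if fan == 0 else 1.0 / fan

# A song heard only during one summer vs. a classic-rock staple
# heard in many contexts ever since (hypothetical labels)
associations = {
    "rare_song":       ["summer_1987"],
    "overplayed_song": ["summer_1987", "commute", "gym", "supermarket"],
}
print(cue_strength("rare_song", associations))        # 1.0
print(cue_strength("overplayed_song", associations))  # 0.25
```

Under this toy model the rarely heard song delivers all of its activation to the one memory it is tied to, while the radio staple spreads the same activation across every context it has accumulated—matching the passage's point about classic-rock playlists.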
In my laboratory we found strong activations in the cerebellum when we asked people to listen to music, but not when we asked them to listen to noise. The cerebellum appears to be involved in tracking the beat. And the cerebellum has shown up in our studies in another context: when we ask people to listen to music they like versus music they don’t like, or familiar music versus unfamiliar music.
To begin with, what might be the evolutionary basis for emotions? Scientists can’t even agree about what emotions are. We distinguish between emotions (temporary states that are usually the result of some external event, either present, remembered, or anticipated), moods (not-so-temporary, longer-lasting states that may or may not have an external cause), and traits (a proclivity or tendency to display certain states, such as “She is generally a happy person,” or “He never seems satisfied”). Some scientists use the word affect to refer to the valence (positive or negative) of our internal
[…]
Ursula told Crick of Albert Galaburda’s discovery, at Harvard, that individuals with Williams syndrome (WS) have defects in the way their cerebellums form. Williams occurs when about twenty genes turn up missing on one chromosome (chromosome 7). This happens in one out of twenty thousand births, making it far less common than the better-known developmental disorder Down syndrome. Like Down syndrome, Williams results from a genetic error that occurs early in the stages of fetal development. Out of the twenty-five thousand or so genes that we have, the loss of
[…]
In addition, as with most people struck with Williams syndrome, he had very poor eye-hand coordination, and had difficulty buttoning up his sweater (his mother had to help him), tying his own shoes (he had Velcro straps instead of laces), and he even had difficulty climbing stairs or getting food from his plate to his mouth. But he played the clarinet. There were a few pieces that he had learned, and he was able to execute the numerous and complicated finger movements to play them. He could not name the notes, and couldn’t tell me what he was doing at any one point of the piece—it was as
[…]
Still sitting with me, long after the lunch plates were cleared, Crick mentioned “the binding problem,” one of the most difficult problems in cognitive neuroscience. Most objects have a number of different features that are processed by separate neural subsystems—in the case of visual objects, these might be color, shape, motion, contrast, size, and so on. Somehow the brain has to “bind together” these different, distinct components of perception into a coherent whole. I have described how cognitive scientists believe that perception is a constructive process, but what are the neurons actually
[…]
The task was not easy; brain scan experiments produce millions and millions of data points; a single session can take up the entire hard drive on an ordinary computer. Analyzing the data in the standard way—just to see which areas are activated, not the new type of analyses we were proposing—can take months. And there was no “off the shelf” statistical program that would do these new analyses for us. Menon spent two months working through the equations necessary to do these analyses, and when he was done, we reanalyzed the data of people listening to classical music we had collected.
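The claim that a single scanning session can fill a hard drive can be sanity-checked with back-of-the-envelope arithmetic. The acquisition parameters below are typical illustrative values I have assumed; the text does not specify them.

```python
# Assumed (illustrative) acquisition parameters for one fMRI session
voxels_per_volume = 64 * 64 * 32   # in-plane grid x number of slices
bytes_per_voxel   = 4              # 32-bit intensity values
volumes_per_run   = 300            # one whole-brain volume every 2 s for 10 minutes
runs_per_session  = 8

data_points = voxels_per_volume * volumes_per_run * runs_per_session
raw_bytes   = data_points * bytes_per_voxel

print(f"{data_points:,} data points")           # hundreds of millions
print(f"{raw_bytes / 1e9:.1f} GB of raw data")  # on the order of a gigabyte
```

Even these modest assumptions yield hundreds of millions of data points per session—comfortably "millions and millions," and enough to strain the hard drives of the era the passage describes.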
Music appears to mimic some of the features of language and to convey some of the same emotions that vocal communication does, but in a nonreferential, and nonspecific way. It also invokes some of the same neural regions that language does, but far more than language, music taps into primitive brain structures involved with motivation, reward, and emotion.
As the music unfolds, the brain constantly updates its estimates of when new beats will occur, and takes satisfaction in matching a mental beat with a real-in-the-world one, and takes delight when a skillful musician violates that expectation in an interesting way—a sort of musical joke that we’re all in on. Music breathes, speeds up, and slows down just as the real world does, and our cerebellum finds pleasure in adjusting itself to stay synchronized.
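The running prediction the passage describes can be sketched as a simple expectation model: estimate the inter-onset interval from the beats heard so far, predict when the next beat should land, and measure "surprise" as the prediction error when a musician plays against that expectation. This is my own toy illustration, not a model from the text.

```python
def predict_next(onsets):
    """Predict the next beat time from the mean of recent inter-onset intervals."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return onsets[-1] + sum(intervals) / len(intervals)

# A steady beat every 0.5 s (120 BPM)...
onsets = [0.0, 0.5, 1.0, 1.5, 2.0]
expected = predict_next(onsets)  # 2.5

# ...then the drummer pushes the next hit early: a violated expectation
actual = 2.35
surprise = abs(actual - expected)
print(f"expected {expected:.2f} s, heard {actual:.2f} s, error {surprise:.2f} s")
```

Matching the mental beat to the real one corresponds to a prediction error near zero; a skillful early or late hit produces a small, interpretable error—the "musical joke" the passage describes.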
The story of your brain on music is the story of an exquisite orchestration of brain regions, involving both the oldest and newest parts of the human brain, and regions as far apart as the cerebellum in the back of the head and the frontal lobes just behind your eyes. It involves a precision choreography of neurochemical release and uptake between logical prediction systems and emotional reward systems. When we love a piece of music, it reminds us of other music we have heard, and it activates memory traces of emotional times in our lives. Your brain on music is all about, as Francis Crick
[…]
The experimental controls were inadequate, and according to research by Bill Thompson, Glenn Schellenberg, and others, the tiny difference in spatial ability between the two groups turned entirely on the choice of a control task. Compared to sitting in a room and doing nothing, music listening looked pretty good. But if subjects in the control task were given the slightest mental stimulation—hearing a book on tape, reading, etc.—there was no advantage for music listening.
Music listening enhances or changes certain neural circuits, including the density of dendritic connections in the primary auditory cortex. The Harvard neuroscientist Gottfried Schlaug has shown that the front portion of the corpus callosum—the mass of fibers connecting the two cerebral hemispheres—is significantly larger in musicians than nonmusicians, and particularly for musicians who began their training early. This reinforces the notion that musical operations become bilateral with increased training, as musicians coordinate and recruit neural structures in both the left and right
[…]
Part of the reason we remember songs from our teenage years is that those years were times of self-discovery, and as a consequence, they were emotionally charged; in general, we tend to remember things that have an emotional component because our amygdala and neurotransmitters act in concert to “tag” the memories as something important.
At a neural level, we need to be able to find a few landmarks in order to invoke a cognitive schema. If we hear a piece of radically new music enough times, some of that piece will eventually become encoded in our brains and we will develop landmarks. If the composer is skillful, those parts of the piece that become our landmarks will be the very ones that the composer intended they should be; his knowledge of composition and human perception and memory will have allowed him to create certain “hooks” in the music that will eventually stand out in our minds.
Yet, simply knowing that the improvisation takes place over the original chords and form of the song can make a big difference in orienting the neophyte to where in the song the players are. I often advise new listeners to jazz to simply hum the main tune in their mind once the improvisation begins—this is what the improvisers themselves are often doing—and that enriches the experience considerably.
Our music listening creates schemas for musical genres and forms, even when we are only listening passively, and not attempting to analyze the music. By an early age, we know what the legal moves are in the music of our culture. For many, our future likes and dislikes will be a consequence of the types of cognitive schemas we formed for music through childhood listening. This isn’t meant to imply that the music we listen to as children will necessarily determine our musical tastes for the rest of our lives; many people are exposed to or study music of different cultures and styles and become
[…]
Mother-infant interactions involving music almost always entail both singing and rhythmic movement, such as rocking or caressing. This appears to be culturally universal. During the first six months or so of life, as I showed in Chapter 7, the infant brain is unable to clearly distinguish the source of sensory inputs; vision, hearing, and touch meld into a unitary perceptual representation. The regions of
the brain that will eventually become the auditory cortex, the sensory cortex, and the visual cortex are functionally undifferentiated, and inputs from the various sensory receptors may connect to many different parts of the brain, pending pruning that will occur later in life. As Simon Baron-Cohen has described it, with all this sensory cross talk, the infant lives in a state of complete psychedelic splendor (without the aid of drugs).

