Kindle Notes & Highlights
Read between June 23, 2020 and November 6, 2022
Phonemes are a different kind of linguistic object. They connect outward to speech, not inward to mentalese:
The vowels for which the tongue is high and in the front always come before the vowels for which the tongue is low and in the back. No one knows why they are aligned in this order, but it seems to be a kind of syllogism from two other oddities.
Consonants differ in “obstruency”—the degree to which they impede the flow of air, ranging from merely making it resonate, to forcing it noisily past an obstruction, to stopping it up altogether. The word beginning with the less obstruent consonant always comes before the word beginning with the more obstruent consonant.
An inventory of phonemes is one of the things that gives a language its characteristic sound pattern.
Phonemes are not assembled into words as one-dimensional left-to-right strings. Like words and phrases, they are grouped into units, which are then grouped into bigger units, and so on, defining a tree.
The group of consonants (C) at the beginning of a syllable is called an onset; the vowel (V) and any consonants coming after it are called the rime:
Syllables, in turn, are collected into rhythmic groups called feet:
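The hierarchy described in the last two highlights (consonants and vowels grouped into onset and rime, onset plus rime forming a syllable, syllables collected into feet) can be sketched as a small data structure. This is a toy illustration, not a standard phonological tool: the vowel inventory and the split rule are drastic simplifications.

```python
# Toy sketch of syllable structure: the onset is every consonant before the
# first vowel; the rime is the vowel plus everything after it.
VOWELS = {"a", "e", "i", "o", "u"}  # toy vowel inventory, not real phonology

def split_syllable(phonemes):
    """Split one syllable's phoneme list into (onset, rime)."""
    for i, p in enumerate(phonemes):
        if p in VOWELS:
            return phonemes[:i], phonemes[i:]
    return phonemes, []  # degenerate case: no vowel found

# "string" as one syllable: onset [s, t, r], rime [i, ng]
onset, rime = split_syllable(["s", "t", "r", "i", "ng"])

# A foot is then just a grouping of syllables, e.g. the trochee in "butter":
foot = [split_syllable(["b", "u"]), split_syllable(["t", "e", "r"])]
```

The nesting mirrors the tree in the text: phonemes inside onsets and rimes, those inside syllables, syllables inside feet.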
Phonological rules are rarely triggered by a single phoneme; they are triggered by an entire class of phonemes that share one or more features (like voicing, stop versus fricative manner, or which organ is doing the articulating). This suggests that rules do not “see” the phonemes in a string but instead look right through them to the features they are made from.
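A rule that "looks through" phonemes to their features can be made concrete with the English plural: the suffix surfaces as [s] after voiceless consonants and [z] after voiced segments, so the rule needs to check only one feature, [voice], to cover a whole class of phonemes. The feature table below is a hand-made toy inventory, not real phonological data.

```python
# Toy feature table: each segment carries a "voiced" feature. The plural
# rule never names individual phonemes -- it consults the feature alone.
FEATURES = {
    "p": {"voiced": False}, "t": {"voiced": False}, "k": {"voiced": False},
    "b": {"voiced": True},  "d": {"voiced": True},  "g": {"voiced": True},
    "a": {"voiced": True},  "e": {"voiced": True},  "n": {"voiced": True},
}

def plural(stem):
    """Attach the plural suffix, agreeing in voicing with the final segment."""
    suffix = "z" if FEATURES[stem[-1]]["voiced"] else "s"
    return stem + suffix

print(plural("kat"))  # "kats": devoiced after voiceless /t/
print(plural("dag"))  # "dagz": voiced after voiced /g/
```

Adding a new voiceless consonant to the table automatically brings it under the rule, which is the point of stating rules over features rather than over a list of phonemes.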
A society of lazy talkers would be a society of hard-working listeners. If speakers were to have their way, all rules of phonology would spread and reduce and delete. But if listeners were to have their way, phonology would do the opposite: it would enhance the acoustic differences between confusable phonemes by forcing speakers to exaggerate or embroider them. And indeed, many rules of phonology do that.
In the comprehension of speech, the redundancy conferred by phonological rules can compensate for some of the ambiguity in the sound wave. For example, a listener can know that “thisrip” must be this rip and not the srip because the English consonant cluster sr is illegal.
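The "thisrip" example can be sketched as a segmentation filter: candidate word boundaries are discarded whenever the second word would begin with an illegal onset cluster. Both the cluster list and the lexicon here are tiny hypothetical samples (the non-word "srip" is included only to show it being filtered out).

```python
# Toy demonstration that phonotactic knowledge disambiguates word boundaries.
ILLEGAL_ONSETS = {"sr", "tl", "dl", "vl"}        # toy sample of banned English onsets
LEXICON = {"this", "rip", "the", "srip", "thi"}  # hypothetical mini-lexicon

def segmentations(s):
    """All two-word splits whose words are known and whose second word
    starts with a phonotactically legal cluster."""
    out = []
    for i in range(1, len(s)):
        first, second = s[:i], s[i:]
        if first in LEXICON and second in LEXICON and second[:2] not in ILLEGAL_ONSETS:
            out.append((first, second))
    return out

print(segmentations("thisrip"))  # [('this', 'rip')] -- "thi" + "srip" is ruled out
```

Even though "thi" and "srip" are both in this toy lexicon, the illegal onset "sr" eliminates that parse, just as the highlight describes.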
Among the range of positions in the mouth that can define a phoneme, we place the tongue in the one that offers the shortest path to the target for the next phoneme. If the current phoneme does not specify where a speech organ should be, we anticipate where the next phoneme wants it to be and put it there in advance.
…imply that human speech perception works from the top down rather than just from the bottom up. Maybe we are constantly guessing what a speaker will say next, using every scrap of conscious and unconscious knowledge at our disposal, from how coarticulation distorts sounds, to the rules of English phonology, to the rules of English syntax, to stereotypes about who tends to do what to whom in the world, to hunches about what our conversational partner has in mind at that very moment.
Although language is an instinct, written language is not. Writing was invented a small number of times in history, and alphabetic writing, where one character corresponds to one sound, seems to have been invented only once.
English spelling is not completely phonemic; sometimes letters encode phonemes, but sometimes a sequence of letters is specific to a morpheme. And a morphemic writing system is more useful than you might think. The goal of reading, after all, is to understand the text, not to pronounce it.
Writing systems do not aim to represent the actual sounds of talking, which we do not hear, but the abstract units of language underlying them, which we do hear.
Speaking and understanding share a grammatical database (the language we speak is the same as the language we understand), but they also need procedures that specify what the mind should do, step by step, when the words start pouring in or when one is about to speak. The mental program that analyzes sentence structure during language comprehension is called the parser.
One was memory: we had to keep track of the dangling phrases that needed particular kinds of words to complete them. The other was decision-making: when a word or phrase was found on the right-hand side of two different rules, we had to decide which to use to build the next branch of the tree.
Many linguists believe that the reason that languages allow phrase movement, or choices among more-or-less synonymous constructions, is to ease the load on the listener’s memory.
At the level of individual words, it looks as if the brain does a breadth-first search, entertaining, however briefly, several entries for an ambiguous word, even unlikely ones.
These are called garden path sentences, because their first words lead the listener “up the garden path” to an incorrect analysis. Garden path sentences show that people, unlike computers, do not build all possible trees as they go along; if they did, the correct tree would be among them. Rather, people mainly use a depth-first strategy, picking an analysis that seems to be working and pursuing it as long as possible; if they come across words that cannot be fitted into the tree, they backtrack and start over with a different tree. (Sometimes people can hold a second tree in mind, especially […]
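The depth-first-with-backtracking strategy can be made concrete with a minimal recursive-descent parser over a toy grammar (grammar and lexicon below are hypothetical miniatures, not a model of English). It commits to one expansion at a time and backtracks on failure, which is exactly what makes a garden-path sentence like "the horse raced past the barn fell" costly: the preferred main-verb reading is tried first and must be abandoned.

```python
# Minimal depth-first backtracking parser over a toy grammar. The plain NP
# rule is listed before the reduced-relative NP, so the parser "garden-paths"
# on the main-verb reading of "raced" before backtracking to the correct tree.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "N", "RelClause"]],
    "RelClause": [["V", "PP"]],        # reduced relative: "(that was) raced past the barn"
    "VP": [["V", "PP"], ["V"]],
    "PP": [["P", "NP"]],
}
LEXICON = {"the": "Det", "horse": "N", "barn": "N",
           "raced": "V", "fell": "V", "past": "P"}

def parse(symbols, words):
    """True if the word list can be derived from the symbol list."""
    if not symbols:
        return not words                      # success only if all words consumed
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:                       # try each expansion depth-first
        return any(parse(expansion + rest, words) for expansion in GRAMMAR[head])
    return bool(words) and LEXICON.get(words[0]) == head and parse(rest, words[1:])

print(parse(["S"], "the horse raced past the barn fell".split()))  # True
print(parse(["S"], "the horse raced the".split()))                 # False
```

Tracing the first call shows the behavior in the highlight: the parser first builds "the horse" as a bare NP and "raced" as the main verb, runs aground on "fell", then backtracks and reparses "raced past the barn" as a reduced relative clause.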
One principle for good style is to minimize the amount of intervening sentence in which a moved phrase must be held in memory.
Comprehension uses the semantic information recovered from a tree as just one premise in a complex chain of inference to the speaker’s intentions.
Thus listeners tacitly expect speakers to be informative, truthful, relevant, clear, unambiguous, brief, and orderly. These expectations help to winnow out the inappropriate readings of an ambiguous sentence, to piece together fractured utterances, to excuse slips of the tongue, to guess the referents of pronouns and descriptions, and to fill in the missing steps of an argument.
Subjects normally precede objects in almost all languages, and verbs and their objects tend to be adjacent. Thus most languages have SVO or SOV order; fewer have VSO; VOS and OVS are rare (less than 1%); and OSV may be nonexistent.
The absence of a strong correlation between the grammatical properties of languages and their place in the family tree of languages suggests that language universals are not just the properties that happen to have survived from the hypothetical mother of all languages.
The second counterexplanation that one must rule out before attributing a universal of language to a universal language instinct is that languages might reflect universals of thought or of mental information processing that are not specific to language.
Chomsky’s claim that from a Martian’s-eye-view all humans speak a single language is based on the discovery that the same symbol-manipulating machinery, without exception, underlies the world’s languages.
Languages all show a duality of patterning in which one rule system is used to order phonemes within morphemes, independent of meaning, and another is used to order morphemes within words and phrases, specifying their meaning.
It is safe to say that the grammatical machinery we used for English in Chapters 4–6 is used in all the world’s languages. All languages have a vocabulary in the thousands or tens of thousands, sorted into part-of-speech categories including noun and verb. Words are organized into phrases according to the X-bar system (nouns are found inside N-bars, which are found inside noun phrases, and so on). The higher levels of phrase structure include auxiliaries (INFL), which signify tense, modality, aspect, and negation. Nouns are marked for case and assigned semantic roles by the mental dictionary […]
Differences among languages, like differences among species, are the effects of three processes acting over long spans of time. One process is variation—mutation, in the case of species; linguistic innovation, in the case of languages. The second is heredity, so that descendants resemble their progenitors in these variations—genetic inheritance, in the case of species; the ability to learn, in the case of languages. The third is isolation—by geography, breeding season, or reproductive anatomy, in the case of species; by migration or social barriers, in the case of languages. In both cases, […]
Even when a trait starts off as a product of learning, it does not have to remain so. Evolutionary theory, supported by computer simulations, has shown that when an environment is stable, there is a selective pressure for learned abilities to become increasingly innate. That is because if an ability is innate, it can be deployed earlier in the lifespan of the creature, and there is less of a chance that an unlucky creature will miss out on the experiences that would have been necessary to teach it.
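The "computer simulations" alluded to here are in the spirit of Hinton and Nowlan's classic model of the Baldwin effect, which the sketch below pares down; every parameter is arbitrary and chosen only for speed. Genes are innately correct ("1"), innately wrong ("0"), or learnable ("?"); learnable genes must be guessed during a creature's lifetime, and creatures that succeed earlier are fitter, so selection gradually replaces learnable genes with innate ones.

```python
# Pared-down Baldwin-effect simulation (toy parameters, illustrative only).
import random

random.seed(0)
GENOME_LEN, POP_SIZE, TRIALS, GENERATIONS = 10, 200, 50, 80

def fitness(genome):
    """Innately wrong genes ('0') doom the creature; '?' genes must be
    learned by guessing. Succeeding in fewer trials yields higher fitness."""
    if "0" in genome:
        return 1.0
    unknowns = genome.count("?")
    for trial in range(TRIALS):                   # learning = repeated guessing
        if all(random.random() < 0.5 for _ in range(unknowns)):
            return 1.0 + (TRIALS - trial)         # earlier success -> fitter
    return 1.0

def breed(parent_a, parent_b):
    """Uniform crossover: each gene comes from one parent at random."""
    return "".join(random.choice(pair) for pair in zip(parent_a, parent_b))

pop = ["".join(random.choice("01?") for _ in range(GENOME_LEN))
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    scores = [fitness(g) for g in pop]
    parents = random.choices(pop, weights=scores, k=2 * POP_SIZE)
    pop = [breed(parents[2 * i], parents[2 * i + 1]) for i in range(POP_SIZE)]

innate = sum(g.count("1") for g in pop) / (POP_SIZE * GENOME_LEN)
print(f"fraction of innately correct genes after evolution: {innate:.2f}")
```

In a stable environment (a fixed fitness target), the fraction of innate "1" genes tends to rise across generations, illustrating the selective pressure the highlight describes: learned abilities drifting toward innateness because earlier deployment pays.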
If a universal grammar module defines a head and a role-player, their relative ordering (head-first or head-last) could thus be recorded easily. If so, evolution, having made the basic computational units of language innate, may have seen no need to replace every bit of learned information with innate wiring.
A second reason for language to be partly learned is that language inherently involves sharing a code with other people. An innate grammar is useless if you are the only one possessing it: it is a tango of one, the sound of one hand clapping. But the genomes of other people mutate and drift and recombine when they have children. Rather than selecting for a completely innate grammar, which would soon fall out of register with everyone else’s, evolution may have given children an ability to learn the variable parts of language as a way of synchronizing their grammars with that of the community.
Reanalysis, a product of the discrete combinatorial creativity of the language instinct, partly spoils the analogy between language change on the one hand and biological and cultural evolution on the other.
Because of this overall conservatism, some patterns of vocabulary, sound, and grammar survive for millennia. They serve as the fossilized tracks of mass migrations in the remote past, clues to how human beings spread out over the earth to end up where we find them today.
A correlation between language families and human genetic groupings does not, by the way, mean that there are genes that make it easier for some kinds of people to learn some kinds of languages.
As far as the language instinct is concerned, the correlation between genes and languages is a coincidence.
We know that the connection is easily severed, thanks to the genetic experiments called immigration and conquest, in which children get their grammars from the brains of people other than their parents. Needless to say, the children of immigrants learn a language, even one separated from their parents’ language by the deepest historical roots, without any disadvantage compared to age-mates who come from long lineages of the language’s speakers. Correlations between genes and languages are thus so crude that they are measurable only at the level of superphyla and aboriginal races.
Most linguists believe that after 10,000 years no traces of a language remain in its descendants. This makes it extremely doubtful that anyone will find extant traces of the most recent ancestor of all contemporary languages, or that that ancestor would in turn retain traces of the language of the first modern humans, who lived some 200,000 years ago.
And the extinction of a language (say, Ainu, formerly spoken in Japan by a mysterious Caucasoid people) can be like the burning of a library of historical documents or the extinction of the last species in a phylum.
“Any language is a supreme achievement of a uniquely human collective genius, as divine and endless a mystery as a living organism.”
Infants come equipped with these skills; they do not learn them by listening to their parents’ speech. Kikuyu and Spanish infants discriminate English ba’s and pa’s, which are not used in Kikuyu or Spanish and which their parents cannot tell apart. English-learning infants under the age of six months distinguish phonemes used in Czech, Hindi, and Inslekampx (a Native American language), but English-speaking adults cannot, even with five hundred trials of training or a year of university coursework. Adult ears can tell the sounds apart, though, when the consonants are stripped from the […]
By ten months they are no longer universal phoneticians but have turned into their parents; they do not distinguish Czech or Inslekampx phonemes unless they are Czech or Inslekampx babies. Babies make this transition before they produce or understand words, so their learning cannot depend on correlating sound with meaning. That is, they cannot be listening for the difference in sound between a word they think means bit and a word they think means beet, because they have learned neither word. They must be sorting the sounds directly, somehow tuning their speech analysis module to deliver the […]
Between seven and eight months they suddenly begin to babble in real syllables like ba-ba-ba, neh-neh-neh, and dee-dee-dee. The sounds are the same in all languages, and consist of the phonemes and syllable patterns that are most common across languages.
Deaf children’s babbling is later and simpler—though if their parents use sign language, they babble, on schedule, with their hands!
The infant has been given a set of neural commands that can move the articulators every which way, with wildly varying effects on the sound. By listening to their own babbling, babies in effect write their own instruction manual; they learn how much to move which muscle in which way to make which change in the sound. This is a prerequisite to duplicating the speech of their parents.
Children’s two-word combinations are so similar in meaning the world over that they read as translations of one another. Children announce when objects appear, disappear, and move about, point out their properties and owners, comment on people doing things and seeing things, reject and request objects and activities, and ask about who, what, and where. These microsentences already reflect the language being acquired: in ninety-five percent of them, the words are properly ordered.
When researchers focus on one grammatical rule and count how often a child obeys it and how often he or she flouts it, the results are astonishing: for any rule you choose, three-year-olds obey it most of the time. As we have seen, children rarely scramble word order and, by the age of three, come to supply most inflections and function words in sentences that require them.
The errors children do make are rarely random garbage. Often the errors follow the logic of grammar so beautifully that the puzzle is not why the children make the errors, but why they sound like errors to adult ears at all.
The three-year-old, then, is a grammatical genius—master of most constructions, obeying rules far more often than flouting them, respecting language universals, erring in sensible, adultlike ways, and avoiding many kinds of errors altogether.