Helen H. Moore's Blog, page 664
September 11, 2016
Cleaning up after pundits: Dear media elites, No, Trump is not a populist
Donald Trump (Credit: Getty/Mandel Ngan)
Being a muckraking political writer often makes me feel like a custodian in a horse barn, constantly shoveling manure. It’s a messy, stinky job — but on the bright side, the stuff is plentiful, so the work is steady. Indeed, I’m now a certified equine excrement engineer, having developed a narrow but important professional specialty: cleaning off the horse stuff that careless politicos and sloppy media types keep dumping on the word “populist.”
As you might imagine, in this year of global turmoil, I’ve been especially busy. Populism — a luminous term denoting both an uplifting doctrine of egalitarianism and a political-economic-cultural movement with deep roots in America’s progressive history — has been routinely sullied throughout 2016 by elites misusing it as a synonym for ignorance and bigotry:
When right-wing, anti-Muslim mobs in a few European nations literally went to their national borders to block desperate Syrian war refugees from getting safe passage into Europe, most mainline media labeled the boisterous reactionaries “populists.”
Flummoxed elites in Great Britain, frantic over Brexit, blindly blamed their people’s vote to exit the European Union on the “populist” bigotry of working-class Brits.
When, in the United States, the unreal reality show of “The Donald” spooked the corporate and political establishment, its representatives, denying that Trump had harnessed public fury toward them, smugly attributed his rise solely to “populist” bumpkins who embraced his demeaning attacks on women, Mexicans, Muslims, union members, immigrants, people with disabilities and veterans, among others. Indeed, the power elites sneeringly branded Trump himself a “populist.”
Excuse me, but if that bilious billionaire blowhard is a populist, then I’m a contender in his Miss Universe contest.
Populism is not a style — and this is important to note in this moment of “The Donald” — nor is it a synonym for “popular outrage.” Populism is a historically grounded political doctrine that supports ordinary folks in their ongoing democratic struggle for power over their lives.
This past June, I was pleasantly surprised that out of the blue a major player in this year’s presidential race gave me a big helping hand in cleaning the manure off the democratic ideal of genuine populism. “I’m not prepared to concede the notion that some of the rhetoric that’s been popping up is populist,” said my fellow scrubber. He added that a politico doesn’t “suddenly become a populist” by denigrating people of other races, cultures, religions and nations.
“That’s not the measure of populism. That’s nativism or xenophobia or worse. Or it’s just cynicism. So I would just advise everybody to be careful about suddenly attributing to whoever pops up at a time of economic anxiety the label that they’re a ‘populist.’ Where have they been? Have they been on the front lines for working people? Have they been [laboring] to open up opportunity for more people?”
You tell ’em, Bernie! But wait. That wasn’t Sanders. It was Barack Obama delivering an impromptu tutorial on populist doctrine at a June 29 press conference.
Granted, Obama himself has hardly been a practicing populist. But he was nonetheless right about what populism is not. He also noted that real populists embrace the inclusive democratic values of egalitarianism and pluralism, which are presently under a ferocious assault by a horde of faux populists led by Trump, Sen. Ted Cruz, and other foam-at-the-mouth immigrant bashers.
While the incitement of anti-immigrant prejudice for political gain is shameful and socially explosive, it is certainly not new or uncommon in our country. Nor is it unbeatable. For more than two centuries, the U.S. has experienced periodic eruptions of such ugliness from within our body politic, yet generations of Americans have successfully overcome the xenophobic furies of their times by countering the bigotry with our society’s prevailing ethic that all people are created equal. And after all, almost all of our families came from somewhere else.
September 10, 2016
What we tell our sons about rape
A photo of the author with her mother.
My mom and I are walking in a big circle, covering the grounds of the last school that she attended, when she was twelve. Our feet kick up clouds of dust as we search for any entrance. To our left, remote but on the same piece of property, is the Mission San Luis Rey, a white behemoth brightened by lush flowering shrubs and trees. Here, the Old Mission Montessori school has recently closed, its enrollment numbers having plummeted.
Once it was a boarding school. Every day, she and the others would get up at five a.m. to trudge sleepily across the campus to the Mission chapel for morning mass. The cluster of buildings is now vacant and locked, chain link guarding the interior courtyards like a secret.
My mother tells me this is where her parents sent her: after the rape; after the trial.
They had tried other, more local schools, but they could not make her go. At boarding school, attended by nuns and priests, she would be required to.
***
My two sons, ages 16 and 13, wander the grounds with us. Two of their friends are with us as well. Four teenaged boys.
My mom and I mug for the camera, my oldest son taking a picture of us with the dorms in the background. We are on our way home from a family vacation to San Diego. This visit is a side trip, taking us a little out of the way, but not too much. Worth it, I think. My mom’s memories of this place, however bitter the reasons for being sent here, are some of the best of her childhood.
Her mother, my namesake, was an alcoholic in that classic heartsick way: I picture her draped casually across the sofa, passed out, bottle of booze inches from her fingers on its side on the floor. My mother’s father was long dead, gone at the age of 39 after building a career that brought him wealth and a modest degree of fame. They had been madly in love; I still have some of his love letters. After his death, she remarried the man I would later know as my grandfather; a good man who loved her, but it was never the same.
On the cusp of becoming a teenager, reeling from trauma, my mom had no interest in school. For months on end, my mom would sneak out to use the one pay phone to beg her mother to come get her. My grandmother always said no. My mother was one of the few who stayed through weekends and holidays; those left to help the nuns clean the chalk boards and smack erasers, and scrub the latrines and sinks.
My mom shows me where the phone booth used to be, the outbuilding that used to be a snack shop, points to the second floor toward her old shared room. She seems happy to be here.
***
The boys find a bench with a bronze sculpture of a saint at one end, seated, looking pious, his arm raised plaintively to the sky, palm upturned. The boys sit on the bench beside him. My youngest sits on the statue’s lap and asks for a photograph. I oblige.
His friend says, “Now pretend he just raped you.” They all laugh.
My stomach churns and I snap. There are to be no rape jokes.
Of course, they don’t get it. I have not told them. Why haven’t I told them?
***
My mom spots a maintenance man and catches him. He tells us the building is locked today.
We had hoped to see inside. We continue circling the building, and across the courtyard, through chain link, I spot a woman carrying a book. I wave and she approaches. She had been a teacher there, and was here today for a Bible study group. We tell her our story. Yes, she will let us in. Yes, she will show us around.
The woman disappears, but soon the front doors open and we are escorted through. Children’s artwork still hangs on the walls, rooms still marked with purpose.
Down the long hall, a mural of Father Junipero Serra, a small indigenous boy at his side. Here, Serra appears merciful, benign; the boy, however, at least to my eyes, seems just a bit reticent, perhaps skeptical of the Father’s intentions.
Here, my mom seems to feel at ease, though I wonder truly what it must feel like to be here. How much of what we remember becomes mythologized? I consider this. Here, she occupied a space between where she had come from and where she might go, and it’s here that I sense her roads forked; she took a path that led her one way, toward me, rather than one that might have taken her toward a different sort of life.
Why did her mother not respond to her pleas to come home? I picture my grandmother trying to forget her daughter.
Today, my mother is dressed up, and she could be any one of those good Catholic school girls who grew up to marry a good Catholic boy. She is not herself today. My mom is hard to read. I recognize this closing off, the facade of seeming rather than being.
***
My mom’s best friend growing up was the daughter of a prostitute. Neighbors, the girls were fast friends. One day, the girls decide to go to the movies.
Four teenage boys, strangers, join them in the theater. The boys talk to the girls. Afterward, the boys invite both of the girls to walk with them, go back to one of their homes, to keep up the conversation. My mother’s friend declines, walks home. My mother accepts their offer.
I will never know the details. I imagine the boys cornering my eleven-year-old mother in a back room, then, one by one, taking turns.
My mother never told me any of this.
I learn this only once I am an adult, by way of a cousin who had heard it from her mother. In disbelief, denial, I ask my mom. She confirms.
***
Later, my oldest son’s friend makes another joke about rape. My mother is out of earshot. I pull my oldest son aside. No more jokes about rape.
I begin to tell him the story. He waves his hand, looks away as if to say he’s heard enough.
He and his friends are nearly the age of those long-ago teenage boys.
Whose sons were they? What did their mothers say?
There are no more rape jokes that day.
***
In an alcoholic stupor, my mother’s mother finally relents, sends one of her sons to bring her daughter home.
Months from then, my mother will meet my father. My mother will never return to school again. At 13, with the blessing of their parents, the young couple will try to marry; at 16, they will succeed. At 23, she will have me.
I sometimes think about what might have happened with those long-ago teenage boys, the ones who raped my mother, and the men that they would surely have become. And in that moment, I think of my own sons.
***
My mother and I say goodbye to our personal tour guide with a hug and thanks. I still can’t quite read my mom. We take one last photograph, our backs to the sun.
Later, I think, I will talk to my boys about this visit. Later, I will explain.
Later, I think. Always. Later.
What’s universal grammar? Evidence rebuts Chomsky’s theory of language learning
(Credit: Reuters/Jorge Dan)
This article was originally published by Scientific American.
The idea that we have brains hardwired with a mental template for learning grammar — famously espoused by Noam Chomsky of the Massachusetts Institute of Technology — has dominated linguistics for almost half a century. Recently, though, cognitive scientists and linguists have abandoned Chomsky’s “universal grammar” theory in droves because of new research examining many different languages — and the way young children learn to understand and speak the tongues of their communities. That work fails to support Chomsky’s assertions.
The research suggests a radically different view, in which learning of a child’s first language does not rely on an innate grammar module. Instead the new research shows that young children use various types of thinking that may not be specific to language at all — such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. The new findings indicate that if researchers truly want to understand how children, and others, learn languages, they need to look outside of Chomsky’s theory for guidance.
This conclusion is important because the study of language plays a central role in diverse disciplines — from poetry to artificial intelligence to linguistics itself; misguided methods lead to questionable results. Further, language is used by humans in ways no animal can match; if you understand what language is, you comprehend a little bit more about human nature.
Chomsky’s first version of his theory, put forward in the mid-20th century, meshed with two emerging trends in Western intellectual life. First, he posited that the languages people use to communicate in everyday life behaved like the mathematically based languages of the newly emerging field of computer science. His research looked for the underlying computational structure of language and proposed a set of procedures that would create “well-formed” sentences. The revolutionary idea was that a computerlike program could produce sentences real people thought were grammatical. That program could also purportedly explain the way people generated their sentences. This way of talking about language resonated with many scholars eager to embrace a computational approach to, well, everything.
As Chomsky was developing his computational theories, he was simultaneously proposing that they were rooted in human biology. In the second half of the 20th century, it was becoming ever clearer that our unique evolutionary history was responsible for many aspects of our unique human psychology, and so the theory resonated on that level as well. His universal grammar was put forward as an innate component of the human mind — and it promised to reveal the deep biological underpinnings of the world’s 6,000-plus human languages. The most powerful, not to mention the most beautiful, theories in science reveal hidden unity underneath surface diversity, and so this theory held immediate appeal.
But evidence has overtaken Chomsky’s theory, which has been inching toward a slow death for years. It is dying so slowly because, as physicist Max Planck once noted, older scholars tend to hang on to the old ways: “Science progresses one funeral at a time.”
In the beginning
The earliest incarnations of universal grammar in the 1960s took the underlying structure of “standard average European” languages as their starting point — the ones spoken by most of the linguists working on them. Thus, the universal grammar program operated on chunks of language, such as noun phrases (“The nice dogs”) and verb phrases (“like cats”).
Fairly soon, however, linguistic comparisons among multiple languages began rolling in that did not fit with this neat schema. Some native Australian languages, such as Warlpiri, had grammatical elements scattered all over the sentence — noun and verb phrases that were not “neatly packaged” so that they could be plugged into Chomsky’s universal grammar — and some sentences had no verb phrase at all.
These so-called outliers were difficult to reconcile with the universal grammar that was built on examples from European languages. Other exceptions to Chomsky’s theory came from the study of “ergative” languages, such as Basque or Urdu, in which the way a sentence subject is used is very different from that in many European languages, again challenging the idea of a universal grammar.

These findings, along with theoretical linguistic work, led Chomsky and his followers to a wholesale revision of the notion of universal grammar during the 1980s. The new version of the theory, called principles and parameters, replaced a single universal grammar for all the world’s languages with a set of “universal” principles governing the structure of language. These principles manifested themselves differently in each language. An analogy might be that we are all born with a basic set of tastes (sweet, sour, bitter, salty and umami) that interact with culture, history and geography to produce the present-day variations in world cuisine. The principles and parameters were a linguistic analogy to tastes. They interacted with culture (whether a child was learning Japanese or English) to produce today’s variation in languages as well as defined the set of human languages that were possible.
Languages such as Spanish form fully grammatical sentences without the need for separate subjects — for example, Tengo zapatos (“I have shoes”), in which the person who has the shoes, “I,” is indicated not by a separate word but by the “o” ending on the verb. Chomsky contended that as soon as children encountered a few sentences of this type, their brains would set a switch to “on,” indicating that the sentence subject should be dropped. Then they would know that they could drop the subject in all their sentences.
The “subject-drop” parameter supposedly also determined other structural features of the language. This notion of universal principles fits many European languages reasonably well. But data from non-European languages turned out not to fit the revised version of Chomsky’s theory. Indeed, the research that had attempted to identify parameters, such as the subject-drop, ultimately led to the abandonment of the second incarnation of universal grammar because of its failure to stand up to scrutiny.
More recently, in a famous paper published in Science in 2002, Chomsky and his co-authors described a universal grammar that included only one feature, called computational recursion (although many advocates of universal grammar still prefer to assume there are many universal principles and parameters). This single feature permits a limited number of words and rules to be combined to make an unlimited number of sentences.
The endless possibilities exist because of the way recursion embeds a phrase within another phrase of the same type. For example, English can embed phrases to the right (“John hopes Mary knows Peter is lying”) or embed centrally (“The dog that the cat that the boy saw chased barked”). In theory, it is possible to go on embedding these phrases infinitely. In practice, understanding starts to break down when the phrases are stacked on top of one another as in these examples. Chomsky thought this breakdown was not directly related to language per se. Rather it was a limitation of human memory. More important, Chomsky proposed that this recursive ability is what sets language apart from other types of thinking such as categorization and perceiving the relations among things. He also recently proposed that this ability arose from a single genetic mutation that occurred between 100,000 and 50,000 years ago.
As before, when linguists actually went looking at the variation in languages across the world, they found counterexamples to the claim that this type of recursion was an essential property of language. Some languages — the Amazonian Pirahã, for instance — seem to get by without Chomskyan recursion.
As with all linguistic theories, Chomsky’s universal grammar tries to perform a balancing act. The theory has to be simple enough to be worth having. That is, it must predict some things that are not in the theory itself (otherwise it is just a list of facts). But neither can the theory be so simple that it cannot explain things it should. Take Chomsky’s idea that sentences in all the world’s languages have a “subject.” The problem is the concept of a subject is more like a “family resemblance” of features than a neat category. About 30 different grammatical features define the characteristics of a subject. Any one language will have only a subset of these features — and the subsets often do not overlap with those of other languages.
Chomsky tried to define the components of the essential tool kit of language — the kinds of mental machinery that allow human language to happen. Where counterexamples have been found, some Chomsky defenders have responded that just because a language lacks a certain tool — recursion, for example — does not mean that it is not in the tool kit. In the same way, just because a culture lacks salt to season food does not mean salty is not in its basic taste repertoire. Unfortunately, this line of reasoning makes Chomsky’s proposals difficult to test in practice, and in places they verge on the unfalsifiable.
Death knells
A key flaw in Chomsky’s theories is that when applied to language learning, they stipulate that young children come equipped with the capacity to form sentences using abstract grammatical rules. (The precise ones depend on which version of the theory is invoked.) Yet much research now shows that language acquisition does not take place this way. Rather young children begin by learning simple grammatical patterns; then, gradually, they intuit the rules behind them bit by bit.
Thus, young children initially speak with only concrete and simple grammatical constructions based on specific patterns of words: “Where’s the X?”; “I wanna X”; “More X”; “It’s an X”; “I’m X-ing it”; “Put X here”; “Mommy’s X-ing it”; “Let’s X it”; “Throw X”; “X gone”; “Mommy X”; “I Xed it”; “Sit on the X”; “Open X”; “X here”; “There’s an X”; “X broken.” Later, children combine these early patterns into more complex ones, such as “Where’s the X that Mommy Xed?”
Many proponents of universal grammar accept this characterization of children’s early grammatical development. But then they assume that when more complex constructions emerge, this new stage reflects the maturing of a cognitive capacity that uses universal grammar and its abstract grammatical categories and principles.
For example, most universal grammar approaches postulate that a child forms a question by following a set of rules based on grammatical categories such as “What (object) did (auxiliary) you (subject) lose (verb)?” Answer: “I (subject) lost (verb) something (object).” If this postulate is correct, then at a given developmental period children should make similar errors across all wh-question sentences. But children’s errors do not fit this prediction. Many of them early in development make errors such as “Why he can’t come?” but at the same time as they make this error — failing to put the “can’t” before the “he” — they correctly form other questions with other “wh-words” and auxiliary verbs, such as the sentence “What does he want?”
Experimental studies confirm that children produce correct question sentences most often with particular wh-words and auxiliary verbs (often those with which they have most experience, such as “What does…”), while continuing to make errors with question sentences containing other (often less frequent) combinations of wh-words and auxiliary verbs: “Why he can’t come?”
The main response of universal grammarians to such findings is that children have the competence with grammar but that other factors can impede their performance and thus both hide the true nature of their grammar and get in the way of studying the “pure” grammar posited by Chomsky’s linguistics. Among the factors that mask the underlying grammar, they say, are immature memory, attention and social capacities.
Yet the Chomskyan interpretation of the children’s behavior is not the only possibility. Memory, attention and social abilities may not mask the true status of grammar; rather they may well be integral to building a language in the first place. For example, a recent study co-authored by one of us (Ibbotson) showed that children’s ability to produce a correct irregular past tense verb — such as “Every day I fly, yesterday I flew” (not “flyed”) — was associated with their ability to inhibit a tempting response that was unrelated to grammar. (For example, to say the word “moon” while looking at a picture of the sun.) Rather than obscuring the pure grammar posited by Chomskyan linguistics, these mental faculties (memory, analogy making, attention and reasoning about social situations) may explain why language develops as it does.
As with the retreat from the cross-linguistic data and the tool-kit argument, the idea of performance masking competence is also pretty much unfalsifiable. Retreats to this type of claim are common in declining scientific paradigms that lack a strong empirical base — consider, for instance, Freudian psychology and Marxist interpretations of history.
Even beyond these empirical challenges to universal grammar, psycholinguists who work with children have difficulty conceiving theoretically of a process in which children start with the same algebraic grammatical rules for all languages and then proceed to figure out how a particular language — whether English or Swahili — connects with that rule scheme. Linguists call this conundrum the linking problem, and a rare systematic attempt to solve it in the context of universal grammar was made by Harvard University psychologist Steven Pinker for sentence subjects. Pinker’s account, however, turned out not to agree with data from child development studies or to be applicable to grammatical categories other than subjects. And so the linking problem — which should be the central problem in applying universal grammar to language learning — has never been solved or even seriously confronted.
An alternative view
All of this leads ineluctably to the view that the notion of universal grammar is plain wrong. Of course, scientists never give up on their favorite theory, even in the face of contradictory evidence, until a reasonable alternative appears. Such an alternative, called usage-based linguistics, has now arrived. The theory, which takes a number of forms, proposes that grammatical structure is not innate. Instead grammar is the product of history (the processes that shape how languages are passed from one generation to the next) and human psychology (the set of social and cognitive capacities that allow generations to learn a language in the first place). More important, this theory proposes that language recruits brain systems that may not have evolved specifically for that purpose, and so it differs from Chomsky’s idea of a single-gene mutation for recursion.
In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools — such as categorization, the reading of communicative intentions and analogy making — with which children build grammatical categories and rules from the language they hear around them.
For instance, English-speaking children understand “The cat ate the rabbit,” and by analogy they also understand “The goat tickled the fairy.” They generalize from hearing one example to another. After enough examples of this kind, they might even be able to guess who did what to whom in the sentence “The gazzer mibbed the toma,” even though some of the words are literally nonsensical. The grammar must be something they discern beyond the words themselves, given that the sentences share little in common at the word level.
The meaning in language emerges through an interaction between the potential meaning of the words themselves (such as the things that the word “ate” can mean) and the meaning of the grammatical construction into which they are plugged. For example, even though “sneeze” is in the dictionary as an intransitive verb that only goes with a single actor (the one who sneezes), if one forces it into a ditransitive construction — one able to take both a direct and indirect object — the result might be “She sneezed him the napkin,” in which “sneeze” is construed as an action of transfer (that is to say, she made the napkin go to him). The sentence shows that grammatical structure can make as strong a contribution to the meaning of the utterance as do the words. Contrast this idea with that of Chomsky, who argued there are levels of grammar that are free of meaning entirely.
The concept of the Swiss Army knife also explains language learning without any need to invoke two phenomena required by the universal grammar theory. One is a series of algebraic rules for combining symbols — a so-called core grammar hardwired in the brain. The second is a lexicon — a list of exceptions that cover all of the other idioms and idiosyncrasies of natural languages that must be learned. The problem with this dual-route approach is that some grammatical constructions are partially rule-based and also partially not — for example, “Him a presidential candidate?!” in which the subject “him” retains the form of a direct object but with the elements of the sentence not in the proper order. A native English speaker can generate an infinite variety of sentences using the same approach: “Her go to ballet?!” or “That guy a doctor?!” So the question becomes, are these utterances part of the core grammar or the list of exceptions? If they are not part of a core grammar, then they must be learned individually as separate items. But if children can learn these part-rule, part-exception utterances, then why can they not learn the rest of language the same way? In other words, why do they need universal grammar at all?
In fact, the idea of universal grammar contradicts evidence showing that children learn language through social interaction and gain practice using sentence constructions that have been created by linguistic communities over time. In some cases, we have good data on exactly how such learning happens. For example, relative clauses are quite common in the world’s languages and often derive from a meshing of separate sentences. Thus, we might say, “My brother … He lives over in Arkansas … He likes to play piano.” Because of various cognitive-processing mechanisms — with names such as schematization, habituation, decontextualization and automatization — these phrases evolve over long periods into a more complex construction: “My brother, who lives over in Arkansas, likes to play the piano.” Or they might turn sentences such as “I pulled the door, and it shut” gradually into “I pulled the door shut.”
What is more, we seem to have a species-specific ability to decode others’ communicative intentions — what a speaker intends to say. For example, I could say, “She gave/bequeathed/sent/loaned/sold the library some books” but not “She donated the library some books.” Recent research has shown that there are several mechanisms that lead children to constrain these types of inappropriate analogies. For example, children do not make analogies that make no sense. So they would never be tempted to say “She ate the library some books.” In addition, if children hear quite often “She donated some books to the library,” then this usage preempts the temptation to say “She donated the library some books.”
Such constraining mechanisms vastly cut down the possible analogies a child could make to those that align the communicative intentions of the person he or she is trying to understand. We all use this kind of intention reading when we understand “Can you open the door for me?” as a request for help rather than an inquiry into door-opening abilities.
Chomsky allowed for this kind of “pragmatics” — how we use language in context — in his general theory of how language worked. Given how ambiguous language is, he had to. But he appeared to treat the role of pragmatics as peripheral to the main job of grammar. In a way, the contributions from usage-based approaches have shifted the debate in the other direction to how much pragmatics can do for language before speakers need to turn to the rules of syntax.
Usage-based theories are far from offering a complete account of how language works. Meaningful generalizations that children make from hearing spoken sentences and phrases are not the whole story of how children construct sentences either — there are generalizations that make sense but are not grammatical (for example, “He disappeared the rabbit”). Out of all the possible meaningful yet ungrammatical generalizations children could make, they appear to make very few. The reason seems to be they are sensitive to the fact that the language community to which they belong conforms to a norm and communicates an idea in just “this way.” They strike a delicate balance, though, as the language of children is both creative (“I goed to the shops”) and conformative to grammatical norms (“I went to the shops”). There is much work to be done by usage-based theorists to explain how these forces interact in childhood in a way that exactly explains the path of language development.
A look ahead
When the Chomskyan paradigm was proposed, it was a radical break from the more informal approaches then prevalent, and it drew attention to all the cognitive complexities involved in becoming competent at speaking and understanding language. But at the same time that theories such as Chomsky’s allowed us to see new things, they also blinded us to other aspects of language. In linguistics and allied fields, many researchers are becoming ever more dissatisfied with a totally formal language approach such as universal grammar — not to mention the empirical inadequacies of the theory. Moreover, many modern researchers are also unhappy with armchair theoretical analyses, when there are large corpora of linguistic data — many now available online — that can be analyzed to test a theory.
The paradigm shift is certainly not complete, but to many it seems that a breath of fresh air has entered the field of linguistics. There are exciting new discoveries to be made by investigating the details of the world’s different languages, how they are similar to and different from one another, how they change historically, and how young children acquire competence in one or more of them.
Universal grammar appears to have reached a final impasse. In its place, research on usage-based linguistics can provide a path forward for empirical studies of learning, use and historical development of the world’s 6,000 languages.
Inside the Attica prison uprising: “We’re all less than nothing to the people that matter”
The aftermath of the riot at Attica prison in September 1971. (Credit: AP)
Excerpted from “The Butler’s Child: An Autobiography.”
A flash came across the morning news on September 9, 1971, that a riot had broken out at Attica, an upstate New York penitentiary. The inmates had taken over a part of the prison and were holding some guards as hostages. I immediately thought of my client Tony Maynard, who was incarcerated there. Almost simultaneously the phone rang. It was Dotty Stoub from the National Lawyers Guild.
A post-breakfast scuffle and a defective bolt in a central gate at Attica had literally opened the doors to a full-spectrum revolt. Buildings were set on fire, and forty-two prison employees were taken hostage.
One guard was in extremely critical condition. About a thousand of the more than two thousand inmates housed in the severely overcrowded prison had seized a central hub called “Times Square” and occupied D yard — one of four large exercise areas at the center of the medieval-looking walled fortress. Inmates waved baseball bats. They turned prison blankets into ponchos, undershirts into do-rags and kaffiyehs. They thrust fists into the air and shouted “Black Power!” while others dug trenches and huddled to prepare for battle. Leaders emerged and began issuing demands to the prison administration. A few prisoners roamed the yard wearing football helmets. It was chaos.
I was sitting in my kitchen when Dotty called. My kids had just finished breakfast. There was a cup of coffee in front of me. I had recent experience with prison uprisings in the New York State system. Dotty told me what she knew about the situation at Attica, which wasn’t much.
The inmates were asking for observers, and a prison activist, probably someone from Youth Against War and Fascism (YAWF), had called the guild. And I was the right person to go. I had spent my entire career becoming the right person to go. A thirty-four-year-old former NAACP trial lawyer, I had been the protégé of the legendary civil rights attorney Robert L. Carter. In fact I had just started at the NAACP when Carter was working on Gaynor v. Rockefeller, an employment-discrimination class-action suit brought against New York’s then-governor Nelson Rockefeller, who, it turned out, would be the only person with the authority to end the crisis at Attica. In addition, four months earlier I had helped represent the Auburn Six, a group of prisoners from the Auburn Correctional Facility who were awaiting trial for doing more or less the same thing that was going on at Attica, only in that case no prison employees were harmed.
While Dotty was talking, my double life struck me. I already knew I was going, and I could see it in my mind’s eye. The prison yard at Attica would be filled with desperate men who faced consequences from the state that beggared the imagination. And the prisoners’ only real hope was that the activists who were summoned to be on the observers’ committee might somehow do something to avert bloodshed. Immediately the old familiar conflicts stared back at me. The facts were anything but simple. I had three little kids and my wife, Kitty, and we were concerned that I might be putting myself in harm’s way.
***
I left for Attica wearing a tan polyester summer suit with my banged-up leather briefcase holding some work papers, a change of underwear and a few basic toiletries. I had mutton-chop sideburns and wore horn-rimmed glasses. My hair was black and bushy. I walked past the doorman and the pretty flower arrangement in our lobby to hail a cab for LaGuardia Airport, where a plane would take me to Buffalo. It was sunny and warm out — almost fall.
In addition to Tony Maynard among the prisoners at Attica was Sam Melville, a young man from the Weather Underground, a radical organization that had split away from the Students for a Democratic Society (SDS) to, it said, bring the Vietnam War home to America. He was a client of my partner, Henry diSuvero. Tony being there was definitely a motivating factor for me, but I’m not sure I knew Sam was there until I saw him in D yard.
Maynard had been wrongfully accused of a 1967 shotgun killing in Greenwich Village, convicted of manslaughter, and sentenced to ten to twenty years. Using a shotgun as the murder weapon was completely out of character for this stylish man with an artist’s sensibility. The authors James Baldwin and William Styron, who knew Tony, and the editorial chairman and columnist of the then liberal New York Post, James Wechsler, had made a considerable amount of noise about the wrongful conviction, but it didn’t matter. As I saw it, the “crime” Tony committed was being black. Making matters worse, Tony had a beautiful white wife, and the two of them had spent enough time making the scene in Greenwich Village to become a target. As Baldwin would later tell me, more than being black, Tony became a target because he was “arrogant and didn’t know his place.”
I agreed with Baldwin. It certainly didn’t help that Tony had what you might call an attitude problem, but fighting the prevailing winds of racial prejudice in the 1960s criminal court system was more often than not impossible.
I had tried Tony’s murder case, and I bonded with him during the long days we spent together and the discussions on weekends and after court. When Dotty said “Attica,” I heard “Tony Maynard.” He was transferred there from the Green Haven Correctional Facility, where I had recently visited him in what was called “the Hole.” He was disciplined a lot, and was not what one might call a model prisoner. Well spoken, smart, unbending, and rebellious, Tony had all the qualities a prison guard would be unlikely to tolerate. He would make a tempting target when authorities put down the rebellion, which I assumed would happen — maybe even before I could get there.
Tony was wearing a tattered tailored suit — he refused to wear prison clothes — when I caught sight of him in D yard, which we entered with the state corrections commissioner, Russell B. Oswald, to negotiate with the leadership. Tony looked pretty out of place, more like one of the observers than a participant among the thousand or so black, Latino, and white convicts milling around D yard preparing to defend their revolution.
Tony, whose presence made me feel more secure in the chaos of the yard, said, “Once the hacks are back in control, you can forget racial harmony,” adding, “Nothing good can come of this.” Surveying his fellow prisoners waving homemade flags and chanting “Black Power!” he added contemptuously: “They’re all so blind. Today they’re kings. They think the world will listen. The TV cameras and negotiations add to the illusion. But no one really cares what happens to a bunch of convicts and the clock-punchers who run an asylum run amok. We’re all less than nothing to the people that matter.”
I shared Tony’s ambivalence about the sort of canned big-talk-but-often-empty radical rhetoric that had emerged from the heyday of the civil rights movement and migrated into the prisons.
Martin Luther King, Jr., once said, “A riot is the language of the unheard.” What happened at Attica came close to King’s definition. Before they rampaged through the prison, the inmates were an unheard group of people who now had access to the outside world. No one listened to them or even gave them a name. To the all-white guards who controlled their lives, their skin color denoted them as subhuman beings. Their only strength came from communication. That’s why what happened at Attica was different from a riot. It was an uprising. But unlike the few uprisings that have succeeded, there was no way the prisoners would be able to hold on to the territory they had taken, and failure appeared to be a given. The authorities had a stranglehold on the prisoners trapped in the yard they had seized; only the observers could open a dialogue to keep it from turning into a bloodbath, but the odds of either side listening were slim. That’s where things stood. Blacks were fed up. Jim Crow and other forms of apartheid like school segregation were now against the letter of the law, but still the norm all over the country and held in place by force and more passive forms of economic domination. Whites also were angry about the threat of black demands for a share of what they saw as their jobs, and the right to move into their neighborhoods and go to their schools. There was a lot of fear all around, but almost no willingness — or perhaps better, capacity — to occupy the gray area where race issues could evolve and change. As a not-quite-radical, not-quite-mainstream civil rights lawyer, I sensed how difficult it would be to find that gray area in the Attica yard.
The other prisoner I knew about at Attica was Sam Melville. As a white man, he was definitely in the minority there. Because he was my partner’s client, Sam sought me out in D yard. He had been convicted for a string of highly publicized Weather Underground bombings that took place in 1969.
When Melville saw me, he talked his way through the phalanx of prisoners guarding the negotiators.
“They’re going to come looking for me,” Sam said, in a matter-of-fact way. “And I’ll be here. I’m a dead man.”
“Is there anything I can do?” I asked.
He shook his head. We exchanged a few words, shook hands, and he disappeared back into the crowd.
After it was all over, there were reports that some of the prisoners who led the rebellion were killed long after authorities regained control of the facility. Sam Melville was one of the people mentioned on that list, though he was not part of the leadership. After retaking the prison, state spin doctors said that Melville got shot while trying to explode a fifty-gallon fuel tank. They said he had four Molotov cocktails.
It made no sense. The uprising was over. It would have been suicide, and I saw no inkling that Melville had that kind of ending in mind. To the contrary, the Weathermen issued warnings and planned their bombings to avoid hurting anyone.
***
Forty years after the Attica prison uprising was crushed, tapes were released on a Freedom of Information Act request that recorded conversations between Governor Rockefeller and President Richard Nixon discussing the retaking of Attica. The “silent majority” point of view is unmistakable:
“Tell me,” Nixon began one of the conversations. “Are these primarily blacks that you’re dealing with?”
“Oh, yes,” Rockefeller replied. “The whole thing was led by the blacks.”
“I’ll be darned,” President Nixon replied affably. “Are all the prisoners that were killed blacks? Or are there any white . . .”
“I haven’t got that report,” the governor replied, “but I’d have to — I would say just off hand, yes. We did [it] though, only when they were in the process of murdering the guards, or when they were attacking our people as they came in to get the guards.”
“You had to do it,” Nixon said, as if he were reassuring himself.
In reality Rockefeller didn’t have to do it. After four days of unrest and disorder, things were starting to fray. The weather was horrible. Conditions in D yard were bad and getting worse. Nixon was wrong. I was there. Rockefeller wasn’t. Everyone just needed to be patient. If we couldn’t talk it out, we could wait it out. Rockefeller didn’t want to wait it out. He wanted to make a point. As New York City’s most prominent Puerto Rican politician at the time, Herman Badillo, said, “There’s always time to die.” The claim that prisoners were “in the process of murdering the guards” was a bald-faced lie. Whether Rockefeller was repeating bad information or made it up out of whole cloth is unclear. After the lie became accepted truth in the public imagination, autopsies showed that troopers — not the prisoners — killed the nine prison guards that Monday morning. As for the racial makeup of the prisoners, Rockefeller was wrong about that too, unless he unconsciously lumped Puerto Ricans and blacks together under the heading of “minority” and never got word of the whites in that ocean of rage.
Either way, you get the picture.
***
A few hours after troopers retook the prison, I was in the back of a cab heading south on Central Park West feeling defeated, angry, and depressed. I came home wearing the same suit. I stank. Where there had been a toehold to push against what looked like an impending disaster and a sense of mission when I left, there was now a massacre. I feared Maynard was dead. I wondered if any of the inmate leadership had survived. For days afterward my calls to the prison went unanswered.
While we were waiting for the light to change, I remember looking at the Dakota where the rich and famous lived, with its Victorian gas lamps and bathysphere-like guard booth. We rolled to a stop at my building six blocks south, just above Columbus Circle. I don’t recall who the doorman was that night, or the floor captain. I noted the difference between the stewards’ room at Attica, where the observers’ committee was camped out, and the shimmering terrazzo floors of the lobby as I trudged toward the elevator at the far end of the southern hallway. The elevator man deposited me on the semiprivate landing my family shared with one other apartment. I could hear the sounds of daily life on the other side of our door. My three kids and Kitty were in there safe and sound. The door was unlocked. That familiar feeling that I led a double life was strong as I stood there with my hand resting on the doorknob. I turned it and opened the door. In the foyer my four-year-old, Patrick, came shooting past with a quick hello. I went to our bedroom to change, gathered all the clothes I’d been wearing, and threw them in the garbage.
There was a message waiting for me on the table from The David Frost Show, a big television program at the time. They wanted me to be a guest that night. Frost was hosting a special panel on what had happened that morning. I would join Senator John Dunne, Leo Zeferetti, the head of the Correction Officers’ Benevolent Association, and Clarence Jones. Although I was on the show, you won’t find my name in the online listing of who appeared that night. David Frost turned to me early for comment, which is the one and only reason I’m not listed as one of the guests. I was exhausted and angry, and to this day I don’t regret a thing about what I said. I don’t remember what Frost asked me. I do remember attacking Rockefeller: “He only cares about his class prerogatives. The white guards didn’t matter any more than the black prisoners to him. They were all expendable.”
Cutting me off, Frost turned to cooler, safer voices for the rest of the discussion.
From THE BUTLER’S CHILD by Lewis M. Steel and Beau Friedlander. Copyright © 2016 by the author and reprinted with permission of Thomas Dunne Books, an imprint of St. Martin’s Press, LLC.
The class struggle is real: India is making labor history with the world’s largest general strike
(Credit: Reuters/David McNew)
This article originally appeared on AlterNet.
Trade union leaders are reluctant to say how many people struck work on September 2, 2016. They simply cannot offer a firm number. But they do say that the strike — the seventeenth general strike since India adopted its new economic policy in 1991 — has been the largest ever. The corporate news media — no fan of strikes — reported that the number of strikers exceeded the estimated 150 million workers. A number of newspapers suggested that 180 million Indian workers walked off the job. If that is the case, then this is the largest reported general strike in history.
And yet, it has not been given much consideration in the media. Few front page stories, fewer pictures of marching workers outside their silent factories and banks, tea gardens and bus stations. The sensibility of individual journalists can only rarely break through the wall of cynicism built by the owners of the press and the culture they would like to create. For them, workers’ struggles are an inconvenience to daily life. It is far better for the corporate media to project a strike as a disturbance, as a nuisance to a citizenry that seems to live apart from the workers. It is middle-class outrage that defines the coverage of a strike, not the issues that move workers to take this heartfelt and difficult action. The strike is treated as archaic, as a holdover from another time. It is not seen as a necessary means for workers to voice their frustrations and hopes. The red flags, the slogans and the speeches — these are painted with embarrassment. It is as if turning one’s eyes from them would somehow make them disappear.
Deprivation
A leading international business consultancy firm reported a few years ago that 680 million Indians live in deprivation. These people — half the Indian population — are deprived of the basics of life such as food, energy, housing, drinking water, sanitation, health care, education and social security. Most of India’s workers and peasants are among the deprived. Ninety percent of India’s workers are in the informal sector, where protections at the workplace are minimal and their rights to form unions virtually non-existent. These workers are not marginal to India’s growth agenda. In 2002, the National Commission on Labor found that “the primary source of future work for all Indians” would be in the informal sector, which already produced over half the Gross Domestic Product. The future of Indian labor, then, is informal, with occasional rights delivered to prevent grotesque violations of human dignity. Hope for the Indian worker is simply not part of the agenda of the current dispensation in India.
Prime Minister Narendra Modi, who once more zipped off as part of his endless world tour, did not pay heed to these workers. His goal is to increase India’s growth rate, which — as judged by his record as Chief Minister of the State of Gujarat — can be accomplished by a cannibal-like attitude towards workers’ rights and the livelihood of the poor. Selling off state assets, giving hugely lucrative deals to private business and opening the doors of India’s economy to Foreign Direct Investment are the mechanisms to increase the growth rate. None of these strategies, as even the International Monetary Fund acknowledges, will lead to social equality. This growth trajectory leads to greater inequality, to less power for workers and more deprivation.
Class struggle
Only 4 percent of the Indian workforce is in unions. If these unions merely fought to defend their tenuous rights, their power would erode even further. Union power has suffered greatly since the Indian economy liberalized in 1991, with Supreme Court judgments against union democracy and with the global commodity chain pitting Indian workers against workers elsewhere. It is to the great credit of the Indian trade unions that they have embraced — in different tempos — the labor conditions and living conditions of workers and peasants in the informal sector. What power remains with unions can only grow if they do what they have been doing — namely, to turn towards the immense mass of the informal workers and peasants and draw them into the culture of unions and class struggle.
The class struggle is not the invention of the unions or the workers. It is a fact of life for labor in the capitalist system. The capitalist, who buys the labor power of workers, seeks to make that labor power as efficient and productive as possible. The capitalist retains the gains from this productivity, sloughing off the worker to their slums at night to find a way to get the energy to come back the next day. It is this pressure to be more productive and to donate the gains of their productivity to the capitalist that is the essence of the class struggle. When the worker wants a better share of the output, the capitalist does not listen. It is the strike — an invention of the 19th century — that provides the workers with a voice to enter the class struggle in a conscious way.
In India, the first strike was in April-May 1862, when the railway workers of Howrah Railway Station struck over the right to an eight-hour work day. Whatever inconvenience the strike causes the middle class has to be weighed against the daily inconveniences that workers endure as their extra productivity is seized by the capitalists. Those workers in 1862 did not want an interminable 10-hour shift that depleted them of their life. Their strike allowed them to say: we will not work more than eight hours. The critic of the strike will say, surely there are other ways to get your voice heard. No other way has been shown to the worker, who has neither the political power to lobby nor the economic power to dominate the media. The worker is silent, but for these festivals of the working class.
From Gujarat to Kerala
Workers in Narendra Modi’s home state of Gujarat joined the strike with great enthusiasm. This included over 70,000 crèche and mid-day meal workers as well as port workers in Bhavnagar. Garment workers in Tamil Nadu and automobile factory workers in Karnataka closed their shops. Bank and insurance employees joined power loom operators and iron ore miners, while transport workers across the country decided to stand outside their bus and truck depots. Communist unions joined with other unions to ensure the widest mobilization of workers.
Each local union in this strike had its own grievances, its own worries and frustrations. But the broad issues that united these millions of workers revolved around the demand for workplace democracy, the demand for a greater share of the social wealth and the demand for a less toxic social landscape. The workers — through their unions — took their 12-point demands to the government, which ignored them. At the last minute, when it seemed as if the strike would be robust, the government attempted to deliver small concessions. This was not sufficient. It was, as the labor unions put it, an insult. There is no expectation that the strike itself would lead to major concessions from the government. After all, last year, 150 million workers went on strike and the government did not shift from its anti-worker policies. Instead, the government of Narendra Modi deepened its commitment to “labor market reforms” — namely to eviscerate unions and to enhance the right to fire workers at will.
What the strike says is that India’s workers remain alive to the class struggle. They have not surrendered to reality. In 1991, when the government decided to open the economy to the turbulent interests of global capital, the workers rebelled. In August 1992, textile workers in Bombay took to the streets in their undergarments — they declared that the new order would leave them in abject poverty. Their symbolic gesture is the current reality.
A little less “conversation,” a little more action on trans representation
Eddie Redmayne in "The Danish Girl;" Jeffrey Tambor in "Transparent;" Jared Leto in "Dallas Buyers Club"
Last week, things got rough for Mark Ruffalo. LGBTQ Twitter didn’t take kindly to the news that yet another non-trans actor would be assuming a trans role in a film. This time around, it was Matt Bomer, who was cast as a trans woman in the recently announced film, “Anything.” Executive produced by Ruffalo, “Anything” centers on a trans sex worker (Bomer) and her friendship with a grieving, suicidal man (John Lynch).
Trans actress Jen Richards (“Her Story”), who auditioned for another role in the film, shared, “I auditioned for this [film]. I told them they shouldn’t have a cis man play a trans woman. They didn’t care.”
In response to the outrage, Mark Ruffalo tweeted, “To the Trans community. I hear you. It’s wrenching to see you in this pain. I am glad we are having this conversation. It’s time.” And then, “In all honesty I suggested Matt for the role after the profound experience I had with him while making ‘The Normal Heart’.”
In one response, he explained the film was already in post-production, pleading, “Matt poured his heart and soul into this part. Please have a little compassion. We are all learning.” Later, he reveled in getting “woke to the Transgender Experience” and somewhat ironically linked to “We’ve Been Around,” Rhys Ernst’s lovely mini docu-series about how transgender people have literally always existed. Ruffalo is late to the party — but welcome!
All sensors indicate Mark Ruffalo is a well-intentioned, conscientious dude. He founded his own clean-water nonprofit, tweets vigorously about social and political issues, and I’d probably cast him to be my celebrity Dad. So what did he miss here?
There is a strain of angst in progressive allies like Ruffalo regarding the negative public feedback about benevolent attempts to tell trans stories without trans people. As a media-saturated, 23-year-old trans writer, I would like to clarify some points regarding trans representation:
Media made about us is almost never made for us.
Media made about us is almost never made by us.
Much trans-centric media exploits the public’s interest in trans stories (and objectifies our bodies) without providing professional opportunities for actual trans folk.
There is a difference between telling trans stories and representing trans lives.
“Learning” includes making mistakes, but not the same mistakes over and over.
Non-trans people can certainly play trans roles; the problem is that they do so almost all the time.
You cannot tell someone else how to feel about their own media representation, or how to react to it.
Nick Adams, director of GLAAD’s Transgender Media Program, responded to Bomer’s casting in a recent editorial, arguing that casting cisgender male actors to play trans women teaches viewers that being trans is “just a matter of playing dress-up,” and that a “transgender woman really is a man.”
Adams points out that trans performers have led or featured in many recent successful media projects, citing “I Am Jazz,” “Sense8,” “Transparent” and “Orange Is the New Black” star Laverne Cox’s upcoming CBS series, “Doubt.” And yet, as he writes, “Hollywood is having a very difficult time letting go of the idea that putting a male actor in a dress, wig and makeup is an accurate portrayal of a transgender woman.”
However progressive TV might be, Hollywood lags behind. Trans stories have only recently been legitimized in wide-release films like “The Danish Girl,” and only with bankable actors, like Eddie Redmayne or Jared Leto, cast in the trans roles. While I have no doubt that trans stereotypes exist in Hollywood, there are many other systemic factors that favor casting an actor like Bomer over a trans unknown.
Julia Serano, biologist, trans woman, and author of the mind-splintering essay collection “Whipping Girl,” recently wrote about the issue on Medium. Serano asserts that the question of trans representation has become a kind of meme that “offers a simplistic solution (don’t cast cis actors!) to the extraordinarily complex issue of transgender stereotypes and media representations.”
Regarding Adams’ point about reinforcing toxic stereotypes, Serano argues that the gender identity of an actor isn’t necessarily essential to an “authentic” and resonant portrayal of a trans person. Actors are only superficial targets, and the conversation needs to go deeper. Serano urges, for example, casting trans performers in cisgender roles.
But what about “the conversation?” You know, the conversation Ruffalo references in his tweets?
In its weakest form, “the conversation” is a kind of classic damage-control phenomenon that follows many public outcries of legitimate concern. It is, sadly, often a consolation prize for change. Having or continuing a “conversation” allows people who make media about trans people, without trans people, to brush off criticism and perpetuate false promises. More to the point, conversations generally center around the one episode or issue currently under scrutiny, when we need a much larger, more expansive discussion.
A strong, sincere discussion really can raise awareness and instigate change. And that’s why I don’t care that much about reversing Bomer’s casting or boycotting the film. I don’t want Mark Ruffalo to statically react and apologize to a one-note issue. I want him to genuinely wake up to the reality of disparities in trans employment and authorship in media.
I am a 23-year-old white trans guy. I went to college for film and media studies. Do you know how many times I saw a trans guy represented in a class screening or reading? Never. I was not once represented in my own curriculum, and it was only when watching “Transparent” my senior year that I saw a trans man played by a trans man on TV (Ian Harvie).
“Transparent” is a good example of the nuances inherent in trans representation at its forefront. The show’s creator Jill Soloway envisioned the lead role of trans matriarch Maura for actor Jeffrey Tambor, and cast it thus. But Soloway also hired (and maintains) a production team, crew and writer’s room loaded with queer and trans identities. She took the issue of representation seriously rather than brushing it aside, because she knew it was good for the quality of her show and for the talented trans folk in her employ and beyond.
So who is empowering young trans people like me to feel that this part of our identity is worth sharing, or that our stories are worth telling, long-term?
It’s other trans artists and professionals, mainly. Writers and advocates like Julia Serano, Janet Mock and Thomas Page McBee, the video artist Wu Tsang, filmmakers Lilly and Lana Wachowski, photographer Amos Mac, YouTubers Skylar Kergil and Kat Blaque, producers Zackary Drucker and Rhys Ernst. The stars of the acclaimed Sundance film “Tangerine,” Mya Taylor and Kiki Rodriguez.
Representation does not only mean movies about trans people, but also movies with trans people. It is about supporting trans people’s participation and success in a sphere that has previously expelled, mocked or ignored them. It is championing the full complexity of representation — and engaging that complexity without tapping out — that can tire even the sturdiest of allies and trans folk alike. (The difference? We don’t get to tap out of the conversation.)
I can appreciate a great, authentic trans performance from a cisgender actor (I recall loving Lee Pace in “Soldier’s Girl”), and I can appreciate a great, authentic trans performance from a trans actor. But what I really want is access and possibility extending beyond an individual’s hyper-scrutinized relationship to a single role. I want more auditions for my trans peers, more projects that welcome their involvement, and more auditions coordinated by them for their own projects.
So I’ll keep being positive and frustrated at the same time. Because I know we will do better, and I know Mark Ruffalo can handle a call-out. In fact, I challenge him to.
Robert Reich: There’s one big unfinished promise by Bill Clinton that Hillary should put to bed
Bill Clinton (Credit: Reuters/Brian Snyder)
This originally appeared on Robert Reich’s blog.
What can be done to deter pharmaceutical companies from jacking up prices of critical drugs? To prevent Wall Street banks from excessive gambling? To nudge CEOs into taking a longer-term view? To restrain runaway CEO pay?
Answer to all four: Fulfill Bill Clinton’s 1992 campaign pledge.
When he ran for president, Bill Clinton said he’d bar companies from deducting executive pay above $1 million. Once elected, he asked his economic advisors (among them, yours truly) to put the measure into his first budget.
My colleagues weren’t exactly enthusiastic about the new president’s campaign promise. “Maybe there’s some way we can do this without actually limiting executive pay,” one said.
“Look, we’re not limiting executive pay,” I argued. “Companies could still pay their executives whatever they wanted to pay them. We’re just saying society shouldn’t subsidize through the tax code any pay over a million bucks.”
They weren’t convinced.
“Why not require that pay over a million dollars be linked to company performance?” said another. “Executives have to receive it in shares of stock or stock options, that sort of thing. If no linkage, no deduction.”
“Good idea,” a third chimed in. “It’s consistent with what the President promised, and it won’t create flak in the business community.”
“But,” I objected, “we’re not just talking about shareholders. The pay gap is widening in this country, and it affects everybody.”
“Look, Bob,” said the first one. “We shouldn’t do social engineering through the tax code. And there’s no reason to declare class warfare. I think we’ve arrived at a good compromise. I propose that we recommend it to the President.”
The vote was four to one. The measure became Section 162(m) of the Internal Revenue Code. It was supposed to cap executive pay. But it just shifted executive pay from salaries to stock options.
After that, not surprisingly, stock options soared — becoming by far the largest portion of CEO pay.
When Bill Clinton first proposed his plan, compensation for CEOs at America’s 350 largest corporations averaged $4.9 million. By the end of the Clinton administration, it had ballooned to $20.3 million. Since then, it’s gone into the stratosphere.
And because corporations can deduct all this from their corporate income taxes, you and I and other taxpayers have been subsidizing this growing bonanza.
Hillary Clinton understands this. “When you see that you’ve got CEOs making 300 times what the average worker’s making you know the deck is stacked in favor of those at the top,” she has said in her presidential campaign.
And she’s taken direct aim at executive stock options.
“Many stock-heavy pay packages have created a perverse incentive for executives to seek the big payouts that could come from a temporary rise in share price,” she said in July. “And we ended up encouraging some of the same short-term thinking we meant to discourage.”
Yes, we did. Specifically, her husband and his economic team did.
Case in point: In 2014, pharmaceutical company Mylan put in place a one-time stock grant worth as much as $82 million to the company’s top five executives if Mylan’s earnings and stock price met certain goals by the end of 2018.
But the executives would get nothing if the company — whose star product is the EpiPen allergy treatment — failed to meet the target. Almost immediately, Mylan began stepping up the pace of EpiPen price increases. The price of an EpiPen two-pack doubled to $600 — a move Hillary Clinton has rightfully called “outrageous.”
Stock options doled out to Wall Street executives in the early 2000s didn’t exactly encourage good behavior, either. They contributed to the near meltdown of the Street and a taxpayer-funded bailout.
Now that Wall Street is no longer restrained by the terms of the bailout, it’s back issuing stock options with a vengeance.
According to a recent report from the Institute for Policy Studies, the top 20 banks paid their executives over $2 billion in performance bonuses between 2012 and 2015. That translates into a taxpayer subsidy of $1.7 million per executive per year.
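For readers who want to see how a number like that can pencil out, here is a rough back-of-the-envelope sketch. The assumption that the bonuses cover roughly the five highest-paid executives at each of the 20 banks, and the 35 percent statutory corporate tax rate, are mine for illustration; they are not taken from the Institute for Policy Studies report itself.

```python
# Back-of-the-envelope check of the bonus-deduction subsidy figure.
# Assumed (not from the IPS report): roughly 5 top executives per bank,
# and a 35 percent statutory corporate tax rate during 2012-2015.

total_bonuses = 2_000_000_000      # performance bonuses paid, 2012-2015
corporate_tax_rate = 0.35          # statutory federal rate in that period
num_banks = 20
execs_per_bank = 5                 # assumption
years = 4                          # 2012 through 2015

tax_subsidy = total_bonuses * corporate_tax_rate               # forgone tax revenue
per_exec_per_year = tax_subsidy / (num_banks * execs_per_bank * years)

print(f"Total taxpayer subsidy: ${tax_subsidy:,.0f}")          # about $700 million
print(f"Per executive, per year: ${per_exec_per_year:,.0f}")   # about $1.75 million
```

Under those assumptions the math lands near the $1.7 million per executive per year cited above; change the assumptions and the figure moves, but the mechanism stays the same: every deductible dollar of performance pay shrinks the bank’s tax bill, and taxpayers absorb the difference.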
Hillary Clinton has proposed penalizing pharmaceutical companies like Mylan that suddenly jack up the prices of crucial drugs. And she’s promised to go after big banks that make excessively risky bets.
These are useful steps. But she should also consider a more basic measure, which would better align executive incentives with what’s good for the public.
It’s doing what her husband pledged to do in 1992, if elected president — but which his economic advisors then sabotaged: Bar corporations from deducting all executive pay in excess of $1 million. Period.
Trump and the media: An outrageous celebu-candidate, plus the debased culture of journalism, produced this disaster
Jake Tapper; Matt Lauer; Megyn Kelly (Credit: AP/Andrew Harnik/Charles Sykes/Getty/Alex Wong)
In the wake of Matt Lauer’s dismal performance as moderator and/or Trump-enabler during this week’s presidential campaign forum, we’ve gotten to experience another round of hand-wringing about the nature of the parasitic or symbiotic relationship between the Republican nominee and the media. Most of this hand-wringing occurred in the media, of course, and there’s nothing the members of that tribe (OK, it’s my tribe too) enjoy more than inflating their own importance, gazing into their navels at great length and telling each other to repent of their evil ways and don sackcloth and ashes.
Still, there are important questions at play here, in a year when American democracy seems to be on the critical list and journalists seem torn between playing doctor, playing priest and playing the drunk guy at the wake who barely knew the deceased but makes a long speech anyway. (That was a deeply Irish analogy; I apologize.) Some of the questions are obvious and have easy answers: Matt Lauer is a lightweight TV personality (i.e., an idiot) who should never have been allowed near such an important gig, and everybody who approved that idea should immediately be reassigned to “Weather and Traffic on the 6’s” at the third-biggest market in Tennessee.
Other questions are more complicated: Why haven’t journalists done their jobs with respect to Donald Trump — easily the least qualified presidential candidate in the history of everything? If Lauer was especially craven in his desire to look impartial and his unwillingness to ask Trump any meaningful follow-up questions or challenge outrageous falsehoods, he’s not alone. Jake Tapper of CNN, whom I know to be a journalist of decency and integrity (he was once Salon’s Washington correspondent), has defended his courteous demeanor toward Trump as both fair and necessary. But Tapper’s measured, cordial interviews with Trump have undeniably played into the candidate’s hands and burnished his image, at least in a semiotic sense: Never mind the nonsensical content of Trump’s responses; there he is chatting amiably with a CNN anchor, looking more or less “presidential.”
Even Megyn Kelly’s feud with Trump, back in the days when Roger Ailes still ran Fox News (and badly wanted some other Republican to win), ended in retreat and confusion. Trump’s ugly, misogynistic attacks on Fox’s star anchor briefly made Kelly into an unlikely feminist hero and threatened to torpedo Trump’s campaign early in the cycle. But Kelly was stilted and awkward in her later interactions with Trump, first as a debate moderator and then in a one-on-one interview. She seemed trapped between competing roles and competing demands: She was a journalist with a reputation for straight shooting, she was an accidental symbol of American womanhood and she was the leading voice of a right-wing network whose audience was eager to swallow Trump’s snake oil. At least you could tell Kelly really wanted to rip Trump’s face off, while Lauer just looked defeated and confused and Tapper was too consumed with perfecting his Sam Donaldson impression.
But let’s get back to the questions: What if some journalists have done their jobs, and it’s just that the public hasn’t noticed or doesn’t care? What is the media’s job in 2016, anyway? How has that changed, and why? Are we entertainers, public servants, speakers of truth to power or flunkeys on the outer edges of celebrity culture? Are we responsible for inflicting this dangerous buffoon on the country in some significant way, or are we just along for the ride?
I don’t think any of those have entirely straightforward answers, but my conclusion (self-serving as it may be) is that blaming the media for the rise of Donald Trump is a form of lazy intellectual shorthand that avoids the larger cultural, political and economic picture. It’s not entirely false, because Trump could not possibly be where he is today without the endless amounts of airtime and verbiage and bandwidth devoted to his unlikely ascendancy. But the media, and even the “corporate media,” is not some baleful monolith operating all by itself, with no context. The “Human Centipede”-style dialectic between Trump and the media, where he feeds us and we feed him — if you click that link, don’t say you weren’t warned — is fueled by intense demand on all sides. People who love Trump want more Trump to love, people who hate Trump want more to hate, and people who think we cover Trump way too much want more Trump coverage to excoriate.
Donald Trump and the media was a marriage made in heaven, or perhaps in the other place. To use Marxist terminology, it was overdetermined — driven by external and inexorable factors no one could control. Once they were together, there was no tearing them apart. If you want to argue that mainstream journalism in the United States is often mendacious and shallow, too easily distracted by meaningless scandal, false drama or superficial factoids to pay attention to what’s really going on, I’m right there with you. Has Trump’s erratic and outrageous campaign persona appealed to many of the mainstream media’s worst instincts? Absolutely. Has Trump exploited a longtime institutional weakness in the media, its internal conflict between an increasingly outmoded ethical code and a ravenous appetite for ratings at any cost? Yes, with almost surgical precision. (I’m talking to you, Jake Tapper.)
Dan Rather, who can remember every presidential campaign since Franklin D. Roosevelt’s fourth and final run in 1944, recently told Chris Hayes of MSNBC that Trump can dominate and manipulate the news cycle more effectively than any candidate he’s ever seen. Trump doesn’t always do so to his own benefit, but even when he contradicts himself, spins outrageous lies or insults the grieving parents of a fallen American soldier, he sucks up all the oxygen in the media-sphere and keeps his opponents off the stage. Trump’s farcical jaunt to Mexico City last week, followed by his rage-fueled immigration speech in Phoenix, succeeded in shoving a supposedly major foreign-policy speech by Hillary Clinton into limbo.
But when Clinton supporters wonder plaintively why the media hasn’t told the truth about Trump’s hateful and incoherent proposals or his dreadful record as a person and a businessman, I would say that many reporters and commentators have done that repeatedly, and Trump’s supporters either don’t believe it, don’t notice or don’t care. (Some of all three, but mostly the latter.) Anyone who waded through the long New York Times investigative article about Trump’s disastrous business operations in Atlantic City, for instance, wasn’t voting for him anyway.
When agonized Democrats retreat to the opposite rhetorical corner — if only the media had ignored Trump from the beginning, he might have gone away! — well, now they know how Jeb Bush felt. That one requires an elaborate alternate universe: If we didn’t live in a consumer-capitalist society where everyone walks around glued to screens all day long desperate for sensation, or in a nation with a paralyzed political system that is profoundly divided by race, economics, geography and culture, then sure. But there’s no Trump in that universe in the first place.
As I have previously argued, it’s too simplistic to say that the media created Trump, or willed him to win because he was telegenic. Trump was the right guy (or the wrong guy) to capture the attention of a large and disgruntled demographic of downscale white voters at this historical moment. Conditions had to be right, particularly the zombie-like brain-body split within the Republican Party, whose electoral base had turned against its Washington leadership. CNN and Fox News and the New York Times didn’t compel all those people to show up at Trump’s rallies or drive the turnout in state after state that produced the most votes for any candidate in GOP history. Donald Trump ran for president and millions of Americans responded with enthusiasm. It’s no good pretending they’re a bunch of brainwashed sheeple who didn’t understand what he was really like. They understood perfectly well, and they dug it.
At the same time, Trump is a media creature, who came to the 2016 race as a prefabricated personality, molded by years of reality TV, gossip shows, the business press and the New York tabloids. Trump knows almost nothing about government or world affairs or any aspect of foreign or domestic policy, and his expertise in business is debatable. But he understands the media, at least in its mass-consumption mode: He understands its cynical cast of mind and its widespread contempt for the public. He understands its ingrained belief that American cultural and political life is a symbolic spectacle with little reference to the real world. He understands its obsession with itself, a quality he shares.
There’s no question that Trump coverage has driven ratings and page views, which has driven more Trump coverage. Love her or hate her, Hillary Clinton is simply not capable of moving the dial to that extent — unless she’s being subjected to a public pillory about Benghazi and the emails, as Lauer feebly tried to do on Wednesday night. But that didn’t happen because I decided, or because Jake Tapper and Megyn Kelly decided, that we couldn’t possibly get enough Trump and wanted to cover nothing else week after week. It happened because we have a sick democracy, a debased standard of public discourse and a deeply damaged culture. All of that came together in one awful man and his awful presidential campaign, and none of us can look away.
Welcome to your local NFL stadium: Home of the sensitive and easily offended
FILE - In this Nov. 8, 2015, file photo, San Francisco 49ers quarterback Colin Kaepernick stands on the field during an NFL football game against the Atlanta Falcons in Santa Clara, Calif. Kaepernick's protest of the national anthem over what he describes as oppression of minorities in the United States is apparently winning support from some veterans on Twitter under #VeteransForKaepernick. Kaepernick said he'll continue the protest during San Francisco's preseason game at San Diego on Thursday, Sept. 1, 2016. (AP Photo/Ben Margot, File) (Credit: AP)
This piece originally appeared on BillMoyers.com.
A new NFL season started this week, and I’m going to let you in on a little secret: Sporting arenas can be a rancid soup of racism, misogyny, homophobia, jingoism and all-around alcohol-soaked nastiness. So much so that the NFL and other major league sports have had to initiate “fan conduct classes.” Which is what makes the criticism of Colin Kaepernick’s principled refusal to stand for the national anthem at NFL games so laughable. If what goes on in sporting arenas represents some kind of benchmark for the exhibition of respect and national pride, well, sorry…our country is in deep, deep trouble.
Here’s another secret: When the national anthem is playing at stadiums, fans don’t immediately drop what they are doing and stand at attention. Many unsubtly check their phones. Many are out buying hot dogs and beer (and don’t flinch in the line when the song begins). Many use the anthem as a chance to go and take a leak before kickoff. So, what? They’re all Kaepernick-esque traitors? (And, while we’re at it, let’s ask TV stations to zoom in on the owners’ luxury skyboxes during the national anthem to see exactly what is going on in there. Is the level of patriotism among NFL brass acceptable? Do we need some more deportations?)
I write this with a certain authority: I am a lifelong sports fan, especially of the San Francisco 49ers. I also write this piece as a person who has attended professional and college football games (at my alma mater, the University of Texas at Austin) and heard fans hurl the most vicious abuse at players in the interests of nothing more than getting a laugh from those around them and/or venting some hetero-macho steam. When called out by fellow fans for their actions, it was common for these guys (and they are almost always guys) to respond, “Hey man, I paid for this ticket, and I can say what I want.” It’s their right.
And so we are back to rights…and we do love our rights. That’s our rights, of course, not their rights.
Ultimately, Kaepernick critics are pitching a well-worn trope: that there are correct and incorrect ways in which our rights can be exercised. The right to bear arms? Correct. The right for a gay couple to marry? Incorrect. The right to practice Christianity freely? Correct. The right to practice Islam freely? Incorrect. The right to show your patriotism by standing during the national anthem? Correct. The right to show your disappointment at systemic racism in the United States by not standing during the national anthem? Incorrect. Of course, all of these are rights protected under the Constitution, but the dog-whistle message from large sections of the U.S. population is that some people — African-Americans, the LGBT community, Latinos and women — just need to learn the unwritten rules of when and where it is appropriate to exercise those rights.
This is the utter perversity and hypocrisy of the criticism leveled against Kaepernick: the idea that he needed to double-check in advance if his mode of expression met some kind of approved community standard for patriotism. Those critics, by the way, included former 49ers head coach Jim Harbaugh, who appeared unaware of the glaring contradiction inherent in quoting dissident Nelson Mandela to his team in 2013, only to then rip Kaepernick’s comparatively tame “method” for calling attention to the very racism against which Mandela had fought. Kaepernick’s jersey is now the NFL’s best seller, suggesting that the fans are siding with him over such conspicuous flag-wavers as NFL Commissioner Roger Goodell.
This was all about sensitivity and feelings, we were told, and knowing when and where to make a stand so as not to offend the delicate, silent majority. We respect your right to voice an opinion, Colin, but won’t you think of the children?!? And what place could be more overflowing with sensitivity than an NFL stadium? The Thought Police in the United States don’t want Kaepernick soiling the purity of the flag by sitting or kneeling, but they don’t mind TV stations sandwiching that same sacred anthem between ads for watered-down beer and erectile dysfunction.
They don’t mind pitching Pat Tillman as the poster child for service and patriotism, but they object to discussing the fact that Tillman criticized the war and that the military blatantly lied about his death. They don’t mind 67 percent of NFL players being African-American, but they do mind when those African-American players express an opinion they find unsettling.
So, as we open the new NFL season, and before we all begin screaming for the players to draw blood and knock each other unconscious, let us all spare a thought for the easily offended in stadiums across the country.
Engineering ourselves against terror attacks: How building design changed after 9/11
The One World Trade Center building, second from right, is reflected in the windows of the 9/11 Museum in New York, Monday, March 23, 2015. (AP Photo/Richard Drew) (Credit: AP)
This article was originally published on The Conversation.
When buildings collapse killing hundreds — or thousands — of people, it’s a tragedy. It’s also an important engineering problem. The 1995 collapse of the Alfred P. Murrah Federal Building in Oklahoma City and the World Trade Center towers in 2001 spawned many vows to never let anything like those events happen again. For structural engineers like me, that meant figuring out what happened and doing extensive research on how to improve buildings’ ability to withstand a terrorist attack.
The attack on the Murrah building taught us that a building could experience what is called “progressive collapse,” even if only a few columns are damaged. The building was nine stories tall, made of reinforced concrete. The explosion in a cargo truck in front of the building on April 19, 1995, weakened key parts of the building but did not level the whole structure.
Only a few columns failed because of the explosion, but as they collapsed, the undamaged columns were left trying to hold up the building on their own. Not all of them were able to handle the additional load; about half of the building collapsed. Though a large portion of the building remained standing, 168 people died in the areas directly affected by the bomb and in the nearby areas that could no longer support themselves. (A month after the attack, the rest of the building was intentionally demolished; the site is now a memorial to the victims.)
A similar phenomenon was behind the collapse of the World Trade Center towers on September 11, 2001, killing nearly 3,000 people. When exposed to the high temperatures created by burning airplane fuel, steel columns in both towers lost strength, putting too much load on other structural supports.
Until those attacks, most buildings had been built with defenses against total collapse, but progressive collapse was poorly understood and rarely seen. Since 2001, we have come to understand that progressive collapse is a key threat. And we’ve identified two major ways to reduce both the likelihood of its happening and its severity if it does: improving structural design to better resist explosions and strengthening the construction materials themselves.
Borrowing from earthquake protection
Research has found ways to keep columns and beams carrying load even when they are stressed and bent well beyond their normal limits. This property, the ability to deform significantly without suddenly losing strength, is called ductility, and higher ductility can reduce the chance of progressive collapse. Ductility is already a routine concern when building in earthquake-prone areas.
In fact, for years building codes from the American Society of Civil Engineers, the American Institute of Steel Construction and the American Concrete Institute have required structural supports to be designed with enough ductility to withstand a major earthquake so rare that it is expected to occur only about once every 2,000 years. These requirements should prevent collapse when a massive earthquake happens. But it’s not enough to just adopt those codes and expect they will also reduce or prevent damage from terrorist attacks: Underground earthquakes affect buildings very differently from how nearby explosions do.
Another key element structural engineers must consider is redundancy: how to design and build multiple reinforcements for key beams and columns so the loss of, say, an exterior column in an explosion won’t lead to the collapse of the entire structure. Few standards exist for redundancy to improve blast resistance, but the National Institute of Building Sciences does have some design guidelines.
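To make the idea of redundancy more concrete, here is a deliberately simplified sketch of an “alternate load path” check: remove one column, hand its share of the gravity load to its neighbors and see whether they still have reserve capacity. The column loads and capacities are invented numbers, and a real progressive-collapse analysis involves full structural models, dynamic effects and code-specified load combinations; this is only meant to illustrate the concept.

```python
# Toy alternate-load-path check with hypothetical numbers.
# Simplification: the lost column's load is split evenly between its
# immediate neighbors. Real analyses model the whole structure.

def check_column_removal(loads_kips, capacities_kips, removed_index):
    """Redistribute the removed column's load to its immediate neighbors
    and report each surviving column's demand-to-capacity ratio."""
    loads = list(loads_kips)
    removed_load = loads[removed_index]
    neighbors = [i for i in (removed_index - 1, removed_index + 1)
                 if 0 <= i < len(loads)]
    for i in neighbors:
        loads[i] += removed_load / len(neighbors)
    loads[removed_index] = 0.0

    ratios = {}
    for i, (demand, capacity) in enumerate(zip(loads, capacities_kips)):
        if i != removed_index:
            ratios[i] = demand / capacity   # > 1.0 means the column is overloaded
    return ratios

# Five columns, each carrying 400 kips, each able to resist 700 kips (hypothetical).
print(check_column_removal([400] * 5, [700] * 5, removed_index=2))
# The neighbors of the lost column rise to about 0.86 of capacity: strained, but standing.
```

The point of redundancy, in this toy picture, is simply that the surviving ratios stay below 1.0; a design without that reserve is the one that unzips into progressive collapse.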
Making concrete stronger
The materials that buildings are made of also matter. The steel columns in the World Trade Center towers lost strength rapidly once the fire heated them past roughly 400 degrees Celsius (about 750 degrees Fahrenheit). Concrete heated to that temperature, though, doesn’t undergo significant physical or chemical changes; it maintains most of its mechanical properties. In other words, concrete is virtually fireproof.
The new One World Trade Center building takes advantage of this. At its core are massive three-foot-thick reinforced concrete walls that run the full height of the building. In addition to containing large amounts of specially designed reinforcing bars, these walls are made of high-strength concrete.
An explosion generates very high pressure — how much depends on how big the blast itself is and how close it is to the structure. That leads to intense stress in the concrete, which can be crushed if it is not strong enough.
Regular concrete can withstand 3,000 to 6,000 pounds of compression per square inch (psi); the concrete used for One World Trade Center has a compressive strength of 12,000 psi. By using materials science to pack particles more densely, engineers have pushed concrete’s strength as high as 30,000 psi.
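As a rough illustration of why those strength numbers matter, the sketch below compares a hypothetical blast-induced stress on a column against the three strength levels mentioned above. The blast force and the column dimensions are invented for the example; only the strength values come from the article.

```python
# Hypothetical comparison: a blast pushing axially on a 24 in x 24 in
# concrete column. The force and dimensions are illustrative only.

blast_force_lbs = 5_000_000          # assumed peak force delivered by the blast
column_area_in2 = 24 * 24            # cross-sectional area of a 24-inch square column

stress_psi = blast_force_lbs / column_area_in2   # stress = force / area, about 8,700 psi

for label, strength_psi in [("regular concrete", 6_000),
                            ("One World Trade Center mix", 12_000),
                            ("densely packed high-strength mix", 30_000)]:
    status = "crushed" if stress_psi > strength_psi else "holds"
    print(f"{label}: demand {stress_psi:,.0f} psi vs. capacity {strength_psi:,} psi -> {status}")
```

Under this made-up loading, the 6,000 psi concrete fails while the 12,000 psi and 30,000 psi mixes survive, which is the whole argument for paying for stronger material in a building that has to shrug off an explosion.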
Improving reinforcement
While traditional reinforced concrete involves embedding a framework of steel bars inside a concrete structural element, recent years have brought further advances. To enhance concrete’s toughness and blast resistance, high-strength, needle-like steel microfibers are mixed into the concrete. Millions of these fibers bond with the concrete and prevent the spread of any cracks that occur because of an explosion or other extreme force.
This mix of steel and concrete is superstrong and very ductile. Research has shown that this material, called ultra-high-performance fiber-reinforced concrete, is extremely resistant to blast damage. As a result, we can expect future designers and builders to use this material to further harden their buildings against attack. It’s just one way we are contributing to the efforts to prevent these sorts of tragedies from happening in the future.