Peter L. Berger's Blog
February 6, 2018
In Defense of Adjuncts
Technology will ultimately upend higher education, many people believe, since it seems to be upending everything else—whether for better or worse, or some of both, is, as always, a matter of opinion. The massive open online course (MOOC) craze that began in 2012 furnishes a case in point. Kevin Carey predicted in The End of College that most brick-and-mortar colleges would close as free global online courses eliminated the need for classrooms and most professors. Many others agreed about the trend line but thought the phenomenon disastrous to genuine education.
Five years later it is clear that, either way, it didn’t happen and isn’t going to happen. MOOCs have not been able to replace traditional class instruction because learning involves more than just sitting in front of a computer screen. The champions of online courseware got something very fundamental wrong: Human beings are not passive receptors of external stimuli, but social animals who, by engaging together, construct the ideational realities in which they dwell. Online education remains a useful adjunct to the traditional class format, particularly for certain subjects involving rote application of mathematical procedures or scientific knowledge—subjects Karl Popper famously described as “clocks” as opposed to “clouds.” It also lets older students already in the workforce enhance their skillset or continue their education. But technology has embellished the methods of traditional higher education more than it has transformed them.
Still, a revolution in higher education is proceeding, albeit in the dullest of places: not in technology but in personnel. Trends in hiring for adjunct professors, combined with larger sociological trends in student attitudes and workforce participation, have brought and will bring significant changes in the nature of the American university. But before examining these changes and exploring whether we ought to be encouraging them, we must understand the unease among undergraduates, graduate students, professors, and administrators that is already driving them. There are four basic concerns.
First, most undergraduate students worry about getting a job after college. Ninety percent of students claim that they are going to college to get a job. Chances are good they will get one, if predictions of a shortfall of five million workers with postsecondary education by 2020 turn out to be more or less accurate. Yet students must struggle to get the job they want. Many try to build a network to gain an edge, but doing so is hard, especially for students from low-income families who must work while attending college, or minority students who self-segregate. Students also try to develop skills they think employers value, but these include intangible skills such as leadership, decision-making aptitude, and analytical power. The logical place to learn such skills is in the liberal arts, but many students discover that the liberal arts are taught in ways that only academic specialists find interesting. College accreditation bodies ask professors to complete a form stating whether they teach these and other skills, but such forms are useless without sufficient resources to follow up with in-class observation.
Second, many young graduate students invest precious time and money training for jobs in academia that do not exist. This is especially true in the humanities and social sciences, where nine years of post-baccalaureate training, plus debt, are the norm. Forty percent of graduate students have no job lined up at graduation. Even STEM graduates face a glut, especially in life sciences such as biology, where the chance of getting a full-time academic job is the equivalent of a coin toss. Understandably, many graduate students are nervous.
Third, some full-time professors are disheartened by their work. They know many students take their classes only to fulfill a requirement, and that many come to college unprepared because of a poor K-12 education. Indeed, one reason that so many jobs of the future require postsecondary education is that applicants must learn skills in college they should have learned earlier. This dulls the teaching experience. And then there are the more commonplace causes of boredom: As one professor explained to me, “I’ve been teaching Chaucer for thirty years. I’m tired of it.”
Fourth, college administrators worry about how to keep the cost of college down. Since the 1970s, tuition has increased at four times the rate of inflation. Administrative costs account for much of the increase, sometimes for unnecessary luxuries such as athletic centers but also because the responsibility to manage certain societal problems has been thrust onto colleges—for example, campus counseling or psychiatric centers to care for the large numbers of students with mental health issues. Administrators in many universities have tried to “export the contradictions” by enticing full-tuition foreign students, but that financial strategy has its limits. Worse, colleges have spent the past three decades antagonizing Republicans, who now hold power at the Federal level and often even greater power at the state level, and who feel no compunction about taxing college endowments or graduate tuition waivers and fellowships.
All this unease in the university cries out for resolution. And aid is coming from a most unlikely quarter: adjuncts.
“Adjunct professor” is an umbrella term for a variety of part-time faculty positions, including lecturers, course-per-contract instructors, assistantships, and half-time non-tenured professors. Adjacent to the adjunct is a full-time category of equally contingent faculty that includes professors with time-limited contracts and professors of practice (professionals with work experience in a particular field but without an academic background). Today, more than half of all faculty appointments are adjunct or part-time. When full-time non-tenure-track faculty are added in, the number rises to 70 percent. Compare this with 1969, when tenured or tenure-track faculty made up almost 80 percent of American college faculty, while non-tenure-track positions accounted for a little more than 20 percent. In many colleges today adjunct professors teach two-thirds of all classes. The rise of adjunct faculty is the most significant trend in faculty hiring during the past forty years.
At first glance, the idea that adjuncts might revolutionize the university seems dubious, as adjuncts live at the bottom of the academic food chain. They also have their own troubles, well captured in the following experience: At a funeral for an adjunct professor, I overheard a dean whisper to another dean, “Why are adjuncts so unhappy? Look at what a nice service we’re giving this man.” Several people listening in on the conversation rolled their eyes: Adjuncts are paid low wages, have little to no job security, are rarely given office space, and are viewed with contempt by many tenured professors and college administrators. They do get one nice perk, in the form of a funeral service, although they have to die to get it.
Yet there are two more or less distinct populations of adjuncts that need to be distinguished from one another. According to the 2010 National Survey of Part-time/Adjunct Faculty, more than a third of adjunct professors teaching in the United States are over 55. Most of these older adjuncts teach because they enjoy it, and actually prefer to teach part-time; younger adjuncts worry more about compensation and usually crave full-time work. The older adjuncts get less publicity because they complain less; younger adjuncts often feel frustrated at having been denied full-time teaching posts, sometimes because they are insufficiently credentialed, by the guild’s standards, to qualify for full-time positions in major institutions. Older adjuncts typically have good jobs (and therefore livelihoods) outside of the university, or had such jobs and so are more or less comfortably retired. Some are former professors, but many others enjoyed non-teaching careers.
Because of their career experience, older adjuncts often have valuable contacts in the economy that undergraduates can tap into—more than many tenured professors have. For example, an adjunct may sit on a recruitment committee or a hiring panel; he or she may run a company; but, most important, he or she will know lots of people in the working world. Even a student without a network can gain much just by going to an adjunct’s office hours. I have steered students to jobs or internships in both the medical and non-medical world by virtue of my double career as a doctor and a writer. An informal but highly valuable career guidance system run by contingent faculty is emerging inside many universities.
Older adjuncts also bring life experience into their pedagogy, sprinkling their lectures with personal experiences and anecdotes to make classroom subjects more relevant to everyday life. In journalism, law, medicine, and business schools, adjuncts and professors of practice have been providing this service for many decades. For example, most of the vital information about patient care I learned as an anesthesia resident, and that I later relied on in practice to stay out of trouble, came not from academic professors but from clinical professors and part-time adjuncts. Older adjuncts are now transmitting practical experience of many kinds to undergraduates.
Traditional professors without much (or any) practical experience have little choice but to teach systems of thought. Yet people do not always behave in the real world the way the textbooks say they will. Systems of thought fail to open the full meaning of the worlds they project. Older adjuncts provide a valuable corrective. An economics professor may teach the theory of free markets, but the businessperson-turned-adjunct also knows about employer-employee relationships from direct experience. An English professor may teach post-colonial theory, but a human resources manager-turned-adjunct will also pass along wisdom about people gleaned from novels that he or she applied to real-life work situations. A medical humanities professor may teach concepts such as autonomy, patient rights, and patient-centered care; my role as an adjunct is also to show how these concepts play out in actual medical practice. Older adjuncts supply undergraduates with vital unwritten knowledge across a range of disciplines that helps nurture decision-making capacities and hone analytical and leadership skills. They give undergraduates a feel for the real world before they enter it.
A century ago, young men embarking on a diplomatic career were encouraged to tour European countries to get a sense of the nations with which they would one day negotiate. This important experience could not be taught. Even as education grew more formalized, the diplomatic service found ways to screen for that experience. As late as the 1970s, for example, the questions on the U.S. Foreign Service exam were obviously biased toward candidates who had grown up in cosmopolitan areas and travelled widely. Just being smart was not enough to answer them. This practice went by the wayside, leaving only formal instruction. Yet undergraduates still need worldly experience, and not just to enter the diplomatic service. No matter what field they enter, students must understand how to work with people as they are, not as we think they ought to be. This bleeds into other essential but not easily taught qualities that students must learn, such as patience and discretion. Indeed, the phrase “worldly experience” sums up many of the intangible and unquantifiable skills that today’s employers often complain they find lacking among undergraduate job candidates who are otherwise quite smart.
Many students sense this deficiency in their education and want to fix it. While older adjuncts and professors of practice cannot substitute for real-life experience, they do bring life experience into the classroom as a useful corrective, not by design but by virtue of their own life’s path into higher education.
Older adjuncts also provide benefits to full-time tenure-track professors. At the very least, adjuncts are paid less so full-time professors can earn more. Younger adjuncts often resent this, but many older adjuncts don’t care. For example, I became an adjunct after I had already been an anesthesiologist for twenty years, with a bank account larger than my department’s annual budget by several orders of magnitude. Older adjuncts content with their low pay ease the consciences of full-time faculty, who often feel vulnerable to accusations that they are exploiting adjunct labor.
Older adjuncts are also eager to teach subjects that some full-time professors have grown bored with, thereby bringing vital energy back into the classroom. After having spent decades in non-academic work—for example, solving a company’s problems or managing production quotas—many older adjuncts welcome with enthusiasm, as if living a dream, the chance to re-enter the world of ideas and talk about subjects that to them seem as fresh as ever. One older adjunct described it to me as “a privilege.”
This is important in ways often hard to measure. If a college may be thought of metaphorically as an army, where supplies are crucial, then a classroom is a guerrilla unit, where “morale” is crucial. Teachers in classrooms must be more than just dispensers of information. They must be more combat leader than quartermaster; they must inspire students and stir enthusiasm. This is one reason why MOOCs have failed to replace the traditional classroom. MOOC proponents saw the classroom as a small army unit to be redrawn at a larger scale using technology. But the classroom is not an army of any size; it is fundamentally a different type of organization. When older adjuncts bring freshness, excitement, and zest to teaching, they often accomplish more with their small platoons of students than bored tenured professors do when teaching thousands of students online.
Older adjuncts also solve problems for college administrators. First, they come cheap. Second, they usually don’t complain. Over the past four years, 35 colleges have seen their adjuncts unionize. Unions often make life difficult for administrators, as they reduce a college’s flexibility in meeting curricular and manpower needs. Angry young adjuncts are the ones who usually push for unionization; throwing older adjuncts into the mix dials down the anger. Third, older people attending graduate school often have jobs at the same time, so they can pay tuition. This brings money into the university.
Finally, older adjuncts ease the plight of miserable young graduate students by draining the swamp that creates them in the first place. University departments create a moral hazard when they accept young students into graduate programs knowing all too well that many of them will never get teaching jobs. Science and engineering programs are less blameworthy, as they also prepare their graduates for jobs in industry. But many humanities departments teach graduate students to teach, and nothing more, so that many of these students are left helpless after graduation. True, young graduate students enter the academy of their own free will. Like gamblers, they think they will beat the odds and win the education lottery; like gamblers, too, most of them suffer for their delusion.
Tenured professors say they have no choice. A credible university department must teach graduate students, they insist, and that requires a critical mass of graduate students. Yet tenured professors have self-interest mixed up in all this. Some of them are bored with teaching, especially entry-level classes; some of them want to spend more time doing research. The proportion of full-time faculty spending nine or more hours a week teaching has dropped from 64 percent twenty years ago to 44 percent today. These professors exploit graduate students to teach for them. It is therefore not very surprising that resentful graduate students have started to unionize.
A department that includes adjuncts, especially older adjuncts, means fewer graduate students are needed to carry out this ugly scheme. The change is already happening. In 1975, adjuncts taught 24 percent of all classes; graduate students taught 21 percent. In 2015, adjuncts taught 40 percent of all classes, while graduate students taught only 14 percent. True, many adjuncts today are young graduates who failed to get full-time academic jobs. But as older people enter graduate school in larger numbers and become adjuncts, the need to tempt young people into graduate programs and possibly ruin their lives will disappear.
It is unclear how many graduate students go on to become older adjuncts, though many graduate students are already older: More than 60 percent of part-time graduate students attending private universities are over 30 years old, while 30 percent are over 40. The numbers are similar for public universities. Yet the numbers fail to tell us what the actual life course of any individual graduate student will be.
That said, larger societal trends make the rise of older adjuncts inevitable. In the coming decade, people over 65 years old will likely be the fastest-growing segment of the labor force. Currently, among all workers age 65 and older, 40 percent work part-time—the sweet spot for older adjuncts. Some of these older workers work for the sheer pleasure of it. Even among those who need the money, 90 percent of them say they “want to stay active and involved,” while 82 percent say they simply enjoy working. Toss in a love of books and ideas, and one has the typical mentality of the older adjunct.
Tenured faculty benefit from the presence of adjuncts, yet they also fear adjuncts in large numbers. Some of this is self-interest: More adjuncts mean more work for tenured professors advising students, setting curriculum, and serving on college-wide committees. At the very least, the tenured faculty’s argument that adjuncts hurt academic freedom, because their job insecurity keeps them from expressing genuine views, does not apply to older adjuncts. The truth is that many tenured professors self-censor today out of fear that they may lose their tenure, and their jobs, if they stray outside the bounds of political correctness. This has already happened in some cases. With a source of income outside the university, and therefore less concern about being fired, older adjuncts are probably the freest and most frank people on American campuses today.
Besides the assistance they give to students, the major reason to encourage older adjuncts is their salutary effect on the liberal arts and on college more generally. That is because two ominous trends haunt the liberal arts.
The first of these concerns the intensification of the academic division of labor. A history professor today might specialize not in European history, or German history, or even early German history, but in early German religious history. The salami is getting sliced way too thin. For this reason only a few like-minded specialists will read another professor’s work. Some analysts have claimed that roughly half of all academic journal articles today are read only by the journal editor and the author. Worse, jargon-laden, narrow-minded specialists often lack the inclination or the capacity to teach undergraduates in the broad manner that excites young minds, or that offers wisdom about life relevant to a non-academic career.
I once asked a research specialist on Tolstoy what she worked on. She replied, “Trains.” She was interested in the symbolic meaning of trains in Tolstoy’s novels. This is not why average people read Tolstoy. This is not why accomplished people study Tolstoy later in life to become adjuncts. This is also not what attracts undergraduates to the liberal arts.
Older adjuncts resist this trend. Without the burden of having to earn tenure, they need not publish in obscure academic journals or write tomes for academic presses that only a few specialists will ever read. They are free to explore and teach the liberal arts more broadly. In this way they help to put the brakes on the disastrous effects of academic hyper-specialization.
The second ominous trend in the liberal arts involves the near opposite of over-specialization: an emphasis on multi-disciplinary work that requires adopting universal categories of thought and methodology rather than retaining distinctive categories of thought and using them synergistically. Subfields of the humanities now approach issues from a homogenized perspective, often with the same terminology, thus risking the loss of their distinctiveness. If historians are historians and literature professors are “cultural historians,” and if both classicists and philosophers study “race” and “gender,” then increasingly it makes sense to lump the subfields together into one big department called “the humanities,” thereby saving the college money. This is a threat to the liberal arts.
Again, because older adjuncts have the liberty to engage the subfields in traditional ways, they need not adopt the universal methodology of the current liberal arts “trade.” They can enjoy the unique qualities that vaulted each subfield to greatness in the first place and then pass that enjoyment on to their students. In the process they can help save the liberal arts, or at least retard their decay.
Older adjuncts also benefit the college more generally. That college is now more about getting a job than about the free exchange of ideas is concerning to many people and is sometimes even blamed for the lax attitude of today’s students toward free speech. Yet the American college was never a philosopher’s paradise. In 1908, the great Abraham Flexner described the American college as a kind of cafeteria where unserious students sampled various subjects in no particular order to pass the time. Already in 1918, Thorstein Veblen—inventor of the concept of the leisure class—complained that the American college was all public relations and football. During the same period, Edward Steiner wrote of a certain college president giving a campus tour: “With classic pride he stood upon the athletic field, looking as some Caesar must have looked when he showed visitors to Rome his arena.” For many students the American college has always been about having fun, sports, learning a few things, and getting a job. The difference, of course, is that college used to be mainly for the elite, and now it is for other kinds of people as well; the combination of sheer numbers of college-educated young people and persistent job insecurity in a shifting labor market pretty much accounts for the current anxious mood.
If anything, the effort to turn the American college into a philosopher’s paradise has had the perverse effect of turning some parts of college into an academy of medieval scholastics. Self-absorbed, and dwelling on the morbid currents within their own self-alienated personalities, some liberal arts professors today fuss over the most esoteric topics, using invented terms and convoluted language beyond the understanding of even well-educated people. One gets the feeling that this is often deliberate at one level or another; it allows self-styled geniuses to avoid being noticed too much by “ordinary” people.
Except for a handful of students, the American college should be about getting a job and doing well in that job. But doing so involves students learning the connections between theoretical thinking and active thinking, and between thought and praxis. To do this they need to make their mental model of the outside world as exact as possible. If the maps in their minds resemble fairly closely those of the outside world, if they represent with relative precision the world that they will one day work in, there is a better chance that the actions of students will fit well into the existing scheme of things, and students will not only succeed but also make a contribution. To create a map they need professors whose goal is not to dispense information, but to help students organize reality in a way that creates in their minds a living cognitive world in the image of the real world.
Many full-time professors do a fine job preparing their students for this necessary task, and we must give them their due. But as a group, older, seasoned, and experienced adjuncts who have spent years in the real world form the new vanguard of a revolution that will greatly help to make this happen.
February 5, 2018
With All Due Respect
On July 10, 2017, the Ooni of Ife, one of Nigeria’s many ceremonial monarchs, boarded a Delta flight from Lagos to Ontario. Born Adeyeye Enitan Ogunwusi, the Ooni (also known as Arole Oduduwa, Ooni Ile Ife, and King Ojaja II) is the cultural and spiritual ruler of the Yoruba people, the second largest ethnic group in Nigeria. For such a person, boarding a commercial flight amounts to making a public appearance, so the Ooni was accompanied by several male attendants, one of whom walked down the aisle dressed in a white robe and leopard skin, rattling a pair of shekere, or West African maracas, presumably to mark the king’s approach.
Caught on a passenger’s smartphone, a video of the shekere-bearer went viral on YouTube with 61K views, and was quickly taken up by the writing staff of “The Other News,” a satirical news program produced in Lagos for Channels Television, an English-language news outlet with a sizable national audience. Now poised to begin a second season, “The Other News” was launched with the help of a Brooklyn-based NGO called Pilot Media Initiatives (PMI), which seeks to transplant the format of The Daily Show into foreign soil. (Their website boasts of programs in Nigeria, Macedonia, and Kyrgyzstan.)
The New Yorker recently ran an article about “The Other News,” in which the reporter, Adrian Chen, witnesses the show’s mostly youthful writers and producers debating how best to make use of the video. As Chen describes it, they were “trying to figure out how to make fun of a king.” But to judge by some of the comments, they were also trying to figure out how not to make fun of a king.
In the make-fun camp was David Hundeyin, a 27-year-old Nigerian who has studied in the UK and is a self-professed fan of the American TV show South Park (best described as Charlie Hebdo for libertarian adolescents). For Hundeyin, monarchs like the Ooni of Ife were “literally relics of the dead past in the modern world,” and the article says he urged his colleagues to “take a similarly no-holds-barred approach to Nigerian culture.” In the same camp was Ned Rice, an American and veteran writer for The Daily Show with Jon Stewart, who reminded the group, “If you do a comedy show, you’re going to step on toes.”
In the don’t-make-fun camp were the two head writers, who by the New Yorker’s account did not really need to be reminded that comedy steps on toes. Looking at it from a Nigerian perspective, their concern was with whose toes:
Some of the other writers urged a more cautious approach. The Ooni is seen by some Yoruba as a descendant of Oduduwa, who was sent down by God to found the Yoruba kingdom. “Sometimes we need to go to the other side of the audience or other people’s culture and try to see how it’s going to look to that person,” Sodi Kurubo, one of the two head writers, said. Nkechi Nwabudike, the other head writer, pointed out that the host of “The Other News” was Igbo, another major ethnic group. “We have to be careful, because we have a host from the east, so we can’t really make fun of someone’s traditions,” she said.
Americans have mixed reactions to this cautious approach. On the one hand, our commitment to free expression tells us to throw all such caution to the winds. On the other, our investment in cultural diversity tells us to avoid offending the “Other.” Neither imperative is of much use when confronted with a spectacle as utterly unfamiliar as a dark-skinned African rattling a pair of beaded gourds on a wide-body jet. At this we are likely to laugh or cringe, because whatever the belief system supporting such behavior, some of us will judge it “a relic of the dead past,” while others will reject it as tainted with sexism, homophobia, transphobia, and other up-to-date antiquated prejudices. Mostly, Americans will find it hard to imagine that such an atavistic figure could be living in the same “modern world” as we.
As it happens, there’s another way to react. After watching the video of the shekere-bearer, take a look at another video, also on YouTube, which went semi-viral with 36K views. You will see a very different scene, one that would be instantly recognizable to any American. A handsome 42-year-old celebrity and his entourage get on a plane, but instead of sequestering himself in first class, the celebrity walks through the aisle greeting the other passengers, several of whom rise from their seats to see him better. A few passengers summon the courage to ask the celebrity for a selfie, and he complies in a manner both gracious and dignified, at one point agreeing to pose with a baby. As for the shekere-bearer, he engages in friendly banter with the Delta flight attendant, one of the few white faces on the aircraft.
What this second video reveals is not some bizarre disconnect between the 21st century and darkest medieval Africa, but a scene that is completely normal in many parts of the world: men and women going about their 21st-century business while also seeking to preserve some favored strands of their ancestral past. In Nigeria, many of those strands are connected to the monarchs. I am not in a position to vouch for the Ooni of Ife, whose marital life seems to make headlines and whose symbolic authority is wielded in a divided society rife with corruption. But I can report that before being selected from a field of 21 candidates for the throne, the Ooni was a certified accountant and bank director who, among other real estate ventures, founded a beach resort in Lagos.
I can also report that, on a recent visit to Nigeria, I witnessed first-hand a similar process of trying to find the sweet spot, or golden mean, between tradition and modernity. The occasion was a training session for journalists held in Abuja, the capital, by the Hausa-language service of the Voice of America (VOA). The trainer was a stout, jovial fellow from the Igbo ethnic group, who are concentrated in the southeast and have very different traditions, not to mention religious beliefs, from both the Yoruba, who are concentrated in the southwest, and the Hausa-speaking Muslims who live in the north. In 1967 the Igbo tried to secede from Nigeria, resulting in the devastating Biafran civil war that left them defeated and disadvantaged under a national government typically dominated by northerners, who also dominate the military.
The majority of men and women participating in the training session were Muslims from the Hausa and Fulani ethnic groups, and with their traditional dress and rather severe manner, they presented a striking contrast to the trainer. For the first hour or so, the trainer tried to draw them out but did not appear to be having much luck. But then he introduced a topic that caused the participants to drop their reserve and engage in a lively, at times passionate debate. The topic was not something that struck me as significant at the time: the use of honorific titles by journalists interviewing important people, including government officials, religious leaders, and monarchs. But in retrospect I can see my mistake.
Like the Ooni of Ife, the monarchs (called emirs) of northern Nigeria have considerable moral, cultural, and spiritual authority over large populations. And some of them are well known outside Nigeria. For example, before donning the ceremonial robes of his present office, Emir Muhammad Sanusi II of Kano was a respected economist and governor of Nigeria’s central bank. (You can see him in his previous, pinstriped incarnation in this TEDx talk from 2013.) In addition, Emir Sanusi was, and is, a courageous opponent of corruption, and his efforts to bring kleptocrats to justice have earned him wide esteem both inside Nigeria and around the world.
Now, I’d been calling this man “Emir Sanusi,” but if I were a Hausa-speaking journalist in northern Nigeria, I might feel compelled to call him something like “Your Highness, Alhaji Muhammad Sanusi II, Emir of Kano.” Based on the comments of the participants in the training session, I would probably be expected to use that full title every time I asked him a question. I would also, like the journalists, be loath to show disrespect by unilaterally using what might in that cultural context be a nickname or worse. Hence the spirited debate, in which several participants and the trainer argued against using full honorifics, especially for corrupt politicians whom they did not personally respect. As one put it, “We are journalists, not supplicants. We are not paying tribute to these powerful people but holding them to account in the name of the people they are supposed to be serving.”
At the same time, though, there were no South Park fans in that training session, urging a no-holds-barred approach that would not stop at dispensing with honorifics but go all the way toward being as outrageous as possible. Nor were there any Jon Stewart fans, urging the VOA Hausa-language service to adopt the Daily Show style of satirical and irreverent news reporting. That style is very popular with Western-oriented viewers around the world, which is why PMI, the American NGO, is working with an English-language TV channel in Nigeria. And according to Dillon Case, one of the co-founders of PMI, it is also “a proven tool for democratic engagement.”
But is it? Writing about the “’Daily Show’ alums” who dominate late-night comedy in the Trump era, Ross Douthat suggests that “to flip from Stephen Colbert’s winsome liberalism to Seth Meyers’s class-clown liberalism to [Samantha] Bee’s bluestocking feminism to John Oliver’s and Trevor Noah’s lectures on American benightedness is to enter an echo chamber from which the imagination struggles to escape.” And that was before Trump was elected. In 2018 The Daily Show and its fellow late-night rant-fests have abandoned any attempt to elicit laughter in the ranks of their political opponents. This has never been easy, of course, but satire benefits greatly from the attempt, because the funniest comedians are those who can laugh at themselves. In America today, that seems a lost art.
In case you were wondering, the writers for “The Other News” did manage to find the golden mean: Rather than step on the toes of the Ooni of Ife, they decided to use the video of the shekere-bearer to poke fun at all those corrupt politicians who think they’re too good to fly in the same commercial plane with ordinary Nigerians, and instead spend their ill-gotten gains on private jets. And in a similar manner, the journalists at the training session in Abuja found their golden mean by agreeing to use the full honorific when first addressing a highly placed interview subject, but thereafter to shorten it in a suitable manner. In other words, they hoped to demonstrate their seriousness as reporters by maintaining an appropriate degree of deference while at the same time asking the hard questions.
In both cases, the individuals involved were able to find the sweet spot between trashing their inherited customs in the name of progress and ossifying them in the name of ancestral memory. In many such settings, a major obstacle to finding that sweet spot is the set of attitudes that we Americans bring to the table. Whether those attitudes come from the liberationist Left or the libertarian Right, they too often blind us to the fact that most human beings on the planet want to preserve their religious and cultural traditions while also enjoying the blessings of modernity.
The Dangers of Democratic Determinism
Before the wars broke out, Yugoslavia didn’t feel particularly un-European to me when I’d visit; it was poorer, yes, but it was not notably more brutish or atavistic than, say, Italy. And when the wars came, though their horrors were difficult to watch, I was not nearly as baffled as the American commentariat seemed to be by what was going on. The pundits talked about “ancient hatreds,” about religious and even tribal differences, about ill-defined forces that were supposed to have been vanquished from the European continent. To a younger me, it was all a lot less complicated: These wars were primarily about self-determination.
Having spent a good part of my childhood in the United States, I was puzzled as to why no one saw parallels with America’s struggle for independence. Yes, no soaring talk about universal values came from Zagreb or Ljubljana, but given the way in which the constituent republics sought to throw off Belgrade’s yoke, the parallel seemed clear enough to me. Instead, alongside the honest horror and disgust at the violence, it was easy to detect a note of moral superiority among foreign observers. It took me a long time to understand that for Americans and West Europeans, a struggle that did not evoke universal principles was just a form of barbarism. We in the West were better than this, they seemed to be saying. We had left all this behind. The Balkan Wars bequeathed to me a special sensitivity to this kind of preening.
Fast forward a few decades, to March 2014. An echo of this same kind of condescension—more bewildered than exasperated, but still uncomprehending—could easily be heard in Secretary of State John Kerry’s remarks on the occasion of Russia invading Crimea. “You just don’t, in the 21st century, behave in 19th century fashion by invading another country on a completely trumped up pretext,” Kerry complained to CBS. In retrospect, Kerry’s lament marked the start of a process still ongoing today: the slow discovery by Western policymakers, politicians, activists, and journalists that their mental frames do not usefully correspond to reality. But the discovery has not led to reflection. Instead, it’s all been confusion and frustration. “This is not supposed to be happening,” has been the repeated refrain to Brexit, the anti-migrant backlash across Europe, and the election of Donald Trump. “What has gone wrong? Aren’t we better than this?”
One answer is, “Obviously, no.” Some, like Damon Linker, argued that our elite worldview has depended on the establishment of a kind of “antipolitical politics”—a bloodless technocratic approach to governance that has lost legitimacy among several clots of the voting public. Others, myself included, have argued that the values behind modern liberalism have never had the kind of legitimating, cohesive power behind them we thought they did—certainly not in “New” Europe, where EU integration has failed to deliver on most of its promises.
Then again, maybe the questions themselves are misleading. Perhaps something deeper is amiss. Maybe it’s not about “liberalism” however defined or construed, but rather about our interpretation of history. Or, to put it more precisely, maybe our problem lies not so much with the strength or weakness of liberalism, but rather in how we have misunderstood its role in recent events. In doing so, we have acquired several huge blindspots that are preventing us from seeing what’s staring us in the face.
Let’s first look at the stories we have told ourselves about the period from 1945 to 1989. As Tony Judt remarked, historians and statesmen have invoked several recurring themes in describing those years in Western Europe: “Europe’s recovery was a ‘miracle’. ‘Post-national’ Europe had learned the bitter lessons of recent history. An irenic, pacific continent had risen, ‘Phoenix-like’, from the ashes of its murderous—suicidal—past.” These themes constitute a hopeful and morally redemptive narrative, especially for West Europeans who in large numbers had acquiesced to German occupation and had collaborated with the Nazis right up until liberation. Judt notes that Hitler managed to administer Norway with only 806 German overseers, and that 35 million Frenchmen made little trouble for some 1,500 German officials and 6,000 German civilian and military police. It was humiliating on a grand scale, even before these nations began to grapple with their complicity in the Holocaust.
The way in which these stories were used is also significant. Judt pointed out that a kind of ahistorical determinism related to these redemptive myths was built over time into the project of European unification. To oversimplify a bit, a set of trade treaties had set up an increasingly complex bureaucracy that had started to encroach on national sovereignty. It needed legitimation to continue doing so. “[T]he real or apparent logic of mutual economic advantage not sufficing to account for the complexity of its formal arrangements, there has been invoked a sort of ontological ethic of political community,” Judt wrote. “Projected backward, the latter is then adduced to account for the gains made thus far and to justify further unificatory efforts.”
This dynamic helps explain the otherwise baffling reality of the European project as we know it today: a largely undemocratic bureaucracy that talks in the lofty language of a post-national political community grounded in a set of universal Enlightenment values. But it’s the inherent determinism of the project—going backwards being unthinkable—that most concerns us here.
Unlike Europe, the United States was not hamstrung by guilt and self-doubt over the cataclysms of the 20th century’s first half. Indeed, the Soviet challenge was quickly understood in Manichean terms, with American foreign policy driven by a form of secularized Protestantism. As James Kurth pithily described it in our pages more than a decade ago, “After World War II, the characteristic pattern of American foreign policy—‘realism’ toward the strong and ‘idealism’ toward the weak—developed further.” Where it could, it sought to impose a version of the American Creed onto the world it encountered. Where it couldn’t, it chose to wait things out.
After 1989 and the fall of global communism, this narrative became turbocharged—triumphalist and self-certain. With the signing of the Maastricht Treaty in 1992, which formally established the European Union, the ambitions for European unification expanded, not just for ever-closer union, but for an ever-broader one as well. By the mid-2000s, with most of the former Warsaw Pact countries admitted, some dreamers had started talking about the “European idea” as being universal—applicable to all humanity. In parallel, intellectuals in the United States began to grasp the implications of the fall of not just America’s chief geopolitical rival, but also of its main ideological foe. Some wrote grandiloquently of the “unipolar moment.” Though historians came to understand that communism was not overthrown but had rather collapsed in on itself, a growing segment of the American public saw the end of the Cold War as a moral triumph, one that had everything to do with long-suffering, oppressed people realizing the universal truth of the values that had sustained the anti-Soviet coalition since the defeat of Hitler.
Let’s call all this “democratic determinism,” a vulgarized version of Francis Fukuyama’s more nuanced “End of History” thesis. As a catechism, it was first internalized by liberal internationalists/neoconservatives/the democracy promotion community. But through the years, its ideological vapors have seeped into the public square and are now a part of the air we breathe, undetectable to all but the most sensitive noses.
Like all successful narratives of its kind, it captured important truths about the time it sought to describe. And like all good stories well told, it chose to focus on some things in lieu of others. It is of course also an aspirational and quasi-religious narrative—a story that gives important meaning and purpose to a set of mostly secular societies. It’s thus not wrong in any simple sense, but as a means of understanding reality it is very incomplete.
It can be tricky to point out that which someone is predisposed not to see. Luckily for us, Dr. Branko Milanovic, formerly the lead economist at the World Bank, has written a short essay that manages to do just that. His essay sets out to explain the divergence in perceptions as to the value of ethnic homogeneity between East Europeans on the one hand and Western liberals on the other. But it does much more than that.
The first part of his argument is purely historical. He notes that for most of the countries found along a line extending from Estonia to Greece, the struggle for nationhood was one of emancipation from crumbling empires. It is a process that still festers in parts of the Western Balkans, Milanovic notes, but has otherwise run its course. Its end result has been the birth of remarkably homogeneous ethno-states.
None of this is particularly controversial, nor is it incompatible with a worldview undergirded by democratic determinism. But Milanovic does not stop there. He goes on to argue that the events of 1989 are best understood not as a casting off of the false god of communism and an embrace of universally true Western values. Rather, he says, they were experienced by most of the people in Eastern Europe primarily as “revolutions of national emancipation”—a rejection of Soviet imperialism.
This is not as revisionist as you might think. Stephen Kotkin’s remarkable monograph Uncivil Society goes to great lengths to document just how insignificant a force pro-Western liberals represented across the former Warsaw Pact countries on the eve of the communists’ collapse. Poland is the outlier, Kotkin shows, with Solidarity enjoying significant support in the run-up to 1989. But even there, the collapse was ultimately a top-down affair, and had more to do with Gorbachev’s political recklessness than anything else.
The persecuted idealists and artists left standing in the wreckage of the old system were in some cases best-positioned to capture the imagination of a broader public as it emerged, blinking, from the gray realities of one-party rule and into the bright lights of pluralistic democracy. But the role of their values-based activism in bringing down their countries’ communists has long been overstated, first by the idealists themselves and their supporters on the Western side of the Iron Curtain, and then by the reformed apparatchiks who took over power and kept repeating the catechism in order to keep aid flowing.
Average “Eastern” citizens, on the other hand, were mostly glad to be rid of the threat of Soviet tanks rolling in to prop up a rotten, thieving nomenklatura, and were looking forward to the prosperity they believed would come from adopting Western ways of doing things. This entailed embracing markets and competitive elections, but not, as Milanovic points out, ethnic heterogeneity within their borders. “For Westerners this may be an obvious implication of democracy and liberalism,” he argues. Not so for the Easterners, who had no intention of sacrificing their key accomplishment—national consolidation—“in order to satisfy some abstract principles” they never endorsed in the first place.
The purpose of Milanovic’s essay is narrow: to show how difficult it will be to compel these recalcitrant countries to accept migrants anytime soon—maybe ever. But the essay’s deeper implications are striking, and help illuminate one of the blindspots plaguing democratic determinists. The discomfiting truth is that some amount of ethnic nationalism is not just tolerated, but accepted as completely legitimate by many voters throughout Eastern Europe.
Unlike Milanovic, a democratic determinist sees 1989 primarily as an ideological triumph, and understands the values that underpin it as universal and indivisible from the proper functioning of a modern state. If 1989 is thought of as a successful democratic revolution, then much of the politics of the past ten years in Eastern Europe can only be seen as backsliding. Someone like Viktor Orban, who has self-consciously positioned himself as a kind of soft nationalist, is seen as inherently illegitimate—a symptom of political decay.
But insofar as Milanovic’s model is correct, an “Easterner” listens to the incessant complaining coming from democratic determinists in Brussels and bemusedly scratches his head. His legitimately elected leaders are merely protecting values dear to him and his country from a bunch of messianic foreigners preaching an idealistic universalism he’s never signed up for, and that he doubts exists. He just doesn’t see what the big deal is.
The persistence and legitimacy of nationalism is not the only blindspot for democratic determinists. It’s just one example of a broader pattern of thinking. Several essays could be written on the theme. Freedom House’s annual Freedom in the World report frames the current moment in terms of rollback: “At the end of the Cold War, it appeared that totalitarianism had at last been vanquished and liberal democracy had won the great ideological battle of the 20th century,” it states. “Today, it is democracy that finds itself battered and weakened.” The methodology employed by Freedom House is rigorous, and their reports tell us important things about institutional trends. But their framing misrepresents reality and ultimately misunderstands the problems. The tide of “democracy” is not receding, for example; it never really rolled in in exactly the way they think it did. It’s true that authoritarian tendencies are emerging in their data sets, but the very idea of the threat of a resurgent “authoritarianism” is a category error. There is no such ideology.
We can be critical of democratic determinism and still be troubled by what is going on in, say, Central and Eastern Europe. For one, it’s increasingly clear that the rising crop of “Eastern” politicians are leveraging their popular support to entrench themselves and their parties, to expand their patronage networks, and to enrich their cronies. It’s this kind of behavior, and the accompanying undercutting of any credible opposition, that finds its fullest expression in Putin’s mafia-like rule in Russia. These kleptocratic regimes have a nasty polluting effect. Their billionaire oligarchs are the disease carriers: they throw their money around freely in healthier, more developed societies, and begin to rot them out, too.
In addition, we know from history that appeals to nationalism can ultimately lead to zero-sum thinking and unpredictable foreign policies. Though most of these countries are far more ethnically homogeneous than their West European counterparts, their national borders rarely encompass all their co-ethnics. Hungarians in particular make up notable minorities in Slovakia, Romania, and even Ukraine, and casually bringing up the Treaty of Trianon in conversation will trigger a certain kind of Hungarian conservative.
And provocative nationalism doesn’t just have to do with irredentism. The Polish Holocaust law, which currently awaits ratification in the Senate, the upper house of Poland’s parliament, is as much directed at Germany as it is at Ukraine, whose own laws about language and history have inflamed Polish sentiments. Overall, even if today’s leaders are soft nationalists, there’s nothing to guarantee that they won’t harden their stances with time, out of expediency or conviction, or some tangle of both. It’s a slippery slope that leads nowhere good.
These unpleasant truths need to be understood in the proper historical context, and our hopes for positive change need to take into account how very long these things can take. We must resist the temptation to think in missionary terms—to save supposedly “fallen” liberal democracies. At the same time, we must not succumb to cynicism either. Tolerance and pluralism are important values that can bring unheard-of prosperity and peace to societies that embrace them. Properly constituted, liberal democracy is indeed the best organizing principle.
Overall, humility must be our watchword. We shouldn’t “Orientalize” people, or exaggerate the differences in how societies understand politics or perceive values. Nor should we assume a stubborn, unchanging world. But we also must not assume that these differences do not exist or are immaterial—or that the power of ideas is always revolutionary rather than evolutionary. Without forgetting the importance of values, we need to be wary of a strategy that prioritizes “defending” them, as that approach will likely backfire. Beating people over the head with the idea that they are insufficiently virtuous can only cause resentment. Change may well come; if it does, it will be gradual.
Notes
1. The 1990s-era Wars of Yugoslav Succession.
2. Of course, the wars ceased to be quite so tidy and one-dimensional as the fights dragged on.
3. Tony Judt, Postwar: A History of Europe Since 1945 (Penguin Books, 2006), p. 5.
4. Tony Judt, A Grand Illusion?: An Essay on Europe (New York University Press, 2011), p. 23.
5. In Central Asia, myths of popular resistance are less widespread and the real dynamics that brought down the Soviet regimes are much easier to see clearly.
February 4, 2018
The Meaning of the Super Bowl
This Sunday, February 4, a major American cultural event will take place. The 52nd Super Bowl game will be played in Minneapolis. To decide the championship of American professional football, the New England Patriots will play the Philadelphia Eagles.
As in the past, this year the game will hold the attention of much of the nation, commanding a television audience larger than for any other event in 2018. The five most-watched American television programs in history have all been Super Bowl broadcasts. For that reason, commercial time for this telecast will cost more than for any other: around $5 million for thirty seconds of airtime. Advertisers lavish effort and resources on producing these brief messages, and surveys have shown that a small but significant fraction of the Super Bowl’s huge viewership tunes in principally to see the commercials.
Tens of thousands of parties will convene across the country to watch the game. Americans consume more food on the day of the Super Bowl than on any other day of the year except Thanksgiving, including an estimated 28 million pounds of potato chips, 1.25 billion chicken wings, and eight million pounds of guacamole. In addition to commerce and conviviality, Sunday will be a landmark day for another American obsession: gambling. In the United States, more money is bet on the Super Bowl than on any other event.
The first Super Bowl, played on January 15, 1967, was a far more modest affair. It was not, in fact, the first championship game of the National Football League (NFL), which had been operating since 1920. In 1960 a rival group of teams, the American Football League, came into existence; the two leagues agreed to merge in 1966, and the 1967 contest marked the first meeting of their respective champions. While all the seats for this Sunday’s game have long been sold, the venue for the first game, the Los Angeles Memorial Coliseum, was only two-thirds full. Tickets to that game cost $12; now they sell for as much as $5,000 apiece. Indeed, that initial game was not even called the Super Bowl: that honorific was only added for the third one, in 1969.
Thus, what was, more than half a century ago, an interesting but hardly earth-shaking addition to the American sports calendar has become something akin to a national holiday—“Super Bowl Sunday.” How and why did this happen, and what does the event’s cultural significance tell us about the country in which it is celebrated?
The Super Bowl’s timing, occurring as it does on the first Sunday in February, is propitious. It comes in the middle of winter, when alternative outdoor activities are sparse. It takes place over three and a half hours beginning shortly after 6 p.m. Eastern time, ending early enough for adults to go comfortably to work and children to school the next day. It is a single, decisive, and therefore maximally dramatic event. By contrast, to become the champion of the other major American professional sports—baseball, basketball, and ice hockey—a team has to win four games (of a possible seven), so single contests rarely carry the same weight as the Super Bowl.
Moreover, it is easy to take part in this particular national ritual: all that is required is a television set. Indeed, the Super Bowl is unimaginable without television: only a tiny fraction of the total audience witnesses it in person. The event therefore testifies to the importance of television in American life. And while television is often said to have a fragmenting, atomizing impact on society, directing the individual’s attention to the screen rather than to other people, the Super Bowl has the opposite effect: It not only provides an occasion for social gatherings, it supplies one of the most widely shared experiences in American life, the subject of countless conversations in homes, workplaces, and elsewhere, among people of all ages, occupations, and educational backgrounds.
The Super Bowl began at a time when the influence of television in American life had reached its zenith. Virtually every household had at least one set, and those offering transmission in color were just beginning to replace the black-and-white models. American viewers’ choice of programming was restricted: Three major networks (two of which, NBC and CBS, televised the 1967 game) dominated the airwaves, which gave them far greater influence than any single network enjoys today. Fifty years later, cable has created a world with hundreds of channels, and television has to compete for Americans’ attention with the internet and social media. Audiences for all television programs have shrunk—except in the case of the Super Bowl, where the other media reinforce rather than compete with it. What is it, then, about this particular annual television program that has made it so massively popular?
What is being televised is a game, and Americans, perhaps the most competitive people since the ancient Greeks, with a penchant for turning everything from architecture to eating into a contest, are inordinately fond of games. They are fond of playing them and they are just as fond of watching them: Games—sports—are a form of mass entertainment. They differ from the other principal form of mass entertainment, scripted drama, in three ways that help to account for their appeal. They are spontaneous. Unlike in films and theatrical productions, the outcome is not known in advance: No one bets on the outcome of a play or movie. They are authentic: Unlike film stars, athletes really are doing what audiences see them doing. And games are coherent. Unlike so much of life, they have a beginning, middle, and end, with a plot line and a conclusion that can be easily understood.
One development in football over the past fifty years has added to its appeal: There is now more scoring, in large part because of changes in the rules that have made it easier to advance the ball by the forward pass. An old adage has it that while defense—preventing scoring—wins championships, offense puts people in the seats. So it has been with football; but so it has been, as well, with the other American sports, which have also changed their rules for this purpose. The Super Bowl is America’s most popular televised event because football has become, over the last half century, America’s most popular sport. Why is that?
Football differs from the country’s other major team sports in that it has violence at its heart. The game’s basic activities are blocking (trying forcibly to move an opposing player or impeding his progress) and tackling (knocking to the ground the player in possession of the ball). Football’s violence is organized, indeed choreographed, rather than random or purely individual. It therefore resembles a very old form of organized violence: war. Football is a small-scale, restrained, relatively (but not entirely) safe version of warfare. The tens of millions of people who will watch the Patriots play the Eagles are the very distant descendants of the Americans who, in 1861, brought picnic baskets to northern Virginia to watch the First Battle of Bull Run.
From a distance, a football game resembles a pre-modern battle: two groups of men in uniforms, wearing protective gear, crash into each other. Like most military battles, football is a contest for territory, with each team trying to advance the ball to the opposing side’s goal. In football, as in war, older men draw up plans for younger, more vigorous men to carry out: The sport’s coaches are its generals, the players its troops. Football teams mirror the tripartite organization of classical armies: the beefy linemen correspond to the infantry; the smaller, lighter players who actually carry the ball are the equivalent of cavalry; and the quarterback who advances the ball by throwing it through the air and the receivers who catch it are the sport’s version of an army’s artillery.
Football borrows some of its vocabulary from the terminology of war. A forward pass far down the field is a “long bomb.” The large men arrayed against each other along the line of scrimmage, where each football play begins, are said, like the soldiers on the western front in World War I, to be skirmishing “in the trenches.” An all-out assault on a quarterback attempting to pass is a “blitz,” taking its name from a comparable German tactic in World War II—the blitzkrieg, or lightning war.
War was once considered a normal, natural, and even, under some circumstances, an admirable part of social and political life. That is no longer the case, which perhaps helps to account for football’s popularity: The game is the one remaining socially acceptable form of organized violence.
War has not disappeared, of course, but in the United States it has changed in yet another way, which also contributes to the appeal of football. In traditional warfare individuals confronted each other directly and did battle with their personal weapons. Today the American military has become a high-tech organization that does some of its fighting impersonally and at very long range. The controller sitting in a shed, maneuvering a drone thousands of miles away by remote control, has partly replaced the warrior armed with his sword or battle-ax or bayonet. The result of a conflict waged in this new way depends not only on the bravery of a nation’s soldiers, but also on the ingenuity of its engineers.
Football, however, preserves some of the features of war before the advent of sensors and lasers. It is, as war was in the past (and sometimes still is), a test of will and skill. The traditional martial virtues, which societies admired and individuals sought to emulate—persistence, discipline, grace under pressure and, above all, courage—still matter in football. In this way Super Bowl Sunday, like other American holidays such as the Fourth of July or the birthdays of George Washington and Abraham Lincoln, honors the past by celebrating, implicitly, human qualities that appear, in 2018, to have been more common in the past than they are in the present. Football players, like those who will take part in Super Bowl LII, become heroes for some of the same reasons that Alvin York and Audie Murphy, highly decorated American soldiers in World Wars I and II respectively, were regarded as heroic.
Football players do not face death on their field of conflict, but the sport does share, up to a point, war’s dark side. That shared dark side gives reason to wonder whether the Super Bowl will have the same cachet fifty years from now.
Like war, football is a dangerous activity. The multiple collisions between large, powerful men take a toll. Broken arms and legs and torn ligaments are occupational hazards for football players. Many of the participants in Sunday’s game, who will, by the time their careers end, have played for a decade or more—in high school and college before joining the professional ranks—will ultimately suffer from chronic arthritis.
In recent years, moreover, a far worse kind of injury has come to be associated with the game: brain damage. Examinations of the brains of deceased players, many of whom displayed signs of mental illness when alive and some of whom committed suicide, have shown evidence of chronic traumatic encephalopathy (CTE), a degenerative disease that leads to memory loss, aggression, confusion, and depression.
A number of retired players sued the NFL on the grounds that it had not adequately warned them of the risks of the game it employed them to play. The suit was settled out of court, with the League not conceding any liability but making payments to the players. The settlement will surely not prevent future suits.
Moreover, if further research confirms the suspected link between the game and CTE, football may meet the same fate as boxing or smoking—once popular activities that remain legal but are now strongly discouraged and take place on a far smaller scale than in the past. The more persuasive the evidence connecting football with brain damage becomes, the more reluctant parents will be to allow their sons to play the game, which will restrict, perhaps dramatically, the pool from which professional teams draw their players. That pool will be severely restricted, as well, if the school districts that sponsor high school football and the universities whose teams play each other at the collegiate level are forced to take out expensive insurance policies against the claims that players or their families may make against them. In an era of straitened finances, local school boards and the state legislatures that fund most football-playing universities may simply refuse to pay the premiums. In that case, while football may not die out completely, it will surely wither.
The proprietors of football at all levels will try—indeed, are trying today—to make the game safer for the players. If they do not succeed, however, Super Bowl 102, in 2068, will be a much-reduced version of the spectacle that will unfold this Sunday—that is, if the 102nd Super Bowl takes place at all.
A version of this essay was published in these pages two years ago.
February 2, 2018
Kenya’s Democratic Drift
On Tuesday afternoon, Raila Odinga, the portly and morose torch-bearer of Kenyan opposition politics, made history. In front of an audience of thousands of supporters in Nairobi’s Uhuru Park, the leader of the National Super Alliance (NASA) opposition coalition and longtime rival of the Kenyatta political dynasty was sworn in as president. It was the culmination of his life’s work, he told his audience, describing “a high calling to assume the office of the people’s president of the Republic of Kenya.”
The only problem is that the actual President of Kenya had already been sworn in last November. Uhuru Kenyatta, Odinga’s political arch-nemesis, began his second term that month after winning a widely boycotted October election—itself a rerun of a highly suspect August election which Kenyatta had won before seeing his victory overturned in a historic Supreme Court ruling. Ridiculous as it may seem, Tuesday’s ceremony was neither a farce nor the opening shot of a civil war—or, at least, we should hope not. An exercise in assuaging Odinga’s behemoth of an ego? Yes, in part, but this demonstration was also a massive rebuke of October’s election, a clear repudiation by millions of Kenyans—if perhaps not a majority—of this oft-lauded nation’s electoral system. Kenya has been in uncharted waters since the nullification of the August poll, but the bizarre events of Tuesday make clear that we are witnessing a new and troubling epoch for an already troubled democracy.
If the opposition’s tactics seem absurd, counterproductive, or even dangerous—albeit not strictly illegal, given the symbolic nature of the ceremony—their frustration is understandable. Kenyatta’s initial victory in August was genuinely suspect (hence the media voices prematurely celebrating the Supreme Court’s nullification), and the government refused to make any significant changes to the electoral commission between the August and October polls. One senior member of the electoral commission fled to the United States a week before the October rerun after receiving death threats. Then, two days before the October poll, a Supreme Court judge’s bodyguard was shot in Nairobi. The government reportedly refused to grant the justices extra security following the shooting, and unsurprisingly the Court failed to reach quorum the next day for a vote on delaying the election, as several justices stayed away out of concern for their safety. The election thus went ahead in an atmosphere of trepidation and amidst a nationwide opposition boycott. Kenyatta won with a handy 98 percent of the vote, but turnout was only around 38 percent, roughly half that of the initial poll, and the opposition immediately decried the results as invalid.
After months of overwrought, ethnically charged political rhetoric and deadly clashes between opposition protesters and police, many Kenyans were skeptical that the opposition would proceed with their plans to anoint a “People’s President.” Kenyatta did what he could short of arresting Odinga to stymie his rival’s ambitions. Hours before the ceremony, the government officially labeled the National Resistance Movement, the popular wing of Odinga’s NASA opposition alliance, a criminal organization, and Kenyatta’s chief ministers had previously stated that a mock inauguration would constitute high treason, a crime punishable by death. It came as a huge relief, then, that the government allowed the proceedings to take place, opting instead for a media blackout of the event (but crucially, not an internet blackout). Given the Kenyan police’s extraordinarily high rate of extrajudicial killings and their brutal suppression of protests over the past few months, Nairobi undoubtedly avoided a bloodbath on Tuesday. In another promising sign, the Supreme Court temporarily suspended the government’s shutdown of three TV stations, although the government has yet to comply, much to the anger of millions of Kenyans for whom the evening news is a time-honored ritual.
The hope of many Kenyans has been that Kenyatta will simply ignore Odinga and his “Resistance.” The General Secretary of Kenyatta’s Jubilee Party simply laughed when asked by reporters about Odinga’s mock inauguration, and party stalwarts have repeatedly said that they feel no need to engage with NASA MPs to achieve their agenda. While Jubilee is just short of a majority in Parliament, it maintains majorities both in the Senate and among the county governors, and it is wealthy and powerful enough to easily influence the smaller independent parties in Parliament. Further, the fact that the National Resistance Movement is now “criminal” does not necessarily mean the government will take any measures against NASA (besides, actual criminal movements often do all right in Kenya).
Unfortunately for Kenyatta, NASA is a political force to be reckoned with, and ignoring it is not a tenable strategy so long as it continues to make noise. Indeed, no one seems to be ignoring the opposition at present. On Wednesday morning, Kalonzo Musyoka, a senior NASA politician who had been billed as Odinga’s new deputy, reported that he had survived an assassination attempt at his Nairobi home. The police confirmed that a non-lethal stun grenade had been thrown at the residence and also announced that they had discovered two bullets at the site, but they would not comment on whether the attackers intended to assassinate Musyoka or simply intimidate him. Also on Wednesday, a NASA MP who had helped administer the “oath” to Odinga was arrested for participation in an illegal assembly, although he was released on bail shortly thereafter. Then, on Friday morning, the opposition lawyer and self-declared “General” of the National Resistance Movement, Miguna Miguna, was arrested after a dawn raid on his home. Further, Jubilee MPs have begun removing their NASA colleagues from various committees in Parliament in legally questionable ways.
It is too early to judge the scale of the fallout from Odinga’s theatrics, but these recent developments suggest a gradual deterioration of the situation in a manner characteristic of Kenyan politics. Kenyan elections are undeniably driven by ethnic considerations, with the five largest ethnic groups—the Kikuyu, Luhya, Kalenjin, Luo, and Kamba—each generally voting as a bloc. In one notable and tragic instance, in 2007, a disputed election led to a brief outburst of ethnic violence nationwide—much of it choreographed by leading politicians—which officially left around 1,500 people dead, though most Kenyans believe the actual toll to be higher. Yet the two elections since 2007, this one included, have not catalyzed anything close to that level of intercommunal violence. Violence between opposition protesters and the police has killed between 50 and 70 people since August—hardly an insignificant number—but we should not assume that this week’s developments will be the spark that ignites a conflict, as several factors make Kenyan politics distinct from the politics of other countries where ethnicity is paramount.
For one thing, in Kenya the oligarch class is composed of businessmen. By contrast, in failed states beset by ethnic conflict, such as the Democratic Republic of Congo or South Sudan, the oligarchs are warlords, while in police states like Eritrea or Ethiopia they tend to be military officers. When disputes arise in the DRC or South Sudan, the warlords duke it out with their militias, and the civilians caught in the crossfire pay the biggest price. In Ethiopia, the government responds to protests in Oromia by simply shutting down all internet and media. (I was present for one such indefinite blackout last summer; even prominent businessmen knew better than to complain publicly.)
In Kenya, the politicians are generally too business-savvy to risk such destabilizing behavior, or at least they have been more cautious since 2007. They may try to buy elections (with varying degrees of success), and they may tolerate or even encourage a degree of violence around election time in order to raise the stakes, intimidate their opponents, or let their constituencies vent. But as East Africa’s largest economy, and one that is increasingly digitalized, Kenya needs security and the free flow of information in order to thrive. All the political stakeholders, including Kenyatta and Odinga, understand this. Indeed, both men played key roles in instigating violence in 2007, only to stop when they realized how dangerously unmanageable the situation had become. This is why the government knew better than to shut down the internet this week, and why Kenyatta’s decision to impose a media blackout on the mock inauguration was so risky—and why it could very well backfire given popular resentment of the move. Similarly, the alleged assassination attempt on Kalonzo Musyoka is deeply troubling, but it bears all the signs of mafia-esque intimidation rather than a concerted effort to eradicate the Kenyan opposition.
None of this, of course, is good news. The situation as it stands represents a clear decline in Kenya’s political fortunes from a year ago, even if things are not as desperate as they were in 2007. But by no stretch of the imagination does a failure of democracy in Kenya entail the cataclysm and ruination most Western readers envision when they read of failed states in Africa. There will be no bloodthirsty Derg nor any equivalent of the Rwandan genocide in Kenya, and it strains credulity to suggest that Nairobi will ever become Juba. An African democracy, including one that has always been highly flawed, can die as slow and unspectacular a death as any other democracy. The judiciary attempts to check the executive but is rebuffed, because it alone cannot make a government function fairly. An incumbent President then wins a boycotted election, entering office with a weak mandate but all the powers of the executive, including control of the security services. Voters lose faith in their institutions, and many begin to begrudgingly accept a corrupt strongman who at least keeps chaos at bay and the wheels of the economy turning. What is the point of free and fair elections anyway, many will ask, when neither side is willing to accept even a legitimate defeat?
In other words, Kenya could become more like its neighbor, Uganda. Yoweri Museveni is no Idi Amin, but his Uganda is one in which the opposition is constantly harassed, the security services are unaccountable, elections are held on time—if unsubtly tilted in the incumbent’s favor—and Parliament’s role alternates between that of a rubber stamp and a fight club. Odinga already bears a striking resemblance to Kizza Besigye, the perennial Ugandan opposition candidate who also once swore himself in as “People’s President.” Besigye has been detained multiple times and can hardly leave his house without incurring some degree of police harassment, but, much like Odinga, he is far too popular to remove from the scene or prevent from politicking. And just like many of their Ugandan neighbors, it seems that Kenyans are increasingly learning to treat their politics as a practical joke, if a cruel one.
For Kenya, a shambolic and superficial democracy is much more conceivable in the near term than any ethnic conflagration that could grab global headlines and elicit temporary hand-wringing from Western luminaries. But for a country once prized as an emerging democratic hope, such a scenario is concerning enough in its own right.
February 1, 2018
Happy Imbolc!
Groundhog’s Day, February 2, as Americans and Canadians know but most non-North Americans may not, is all about Punxsutawney Phil up in western Pennsylvania predicting the weather for the next several weeks. No one takes this seriously as meteorological prognostication, although, truth be told, old Phil out-predicts the folks on television as often as not. And, of course, the terrific 1993 Harold Ramis movie, using Phil and his annual antics as a backdrop, starring Bill Murray and Andie MacDowell, has become a classic, forever altering the aura of the phrase “Groundhog’s Day.”
Some people profess to see deep spiritual meaning in that movie, something sort of vaguely Buddhist I think, about reincarnation or parallel universes or some such subject I neglected to study in college. So I don’t know about that. But what’s interesting to me is that Groundhog’s Day did not start out as the tall-tale, pop-folk folderol it is today. It started out associated with—as is most everything of this sort, even Valentine’s Day and Halloween—the spiritual. Groundhog’s Day goes back to the Old Country, otherwise known for practical purposes as Europe but really meaning, most of the time, Britain. But it goes back to exactly what?
February 2 is a significant date in the Christian calendar. It’s Candlemas, which is also known, with slight variations according to religious tradition, as the Feast of the Presentation of Jesus at the Temple. But the Church calendar is coincidental with—and hence very likely an overlay on—much older Celtic agricultural observances. In this case, Groundhog’s Day has to do with a very ancient, astronomically linked celebration called Imbolc, which later became Brighid’s Day and even later, after the Christianization of the British Isles, Saint Brighid’s Day.
There is a great deal of lore and legend associated with Imbolc, much of it involving Cailleach, the hag of Gaelic tradition. And yes, that lore and legend very much includes weather prognostication and careful observation of the emergence from hibernation of badgers and snakes. Imbolc is about halfway between the winter solstice and the spring equinox, and markings on ancient megaliths testify to its origins in astronomical observation. It was a time thought to be a harbinger of spring on account of the onset of ewes’ lactation in expectation of spring lambs, and the blossom setting of certain plants, principally the blackthorn (itself associated with much lore).
Here is how the British Almanac of 1828, at page ten, describes the matter, and you will see right away the connection between Candlemas and Groundhog’s Day (as well as note the association of New Year’s Eve with the Festival of the Circumcision):
Our ancestors had a great many ridiculous notions about the possibility of prognosticating the future condition of the weather, from the state of the atmosphere on certain festival days. The Festival of the Circumcision (January 1) was thus supposed to afford evidence of the weather to be expected in the coming year. For St. Vincent’s Day (January 22) . . . . The Conversion of St. Paul (January 25) was . . . Candlemas day (February 2) supplied another of these irrational inferences from the weather of one day to that of a distant period:
If Candlemas day be fair and bright
Winter will have another flight;
But if Candlemas day be clouds and rain
Winter is gone and will not come again.
In other words, if Phil sees his shadow (“fair and bright,” as the old poem has it), we’re icily in for it; if not (“clouds and rain”), then not.
The much older Scottish Gaelic verse goes (in translation) like this:
The serpent will come from the hole
On the brown Day of Bríde,
Though there should be three feet of snow
On the flat surface of the ground.
As the British Almanac quote shows, many days that we in America today rarely take heed of were once believed to be predictive of this and that. We have no secular American equivalent for St. Vincent’s Day, let alone the Conversion of St. Paul. For some reason, however, Candlemas translated into Groundhog’s Day has stayed with us—except that most likely it is Imbolc or Brighid’s Day that has stayed with us, carried to the New World by Scottish and Irish immigrants. That is how we got from poetry at Candlemas in medieval Britain to television news reporters converging on Punxsutawney, Pennsylvania, to record Inner Circle guys in black top hats and tuxedoes talking earnestly to an allegedly 123-year-old groundhog.
You can’t make stuff like this up, folks. Happily, we don’t need to.
This is a revised version of the essay published on February 1, 2013.
The U.S. and Turkey: Past the Point of No Return?
U.S.-Turkish relations have been deteriorating for some time. But until recently, no one would have thought that the American and Turkish militaries, closely allied since the 1950s, could end up confronting each other directly. Yet in northern Syria today, that is no longer unthinkable.
In mid-January, to forestall U.S. intentions to build a “Border Security Force” composed mainly of Syrian Kurdish fighters, Turkey launched a military operation in the Kurdish-controlled Afrin enclave in northwestern Syria. On January 24, Turkish President Recep Tayyip Erdoğan expressed his determination to move beyond Afrin into other parts of northern Syria, mentioning specifically the town of Manbij, where U.S. forces are deployed alongside Kurdish YPG troops. Turkish officials warned the United States to sever its ties to the Kurdish forces, which Turkey considers a terrorist group. This led President Donald Trump to tell Erdoğan to “avoid any actions that might risk conflict between Turkish and American forces.”
The collision course Ankara and Washington are on is making any notion of a Turkish-American alliance increasingly hollow. If a point of no return is to be avoided, both sides will have to rethink their priorities, and begin to build trust. That process can begin with an honest appraisal of how we got to this point, with America and Turkey on the verge of coming to blows.
In the United States, much of the blame has naturally been laid at the feet of Erdoğan, the headstrong and authoritarian Turkish President. To American eyes, it is easy to see how Erdoğan’s growing intolerance of dissent goes hand in hand with an increasingly adventurist foreign policy that directly challenges American interests. Yet while Erdoğan is part of the problem, its full scope goes far beyond a single individual. The real story of the past several years is how the Syrian and Kurdish issues have interacted with Turkish domestic politics to pull Ankara and Washington apart.
Turkey, Syria, and the Kurds: A Long Story
For a variety of reasons ranging from water distribution to border disputes, Turkey and Syria were archenemies during the Cold War. Even then, the Syrian and Kurdish questions were interrelated: Hafez al-Assad provided safe haven to the leadership of the Kurdish separatist PKK, which Turkey, the European Union, and the United States all rightly considered a terrorist organization. After the Cold War, the threat hardly abated: From training camps in Lebanon’s Syria-controlled Bekaa Valley and bases in northern Iraq, the PKK mounted an increasingly sophisticated campaign of terror targeting the Turkish state and Turkish civilians in the early 1990s.
Herein lies the seed of Turkish-American discord: While Turks had no love for Saddam Hussein, Ankara and Baghdad had cooperated quite effectively against the PKK. By contrast, it was the American intervention in Iraq, and the subsequent creation of a de facto Kurdish state in northern Iraq, that allowed the PKK to establish a foothold in the mountainous areas bordering Turkey. This generated frustration, but America was still helping Turkish efforts to fight the PKK. By the mid-1990s, Ankara had mounted numerous military operations on Iraqi soil to manage the problem. In 1998 Turkish threats of military action forced Assad to expel the PKK and its leader, Abdullah Öcalan. With American and Israeli assistance, Turkey was eventually able to apprehend Öcalan in Kenya and confine him to the prison island where he remains today.
By the time Erdoğan was redesigning Turkish foreign policy in the mid-2000s, Syria occupied center stage. It was Turkey’s conduit to the Arab Middle East, where Erdoğan wanted to play a bigger role. The objective was to turn Syria from an adversary into a vassal—essentially replacing Iran’s role for the Assad regime. Yet these plans came to naught with the onset of the Arab upheavals of 2011. Those events touched a sectarian and ideological nerve among Erdoğan’s Islamists: They saw in the upheavals the impending crumbling of the post-Ottoman order in the Middle East, and a historic chance to impose a new order led by the Muslim Brotherhood under Turkish tutelage. This led Erdoğan to support the opposition against Assad, and in particular to help arm the Free Syrian Army components that were close to the Muslim Brotherhood.
Lacking a deep understanding of the regional dynamics, however, Ankara miscalculated. Evidently, Erdoğan and his then-Foreign Minister Ahmet Davutoğlu thought the Assad regime would fall much as Qaddafi’s had in Libya. But they underestimated both Tehran’s commitment to the Assad regime and Assad’s ability to counter Turkish moves. In July 2012, the Syrian regime effectively ceded the northeast of Syria to the Syrian Kurdish YPG forces that are aligned with Turkey’s archenemy, the PKK.
This move had deep implications for Turkey. As the Syria conflict turned into a quagmire, the rise of a Kurdish entity emboldened Kurdish nationalism in Turkey itself, thus sabotaging Erdoğan’s attempt to negotiate with the imprisoned PKK leader from a position of strength. For Turkey, the biggest threats in Syria were the PKK-aligned PYD and the Assad regime. The Sunni jihadis fighting the regime were seen less as a problem than as an asset: Turkey’s initial protégés on the battlefield had turned out hopelessly inept, leading Ankara to support increasingly radical factions, including homegrown Syrian jihadi groups like Ahrar al-Sham and the Nusra Front, while turning a blind eye for some time to ISIS’s use of Turkish territory as a rear base for its establishment of a caliphate in Syria.
Thus, American and Turkish interests began to diverge. Obama and Erdoğan had initially coordinated closely on Syrian matters, with Turkey calling for an American intervention to topple Assad and planning to be America’s subcontractor in Syrian affairs. Disagreements were initially minor, as when Secretary of State Hillary Clinton sought a much more broad-based opposition coalition than the Muslim Brotherhood-dominated version boosted by Ankara. But gradually, America’s main objective shifted from overthrowing Assad to containing and combating the ISIS caliphate. This, in turn, pushed the United States into the arms of the Syrian Kurds, who fielded the only force in Syria both willing and able to fight ISIS. Meanwhile, Americans were growing increasingly suspicious of Turkish covert support for jihadi factions in the war.
Domestic politics now intervened to worsen matters: In 2013, the repression of the Gezi Park demonstrations that began in Istanbul but spread across Turkey wrecked Erdoğan’s international image. A disappointed President Obama now essentially stopped talking to Erdoğan. Meanwhile, the split between Erdoğan and his erstwhile allies in the Fethullah Gülen movement intensified into an open and direct conflict. Erdoğan, who was growing increasingly conspiratorial, saw an American hand behind both Gezi and the Gülen movement, whose leader, he believed, steered a vast network of supporters from his home in the Pocono Mountains of Pennsylvania.
To counterbalance the Gülen network, Erdoğan rehabilitated, then struck up an alliance with, the neo-nationalist, America-skeptic officers within the Turkish military who had been purged in previous years. By 2015, this alliance led him to end talks with the Kurds and adopt the military’s preferred option: a renewed reliance on force to destroy the PKK inside Turkey. This had the added benefit of shoring up nationalist support for Erdoğan, making his transition to a presidential system possible. His new friends also happened to buy fervently into the notion that America’s aims in Iraq and Syria included the promotion of Kurdish nationalism, and that this policy envisaged, in the long term, the breakup of Turkey itself. Unfortunately, it is increasingly clear that Erdoğan himself bought into this conspiracism.
American Errors
It goes without saying that America’s dithering in Syria has been a major factor in Turkey’s growing suspicion of American intentions. As noted, that suspicion started with the creation of autonomous Iraqi Kurdistan in the early 1990s. It intensified with the Iraq War in 2003. And it has reached a boiling point with the conflict in Syria. In all three cases, Turkey has entertained the notion of partnering with America, but has ultimately seen America take steps that undermine Turkey’s interests and security.
Americans frequently look back to the Presidency of Turgut Özal as the golden age of Turkish-American relations. Özal, indeed, supported America’s war against Iraq, provided America with the use of the Incirlik base in southern Turkey, and closed pipelines delivering Iraqi oil to Turkey. But he did so at great cost: In 1990, both the chief of general staff and the foreign minister resigned in protest against Özal’s Iraq policies. In subsequent years, the economic costs to Turkey were estimated in the billions of dollars, not counting the rising PKK insurgency, which would hardly have been as intense had Baghdad remained in control of northern Iraq.
These matters were very much on the minds of Turkish leaders in late 2002, when the George W. Bush Administration came calling to enlist Turkey’s help to invade Iraq once again. Immense pressure was brought to bear on the newly elected AKP government—formally run by Abdullah Gül, because Erdoğan had yet to rid himself of a ban prohibiting him from political activity. The Turkish military remained far from enthusiastic, and a parliamentary vote in March 2003 failed to approve the use of Turkey’s territory for a U.S. land invasion. This debacle sent Turkish-American relations into a tailspin, fostering lingering resentment between what had been the core of the relationship: the respective military leaderships of the two countries. While Turkey’s various power brokers mishandled the matter, there was enough blame to go around: U.S. officials largely failed to provide Turkey with an incentive to support American plans in Iraq.
From Ankara’s vantage point, the main consequence of America’s invasion was that the PKK, sensing an opportunity, broke a long-standing ceasefire and began operations on Turkish soil again. America, preoccupied with Iraq, did little to mitigate this, and even went as far as apprehending Turkish special forces officers in northern Iraq, generating fury across the Turkish political spectrum. Meanwhile, Iran was actively cooperating with Turkey in cracking down on the Iranian PKK affiliate, PJAK. Ironically, to most Turks Iran now seemed a better ally against terrorism than the United States.
Against this background, it may seem surprising that Erdoğan actively encouraged an American intervention against Assad, while his population and much of the Turkish elite were largely opposed. But at the time, Erdoğan thought he could use American cover to implement his vision of a “moderate Islamist” order in the Middle East under Turkish leadership. This is how Erdoğan interpreted Obama’s support for the Arab upheavals.
Yet over a few months of 2013, Erdoğan came to revisit this assumption. The starting point was the Gezi protests of May and June, followed in early July by the removal of the Muslim Brotherhood regime in Egypt, in which Erdoğan had invested heavily. Turkish fury at America’s equivocation on the coup (which Turkey’s Islamists equated with the Gezi protests) was exacerbated only weeks later by Obama’s Syria red line controversy. It was now clear that the United States was not going to play along with Erdoğan’s regional plans. Instead, due to a combination of domestic and foreign factors, U.S. actions in the Middle East came to be viewed as directly antithetical to Turkey’s vital interests.
Indeed, the trigger for the current crisis was the American decision to create a largely Kurdish “border security force” of over 30,000 personnel in northern Syria. There is no question that when the Pentagon developed that plan, Turkey was not the main consideration. It was at least as much about establishing a foothold in Syria to contain Iranian hegemony, and to ensure that ISIS was unable to regroup. But to the Turks, none of those factors is relevant: American actions are viewed against the background of the events of the past three decades, and through the prism of the leadership’s particular penchant for conspiracy. American officials are aware that Erdoğan blames Washington for involvement in the failed July 2016 coup against him, and are equally cognizant of the vehemence with which Turkey opposes America’s intimacy with the Syrian Kurdish forces. Erdoğan has lately even come to speak obliquely of America as the force behind ISIS, echoing Russian propaganda to that effect. His reaction should have been quite predictable: To Turks, it all follows a clear pattern of America working over three decades to establish a Kurdish vassal entity in the Middle East that undermines the security and integrity of Turkey itself.
Is There a Way Out?
Whether or not the current crisis is overcome, the longer trajectory of U.S.-Turkish relations is alarming. A close NATO ally has effectively become a cheerleader of anti-Americanism; its leadership views America as its primary adversary, accusing it of scheming to undermine Turkey’s very statehood. And unfortunately, as this analysis has sought to demonstrate, this is not due solely to the idiosyncrasies of an erratic leader. Erdoğan’s perspective on America’s role in Syria and Iraq is shared by broad segments of Turkey’s political spectrum.
The Turks have a point: American policies in Syria and Iraq have had the effect of undermining Turkey’s interests. And it borders on the absurd for the United States to “train” a PKK affiliate in Syria while hoping that this will not affect relations with a country it terms an ally. Any Turkish government will see this as a hostile act; Erdoğan enjoys the support of over 80 percent of Turks on this issue.
But the United States, too, has a point. The growing anti-Americanism of Turkey’s leaders—Erdoğan first and foremost—is not primarily a result of America’s Syria policy, or even of any of America’s actions. Rather, it is a result of an ideologically grounded, conspiratorial mindset that sees America as a force for evil in the world. It is not America’s fault that Erdoğan now appears to view everything from protests in Istanbul and coups in Cairo and Ankara to campaigns against his Qatari friends as efforts to undermine Turkey’s prestige and his own position of power. If this is what Turkey is becoming, why should America defer to Ankara on matters of regional security in the Middle East?
The problem exists, effectively, on two levels. First, American and Turkish objectives in the region have come increasingly to diverge. Were there trust and goodwill between leaders on both sides, this divergence could be overcome, or at least managed. Defense Secretary James Mattis has expressed understanding for Turkey’s security concerns, and Turkish Foreign Minister Mevlüt Çavuşoğlu seeks to convince Americans that his country is a better partner for America than the YPG. Left to their own devices, these leaders and others like them would probably be able to work things out. For example, Washington and Ankara could agree to the creation of a Turkish security zone on the Syrian side of the border. That would significantly calm tempers in Ankara.
But on another level, America lacks a strategy either for the region or for its relationship with Turkey. Without such a strategy, U.S. officials will likely bounce from crisis to crisis, seeking to contain the damage while being unable to address the underlying problem. And similarly, as long as Erdoğan and important forces in the Turkish leadership continue their anti-American pronouncements, the likelihood of anyone making a serious effort to rescue the relationship will diminish by the day.
In the final analysis, U.S. officials would be well advised to take the long view: How important is Turkey for American interests in Europe, Eurasia, and the Middle East from a 20-year perspective? If they determine that it retains the immense strategic value that many assume, they should focus on ensuring that average Turks find no reason to buy into the loony conspiracies peddled by some of their leaders, and instead view America as a reliable and positive force. That will require adjustments to the Administration’s Syria policies. In the meantime, Erdoğan’s government can be treated in a transactional way—as a troublesome force that needs, somehow, to be managed with that broader objective in mind.
The Trump Deception
Protectionism is like dope. First you get a brief rush, and then you crash. Junkies know this, but they go for meth, crack, and smack over and over again. So do trade warriors. They think they can enjoy the kick without having to pay the price down the line. They believe that trade walls are good for the nation, damn the evidence. “This time it is going to work,” they say, beguiling the credulous. But it never does, not for advanced economies as heavily enmeshed in world markets as is the United States.
Now Donald Trump is hawking the snake oil. But wait! Didn’t he sweet-talk the “globalists” at their annual Davos bash by cooing that “America first” does not mean “America alone”? Didn’t he tell them that America was wide “open for business”?
Sure, but as they say, actions speak louder than words. Just before hopping on Air Force One, Trump launched the first shots against two mighty American competitors, China and South Korea. He slapped a 30 percent punitive tariff on solar cells and panels while hitting washing machines with a 20 to 50 percent tariff, plus quotas. In an editorial, the Wall Street Journal minced no words. The headline read: “Trump Starts His Trade War.”
How did Trump justify his opening salvo? “Our action today helps to create jobs in America for Americans”—more for us, less for you! Before we consult trade theory, let’s savor the irony. The move on the solar front came in response to the complaints of two U.S.-based companies. One is Chinese-owned Suniva, which had filed for bankruptcy. The other is SolarWorld America, whose German parent had declared insolvency last fall.
So making America “great again” comes down to being nice to two foreign-owned companies from countries that are among America’s fiercest rivals in the global trading arena. In soccer, they call this an “own goal.”
Let’s play truth and consequences. In the United States, Korea’s washing machine giant LG reacted by announcing a $50 price hike for its laundry appliances. Goldman Sachs predicts higher prices in the range of 8 to 20 percent. So the first effect of the Trump tariff is to make American consumers poorer because pricier goods lower real income.
But do the duties at least enrich coddled companies and their workers? Initially, they may preserve profits and jobs. Now, look again. The two solar cell companies demanding protection used to have 3,200 workers. On the other hand, the entire solar industry, which assembles the stuff, designs the software, and puts the contraptions on roofs, employs 260,000 workers. If key ingredients like cells and panels become more expensive as a result of the Trump tariff, the price for the end product goes up as well.
Down go demand, revenue, and employment. The Solar Energy Industries Association (SEIA) predicts a loss of 23,000 jobs. Do the worst-case arithmetic: if you save, perhaps, 3,000 jobs at those two failing companies, you might lose nearly eight times as many in the industry. Not smart, to use Trumpspeak.
Back to Mr. and Mrs. Consumer. Say they were planning to install their very own power plant on their roof. But if prices go up, the project goes down. They will postpone the purchase or stick with the local utility. Hence, overall investment drops, and so does employment in the solar industry for roofers, mechanics, software designers and suppliers.
The same goes for the laundry room downstairs. Once LG and Samsung respond to the tariffs by raising prices, their American competitors like GE and Whirlpool are free to do the same. So consumers are hit once again.
Nor does this tale of woe end here. The snake oil of protectionism promises to make the country as a whole better off. But it doesn’t. It merely helps a small privileged segment—the beneficiaries of tariffs and import quotas. The many always pay for the few, and the country as such loses out in the global trade competition. Why so? If the prices of protected products like solar systems and washing machines rise, their U.S.-based makers will falter in the global market where leaner and meaner rivals push them aside. So still more job losses.
In the long term, the most insidious effect of privilege is indolence. Recall the old U.S. automotive industry protected by government regulations and popular tastes for gas guzzlers. At the beginning of Detroit’s long decline in the 1960s, folks just laughed at those funny Beetles and boxy Japanese contraptions trying to look like a stunted V-8. Detroit just kept building the same old stuff while the global competition worked on looks and technology. Now, only GM and Ford are left. The rest died or fell into foreign hands. Spare the rod of competition and spoil the industrial child.
For Donald Trump, the worst is yet to come. Start a trade war, and your rivals will retaliate, shutting out U.S. exports. In the tit for tat, he will not create, but destroy “American jobs for Americans.” Meanwhile, note yet another irony. The key target of the solar tariff is China. Yet China is only number four among foreign suppliers of solar cells. Hence, the profiteers of Trump’s opening move will be Malaysia, South Korea and Vietnam. Their exports will rise, creating zero American jobs. The same story holds true on the laundry front, where South Korea will fall back while Mexico takes up the slack.
The worst outcome is lurking in the wings. Trump has threatened retaliation against the “very unfair” European Union. If he goes through with it, Brussels will hit back. Who will lose out? Workers and consumers on either side of the Atlantic—all thanks to Donald Trump, the “very stable genius.”
January 31, 2018
Trump’s State of the Union Coup
It makes no difference what you think about Donald Trump: last night’s State of the Union address stands as a stellar example of brilliant political speech. For his purposes—mainly to keep, but especially to expand, his political base looking ahead to the midterms and to November 2020—the speech (though not necessarily the delivery in its entirety) was nearly flawless.
To understand why and how, you must first realize that crafting political language in all its many forms—of which the art of conceiving, writing, and delivering a speech is but one—is not a didactic exercise. It is an excursion in impression management or, if you like a rawer term, manipulation. As I invariably tell audiences and students when I am speaking from my book Political Writing: A Guide to the Essentials, in political writing—and in speechwriting especially—one should never “commit a truth.” This language is deliberately chosen to gather attention in order to suggest that saying something just because it’s true amounts to a crime in the “dark arts” of impression management. Truth has to serve an impression management function or it doesn’t belong there.
This is hard for a lot of academics and intellectuals to accept—so hard, in fact, that it rarely even occurs to them, leading many to misjudge the power of political language on account of a category error. This is perhaps one of the reasons why virtually no American political scientist took Trump’s chances of winning the Republican nomination, let alone the presidency, seriously.
This doesn’t mean that a political actor making a speech should go out of his or her way to lie, but it does mean that manicuring the truth, let’s put it, is not always off limits if it serves a noble purpose. And of course it is very easy for a political actor, no matter his or her views, to be persuaded of noble purpose, or at least a purpose nobler than the other guy’s, because it’s baked into the very nature of open political competition. Donald Trump served up 18 lies or misleading statements last night, as calculated—fairly accurately, it seems to me—by the Washington Post’s fact-checkers. That’s a lot for a speech of just more than 5,000 words, maybe even a record for State of the Union addresses if one is trying to count lies as a proportion of the entire text.
Did Trump or his speechwriters know they were peddling lies and varyingly misleading statements? Hard to know, but that’s not what matters in the assessment of an effective political speech. What matters is that the lies, misstatements, and over-the-line exaggerations have to be of a certain kind to work in an impression-management exercise. They must be of a sort whose refutation takes at least three times as many words as it took to fire off the original fib. That means, in turn, that the fibs have to nest in a context ambiguous enough to sound plausible without being actually true. When that’s the case, wordy and didactic refutation after the fact comes way too late to erase the impression left by the original “story,” in which each fib is part of a tapestry of impression management that makes its mark as a whole, not as a mere assemblage of sentences and phrases.
If the refutation involves numbers, or what passes for an actual, factual explanation, the effort is doomed before it begins. As Mark Twain put it in A Connecticut Yankee in King Arthur’s Court, “There, there, never mind, don’t explain, I hate explanations; they fog a thing up so that you can’t tell anything about it.” That was true for politics in Twain’s time and remains true in ours when you realize that roughly three-quarters of the electorate has not graduated from a four-year college. The number quickly gets a lot larger when you factor in those who have graduated from a second- or third-tier college and those whose chosen majors put little or no premium on critical thinking or dealing with abstractions. In short, dear reader, if you’re reading this in The American Interest you may need to be reminded that, no, you are not a modal voter, and your infosphere differs in significant ways from that of the modal audience Trump was addressing last night. If that shocks you, it means you need to get out more.
Let us count the ways that last evening’s State of the Union address was brilliant.
It started with a skein of talk about natural disasters and tragedies such as the Las Vegas shootings. Why is this brilliant? Not just because of its emotive character, and not just because these are uncontroversial issues—no one, after all, outside of a psychiatric institution roots for a hurricane or a madman shooting at innocent people from a hotel window. It’s brilliant because dwelling on these kinds of tragedies creates an instantaneous community of souls. It enables the rest of the speech to proceed on the tacit basis of “we” instead of “him and me.” It works that way because everyone is a part of the story: Everyone experiences the weather; it’s small talk, and most ordinary quotidian speech is small talk.
Note the speech’s structure, too: simple vocabulary, short sentences, short paragraphs. (Yes, it’s true that Trump stumbled trying to pronounce the monosyllabic “scourge,” but hey, that’s a tough one for someone who doesn’t read much and is proud of it.) Even the most complex issues raised, like immigration or taxes, took up only a few sentences. There was absolutely nothing didactic about the speech, no words in it that did not point the listener’s emotions in the right direction.
Contrast that, if you feel like taking the time, with any of the State of the Union addresses delivered by Barack Obama, or for that matter, by George W. Bush or Bill Clinton before him. The difference is obvious, and large. Trump can pull off this level of simplicity in part because no one really expects more from him. But pull it off he did.
Note also that there are no real numbers in the speech, only symbolic/iconic numbers, most of which turn out to be either literally false or out of context. Doesn’t matter: Audiences like to hear some numbers because it lends an air of gravitas to the speaker, but they don’t like to wade through a thick statistical fog—and if you lose your audience once in a political speech, you’ve emotionally lost them for the duration. You’ve broken the spell.
Trump appealed to a fine balance of fear and nostalgia in the speech. But even the fear is managed so as to convey the idea that the source of the fear is being overcome, being bested. This is Reaganesque, “morning in America” territory, and it works. Most voters do not like to hear doom and gloom about threats to America or about their own circumstances; if they did, then Jimmy Carter’s infamous “malaise” speech would have been recognized, at the time and since, as one of the most profound and philosophically challenging speeches delivered by an American President since the Civil War—which is of course the reason it flopped so badly and hobbled him politically. Most people like happy endings, and they especially like to be able to visualize themselves as being included in those happy endings. It’s a June Allyson film, adapted to politics. You may find this sort of thing cloying and simpleminded, just as I laughed hysterically through Love Story years ago and nearly got thrown out of the theater. But most people don’t find this sort of thing cloying or simpleminded, and anyway, as Max Frankel once wrote, “Simplemindedness is not a handicap in the competition of social ideas.” To the contrary.
This State of the Union speech was not different in essence from Trump’s wild campaign rhetoric, but last night all the campaign wildness was toned down. We heard nothing about “carnage.” We heard instead about “MS-13.” This language carried exactly the same message, of course, but without the sharper edge; that’s what you do when you want not just to reassure your base but also to expand it.
Meanwhile, remember how the Democrats looked when the camera panned across them during the speech? They all looked like they were in a dentist’s chair with their thumbs firmly up their butts. They did not applaud, they did not smile, they barely moved at all, not even to blink. And what impression did this make on viewers? That the Democrats are downers, no fun, barely human at best. From a political image point of view, Trump boxed their ears last night, and there was nothing they could do about it but glumly play their part.
Above all, this was a television performance speech. I have already made the point that Trump’s is a quintessential television presidency. Last night’s speech illustrated the point in spades. We have grown used to the soap opera interludes in State of the Union performances, in which Joe Hero and Jill Heroine stand in the balcony, to be lauded and applauded by those present beneath the Capitol dome, but more importantly also by those sitting in front of their television screens across the land.
This has been going on for years, and the reason is clear. Once the political consultants got hold of these political guys, they persuaded them that concretizing their points in personal narratives, with images of real people broadcast far and wide, was vastly more persuasive than making general points using non-visualized abstract language. But last night Trump smashed the soap-opera record to smithereens. I counted twenty Joe Hero-type name droppings, and that doesn’t even include the image-evoking language about the “Cajun Navy,” doctors, soldiers, policemen, and so on.
This is the Oprah model. The model’s unbreakable iron rule is that no abstract thought is permitted unless attended by human images that concretize the message the dark artist wishes to convey. This is, just by the way, how much of the programming “content” of television came fairly quickly to imitate the commercials, which is where the production money really goes, and where the manipulation is thickest, for obvious reasons.
And it works. It sells razor blades, bacon, toilet paper, and politicians alike. It works for politicians because modal Americans do not feel comfortable listening to general discussions about which they lack thick personal experience, but they like and (think they) understand stories. So that’s what a smart political engineer provides for them.
What about the policy substance in the speech? A lot of it consisted of gimmes. Who thinks the VA hospital system works well and doesn’t need improvement? No one who’s been paying attention. Who thinks the FDA has done a good job in allowing very ill Americans to try experimental therapies as a last resort? No one. Who thinks the Federal bureaucracy isn’t bloated, ponderous, and self-serving? Only people who are part of it—and, frankly, not even that many of them.
Plenty of people, too—some of them Democrats, independents, or Whigs like me—think that sequestration has harmed national security and that the defense budget needs boosting. Plenty of Never Trumpers also think that the individual mandate within Obamacare was unconstitutional, notwithstanding the weird, convoluted reasoning of the Supreme Court, and that there were and still are much better ways to stabilize the health care insurance pool.
Beyond the gimmes and the mini-points, the speech had three real deliverables, as we speechwriters, current and “recovering,” call them: taxes, immigration, and infrastructure. The discussions of the opioid problem, drug prices, the repeal of Obamacare’s individual mandate, and the short, non-specific foreign policy and national security passages were sprinkled nicely, along with the gimmes, around these three main areas.
So let’s look at the three. As to taxes, the new law is a plutocratic heist of the highest order. Those aspects, however, are opaque to most people, and deliberately so. The law does, moreover, contain elements that suffice to cheer fairly large numbers of middle- and lower-middle-class voters. A lot of these elements are deceptive, and many of the promises proffered are either false or won’t be kept. No matter; a lot of people will focus on what provides up-front gratification, and when the other shoe drops—if they ever hear it—it will make a noise too complicated for most taxpayers to care about.
As to immigration, what Trump outlined last night actually makes good sense to a lot of Americans, even those who do not especially like The Donald. It does not differ very much from what George W. Bush proposed back in January 2004. That proposal made sense to me, in perhaps the only case in which Bush got sideways from his base. But the Bush White House screwed up the sale, unfortunately; if it hadn’t, we wouldn’t be stuck in the mess we’re in now.
I think most Americans in the sane center believe in a path to citizenship for many undocumented aliens, believe in the need for better border security if not some idiotic physical wall, believe in reconfiguring immigration to aid the economy, and know that the family reunification aspects of the law have long been out of control. Many people who have devoted some time and effort to studying the problem also understand that, for all the many benefits immigration brings both economically and culturally to the United States, it has hurt many lower-income citizens, and that too much immigration too fast has exacted a toll on social trust.
These are not radical views, and they bespeak no racism or xenophobia; those who claim otherwise should join the demagogy sweepstakes along with a lot of lately loony Republicans. No, they are ideas that the Trump White House may be able to use to successfully triangulate, much as Bill Clinton did with welfare reform. Democrats may boast now that they will never defect in sufficient numbers to help Trump pass an immigration bill that looks like the one he described last night, but that’s what many Republicans said about welfare reform back in the day, too.
As to infrastructure, it’s foolish to throw money at fixing legacy infrastructure arrangements when so many opportunities exist to do so much better. The problem here is not the technology but the mis-organization of government at all levels for thinking, planning, building, and maintaining new ways of doing things. The same problem afflicted the Obama Administration’s so-called shovel-ready projects. It still hasn’t been solved, and it won’t be solved until we have the kind of leadership capable of understanding both the potential and the obstacles we face.
That said, no one I know thinks that the current, constipated, NIMBY-propelled way we do infrastructure is anything but catastrophic. The chairman of TAI’s own editorial board, Francis Fukuyama, is no more a fan of Donald Trump than I am, but his analysis of the “vetocracy” we all labor under obliges even a critic to give Trump’s language its due, however grudgingly. Just because Trump says something doesn’t ipso facto make it wrong. It’s not like everything in American public policy was so peachy before November 2016; if it had been, Donald Trump would never have gotten anywhere within belching range of the Oval Office. When Trump said last night, “Is it not a disgrace that it can now take ten years just to get a permit to build a simple road?” it positively galled me to have to agree. But I had no choice.
That doesn’t mean that I will ever vote for this man or anyone he goes out of his way to endorse for office. I agree with Philip Roth’s recent remark (notwithstanding my general distaste for him and his oeuvre) that Trump is an “ominously ridiculous commedia dell’arte figure of the boastful buffoon” and “the evil sum of his deficiencies, devoid of everything but the hollow ideology of a megalomaniac.” But most voters have never heard of Philip Roth, don’t know what a commedia dell’arte figure is, and, most important, don’t give a shit.
Trump may be all these things and worse. But what Roth and so many others seem not to appreciate is that this deeply insecure man has managed to deploy a range of coping mechanisms over the years in a way that has primed him for political success in the current dislocated American context. Head-spinning changes that upend many of our cherished inherited beliefs about ourselves form the vanguard of that dislocation, whose institutional dysfunctions have outrun the skills and imaginations of a self-satisfied, insular, and hollow elite in both major parties, as though they were standing still in a forced sprint for relevance.
Into the vacuum Trump has come, having done, almost certainly without knowing it, what many character actors have been known to do—people who come alive when in character, but who deflate to a personality-deficient blob when not in character. He has managed to export his demons into a television persona that, in the current American cultural environment, has hit the celebrity political jackpot—with last night’s State of the Union address the apex, so far, of his success. He is truly a man of his times but, alas, his times are poor in virtue and much real national political talent. That, at least, is not his fault.
How to Make Russia Sanctions Bite
On Monday night, the U.S. Treasury Department issued its long-awaited report on Russian oligarchs and Vladimir Putin’s inner circle. The report, issued in time for the Russian markets to absorb it before trading began on Tuesday, was required by Section 241 of the Countering America’s Adversaries Through Sanctions Act of 2017 (CAATSA). Meanwhile, the State Department chose not to designate any individuals or companies pursuant to Section 231 of CAATSA, which authorizes sanctions on those who do business with the Russian defense and intelligence sectors. That decision is consistent with the letter of the law, but designations under Section 231 should come soon if State is serious about turning up the heat on Russia.
The oligarchs and officials on Treasury’s list include both individuals who have already been sanctioned (22, to be exact) and individuals not yet sanctioned but with varying degrees of closeness to Vladimir Putin. For his part, Putin unsurprisingly called the action a “hostile step” while trying to diminish its importance at a campaign event. Russian opposition leader Alexey Navalny praised the report, saying it was a “realistic list describing what we would call the ‘Putin mafia.’ Obviously these people have different roles, but they are absolutely the backbone of a corrupt regime.”
Treasury Secretary Steven Mnuchin indicated at a Senate hearing on Tuesday that “there will be sanctions that come out of this [oligarchs] report.” Coupled with such statements from the Trump Administration, the report should serve as a catalyst for financial institutions to evaluate their exposure to the individuals on Treasury’s list. Likewise, those listed may begin to look at ways to protect their assets abroad, either by repatriating them to Russia or having family or business associates assume ownership.
The Russian government, for one, is happy to have its economic elite bring home their fortunes to invest in Russia. In a December meeting with a group of oligarchs, Putin laid out a plan in which oligarchs could return overseas capital through euro bonds issued by the Ministry of Finance. While Putin has for some time pushed for Russia’s elite to bring home their billions, doing so would ultimately lessen Russian financial influence abroad. Putin’s oligarchs play an important role in greasing the skids on issues of importance to Moscow in foreign capitals, a role more easily played when those individuals are key players in the host country’s economy. Diminishing that influence would be a win for the United States and Europe.
Treasury’s oligarch report also contains a classified annex, which may include additional individuals whose wealth does not surpass the $1 billion threshold for a public listing. It is important to identify this second tier, since family members and business associates may help the principal targets evade sanctions, perhaps by taking nominal ownership of their assets.
The classified annex should also help lawmakers assess the details of the relationship between Putin and those listed, including how they interact and other relevant information about the power dynamics at play. If the annex does not address these key issues, Congress should go back to the Administration and request more information.
To justify the State Department’s decision to hold off on sanctions, which was called into question by both Democrats and Republicans, a spokesman claimed that CAATSA itself is already “deterring countries from acquiring Russian military and intelligence equipment,” which is the law’s main purpose. After a classified briefing, both the chairman and ranking member of the Senate Foreign Relations Committee, Sens. Bob Corker (R-TN) and Ben Cardin (D-MD), seemed satisfied with the Department’s explanation, but both indicated that Section 231 sanctions should be coming sooner rather than later. Senator Cardin and nearly two dozen of his colleagues followed up with a letter requesting that the State Department provide certain information regarding the decision.
On the basis of publicly available information, it is hard to justify the State Department’s position that CAATSA itself is a deterrent: there are no clear examples of deals with Russian defense and intelligence firms that have been stopped outright, or that have fallen through over financing difficulties attributable to fear of impending U.S. government action. What we do see are countries such as Turkey, a NATO member, and Qatar, host of a key U.S. military base, moving forward with multi-billion dollar deals to acquire S-400 missile air defense systems from Moscow. They do not seem to be deterred as of yet.
To be clear, Section 231 does have a provision for delayed implementation, but both Republicans and Democrats will be displeased if they are not given sufficient updates on the success of the Administration’s behind-the-scenes efforts while those clearly continuing to do business with Russian partners escape sanctions.
While attention has focused heavily on the oligarchs list and Section 231, the most interesting report sent to Congress on Monday evening may be another classified document mandated by Section 242 of CAATSA. This provision required a report on the effects of expanding potential sanctions to include Russia’s sovereign debt and derivative products. For instance, if the United States moved to prohibit U.S. and foreign persons from investing in such debt, Russia would have a far more difficult time bailing out key sectors of its economy, particularly during downturns. Just this month, one of Russia’s sovereign wealth funds, the Reserve Fund, was completely drained and shuttered. The fund, which held $87 billion in 2014, became a key instrument for Putin to fix leaks in Russia’s economy. Targeting Russia’s ability to bail out its economy would be a serious escalation; the threat to do so is a powerful tool for Congress to consider.
What we know about Vladimir Putin is that he will continue to press and prod aggressively at the national security interests of the United States and our allies. He will keep seeking to undermine the global order and to position Russia and like-minded states to counter us wherever and whenever they see an opening. The Administration should not be shy about using the authorities granted to it in CAATSA.