Peter L. Berger's Blog, page 26
October 16, 2019
Defeat Iran with Methanol
The United States faces a dilemma in its dealings with Iran. The mullahs have become ever more provocative, not only seizing ships but recently going so far as to launch a strike that knocked out half of Saudi oil production (5 percent of the world’s). In principle we could easily retaliate in kind. Iran’s oil terminal on Kharg Island handles 65 percent of the nation’s petroleum exports. It is a sitting duck that could be transformed into a blazing inferno with a dozen cruise missiles. But doing so would set off an oil price spike that would inflict massive damage on the world economy. While American oil production has climbed over the past decade from 5.5 million barrels a day to 12 million, we consume more than 20 million barrels daily, making us the world’s largest oil importer. Moreover, many of our key trading partners, including Europe, Japan, and China, would also be hit hard, making things even worse. So no matter what mischief they make, we can’t hit Iran where it would really hurt. But that’s not even the worst of it, because the decision to retaliate might not rest with us. From the Saudi point of view, the Iranian attack was not the equivalent of the 1941 raid on Pearl Harbor. It was more like a Japanese attack that destroyed every factory west of the Mississippi. Can we really count on the Saudi response to remain restrained forever?
We, and our friends all over the civilized world, depend for our critical fuel supplies on a region that can well be described as an explosives-filled madhouse. This vulnerability has been evident since 1973. It should have been corrected long ago. It needs to be fixed now.
But how? Some say electric cars are the answer. But newsmakers though they may be, electric vehicles comprise less than 1 percent of global auto sales, barely register among trucks, and are essentially zero among aircraft and ships. While a substantial fraction of trains are electric powered, virtually all other transportation systems are nearly 100 percent dependent on petroleum-derived liquid fuels.
The answer is to open the transportation fuel market to methanol and its derivatives. Methanol is the simplest alcohol, with chemical formula CH3OH. It can be made from anything that is, or was once, part of a plant, including coal, oil, natural gas, biomass, cellulosic or plastic trash, or even CO2. It can be burned in automobile engines with trivial modification. It can also be readily transformed into a secondary product known as dimethyl ether (DME), a superior fuel for the diesel engines that power trucks, ships, and many trains.
Replacing gasoline with methanol made from coal would increase carbon emissions, but using methanol made from natural gas, biomass, trash, or CO2 would decrease them. At this time, the most economical way to produce methanol is from natural gas, as the world is currently awash with it, so much so that hundreds of billions of cubic feet of it are being flared globally every year. Turning trash into methanol could sharply reduce harmful ocean dumping. Creating a massive global market for methanol would allow the transformation of such wastes into resources.
However it is made, replacing petroleum fuels with methanol would improve air quality, as internal combustion engines burn much cleaner on methanol than on gasoline, emitting sharply reduced amounts of carbon monoxide, NOx, hydrocarbons, and particulates. It was for this reason that in the 1970s and 1980s the state of California initiated a pilot program that demonstrated the use of tens of thousands of methanol cars for smog reduction. The cars worked very well, cutting smog and satisfying drivers with their excellent pickup (methanol is 105 octane). They also demonstrated superior safety, since methanol fires can be put out with water. However, few outside the state government wanted to buy methanol cars, because there were very few places to fill up. The state solved this problem by getting the Ford Motor Company to develop the flex-fuel car, which could run equally well on methanol or gasoline. Such vehicles had no restricted-supply downside. Far from it: by offering customers fuel choice, they allowed drivers to shop for bargains, buying whichever fuel was cheaper at any given time. Unfortunately, during the 1990s the methanol advocates who had led the program all retired, leaving the flex-fuel cause to be picked up by the ethanol industry, since flex-fuel cars can run on that fuel as well. As a result, about 7.4 percent of American cars (20 million, compared to 1 million electric) are now flex-fuel, with a particularly large share owned in corn-growing states, where the ethanol cause has a lot of support. Flex-fuel cars have also become very popular in Brazil, where sugar ethanol is frequently cheaper than gasoline.
Currently, flex-fuel cars can be produced at identical cost to non-flex-fuel versions of the same vehicle: now that fuel injection is electronic and computer controlled, the only difference is the use of different seal materials to ensure methanol compatibility. If Congress passed a law requiring that all new cars and light trucks sold (not made, sold) in the United States be fully flex-fuel, able to run equally well on methanol, ethanol, or gasoline, within three years there would be about 60 million such vehicles on the road (20 million are already on the road, and about 13 million new cars and trucks are sold annually in the United States).
At that point, methanol pumps would start appearing at filling stations coast to coast, in response to the market. Since methanol is cheaper than gasoline on a miles-per-dollar basis (its current spot price is $1.03 per gallon, made from about $0.15 worth of natural gas, and it delivers about two-thirds the miles per gallon of gasoline), drivers of older cars would have a strong incentive to convert their vehicles to flex fuel, as such conversions can be done for less than $5 in parts and about $100 in labor. In this way, the U.S. fleet could be substantially flex-fuel in less than five years. Furthermore, since foreign carmakers would need to switch to producing flex-fuel models in order to sell in the United States, car sales worldwide would quickly shift to meet the flex-fuel standard.
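These numbers are easy to sanity-check. The sketch below is a rough back-of-the-envelope calculation, not from the original article: the gasoline price and the 30 mpg vehicle are illustrative assumptions, while the methanol price, the two-thirds mileage ratio, and the fleet figures come from the text above.

```python
# Back-of-the-envelope check of the fuel-cost and fleet figures above.
# ASSUMPTIONS (not from the article): gasoline at $2.60/gal, a 30 mpg car.

methanol_price = 1.03      # $/gallon, spot price cited in the text
relative_mileage = 2 / 3   # methanol delivers ~2/3 the mpg of gasoline
gasoline_price = 2.60      # $/gallon, assumed for illustration
gasoline_mpg = 30          # assumed vehicle efficiency

# Cost per mile on each fuel
gasoline_cost_per_mile = gasoline_price / gasoline_mpg
methanol_cost_per_mile = methanol_price / (gasoline_mpg * relative_mileage)
print(f"gasoline: ${gasoline_cost_per_mile:.3f}/mile")  # ~$0.087/mile
print(f"methanol: ${methanol_cost_per_mile:.3f}/mile")  # ~$0.052/mile

# Fleet arithmetic behind the "about 60 million in three years" claim
existing_flex_fuel = 20_000_000  # flex-fuel vehicles already on the road
annual_us_sales = 13_000_000     # new cars and light trucks sold per year
print(existing_flex_fuel + 3 * annual_us_sales)  # 59,000,000
```

Under these assumptions, methanol works out to roughly 5.2 cents per mile against 8.7 cents for gasoline, which is the sense in which it is "cheaper on a miles-per-dollar basis" despite its lower energy density.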
The strategic consequences of this transformation would be profound. By giving consumers worldwide fuel choice, such an “open fuel standard” would put a permanent competitive constraint on the price of oil. This would protect the world economy from destructive oil price shocks, as well as from price manipulation by OPEC and associated bad actors. It would destroy the strategic leverage enjoyed by Iran and Saudi Arabia, among others, forcing them to conform their behavior to international norms or face the consequences. It would relieve poor developing countries of the harsh regressive consequences of OPEC price fixing, allowing them to use more of their precious foreign exchange to buy technology. This would benefit us as well. U.S. oil producers would also benefit compared to their foreign competitors, because we produce much more associated natural gas per barrel and have much better pipeline systems for gathering it. The benefits for air quality, and thus public health, as well as for other environmental concerns, would also be marked.
As noted, methanol can also be made from CO2. This requires energy, which could come from nuclear, solar, wind, hydroelectric, or other emission-free sources. So, while transforming our vehicle fleet to flex fuel won’t solve the problem of rising global CO2 emissions, it will make the problem much more solvable.
It is sometimes said that “we are addicted to oil.” This is not accurate. We are not addicted to oil. Our cars are addicted to oil. Our vehicles are like people who can only eat sardines, who must suffer the consequences if there is a shortage of this one commodity. One is much better off being omnivorous.
To paraphrase Shakespeare, the fault is not in ourselves but in our cars that we are made vulnerable. Let’s do something about it.
American Withdrawal and the Future of Israeli Security
Near the close of Israel’s election campaign last month, Prime Minister Benjamin Netanyahu made a public commitment to extend Israeli law and jurisdiction to the Jordan Valley and the Northern Dead Sea immediately following the elections. This was understood as a pledge to annex what has been viewed as probably the most important part of the West Bank when it comes to protecting Israel as a whole.
How this land came to be widely perceived as so vital for Israel’s security is not well known. More importantly, how the Jordan Valley remains the front line of Israel’s defense despite so many developments in military technology and Middle Eastern politics is also not well understood. What has remained constant for many years is the idea that Israel must be able to defend itself by itself and not accept external guarantees, even from the United States, in lieu of its own self-defense capabilities. This applies especially to the discussion over its retention of the Jordan Valley.
Israel captured the valley and the rest of the West Bank from Jordan in the 1967 Six Day War. Almost immediately, the Jordan Valley zone was integrated into Israel’s security system facing east. The idea that Israel was entitled to modify its borders became part of the diplomatic discourse right after combat operations ended. Israeli Foreign Minister Abba Eban wrote in his memoirs that the pre-war lines had been fashioned through armistice agreements that were based on military considerations; they were not international political borders. New borders were required.
This was enshrined in the language of UN Security Council Resolution 242, whose territorial clause did not insist upon a full Israeli withdrawal to the old armistice lines. Reference was made to an Israeli withdrawal “from territories,” but not “all the territories,” to new lines that needed to be “secure boundaries.” Britain’s Ambassador to the UN at the time, Lord Caradon, who helped draft UNSCR 242, commented on PBS: “We all knew—the boundaries of ’67 were not drawn as permanent frontiers.” Replacing the previous military lines with new international borders opened the door for revising the pre-war lines.
The most important advocate of the Jordan Valley as Israel’s new front line was the Deputy Prime Minister at the time, Yigal Allon, who in 1948 had served as the commander of the Palmach, the elite pre-state strike force. His deputy was a young commander named Yitzhak Rabin. Allon emerged as his mentor, and when Rabin served as Prime Minister, a portrait of Allon hung on a wall in Rabin’s office.
Immediately after the Six Day War, Allon became the architect of a plan to initiate a string of mostly agricultural settlements in the Jordan Valley and along the hills that dominate it. Today, nearly 30 Israeli settlements are situated in this area. Allon’s map became known as the Allon Plan.
Prior to 1967, the old armistice line with the Jordanians left Israel extremely exposed. Only nine miles separated the West Bank city of Tulkarm from the Israeli city of Netanya on the Mediterranean Sea. It did not require a stretch of the imagination to consider that an invading army from the east could slice Israel in two at this point. Israeli planners needed to avert such a scenario.
Israel’s structural problem in providing for its own defense has been the gross asymmetry between its own standing forces and those of its neighbors. Whereas the Arab states were able to form multi-state war coalitions, Israel fought alone. Moreover, the Arab states organized their armies in active service formations, while the Israel Defense Forces (IDF) were structured around mostly reserve units. In short, to match the quantitative superiority of its neighbors, Israel had to mobilize its reserve forces, which required up to 48 hours.
The terrain Israel captured in the West Bank, particularly in the Jordan Valley, provided Israel for the first time with a formidable barrier that would allow the IDF to absorb an attack and buy the precious time it needed to complete its reserve call-up. How exactly did this work? To Israel’s east is the Hashemite Kingdom of Jordan. Alone, Jordan did not constitute an existential threat to Israel. Moreover, the Jordanians signed a formal treaty of peace with Israel in 1994.
Israel’s eastern challenge historically has come from other states that exploited Jordan as a platform to attack Israel. Thus in 1948 and 1967, an Iraqi expeditionary force, made up of one third of Iraq’s ground order of battle, crossed Jordan to attack Israel. (In 1973 that same expeditionary force decided to cross Syria, instead, and attack the IDF in the north.) Today, there are still multiple sources of instability to Israel’s east. For example, Iran projects its military power across the region through fully equipped Shiite militias which it has used successfully to defeat conventional armies in Syria and Iraq.
YouTube screen capture, Jerusalem Center for Public Affairs
What the Jordan Valley gave Israel was not strategic depth, but rather strategic height. It is important to recall that the area where the Jordan River pours into the Dead Sea is the lowest point on Earth; it lies 1,300 feet below sea level. But this area is also adjacent to the steep eastern slopes of the West Bank mountain ridge, which reach a maximal height of 3,300 feet. Taken together, the lowest parts of the Jordan Valley and the mountain ridge form a virtual strategic wall with a net height of some 4,600 feet.
This steep barrier provided a daunting challenge for armored and mechanized units, which started to be employed widely at that time by modern armies in the Middle East. In 1973, with the IDF fully engaged in combat along two fronts on the Golan Heights and in the Sinai Peninsula, Israel had only small formations left to defend the hilly terrain of the West Bank from a ground assault. It was instructive that the Jordanians did not open this front, but rather sent their forces instead to the Golan area, where they could supplement the Syrians and the Iraqis.
In the years that followed, Israel continued to adhere to a military doctrine that viewed the Jordan Valley as a vital building block for its defense. Even after the Oslo Agreements were signed between Israel and the Palestine Liberation Organization (PLO) in 1993, Prime Minister Yitzhak Rabin reiterated a vision for a final peace settlement that kept the Jordan Valley under Israel. In a speech before the Knesset (Israel’s Parliament) on October 5, 1995, Rabin declared: “The borders of the State of Israel, during the permanent solution, will be beyond the lines which existed before the Six-Day War. We will not return to the 4 June 1967 line.”
Rabin gave special treatment to the Jordan Valley in the same speech. He emphasized that “the security border of the State of Israel will be located in the Jordan Valley, in the widest meaning of that term.” Rabin wanted to be certain that the geography he was describing was understood. He was not defining the Jordan Valley according to the width of the Jordan River alone. He did not want a future Israeli border extending right up to the water’s edge. What he had in mind was Israel continuing to control the high ground along the eastern slopes of the mountain ridge that descended to the Jordan River. Israel named the north-south route running along this high ground the Allon Road. It served as a reminder that the Allon Plan was not just about the Jordan River alone.
Israelis learned another context for appreciating the principles of the Allon Plan. In October 1976, serving as Israel’s Foreign Minister, Allon wrote an article in Foreign Affairs, entitled “Israel: The Case for Defensible Borders.” It essentially laid out the strategic logic of his plan. While there were those who asserted that in an era of advanced military technology, territory had lost its importance, Allon was convinced that wars were still decided by the movement of land armies. He wrote: “. . . as far as conventional wars are concerned, the following basic truth remains: without an attack by ground forces that physically overrun the country involved, no war can be decisive.” As long as that was the case, he believed that factors like topography, terrain, and strategic depth were still very much determinants of Israeli national security.
Allon also explained that any territory from which Israel would withdraw in the West Bank would have to be demilitarized. The question he posed was how demilitarization would be ensured. There was an arid zone, which included the Judean Desert, to the east of where the bulk of the Palestinian population lived. Allon estimated that this security zone was about 700 square miles. Thus he offered a second argument for Israel retaining the line he proposed that ran above the Jordan River. That line would safeguard the demilitarization regime that he had in mind.
Why such a line was absolutely essential was demonstrated in 2005, when Prime Minister Ariel Sharon implemented his unilateral Disengagement Plan from the Gaza Strip, which involved a full withdrawal from the area. Critics of the plan, like former Deputy Chief of Staff General Uzi Dayan, stressed that Israel should at least retain the border zone between the Gaza Strip and Egyptian Sinai, known in Israeli parlance as the Philadelphi Route. What happened in the aftermath of the Israeli pullout was a massive increase in weapons smuggling by Hamas and other Palestinian terror organizations from Egypt into the Gaza Strip. This directly influenced the rate of rocket fire on Israel.
For example, in 2005, the year of the Gaza Disengagement, a total of 179 Palestinian rockets were fired on Israeli territory. One might have anticipated that following the Israeli withdrawal the number of rocket attacks would drop sharply, along with the motivation to fire on Israel. But the exact opposite occurred: In 2006, Palestinian rocket attacks on Israel shot up to 946, more than a fivefold increase. By 2008, 1,730 rockets were fired from the Gaza Strip into Israel. Over the years, Hamas built a system of tunnels that allowed the Palestinian terror organizations to smuggle enormous quantities of rockets and even shoulder-fired anti-aircraft missiles. Three wars resulted from this escalation in Palestinian rocket fire.
The Jordan Valley was to the West Bank what the Philadelphi Route was to the Gaza Strip: the outer perimeter of the territory, adjacent to a neighboring Arab state. In the case of Gaza, Palestinian terror groups not only smuggled weaponry, they also built up a military presence in Northern Sinai that began to work closely with ISIS and ultimately undermined Egyptian sovereignty in that area. The Egyptian Army soon faced a counterinsurgency campaign on its own territory. By analogy, without Israel in control of the Jordan Valley, a similar process could be expected within Jordan itself.
The main factors which worked against Israel’s position in the Jordan Valley were diplomatic. To the extent that Israeli elites believed that a negotiated settlement was around the corner, the political leadership in Israel became prepared to consider jettisoning Rabin’s legacy. This occurred during the negotiations at Camp David in 2000, under Prime Minister Ehud Barak, and several years later during the talks that were held by Prime Minister Ehud Olmert with Palestinian Authority President Mahmoud Abbas.
There was also U.S. input in this process. In early 2001, President Bill Clinton issued the “Clinton Parameters,” which summarized the negotiations held by Israel and the Palestinians. The Jordan Valley was not allocated to Israel. In 2014, General John Allen, who retired after commanding U.S. forces in Iraq and Afghanistan, worked on a security model for the Jordan Valley predicated on an Israeli pullback.
But the Israeli public was not persuaded by what their elites were prepared to consider or by the newest U.S. proposals. The idea that the latest military technology or international forces could reliably replace the Israeli Army did not move most Israelis.
Israeli public opinion has clearly internalized the importance of the Jordan Valley for Israeli security. In the last decade, massive majorities of Israeli voters, reaching as high as 81 percent, stated that in any peace arrangement Israel must preserve its sovereignty over the Jordan Valley (polling commissioned by the Jerusalem Center for Public Affairs, executed by Dahaf and Midgam). Support for retaining the Jordan Valley has at times appeared to rival support for retaining a united Jerusalem. In a country whose politics have been extremely polarized, the Jordan Valley stands out as an area where a strong national consensus has prevailed.
October 15, 2019
Can Americans Unlearn Race?
Self-Portrait in Black and White: Unlearning Race
Thomas Chatterton Williams
W. W. Norton and Company, 2019, 192 pp., $25.95
Reflecting on why he decided to leave America for Europe, James Baldwin once explained that he wanted to “find out in what way the specialness of my experience could be made to connect me with other people instead of dividing me from them.” The racism of American society in the late 1940s prohibited him from doing so at home, where he was always “merely a Negro.” Only by going abroad could he find the freedom to really ask himself what it meant to be black, to be American, to be African-American. By encountering people so different from himself, Baldwin wrote, he felt at last “a shattering in me of preconceptions I scarcely knew I held.” The constraints of American notions of race and identity were loosened by the existence of entirely different notions. “The time has come,” Baldwin decided, “for us to examine ourselves, but we can only do this if we are willing to free ourselves of the myth of America and try to find out what is really happening here.”
The American writer Thomas Chatterton Williams has followed in the footsteps of Baldwin’s Parisian emigration. Raised in suburban New Jersey by a white mother and black father, Williams grew up thinking of himself not as half-white or of mixed race but as “black, period.” In his literary debut, Losing My Cool (2010), he recounted an adolescence suffused with hip-hop culture and received ideas about a particular kind of black identity. In high school, in the mid-to-late 1990s, Williams strode the hallways with a sweatshop’s worth of flashy apparel, paid homage to the gods of BET, and lived by the dubious moral code of the Big Tymers and Master P. At the local basketball court, he was awestruck by a player known as RaShawn, who sipped Olde English before games, kept in his pocket a knot of bills “as thick and layered as a Spanish onion,” and often resorted to viciously beating up his opponents. “He was like a star to me,” Williams admitted.
But his erudite and autodidact father, Clarence, kept Williams and his brother in line by making sure they did their homework and studied hard. As a result, Williams was eventually accepted to Georgetown University, where he slowly began to question his allegiance to street culture and its peculiar notions of young black masculinity. At the end of his sophomore year, Williams abandoned his initial goal of becoming a wealthy investment banker and decided to study philosophy instead. Under the aegis of his supportive professors, he encountered the work of Nietzsche, Heidegger, and Hegel, and came to the realization that the lifestyle he had spent his youth idolizing was actually one of stifling groupthink. For Williams, BET became a symbol of black cultural regress.
Yet it was not until he travelled abroad for the first time that Williams was able to make the decisive break with the cultural values of his younger self. “Being serious about realizing myself as an individual,” he wrote, “required nothing less than my leaving for an extended period of time the black culture I had grown up in, severing myself completely from the miasmic influence of my group.” As it had previously done for Baldwin, France opened Williams’s mind and irrevocably shifted the tectonic plates of his intellectual and cultural life.
If this innocence of experience made Losing My Cool, on occasion, a slightly moralizing tale (Williams’s conception of hip-hop culture can be extremely narrow), his new book is a far more subtle, courageous, and moving achievement. Contrary to its title, Self-Portrait in Black and White: Unlearning Race is less preoccupied with the self than with the world that surrounds it, and in particular with the culture and politics of contemporary American society.
The difference between the two books is partly attributable to the short but significant span of history that separates 2010 from 2019, or the early Obama years from the early Trump years: a decade of police killings of unarmed black men; Black Lives Matter and “woke” anti-racism; Donald Trump and the specter of white nationalism. From talk of a post-racial society and the promise of change, America has become a bitterly divided country presided over by a President who began his political career by popularizing a racist conspiracy theory about his predecessor in the Oval Office.
Yet the difference is also personal. Unlike Baldwin, Williams has not returned from France to lend his voice to any movement or cause, however just. Instead, he has married a white Frenchwoman and settled permanently in Paris, a vantage point that, as was initially the case with Baldwin, has afforded Williams the requisite distance to look with fresh eyes at the rigid logic of race in America:
Outside the confines of the United States, I was coming to the startling—and at times unmooring—realization that our identities are really a constant negotiation between the story we tell about ourselves and the narrative our societies like to recite, between the face we see in the mirror and the image recognized by the people and institutions that happen to surround us.
This adopted outsider’s perspective on American cultural and political life has been crucial to Williams’s emergence as one of the most original and interesting contemporary interpreters of that life. Since making his literary breakthrough in 2010, he has written a number of acclaimed and sometimes controversial essays for the London Review of Books, the Virginia Quarterly Review, and the New York Times Magazine, in which he has proven himself a forceful critic of the dominant anti-racist discourse on the Left and contemporary identity movements more generally.
Most notably, Williams has written at length about Ta-Nehisi Coates, the bestselling author of Between the World and Me and We Were Eight Years in Power. Williams may well be tired of seeing his name mentioned in the same breath as Coates’s, but the similarities between the two writers are too considerable to pass over. Both have written memoirs about race in America from their personal perspectives as fathers; both have lived for a time in Paris, where they described feeling liberated from American categories of race; and both have wrestled with the intractable influence of James Baldwin. (Toni Morrison said that Coates filled the “intellectual void” left after Baldwin’s death.)
They have also both written at length about what Coates has called “the invention of racecraft” and Williams the “fiction of race,” yet the conclusions they have drawn from exploring this phenomenon have taken them in two radically different directions. Where Coates can occasionally sound like a fatalist on race, believing it to be America’s inescapable original sin, Williams has taken to calling himself an existentialist, and in his new book expresses an ambition to break free of the categories of race entirely. “If the point is for everyone to build ships, set sail, and be free,” Williams wrote in a 2015 essay for the Virginia Quarterly Review, on which his new book expands, “if we are collectively ever going to solve this infinitely trickier paradox of racism in the absence of races, we are, all of us—black, white, and everything in between—going to have to do considerably more than contemplate façades.”
Self-Portrait in Black and White opens with the birth of Williams’s first child, a blonde and blue-eyed girl, and the author’s subsequent realization that “whatever personal identity I had previously inhabited, I had now crossed into something new and different.” For someone who has spent his whole life believing in the American binary between black and white, the sight of his “impossibly fair-skinned” daughter comes as something of a shock, Williams admits: “On some deeply irrational but viscerally persuasive level, I think I feared that, like a modern Oedipus, I’d metaphorically slept with my white mother and killed my black father.”
But rather than send him spiraling into a tragedy of Greek proportions, the birth of his daughter prompts Williams to reflect not only on the fluidity of racial borders but on their ultimate absurdity. His book, anchored in the personal, untethers itself to become a kind of existential meditation on the whole sorry “fiction of race.” Along the way, we are reminded that racial categories as we understand them were only invented during the European Enlightenment (“I have stayed in inns in Germany and eaten at taverns in Spain that have been continuously operating longer than that,” Williams quips); that Williams’s daughter would have been considered a “Negro” under the one-drop laws of the previous century; and that racial identity is determined not only by the color of our skin but by our geographical location.
This broadening of the discussions around race and identity yields unexpected results. Repeating some of his criticisms of Coates and the “woke” anti-racist movement, Williams likens what he calls the “one-size-fits-all contemporary populism around implacable white supremacy” to the German notion of Sonderweg, or “special path,” a once-common myth on both the Left and the Right that the trajectory of German history could be explained according to a collective essence peculiar to the German people. Some historians, for instance, have viewed the Holocaust as the logical product of centuries of German history, drawing a clear line from Luther to Hitler, which, as Williams rightly points out, leaves a lot of nuance, ambiguity, and general historical messiness unaccounted for.
Williams argues that a similar idea has recently taken hold in America: “Its roots lie in the national triple sin of slavery, land theft, and genocide. In this view, the conditions at the core of the country’s founding don’t just reverberate through the ages—they determine the present. No matter what we might hope, that original sin—white supremacy—explains everything, an all-American sonderweg.”
This view of race and history only further legitimizes the fiction of race, Williams argues. Race is viewed as real, not in a biological sense, but as a social construct that at bottom is no less deterministic.
Yet if race is not measurable in any biological or scientific way, why keep it alive by other means? One of the most salient arguments in Self-Portrait is Williams’s claim, gleaned from the critic and essayist Albert Murray, that “black” and “white” are essentially just bad metaphors that do not stand up to the complexity and messiness of real life, to say nothing of any kind of scientific security. (Williams is fond of quoting Murray: “But any fool can see that the white people are not really white, and that black people are not black.”) To go on using these terms is to become the victim of racism a second time, Williams claims, an insight he arrives at by quoting a passage from the Polish journalist Ryszard Kapuscinski’s Imperium:
Only one thing interests the [dissident]: how to defeat communism. On this subject he can discourse with energy and passion for hours, concoct schemes, present proposals and plans, unaware that as he does so he becomes for a second time communism’s victim: the first time he was a victim by force, imprisoned by the system, and now he has become a victim voluntarily, for he has allowed himself to be imprisoned in the web of communism’s problems. For such is the demonic nature of great evil—that without our knowledge and consent, it manages to blind us and force us into its straitjacket.
In other words, by continuing to use its metaphors we are merely prolonging the fiction of race. “The truth is that no matter how hard you try,” Williams comments, “you cannot struggle your way out of a straitjacket that does not exist. But pretending it exists, for whatever the reason, really does leave you in a severely restricted posture.”
Better, then, to reject the concept of race altogether. Williams is under no illusion that this will be easy or even, at times, desirable to achieve. “I am aware—and from time to time still feel it in myself—of the terror involved in imagining a total absence of race,” he admits. The proud and defiant traditions of black culture in America speak for themselves, yet the idea that in order to go on appreciating Bessie Smith or Ralph Ellison or Henry Ossawa Tanner one must accept the logic of race is absurd. On the contrary, we can reject the abstract mystifications of race and still appreciate the very real cultural and artistic achievements inspired by it, as indeed the worldwide appeal of so much “black” culture clearly demonstrates.
This is not to say that by rejecting race we can eradicate race hatred overnight (if at all), or that racism will disappear of its own accord if we acknowledge that race is fiction. Williams’s book is neither prescriptive nor blind to the injustices and crimes visited on the nonwhite populations of the United States. Here and elsewhere he writes sensitively of the sufferings of his father’s generation in the Jim Crow era and beyond, including the stigma his parents endured by entering into a mixed-race marriage, a union Williams’s white maternal grandfather disapproved of. On her annual visits to see her two grandchildren in Newark, Williams’s grandmother was never once accompanied by her husband, whose absence was always put down to a bad back or fear of flying. Not until adulthood did Williams fathom the unspoken reason his grandfather never visited. “I realized how profoundly ungenerous—how impressively unimaginative—my grandfather’s entire worldview could really be,” he writes. What strikes Williams most is the overwhelming poverty of the racist worldview, the significant loss his grandfather inflicted not only on himself, but above all on his wife, his daughter, his son-in-law, and his grandchildren.
Williams’s decision to reject race will undoubtedly strike many as eccentric, naive, or perhaps even disloyal—objections Williams both anticipates and addresses. But it is a protest entirely in keeping with the journey of self-realization he first embarked on as a young student of philosophy at Georgetown. It is a journey that has always been oriented toward greater individual freedom and away from group identity. Reading Self-Portrait, I was often put in mind of the Pied-Noir writer Albert Camus, whom Williams quotes in his concluding chapter: “Poverty kept me from thinking all was well under the sun and in history; the sun taught me that history is not everything.”
Williams knows that the pernicious and enduring history of racism in the United States is very real and unlikely to disappear any time soon, but he also knows that this history is not everything, and he refuses to accept a deterministic view of human identity. In this respect, Williams, whose appeal as a writer owes as much to the force of his arguments as it does the lyricism of his prose, is very much a latter-day Camus, especially the Camus of The Rebel, who rejected history as an object of worship and championed a “vigilant rebellion” in its place. Like Camus’s rebel, Williams says no to the abstractions of race and history in order to defend what is most important to him: the freedom and dignity of the individual human being. “History’s utility, while necessary, is diminished greatly when it smothers the possibility and beauty the here and now may contain with reference to nothing further than itself,” he writes. “We have a responsibility to remember, yes, but we also have the right and even the duty to continuously remake ourselves anew.”
Self-Portrait is a plea for us to live up to the promise of our democratic, multicultural societies, to develop a vision of ourselves that acknowledges our inherited historical and cultural differences without letting them define or divide us. Whether we are able or willing to do so is of course questionable, given the low, dishonest time we live in. But for crafting that vision with empathy and passion, Thomas Chatterton Williams deserves our gratitude.
The Rumors of War
The debate concerning Confederate statues has quieted down since the 2017 Unite the Right rally in Charlottesville, Virginia, but many cities in the South are still grappling with how to handle the numerous statues that remain. Kehinde Wiley, the artist who painted Barack Obama’s presidential portrait, has unveiled a response and challenge to the hegemony of those honored by the iconic equestrian statues in question: “Rumors of War,” a new sculpture that currently resides in Times Square. It is massive: standing 27 feet high and 16 feet wide, it depicts a man with dreadlocks tied back, wearing ripped jeans and a hoodie, pulling the reins of a horse with one hoof raised. The work is Wiley’s first public sculpture and his largest work to date.
The Unite the Right rally was initially planned as a protest against the removal of a Robert E. Lee statue. It escalated to violence as hundreds of alt-right demonstrators and counter-protesters shut down the city in a clash that left one woman dead. The events in Charlottesville sparked a broader national debate about the alt-right and the significance of the icons and imagery celebrating the Confederacy that can still be found south of the Mason-Dixon line. As this long and only minimally addressed enshrinement of the “Lost Cause” was finally called into question, many citizens took matters into their own hands and tore down offending statues.
Often the toppled statues look “crumpled” or like deflated balloons, owing to the haste with which they were made. Originally erected as propaganda during the Jim Crow era of the early 1900s, decades after the Civil War had ended, they are no more than cheap idols to a failed ideology and fallen regime. “Rumors of War” is a one-man stand against the upwards of 1,700 Confederate monuments still littered throughout the South.
An artist statement accompanying the work explains that the bronze sculpture is “a powerful repositioning of young black men in our public consciousness, and engages the national conversation around monuments and their role in telling incomplete narratives and perpetuating contemporary inequities.” By electing to portray the everyman instead of a notable figure in Black heritage, Wiley avoids familiar debates about who specifically should replace individuals commemorated by Confederate agitprop. Instead, Wiley’s statue invites the viewer to consider all the possible riders.
In a statement on the Times Square Arts announcement of the piece, Wiley lays out the inspiration for the work as: “an engagement with violence. Art and violence have for an eternity held a strong narrative grip with each other. ‘Rumors of War’ attempts to use the language of equestrian portraiture to both embrace and subsume the fetishization of state violence.”
The work’s status as a public monument is important because it pushes back against the intimidation behind the erection of Confederate statues. Nor is this the first project in which Wiley has grappled with overt themes of violent intimidation. His paintings of the biblical story of Judith and Holofernes, for example, show black women beheading white women; based on canonical paintings by Caravaggio and Gentileschi, they suggest a kind of chaotic retribution. “Rumors of War” succeeds where those paintings faltered in merging past and present. The statue is not a revenge fantasy, but the work of a mature artist handling his subject matter with refined gravity.
In Regarding the Pain of Others, Susan Sontag addressed why America has so much difficulty confronting the nation’s history of enslavement. She said there was no “memory museum” to slavery because it was “a memory judged too dangerous to social stability to activate and to create.” She continued:
To have a museum chronicling the great crime that was African slavery in the United States of America would be to acknowledge that the evil was here. Americans prefer to picture the evil that was there, and from which the United States—a unique nation, one without any certifiably wicked leaders throughout its entire history—is exempt.
America is finally acknowledging and facing that evil. By subverting the power dynamic traditionally reinforced by the equestrian statue, Wiley demonstrates the capacity of art to move beyond the limits of photographic evidence and cinema in documenting the experience of people of color. It takes both the work of memorials and contemporary art to define cultural understanding for each generation. In the post-Obama world, we finally have the National Museum of African American History and Culture, which highlights not only how far American culture has come but also how far it has yet to go toward equality and equity. Wiley’s art is a knife to the museum’s whetstone of memory.
“Rumors of War” is epic—while it is in conversation with the past, it simultaneously ushers the viewer toward new possibilities and understanding. That is to say, it functions at the highest register of art—the capacity to explore the less immediately accessible, the potential, the future, without being constrained by present reality. While it does not eradicate evil, Wiley’s work does demonstrate the power of standing up to it.
The statue will be on display in Times Square until December, at which point it will go to Richmond, Virginia, to be permanently installed at the Virginia Museum of Fine Arts.
October 14, 2019
Open a School Door, Close a Prison
“Why did you do it?” This was the first question posed to then-17-year-old Jarrod Wall by his attorney. The year was 1989, and Jarrod was facing a felony murder charge. The “why” could have provided an important explanation, or at least context, for his actions. But Jarrod wasn’t able to answer. He didn’t have the insight or language—tools that often come with an education—to understand his own behavior.
Thirty years later, Jarrod reflects on that moment and the impact of his education behind bars in an interview with R Street Criminal Justice Fellow Emily Mooney: “Not knowing terrified me. Stricken with remorse, confusion and distrust of myself, I . . . committed to answering my attorney’s question to and for myself. That is, I committed to change.”
Jarrod would go on to serve 26 years in prison for his crime. Yet far from marking the end of his story, prison served as the beginning of a new chapter in his life. Along with mentoring and counseling, Jarrod credits the education he received behind bars for helping him change.
The year after he was arrested, Jarrod began his first semester of postsecondary education through Indiana’s Ball State University. He was fortunate enough to be able to fund his bachelor’s degree through a combination of federal Pell Grants and state grants. “I was on the end of the golden years of education. . . . We were even able to keep our books,” he shares with Mooney. Jarrod went on to earn a second bachelor’s degree and a master’s degree while in prison through the support of private sponsors.
An education like the one Jarrod received can be incredibly transformative for people who find themselves in prison. Access to postsecondary education enables these individuals to gain the knowledge needed to understand their past decisions and to make better choices when tense circumstances arise. In other words, they become equipped to do more than respond to difficult situations out of their own pain. Jarrod recounts taking a class on the psychology of violence: “[Education] helped me understand myself, understand my problems, my background, and then it helped me understand the culture around me.”
What’s more, education has positive trickle-down effects for everyone involved in the individual’s life. While receiving an education behind bars, Jarrod sent his report cards to his parents. Watching their son pursue a degree gave Jarrod’s parents something to be proud of—it gave them hope for their son’s future.
Given that more than nine out of ten people in state prisons are destined to re-enter society, providing people like Jarrod with access to education is a no-brainer. Granting them this access not only helps those in prison, but can also help enhance the economy of the surrounding community as well as public safety.
Unfortunately, those sentenced just a few years after Jarrod were not able to access the same opportunities. In the midst of the “tough on crime” era, then-President Bill Clinton signed the Violent Crime Control and Law Enforcement Act, which, among other things, prohibited incarcerated individuals from accessing Pell Grants. Before the ban, prison education programs like the one Jarrod took part in were thriving, with about 23,000 impoverished individuals relying on Pell Grants to fund their education in 1993 and 1994. After the ban, the number of these programs plummeted; some estimate that the number still in existence a few years later could be counted on two hands.
In Indiana, where Jarrod was incarcerated, those imprisoned after the Act went into effect were still able to access state education grants. But that changed in 2011, when the Indiana Legislature rendered individuals behind bars ineligible for state scholarships. The state Department of Corrections was ultimately forced to phase out the remaining degree-granting programs in early 2012.
For many students behind bars, these policy changes signaled the end of hope. But starting in 2016, the Federal Second Chance Pell pilot program re-opened a window of opportunity for some of those in prison interested in broadening their educational horizons. The pilot is currently taking place in over 60 institutions and, during this past academic year, enrolled approximately 10,000 students. While we don’t yet have data on the program’s results, site participants are already reporting that providing access to education within facilities has had positive effects.
Sadly, since they are not serving time at a pilot site, most people who are academically eligible for Pell Grants (an estimated 64 percent of all prisoners) cannot access federal tuition assistance. This barrier means that we cannot maximize the positive impact that education is having in our nation’s prisons.
Fortunately, signs of support for re-entry-based programs in prison have recently emerged. Last year, the Federal government passed the bipartisan First Step Act of 2018, part of which helps provide individuals released from prison with programming to help them reintegrate into society. Today, additional proposals are pending in the Senate and the House that would reinstate Pell Grant access to otherwise eligible incarcerated students. This legislation is backed by diverse groups including, among others, the U.S. Chamber of Commerce, the National District Attorneys Association, the American Correctional Association, Prison Fellowship Ministries, and the Unlock Higher Ed Coalition—a national coalition on behalf of which Jarrod presented on Capitol Hill in July.
Some may argue that funding Pell Grants for those in prison is costly, but the fact is that we pay a far higher cost by not funding education. We spend far too much on our justice system in this country: approximately $182 billion annually. Few interventions in prison are as effective at promoting successful re-entry, and the savings from reduced recidivism add up. Estimates from a 2018 RAND study suggest that educational programs behind bars can reduce future reoffending rates by as much as 32 percent, and an earlier study calculated that every dollar spent educating a person in prison could yield up to five dollars in savings from the avoided costs of future incarceration. A more recent report by the Vera Institute of Justice and the Georgetown University Law Center on Poverty and Inequality estimates that reinstating Pell would save states hundreds of millions of dollars in prison costs annually.
What’s more, the Pell Grant program is not a zero-sum game. When individuals in prison receive an education supported by grants, it does not mean that an eligible young person on the outside will be less able to receive funding for their higher education. Every person who qualifies for a Pell Grant can receive one, and extending access would not reduce the award amounts that other eligible, non-incarcerated individuals receive.
But what about individuals in prison who are unlikely to re-enter society? What benefits could an education have for them?
If we are pushing to fund the education of people behind bars solely for the benefits it might provide upon their re-entry into the community, then there is no reason to make “lifers”—those serving life sentences, or sentences so long they equate to life sentences—eligible. But such an approach is both practically and philosophically wrong-headed.
On the practical end, the general culture behind bars plays a key role in determining whether time in prison promotes individual growth and rehabilitation or hinders it. When violence is commonplace and positive role models are absent, the potential for a transformative prison culture to emerge or endure greatly diminishes. This is harmful to both the individuals who live or work in prison and the communities to which most of those in prison will return.
As the ones who have been and will remain behind bars the longest, lifers set the tone in the prison environment. They see generations cycle through the system and often serve as mentors and informal leaders to those incarcerated for shorter periods. Jarrod experienced the impact of lifers firsthand. When he first arrived at prison, he was just a young man. Luckily, he found a group of lifers and other long-timers interested in mentoring him. They saw that Jarrod wanted to change and encouraged him to make growth-oriented decisions. One of them was a jailhouse lawyer who looked through his case and began asking him questions. “He became the first person that I really talked to about my background,” Jarrod observes. Thanks to the lawyer’s encouragement, Jarrod soon began therapy.
Other lifers mentored Jarrod as he pursued his education. They taught him how to run the prison education program for which he would eventually serve as a clerk and administrator for 15 years. “I was young, and they knew I didn’t want to be stuck in a cell all day, so they sent me passes so I could come work.” During lunch breaks, the lifers would discuss concepts like social learning theory. These men were ultimately Jarrod’s role models. When they were transferred to other facilities, Jarrod took their place—mentoring and encouraging younger men to make wise choices and to pursue education and healing.
Given lifers’ influential position within the prison environment, we must ask ourselves: What values do we want them to teach others? Do we want them to remain stagnant or model poor behavior, or do we want to help them grow into positive role models?
The fact is that individuals who participate in educational programs while in prison tend to have better institutional disciplinary records than those who do not. They are also far more motivated to stay out of trouble. And those who complete their classes are less likely to engage in violence while incarcerated, making prisons safer and more productive environments for staff and incarcerated individuals alike. So even when the student is a lifer who will never see the sky as a free person, their behavior sets an example for others to follow, meaning that we should be doing everything we can to ensure that the example they set is a positive one. By providing Pell Grant access to lifers, policymakers can help ensure that everyone housed in our nation’s prisons is immersed in a more positive culture.
There are also other practical benefits to making sure lifers are educated. The children of incarcerated individuals—including long-timers and lifers—experience positive effects when their parents gain a higher education. Today, 2.7 million children have a parent behind bars. These children are at a higher risk of being stigmatized and shamed, having issues in school, and experiencing poverty. Yet children with educated parents are more likely to have aspirations of education for themselves—and to view their parents with pride, as role models. Even when parents are serving long sentences, they can use education as a connecting point and encourage their children to pursue a new life alongside them.
On top of all this, as a more pragmatic matter, sentences can differ for the same offense depending on where and when the offense was committed. Jarrod himself is one of those individuals: “Had I been sentenced a few years later, I could have been sentenced to life without parole. It wasn’t law yet in Indiana.” Excluding long-termers or lifers from accessing Pell Grants makes an unfair pre-emptive judgment of both the person’s ability to change and the policy climate ahead.
As a philosophical matter, offering educational opportunities to lifers cements a commitment to recognizing the inherent dignity of all human beings. The truth is that decreasing the number of people behind bars is not enough; improving re-entry prospects is not enough. What is required to address our nation’s incarceration problem is a new approach to how we treat individuals in prison. It requires showing them that they are worthy, not because they will be one day released, but because they are human beings, deserving of basic respect and care.
Yet our carceral state currently undermines human dignity at almost every turn—by isolating individuals, by denying them basic avenues of communication with their families, and by limiting access to necessities like medical care, clothing and hygiene products. Sprinkle in violence, untreated trauma, and mental illness, and it’s easy to see how the environment within prisons can be both dehumanizing for incarcerated individuals and, relatedly, toxic for prison staff.
Thus, postsecondary education’s value lies not just in the way it improves re-entry prospects for those in prison, but in how it offers a path for individuals to discover their potential, achieve something tangible, and bring about positive shifts in their development. When we unnecessarily prohibit individuals serving life or virtual life sentences from accessing education, we deny them a critical tool they can use to understand their own pasts and to claim their own redemption. We’re denying them a chance at a future in which they are also known for the good they can do rather than only the bad they’ve done. As Jarrod puts it, “We’re sentencing them to a living execution.”
Grants for funding postsecondary education have the potential to improve the lives of so many Americans. But by preventing hundreds of thousands of people from accessing them simply because they are behind bars, we are making it harder for prisons to become constructive environments that foster growth and transformation, and for families and communities to become productive and whole. Moreover, we are preventing people from gaining the knowledge, skills and maturity necessary for them to become positive examples of sons or daughters, fathers or mothers, and leaders within their networks—whether they remain within prison facilities or continue their lives outside the iron fence.
It’s time we reverse directions and embrace postsecondary education for those who are incarcerated. Only then can we truly unleash the potential of the hundreds of thousands of men and women locked up within our communities.
The post Open a School Door, Close a Prison appeared first on The American Interest.
October 13, 2019
How the Press Can Avoid Playing Trump’s Game
In the face of President Trump’s Biden-Ukraine smear, the Biden camp has declared a low-intensity war on the press, determined to head off the false equivalency and other mind games that helped derail Hillary Clinton’s 2016 presidential bid.
When a reporter asked Joe Biden late last month how many times the former Vice President had talked to his son, Hunter, about his foreign business dealings, Biden said never, and demanded: “Ask the right questions.” A Biden campaign fundraising video posted the next day asked, “Will media see through Trump’s sleazy playbook? Or fall for it again?” The question was followed by a clip of one commentator attacking a New York Times reporter for suggesting that Hunter Biden’s work on the board of Burisma Holdings, a Ukrainian natural gas company, could be a political liability, and of another declaring that “at some point we have to ask if a lot of journalists don’t want to learn anything.”
Even the most responsible news organizations blew it in 2016. The press focused obsessively on the Clinton email stories (the conflation of Clinton’s use as Secretary of State of a private email server with the completely separate story of Russian hacks of the DNC and Clinton campaign staff was just one of several problems), and underplayed/normalized Trump’s myriad personal, business, and familial corruptions. Trump’s determination to re-up this 2016 playbook was clearly driving his and Rudy Giuliani’s attempt to suborn Ukraine’s President Volodymyr Zelensky into endorsing the lie that Biden forced Kiev to oust its top prosecutor to shield Burisma and his son. The truth is that the prosecutor was ousted for his refusal to investigate corruption.
Thanks to a highly literate whistleblower and the first-rate reporting that has followed, Trump may now face an impeachment trial for that abuse of power. But the President’s instincts on what can draw viral political blood remain unerring.
The same day Biden was pointing his finger in the face of a reporter in Iowa, the Trump campaign team was tweeting out its own video of clips from ABC, NBC, CNN, and the Times raising questions about Hunter Biden’s business dealings. As of October 11, the Trump video had been viewed 2.9 million times on Twitter, while the Biden video taking the press to task had been viewed only about 18,800 times. Earlier this month, Biden angrily brushed off another reporter’s question about a possible conflict of interest between his role as the Obama Administration’s point man on Ukraine and Hunter’s work in Ukraine (a separate issue from Trump’s baseless charges about the prosecutor). The Trump campaign quickly posted a clip on Twitter asking, “What is he hiding?” As of October 13, the “What is he hiding?” clip had been viewed almost 898,000 times, while a Trump campaign ad on Facebook falsely accusing Biden of promising Ukraine $1 billion “to fire the prosecutor investigating his son’s company” has had more than five million views. Facebook has refused the Biden campaign’s demand to take the ad down, saying its rules on false or misleading content don’t apply to political advertising.
Biden’s stonewalling is only making it easier for Trump to push out his smears and lies. The former Vice President needs to explain why he looked the other way as his son traded on the family name in a long career of deals that may have been legally okay but still look sleazy. The answer will be painful. Hunter Biden has had a troubled life, including battles with alcoholism and drug abuse. And Americans may well be forgiving if Joe Biden acknowledges that, as much as he wanted to protect his son, he should have drawn a brighter line. He wouldn’t be the first politician with a difficult relative.
Still, the Biden team is right when it argues that reporters also need to be asking serious questions about lessons learned from 2016 and the years since covering the Trump circus of deflection, distraction, and lies. Biden won’t be the last candidate to get the sleazy playbook treatment.
The University of Pennsylvania’s Kathleen Hall Jamieson has written persuasively about the impact of reporters focusing on the embarrassing content of the DNC and Podesta emails rather than on their hacked provenance, or asking why the Russians or Julian Assange wanted to bring down Hillary Clinton. Jamieson says she has yet to see even the best news organizations take responsibility for their errors in 2016 coverage: “I would feel better as a consumer of news if I saw the kind of awareness that the Times showed after the Judith Miller (Iraq WMD reporting) fiasco and read a statement that the New York Times isn’t going to get suckered again.”
With the double bill of the campaign and impeachment hearings now unfolding, there are cautions for producers and consumers of news to consider.
Reject False Equivalence (But Not High Standards)
I can already hear the complaints that Biden’s not-so-gentle scolding/stonewalling of the press isn’t in the same universe as President Trump’s authoritarian “enemy of the people” rhetoric, incitement of violence against reporters, stonewalling on his tax returns, or more than 12,000 false or misleading claims since entering the White House. And then there are the Trump children’s many and ongoing conflicts of interest. (Donald Trump Jr. took time out last week from pitching “grandfathered” overseas Trump Organization projects to tweet about Hunter Biden: “At the VERY LEAST, there’s the appearance of impropriety.”)
But Trump’s base behavior can’t become the standard. Politicians who usually play within the norms can’t get a pass on difficult questions or criticism because they are far better than Trump. I mean, who isn’t?
The challenge for journalists is to ensure that the reporting and writing are clear on the differences. Mortal sins (bullying or buying off a foreign leader to make up dirt on your opponent) are far worse than venial sins (choosing to look the other way while your son trades on your position). But I still want to know if the person aspiring to the White House knows that venial sins aren’t a good thing and has a plan to do better.
Editors also need to ensure that the volume of coverage of such sins matches the level of possible concern or offense. This is exactly what didn’t happen in the 2016 campaign. Here are a few of the cringe-worthy findings from academic studies of the coverage:
On issues of “fitness for office,” Harvard’s Thomas Patterson found that Clinton and Trump got equally negative treatment (87 percent negative stories to 13 percent positive) in top newspapers and television broadcasts during the critical campaign weeks of mid-August to early November.
A study published by Harvard’s Berkman Klein Center found that, over 18 months before the election, what they deemed the top 50 media sources devoted more sentences to the various Clinton-related email scandals than all of Trump’s scandals combined—65,000 vs. 40,000—including Trump’s taxes, Trump and women, the Trump Foundation, Trump University, and Trump and Russia.
Duncan Watts and David M. Rothschild reviewed the New York Times’ election coverage and found that “in just six days” (from the day after FBI Director James Comey reopened his investigation into Clinton’s use of a private email server to five days before the vote) the paper ran as many front page “stories about Hillary Clinton’s emails as they did about all the policy issues combined in the 69 days leading up to the election.”
There are a variety of possible explanations, not excuses, for why all this happened.
One is that many reporters and editors, like pretty much everyone else, were convinced Clinton would win and felt a duty to judge her positions and behavior more rigorously than that of the clown show of the Trump campaign. Trump’s relentless ability to stay on message also meant that even routine coverage of the Trump campaign—whether critical or not of Trump—usually ended back on “Hillary and the emails.” Harvard’s Patterson argues that because campaign reporting is incessantly negative (the 2000 Bush-Gore campaign drew even more negative reporting), it gives a particular advantage to “deeply flawed” candidates. “When everything and everybody is portrayed as deeply flawed, there’s no sense making distinctions on that score.”
The Atlantic’s James Fallows wrote recently about false equivalence in the 2016 coverage and the dangers of a replay in the Trump Ukraine-Biden story:
On the merits, Donald Trump’s finances were a hundred times as suspicious as those of other candidates, and statements about anything were a hundred times as likely to be false. But it went against the nature of most news organizations to run a hundred times as many articles about his lies and shadiness as about his opponents. Or even twice as many. By the time of the general election, it seemed “fairest” and most comfortable to aim for something more like 50-50.
Reporters and editors need to jettison what Fallows calls “procedural balance” for clear judgment. What Trump and Giuliani are trying to do to Biden is a smear. That is supported by rigorous reporting in Ukraine and Washington, a White House-released “not a verbatim” transcript, and the President’s own declarations.
The more direct the judgments, and the less procedural balance, the more we can expect to hear screams of bias and elitism from the Trump team—and even more threats from the President. It’s inexcusable and frightening but at this point inevitable. The Democrats are not going to sit in a corner when it’s their turn. In August, Bernie Sanders accused the Washington Post of biased reporting, supposedly because of his criticisms of Amazon’s tax payments and treatment of its workers (the Post is owned by Amazon’s Jeff Bezos). The Sanders campaign waited three days after the Senator went into a Las Vegas hospital with chest pains to announce that he’d had a heart attack. His campaign co-chairwoman called criticisms of the delay “asinine.” The voters need more information.
“Who Hacked the Emails?” and “This Is a Smear” Are Not Background Information
Readers are impatient. There will always be a temptation to shorthand information—“hacked” vs “WikiLeaks”—or to move caveats about what we know or don’t know further down in the story. There be monsters.
I can’t imagine many things more complex, or tempting to shorthand, than Ukrainian politics (remind me again which prosecutor was the good guy and which one was the crook). Except perhaps the byzantine fantasies of Trump and his Ukraine-whisperer Giuliani (now reportedly under investigation for his part in the scheme). But like the Russian-hacked Clinton campaign emails, the question of who is driving the Biden-Ukraine story—and why—has turned out to be far more important.
Last year, Peter Schweizer, the same conservative author/opposition researcher who flogged the Clinton Foundation “scandals” (the Clintons spent even less time worrying about the appearance of conflicts of interest), began pushing the idea that Biden may have abused his office to benefit Hunter’s businesses in Ukraine and China. Some of the stories highlighted by the Trump team—and tweeted out by the President—took their leads from Schweizer and his book, Secret Empires. Others had a breathlessness similar to the Clinton reporting. The headline for a now much-criticized May 2019 story in the Times flags the political forces at work—“Biden Faces Conflict of Interest Questions That Are Being Promoted by Trump and Allies”—but the story waits 10 paragraphs to describe those interests. In the 19th paragraph, it finally offers this caveat: “No evidence has surfaced that the former vice president intentionally tried to help his son by pressing for the prosecutor general’s dismissal.”
There was also strong reporting early on, including a May 2019 deconstruction by the Post’s Fact Checker that reported that Biden’s demand to oust Ukraine’s widely mistrusted Prosecutor General was supported by European allies, the IMF, the World Bank, and others. Since the whistleblower story broke, the reporting in the Times, Post, and Wall Street Journal has been rigorous and (on the whole) careful with wording. We are already seeing the sort of infographics, timelines, and podcasts that helped readers navigate the complex, multiplayer Russia investigations. I especially recommend a recent story in the Times that explains the alt-right origins of Trump’s bizarre belief that a DNC server was somehow spirited to Ukraine to hide the “truth” that Hillary Clinton and Ukraine were actually behind the 2016 election hacking and not Russia.
Not every reader will read most of these stories (some won’t read even a tiny fraction of them). But vigilant, in-depth reporting, with no shortcuts, is the only way to cover an issue that could change the outcome of a presidential election or shape the debate around a presidential impeachment.
Keep Asking the Right Questions
Asking Joe Biden how many times he spoke to his son about his Ukraine business is a legitimate question. (Yes, the reporter was from Fox News, but any of us could have asked this one.) In a first-rate profile this summer, Hunter Biden told the New Yorker’s Adam Entous that he and his father only discussed Burisma once: “Dad said, ‘I hope you know what you are doing,’ and I said, ‘I do.’” According to a statement released by his lawyer Sunday, Hunter will be stepping down from the board of a Chinese investment fund by the end of this month and “under a Biden administration . . . will agree not to serve on boards of, or work on behalf of, foreign owned companies.”
There are other questions that need to be answered: Why didn’t Joe Biden press his son harder? Why wasn’t he concerned about the appearance of a conflict? Did he ever worry that the Ukrainians might read Hunter’s lucrative seat on the Burisma board as an “everybody does it” wink and nod on nepotism or favor currying?
And there are public policy questions to answer as well, starting with: After this experience, does the candidate think White House (and cabinet-level) ethics and financial reporting rules should be broadened to include close family members, including adult children? In June, the Biden campaign gave ABC a statement saying that Biden has always followed “well-established executive branch ethics standards” and that, if he won the White House, he would issue an as yet undefined executive order on his first day in office to “address conflicts of interest of any kind.”
And then there is Trump. The confounding thing about reporting on Trump, as with reading about him, is that there are so many outrages that last year’s or even last month’s are quickly buried in this week’s avalanche. Now that the White House has made alleged familial corruption a national issue, what better moment to grab the public’s attention on the subject of the Trump family’s behavior?
There is plenty to write about: Unlike her brothers, Ivanka Trump is an official White House adviser. But last year, in the midst of U.S.-China trade negotiations, her now closed (but likely not forever) fashion brand received approval from Beijing for more than 30 trademarks (including for sunglasses, wedding dresses, and child care centers). A report from Citizens for Responsibility and Ethics in Washington lists 2,310 ethical violations resulting from “President Trump’s decision to retain his business interests.” What’s the status of Trump’s announced plan to hold next year’s G-7 meeting at his Doral golf resort in Florida? Why did the President have to overrule intelligence officials and his own White House counsel to get a top-secret clearance for his son-in-law? Is the Intelligence Community still worried about Jared Kushner?
Breaking Out of the Trump Agenda
Media critic Jay Rosen wrote in 2018: “One of the problems with election coverage as it stands is that no one has any idea what it means to succeed at it. Predicting the winner? Is that success? Even if journalists could do that (and they can’t) it would not be much of a public service, would it?”
Instead he has argued for a “citizens’ agenda,” in which reporters ask voters what they want candidates to be talking about—and then press the candidates to talk about those issues. While Rosen has championed the idea for years, he updated it for campaign reporters in the Trump era. “You can’t keep from getting sucked into Trump’s agenda without a firm grasp on your own. But where does that agenda come from? It can’t come from you, as a campaign journalist. Who cares what you think? It has to come from the voters you are trying to inform.”
Voters may say they don’t want to hear more about Washington’s scandals, but that certainly can’t be an argument for not asking candidates to account for their alleged misbehaviors. It is, though, a call for vigilant self-awareness, for choosing substance over Trumpian (or any other candidate’s) mind games, and for remembering whom we are writing for—which isn’t just each other.
The post How the Press Can Avoid Playing Trump’s Game appeared first on The American Interest.
October 11, 2019
The Banality of Bigotry
The almost daily outrages of the Trump era have convinced a growing number of Americans to pay attention to racial inequality. A majority now believe that African-Americans do not currently enjoy equal opportunity or fair treatment and agree that the country must do more to guarantee equality. It’s hardly surprising that people of color believe that racism remains a serious problem, but the most dramatic Trump-era change has been in the attitudes of white Democrats. In 2014 a slim majority of 57 percent agreed that “the country needs to continue making changes to give blacks equal rights”; by 2017, 80 percent did. Ta-Nehisi Coates sums it up well: “the triumph of Trump’s campaign of bigotry presented the spectacle of an American president succeeding . . . in spite of his racism and possibly because of it.” This awakening of anti-racist sentiment is welcome news, but there is a risk in a conception of racism that takes the spectacle of Trump’s flamboyant bigotry as its inspiration. The conspicuous examples of prejudice that Trump both creates and inspires in others are dramatic and therefore highly salient, but they are not characteristic of contemporary racial disadvantage. They have become part of a compelling but misleading narrative about the nature of racism and the best ways of opposing it.
To be blunt, racism is not typically dramatic and ostentatious. It is most often banal—a mundane, omnipresent fact of life that is embedded in the daily habits and routines of our society. Despite the rhetoric that likens prejudice to a violent assault, racism is, in its most consequential form, more like gravity—something so routine it barely registers, but which affects everything. Racial injustice is perpetuated, not by monsters devoted to atrocity but by typical people doing typical things, each one a small part of a larger machine that grinds on heedless of moral consequences. To say that racism is banal is not to say that it is without profound moral significance. Instead, it is to say that the immorality of racism lies not in discrete malice or emblematic vicious acts, but in routinized cruelty, indifference and, as the philosopher Hannah Arendt put it in another context, a failure or refusal to think.
For example, black or Latino men are more likely to be stopped by police during routine neighborhood patrols. The overwhelming majority of these stops are inconsequential, but the cumulative effect is a sustained insult and impediment to the freedom of movement that law-abiding people have a right to expect. These banal forms of bigotry only occasionally cause more dramatic injuries: Because blacks and Latinos have a greater number of tense encounters with police, they are necessarily more likely to experience the encounters that go wrong and end in violence. It is these encounters that we hear about in the news, and many people, understandably, suspect that the officers involved are abusive bigots, malevolent sadists, monsters. But often the tragedies are a consequence of bad luck or poor judgment rather than malice. Accordingly, prosecutors and grand juries often decline to indict the officers involved or, if they do indict, fail to convict. The injustice in these cases is often hard to pin down to an individual: The bias is diffuse; it lies in the daily patterns of policing, in the impoverished and segregated neighborhoods that are the context of those patterns, and in the constricted opportunities for a dignified and meaningful life that many people living in those neighborhoods face.
This description may seem unsatisfying: Blaming “society” for an injustice can seem like a cop-out. But blaming an individual for a collective or systemic evil is the essence of scapegoating. Herein lies the challenge and risk of today’s Trump-inspired recognition of racism. Trump’s vaudevillian bigotry is contrived performance, designed to provoke. It is reckless in that it stirs up latent racial resentment and hatred that a responsible leader would seek to contain, but it is no more representative of our nation’s deep and unresolved racial problems than a carnival sideshow barker’s sales pitch is of whatever sleazy entertainment he is hawking. The root causes of most racial injustice do not involve loud, crass, and conspicuous bigots like Trump. But, like a carnival barker, Trump is hard to ignore. He provides a compelling focus for the anger, anxiety, and frustration of people of color and white liberals, inspiring heroic fantasies of dramatic battles in which good anti-racists can vanquish vile, vulgar, and corrupt bigots.
The paradigm for this narrative of an epic struggle against bigotry is, of course, the civil rights movement of the mid-20th century. Here, Trump is a tech-era Lester Maddox and his detractors are 21st-century Freedom Riders. This is a comforting story because we know—or think we know—how it ends and what the moral is: After great effort and personal sacrifice, bigotry will be vanquished and the transition from racism to racial justice will blend seamlessly into a national story of moral progress. The story has deep cultural resonance. It makes our history of racism redemptive in a biblical sense. Triumph over racism involves the defeat or conversion of evildoers, both of which offer cathartic release: either retribution against the truly recalcitrant or reconciliation with those who come to see the error of their ways and repent their sins. Most importantly of all, those who have suffered racist contempt, torment, and deprivation have done so for a reason—their suffering has an intelligible cause and an ultimate purpose; it is both understandable and redeemed by the ultimate triumph of moral truth.
By contrast, a banal racism doesn’t fit this narrative of redeemed suffering. To be sure, there are evildoers, but the most vile don’t necessarily do the greatest harm, so exposing and defeating them won’t necessarily change much. The worst aspects of the system are embedded in routines of institutionalized cruelty and callousness, perpetuated by people acting out of rational self-interest or just following orders. The interchange between Kamala Harris and Joe Biden in the June 27 debate of Democratic Presidential hopefuls gave us a glimpse of this banal form of racism and the mistake of trying to place it in the heroic social justice epic. When Harris confronted Biden about his opposition to busing, she placed herself in the narrative as a classic aggrieved victim of injustice seeking retribution or demanding repentance: “That little girl was me.” Biden was left to sputter: What I opposed was busing ordered by the Department of Education—perhaps a relevant distinction in principle, but utterly tone-deaf to the drama of the moment.
Even though it resonated, Harris’s attack was oddly off the mark, because by the late 1970s, when Biden opposed busing, school segregation had become banal. Massive resistance was over; there were no more red-faced bigots insisting on segregation now, segregation tomorrow, segregation forever. Instead, there was only the routine administration of neighborhood school assignment, the normal patterns of residential settlement, the typical inequities of school funding, and the daily, silent customs and assumptions that kept the races apart in social practice, even if they were allowed to mix in constitutional theory. Biden’s role in all of this wasn’t exactly heroic, but in terms of individual culpability, he was no worse than the untold millions of parents who moved or chose where to settle based on the racial make-up of the public schools, fled neighborhoods when the non-white population got too large for comfort and for stable property values, or turned to private schools to avoid “troubled” urban ones. In fact, his opposition to busing was a perfect reflection of the taken-for-granted racial aversion that has been part of American social life since the first unfortunate African set foot on North American soil. What had begun as an epic struggle against ostentatious white supremacists had become a question of administrative procedures, jurisdictional conflicts, economic incentives, and the cumulative effect of unspoken biases. It was a story of the quiet, mundane, unthinking prejudice of typical people—not the viciousness of dramatic villains.
The banality of racism is a hard truth, harder in some ways even than the idea of racism as intractable but ultimately discrete and knowable, an idea advanced by the pathbreaking legal scholar and civil rights activist Derrick Bell or, more recently, by the writer Ta-Nehisi Coates. It denies us not only the catharsis of redemption but even the catharsis of recognition: We cannot look at Donald Trump or even George Wallace or Lester Maddox and say, aha—there is the face of my enemy and the author of my suffering. The notion of banality suggests that suffering may have no author, no meaning, and no greater purpose—it is just suffering, nothing more.
But there is also a sense in which understanding bigotry as banal can give us renewed hope and guide us toward more effective avenues of social change. Not every struggle against racism is an epic battle with clear victories and defeats; most involve mundane changes and careful reforms. To return to an issue at the heart of both criminal justice and busing, neighborhood segregation is not just a consequence of racial hatred and contempt, although it is that; it is also a response to financial incentives—the desire for stable property values and good schools. Public policy can change these incentives. We may never understand the motivations of bigots, nor are we likely to enjoy the satisfaction of seeing them decisively beaten or made to repent the error of their ways. But we don’t need to understand or reverse the logic of racism in order to make meaningful improvements and perhaps even hasten the glorious day when racism will truly be a thing of the past. Indeed, if racism is banal, there may be no underlying logic or reasons to understand. Instead, racism is a cultural artifact that emerged as a byproduct of many different circumstances and projects and eventually took on a grubby and sordid life of its own. Like any cultural custom, it can also fall into disuse and disrepute, and it will do so according to the same kind of chaotic historical and cultural process that created it. Moral narrative will play a role here, and likely a very important one. But it will not help us to understand racism nor bring about its end—it will just be one of many contributing factors. It is just as likely that morally ambiguous, unpredictable, or fundamentally amoral forces such as art, fashion, and social etiquette will also play a role.
With a conspicuous racist occupying 1600 Pennsylvania Avenue and white supremacy on the rise, this may seem an especially inopportune time for such an idea. Surely now an epic struggle, a decisive battle, is unavoidable. Perhaps. But there is another lesson to be taken from recent events. A banal-but-powerful form of bigotry comes into view whenever a baseline assumption of white superiority is challenged. One sees it in relatively trivial forms when iconic heroes and heroines are cast as people of color in films: Consider the bizarre uproar over a black Little Mermaid in the latest Disney confection or over a black stormtrooper in the latest Star Wars films. (There is of course something similar to be said with respect to sex; some of the same folks were apoplectic over a female Jedi Knight.) One sees it in a more consequential form in the backlash to Barack Obama’s Presidency, which propelled Trump—the anti-Obama—to power. What has been for many a comforting assurance of racial status is called into question when positions and roles of prestige once exclusive to whites are occupied by people of other races. Perhaps more than anything else, Trump’s genius has been to tap into this inarticulable but quite potent sense of racial anxiety. But this doesn’t suggest that even Trump’s most committed supporters are all irredeemable bigots; it suggests instead that many are typical Americans imbibing the banal, unremarkable racism that runs through American culture like capillaries. The best way to counteract their prejudices may not be to defeat them or compel them to repent, but to nudge them away from their biases and toward their better selves.
The post The Banality of Bigotry appeared first on The American Interest.
As the World Marches for Freedom, Where is Trump?
Look beyond the chaos in Washington, and you can spot some good news in a number of settings across the globe: people in the streets, on social media, or at the ballot boxes are demanding freedom and insisting that corrupt and tyrannical governments step aside. This encouraging phenomenon, alas, has largely been overshadowed by the twists and turns of the Trump presidency. The launch of an impeachment inquiry against the President, and his recent decision to tacitly greenlight a Turkish invasion of northeastern Syria, will further drown out the good news.
The demonstrations in Hong Kong are the most visible manifestation that the fight for democracy lives on. Activists in Russia, too, have challenged the Putin regime’s iron-fisted control over national politics, with more than 800 protests across the country this year. In Sudan and Algeria, street protests spearheaded by civic groups have opened the door to changes in autocracies that seemed destined to survive forever. In Turkey, the “do-over” election for Istanbul mayor produced a stunning setback for President Erdogan.
Then, of course, there is Ukraine, a country much in the spotlight these days. Earlier this year Ukrainians elected a President and Parliament that included many new faces, candidates relatively untainted by the country’s pervasive corruption.
Meanwhile, outrage over China’s human rights abuses has erupted within the United States, after the NBA apologized for a pro-Hong Kong tweet by Houston Rockets general manager Daryl Morey. The controversy has suddenly focused American attention on China’s long train of human rights abuses, from Hong Kong to Xinjiang, and its increasingly brazen attempts to squelch discussion of those issues beyond its borders.
Those Americans incensed by such abuses are standing up for our fundamental freedoms. And those taking to the streets abroad to demand better for their countries are taking enormous risks—from harassment and intimidation to loss of jobs, arrest, and even torture and death at the hands of their own governments. The least they deserve is the moral and political backing of the United States.
But the response from the White House has been one of silence and indifference—in what amounts to a profound rupture with America’s human rights tradition.
The U.S. government has supported democracy for decades. While this principle has never been applied evenly—all Presidents have made compromises in the name of national security—the policy paid off with the end of communism and the toppling of dictatorships around the world. But freedom is not inexorably sustained; it requires steadfast commitment and sacrifice. And as Freedom House has amply documented, democracy has slipped during the past 13 years, with a steady decline in the number of democracies worldwide and an erosion of political liberties in democracies once thought stable.
The United States is not responsible for this erosion, but it hasn’t always helped. The war in Iraq tainted the cause of democracy promotion for many, who came to associate it with open-ended war and quixotic attempts at regime change. By 2009, only 21 percent of Americans thought promoting democracy should be a top priority of U.S. foreign policy. Perhaps overlearning this lesson, the Obama Administration was often marked by passivity toward autocratic regimes, especially Iran and Syria. Even so, both the Bush and Obama Administrations incorporated democracy assistance as a core policy during their terms.
Things have changed dramatically under Trump, who has been downright disdainful of human rights and democracy. While awaiting the arrival of Egyptian President Abdel Fattah el-Sisi at a meeting on the margins of the last G-20 summit, Trump was overheard saying, “Where’s my favorite dictator?” On Saudi Arabia, he dismissed the significance of the state-sponsored murder of American resident and journalist Jamal Khashoggi and defended the relationship with Crown Prince Mohammed bin Salman. Doubling down on his terrible deal with Erdogan that abandoned our Kurdish allies, Trump has invited the Turkish leader to the White House on November 13.
On North Korea, Trump began reasonably, highlighting the unparalleled abuses of dictator Kim Jong Un—before he “fell in love” with Kim. On Russia, while his administration has grudgingly imposed minimal sanctions mandated by Congress for gross human rights abuses, Trump himself never utters a bad word about Vladimir Putin. Instead he says in response to human rights criticism of the Russian leader, “There are a lot of killers. We’ve got a lot of killers. What do you think? Our country’s so innocent?”
Only Cuba, Venezuela, and Iran have consistently drawn the ire of the Trump Administration for their human rights records. While Secretary of State Mike Pompeo has spoken out on China’s human rights abuses, including the appalling persecution of the Uighurs, Trump says virtually nothing. Instead, he congratulated the Chinese Communist Party on the 70th anniversary of its rule, comments that members of his own party condemned. After the NBA controversy this past weekend, Trump shrugged that the basketball league and China “have to work out their own situation.”
On Hong Kong, where the mass marches over the past several months have included a fair number of American flag-waving protesters, it’s been left to bipartisan leaders in Congress, not the administration, to come out firmly in defense of the brave activists confronting Chinese political and military might. This week, members of Congress as diverse as Rep. Alexandria Ocasio-Cortez (D-NY) and Senator Tom Cotton (R-AR) co-signed an open letter to NBA Commissioner Adam Silver, condemning the NBA for kowtowing to Beijing and blasting China for “using its economic power to suppress the speech of Americans inside the United States.” Why can’t Trump issue such a forceful denunciation himself?
Ukraine, a country on the front line of the struggle between democracy and autocracy, is another case in point. Ukraine’s people have opted for democracy and an alignment with Europe. Threatened by Putin, Ukraine needs help from the United States. Instead, the President is trying to drag it into American domestic politics.
Making matters worse, Trump’s relentless denigration of the American media, his demonization of political opponents, his abuse of power, and his nepotism send terrible messages to other countries about the rule of law.
How do we tell Azerbaijan’s authoritarian, corrupt leadership not to mix business with government responsibilities when our own President does so? Who are we to advocate for press freedom abroad when our President refers to reporters as “the enemy of the people”? And how do we stress the importance of rule of law and separation of powers when Trump urges newly elected Ukrainian President Volodymyr Zelensky to launch an investigation against his political opponent, Joe Biden?
Despite Trump’s lack of support for democracy movements, compounded by the terrible example he is setting within the United States, citizens in Hong Kong, Russia, Sudan, and elsewhere remain determined. The recent protests in Cairo, the first such massive demonstrations against Sisi, show that even in the most brutal dictatorships the desire to live freely is inextinguishable. And the bipartisan outrage over China’s abuses shows that human rights continue to resonate profoundly with the American people.
The next American administration, whether it comes in 2021 or 2025, must make support for democracy and human rights a top foreign policy priority. The advance of democracy in other countries will make the world safer and more prosperous. The United States should back this cause and return to leading by example.
The post As the World Marches for Freedom, Where is Trump? appeared first on The American Interest.
October 10, 2019
The Consequences of a Mad King
Kaiser Wilhelm II was far from the absolute monarch of the German Empire, despite Allied World War I propaganda that still resounds today. Instead, like Donald Trump in our own time, Wilhelm became chief executive of an overcentralized, personalized system where his idiosyncratic behavior (and extravagant hairstylings—for Wilhelm, a mustache) smashed norms. For Germany and the western world, all this worked—until it didn’t, with the start of World War I.
These days, the Great Man theory of history is out of fashion—yet a destabilizing personality can still upend institutions and relationships. Given Wilhelm’s long and colorful reign, a blow-by-blow account would approach the length of John C. G. Röhl’s definitive, three-volume, 3,884-page, 16.2 lb. biography of the man. Instead, we will narrow the focus, to look at Wilhelm as a disrupter of domestic and international systems.
He was born on a bad heir day. A difficult breech birth left Wilhelm with a useless left arm—the first thing that visitors noticed was his tiny hand. He spent his childhood wearing iron braces and doing painful, exhausting physical therapy, with little improvement. Photographers were instructed to avoid showing his hand, which was always concealed with a glove. Perhaps Wilhelm would have suffered fewer injuries at birth—he also likely suffered oxygen deprivation—if his father, Prince Friedrich Wilhelm, had sent for the obstetrician by messenger instead of regular mail.
Other aspects of young Wilhelm’s family situation were also unhappy. His mother Princess Victoria, daughter of the United Kingdom’s Queen Victoria, was Elizabeth Warren with a tiara: a rigid progressive. Not only was she repelled by her son’s disability, but Princess Victoria found his personality distasteful. She was not alone: His tutor worried about the “crystal-hard egoism” at “the innermost core of his being.”
As an adolescent, Wilhelm tried to connect with his mother, writing Victoria a series of letters about a recurring dream in which he fetishized her gloved left hand. His fetish continued in adult life—his handlers paid hush money to a woman professionally known as Miss Love, who possessed letters on his taste for hand bondage.
While Fred and Mary Trump hoped to improve their son’s character by enrolling him in a military academy, Friedrich Wilhelm and Victoria went the Tiger Mother route. Despite Wilhelm’s physical disability and a likely learning disability, they sent him to live for three years in the Spartan confines of a gymnasium (an academic high school), where he competed with far better-prepared middle-class students. Röhl’s biography reports:
The prince crammed from six in the morning until ten at night, including Saturdays: nineteen hours a week of Greek and Latin, six hours of mathematics, three hours of history and geography, three hours of German and two of English . . .
The physical discomforts and antiquated curriculum did little to educate the future monarch of an industrial giant, but much to embitter him.
In 1888, after the deaths of his 90-year-old grandfather Wilhelm I and cancer-stricken father within three months, the 29-year-old Wilhelm ascended to the throne. Pre-war Imperial Germany’s political system can be described (if it’s fair to do so at all) as a federal semi-absolutist aristocratic semi-democracy. The Empire consisted of several sovereign states, with Prussia by far the largest. These states even exchanged ambassadors. Each had its own executive (Wilhelm as King of Prussia) and parliament. At the imperial level, the Emperor appointed the Chancellor and a Cabinet which largely consisted of aristocrats. The popularly elected imperial Reichstag had urban/agricultural, east/west and Protestant/Catholic/Socialist splits.
Otto von Bismarck claimed that Germany under Wilhelm I was the Emperor’s personal monarchy, a form of government superior to Western Europe’s constitutional monarchies. In reality, Bismarck, as Chancellor (and also Prime Minister of Prussia), was the key figure, employing a combination of compromise, bullying, and parliamentary management.
Unlike his grandfather, Wilhelm II expected to rule and not reign, modeling himself on Prussia’s 18th-century Frederick the Great: “I am the sole master of German policy and my country must follow me wherever I go.” Röhl observes that his portraits were “in martial pose, with an oversized field marshal’s baton and a defiant expression.” A man of vociferous opinions, few consistent, he was confident of his universally superior knowledge. His confidence was not matched by an ability to read briefing materials or master policy details.
The Emperor had absolute power over appointments to his cabinet. Soon after Wilhelm’s accession, at the urging of personal advisors outside the government, he began disrupting Bismarck’s administration. Without the Chancellor’s approval (which the constitutional system required), Wilhelm suddenly intervened in a miners’ strike, ordered abandonment of a Russian bond transaction, and issued proclamations on social policy. When, after huge losses in the 1890 Reichstag election, Bismarck began negotiating to dismantle his anti-Catholic Kulturkampf legislation, Wilhelm’s advisors saw a Bismarck-Jesuit-Jewish conspiracy. After summoning the 75-year-old Bismarck from bed in order to rage at him, Wilhelm unceremoniously dismissed the Iron Chancellor.
With Bismarck gone, Wilhelm installed nonentities in the Cabinet. General Georg Leo von Caprivi succeeded Bismarck despite his lack of diplomatic experience, and despite his lack of ties with the Emperor’s court, the landed aristocracy, or the Reichstag. The Cabinet (the “responsible government” in Imperial political parlance) still needed working relationships with the Reichstag, which had power over legislation and budgets. But Wilhelm’s eruptions made it difficult to finalize deals.
After pursuing “Social Kaiserdom” early on, the Emperor reversed course and advocated the violent suppression of what he considered revolutionary groups, particularly the primarily non-revolutionary Social Democratic Party (SPD). He told military recruits that he might order them to “shoot down and stab to death your own relations and brothers.” Germany’s free and highly competitive mass press served as the Twitter of the day, disseminating the incendiary language. According to SPD leader August Bebel, every sulfurous Wilhelm speech brought the party 100,000 more votes.
This chaos was unpleasant for those in the government. As Wilhelm’s closest friend and informal advisor, Philipp Eulenburg, wrote to the future Chancellor, Bernhard von Bülow:
Wilhelm II takes everything personally. Only personal arguments make any impression on him. . . . He cannot stand boredom; ponderous, stiff, excessively thorough people get on his nerves and cannot get anywhere with him. Wilhelm II wants to shine and to do and decide everything himself. What he wants to do himself unfortunately often goes wrong. . . . To get him to accept an idea one has to pretend that the idea came from him. . . . Never forget that H.M. needs praise from time to time. . . . If one remains silent when he deserves recognition, he eventually sees malevolence in it.
Christopher Clark, a Wilhelm biographer and author of the neo-revisionist pre-World War I history The Sleepwalkers, argues that Wilhelm’s erratic interventions caused informal power in the system to gravitate away from him and toward the responsible government and Reichstag. This may have been true for domestic policy (the fin de siècle boom continued, and Germany’s civil liberties were largely unaffected), but is less persuasive for national security policy. This was legally the realm of the monarch and the aristocrats he directly appointed, with few checks.
By the time Wilhelm fired the Iron Chancellor, the international balance of power was roiled by Balkan nationalisms, Ottoman-Russian-Austro-Hungarian rivalries, the global race for colonies, and the rising industrial might of Russia, the United States, and Japan. Even Bismarck had found this increasingly difficult to manage, but Wilhelm, who anticipated a racialized final struggle between “Gallo-Slavs” and “Teutons,” made the international environment more fraught.
Early on, the Kaiser proved willing to overturn international agreements. Bismarck had entered into a secret Reinsurance Treaty with Russia, which came up for renewal in 1890. After at first wanting to renew it, Wilhelm suddenly terminated the treaty, which helped drive Russia toward France.
Other interventions played out in diplomatic circles and in public. Wilhelm demanded an ill-defined “place in the sun” for Germany, a slogan with echoes in today’s “America First.” In 1908, he provoked a German constitutional crisis by asserting his personal control of foreign policy in a rambling interview with Britain’s Daily Telegraph, filled with false assertions about other powers.
In another 1908 interview—this one with the New York Times—he asserted that the sooner war came with England, the better. Bülow managed to prevent publication, but the interview became widely known among diplomats. Wilhelm frequently expressed hostility toward the United Kingdom, yet melodramatically raced to the bedside of the dying Queen Victoria in 1901. The joke, similar to the American one about Theodore Roosevelt, was that Wilhelm “wanted to be the stag at every hunt, the bride at every wedding, and the corpse at every funeral.”
The Kaiser was also fond of sudden public appearances. In 1905, when the French, Germans, and British were contesting control of Morocco, Wilhelm unexpectedly arrived in Tangier, risking war. Yet at the subsequent Algeciras Conference, Berlin failed to provide instructions to the German delegation for months. He was more loquacious in an 1898 visit to Ottoman-controlled Jerusalem, entering the city on horseback in a field marshal’s uniform and supporting German protection of a Jewish homeland in Palestine, of Catholics in the Holy Land, and of a Lutheran church in Bethlehem, as well as declaring his friendship for Muslims throughout the world. As unsettling as the Ottomans found this, it was less grandiose than President Trump’s recent endorsement of a comparison to the King of Israel and the second coming of God.
Like the current U.S. President, Wilhelm avoided prior staff consultation. He preferred personal communication with the royals of other powers, such as the two-decade-long “Willy-Nicky correspondence” with Czar Nicholas II that the Bolsheviks revealed in 1918. And while his 1914 “blank check” to Austria-Hungary for military measures against Serbia is famous as a proximate cause of World War I, it was preceded by blank checks in 1895 and 1908.
Nor was the Kaiser shy about direct threats. In the decade before World War I, he told the Netherlands’ Queen Wilhelmina that he planned to occupy the Dutch coast in the event of war with France, and threatened two successive Kings of Belgium with invasion in that scenario. He threatened a naval war in King Edward VII’s presence in 1908, and told Prince Louis of Battenberg (a UK royal family member and senior UK naval official) in 1911: “You must be brought to understand in England that Germany is the sole arbiter of peace or war on the Continent. If we wish to fight, we will do so with or without your leave.” The German bureaucracy tried to soft-pedal Wilhelm’s statements, but officials in other countries questioned his sanity. Diplomats would have perceived a kindred spirit when Donald Trump threatened “fire and fury” against North Korea and bragged of his ability to wipe Afghanistan “off the face of the earth,” and “totally destroy and obliterate the economy of Turkey.”
The Emperor filled his national security establishment with people who shared his aggressive desires, resulting in Grand Admiral Alfred von Tirpitz’s naval arms race with the United Kingdom and the army’s Schlieffen Plan for a two-front war with France and Russia. The quality of that establishment deteriorated as Wilhelm personally abused national security officials and promoted favorites. Groupthink contributed to inadequate resources for the grandiose plans, and to missed signals when UK officials indicated that they would stand by the Triple Entente with France and Russia.
The Kaiser made the national security establishment’s job harder with his impulsive and contradictory demands. As the 1914 crisis approached its final phase, with the military solidly in favor of war, he began waffling. Nonetheless, on July 19, he ordered the bombardment of Russia’s Baltic naval bases, followed by a July 27 order to blockade the eastern Baltic. On July 30, he ordered Chancellor Theobald von Bethmann-Hollweg to reach out to London with a security pledge to defuse the Austrian-Serbian crisis. Then, on August 1, he demanded that the planned Western Front offensive be abandoned and that hundreds of thousands of troops be moved to the Eastern Front. All these commands were deflected or ignored, and the invasion of Belgium began on August 4. Said Helmuth von Moltke, chief of the General Staff: “I am happy to wage war against the French and the Russians, but not against such a Kaiser.”
Even before the Trump Administration, U.S. national security policy had uncomfortable parallels with Wilhelmine Germany. Since at least the turn of the millennium, it has been dominated by Presidents who are served by a coterie of White House staff from the right schools or the right families. Presidents make heavy use of executive action without congressional approval and, thanks to claims of executive privilege, the White House staff is largely exempt from legislative oversight. When departmental bureaucracies offer independent perspectives, the White House staff can cut them out of the national security process.
Centralization allows Presidents to act on their messianic instincts: bringing the Pax Americana to the world for George W. Bush, ending the Pax Americana for Barack Obama and Donald Trump, and transforming the Middle East for all three. Like Wilhelm, Obama and Trump also felt free to undermine their respective predecessors’ policies, and even their own administrations’ policies, as with Obama’s sudden withdrawal from and return to Iraq and Trump’s reversals on Syria. Political scientist Daniel Drezner worries that future Presidents may find it impossible to make a credible foreign policy commitment.
The example of Kaiser Wilhelm suggests the risks of a personalized policy based on threats, bluffs, and reversals—and the danger that foreign powers, reading United States policy seriously and literally, may take risks of their own.
The post The Consequences of a Mad King appeared first on The American Interest.
A Time of Crisis
We are not prone to alarmism. Yet for a variety of reasons, we see the crisis that began over Ukraine, and that was most recently accelerated by the decision to abruptly leave Syria, as part of a larger and disturbing pattern. We view the extreme impulsivity and flaws of judgment repeatedly evidenced by our President as an increasingly grave concern.
We share with you below the work of two TAI contributing editors, David J. Kramer and Gabriel Schoenfeld, who each urge former senior administration officials to come forward and speak to President Trump’s temperamental suitability, stability, and competence to lead us in these difficult times. We endorse David and Gabe’s perspectives: In the Washington Post, David writes that “Those who worked with Trump must now tell Congress what they know.” And in USA Today, Gabe writes: “Trump is in free fall. We need insights on his fitness from Mattis, Kelly and others. Now.”
We urge our friends who have supported President Trump to consider that, whatever the legitimacy of the roots of Trumpism, and whatever their party allegiances and partisan concerns, it is entirely possible that their leader may end in disgrace, badly damaging a policy agenda they deem otherwise worthy.
Jeffrey Gedmin, Editor-in-Chief
Charles Davidson, Publisher
The post A Time of Crisis appeared first on The American Interest.