Peter L. Berger's Blog, page 55

January 26, 2019

Don’t Shoot The Messenger

The Kremlin’s trusted polling firm WCIOM, which also happens to be state-owned, has been releasing the results of its surveys faster and faster, and the news isn’t good for Russia’s President Vladimir Putin. The latest poll, measuring Russians’ “trust” in politicians,1 shows Putin registering only 32.8 percent support—his lowest rating in more than 13 years. This follows a poll released one week ago that had Putin at 33.4 percent. Both figures represent a steep fall from May of last year, when almost 47 percent of Russians trusted him. But the real news is less the drop, and more how frequently WCIOM is releasing its results.

In a normal Western country, polls coming out in quick succession don’t usually raise eyebrows among observers by themselves. And even in Russia, polling is a normal part of political life; yes, some pollsters are more independent than others, but there is a culture of paying attention to polling even as democratic institutions have atrophied under Putin.

What makes these most recent “trust” polls notable is the company releasing them. WCIOM is staffed by Kremlin-friendly people, and they appear to be trying to send a message to Putin that things are not OK. The pace at which they are being released underlines the urgency of their message. Is Putin listening?

Vladimir Putin, a man who was refining his populist pitch before “populism” became a hot topic among Western pundits, has always worked to have “the silent majority” on his side—or, at least, to make it look like he did. It was these “ordinary people” that he turned to when Moscow’s elites turned on him and took to the streets in 2011 to protest his orchestrated retaking of the Presidency from his loyalist Dmitry Medvedev. During these protests, Putin’s United Russia party bussed its own people into Moscow to take part in alternative rallies in support of the move.

The difference between the Moscow intelligentsia and the crowds rallying a mile away was striking—and intentionally so. It was a skilled PR operation. Putin’s people were in large part poorer community college students from the provinces around Moscow, a visual signal designed for television that he represented the majority of ordinary people, that he did not need the approval of the minority of intellectuals, professors, and the creative class. I am the legitimate voice of the people, Putin seemed to be saying to the West.

It was this same silent majority that supported the annexation of Crimea, and that drove Putin’s approval ratings through the roof. It was this same silent majority Putin leaned on when sanctions started flying from the West.

And it is from these very same “ordinary people” that Vladimir Putin seems to have become estranged as 2018 came to a close. As Aleksandr Baunov at Carnegie Moscow Center recently wrote, the word “stability,” a term synonymous with Putin’s rule up until now, has almost disappeared from his public pronouncements. In the so-called May decree, an executive order outlining policy goals that Putin signs every time he is inaugurated, stability is mentioned only once, and only with regard to the economy. In his annual call-in show, Putin mentioned stability just once. And at his national press conference, the sacred word was spoken four times, only once with reference to Russia’s internal affairs. Putin now finds himself arguing with factory workers in Primorye (the transcript of this run-in was deleted from the Kremlin’s website) and defending highly unpopular pension reforms while Russian CEO pay rises. He appears exhausted and demoralized before his base.

Speaking in DC last week, Carnegie Moscow Center’s Andrey Kolesnikov pointed out that Putin’s approval ratings have been frozen in place for the past three months. Russia’s leader appears unable to improve the public’s perception of his performance no matter what he does. Russia’s attack on Ukrainian vessels in the Kerch Strait in November might have been expected to revive Putin’s revanchist appeal among Russians, but it didn’t work. Putin’s press conference similarly didn’t budge the numbers.

Could it be that bread-and-butter issues are finally getting their due? Russia under Putin has always been subject to a unique unwritten social contract, under which majorities united behind the regime for the sake of geopolitical goals related to national greatness. Putin always seemed to enjoy a wide margin of support as he invaded Ukraine, or sent troops to Syria, or fought Western sanctions. Inflation, drops in real disposable income, foreign investment drying up, capital flight—none of this seemed to make a meaningful dent in his polling figures. It’s as if the Russian public was convinced that a better future was just around the corner as long as they kept faith with Putin’s Make Russia Great Again project.

It feels as if Putin’s base is experiencing something similar to what Moscow’s liberals felt in 2011. However naive or exaggerated, the expectations that a second Medvedev term represented a chance at a better future were very real; and those expectations were crushed when Putin announced that Medvedev would not be running. It’s not as clear-cut now as it was then, in large part because there is no single identifiable event or moment that caused these expectations to crumble. But it feels analogous.

Up until now, Putin has managed to shift responsibility for unpopular policies onto his scapegoat Medvedev. The “counter-sanction” ban on food imports to Russia? That was the Prime Minister’s decision, Putin’s spokesman would always point out. Or the VAT increase, from 18 percent to 20 percent, set to take effect this year? That, too, was a cabinet decision, not the Kremlin’s.

Kolesnikov argued that this might no longer be working. Polls now show that Russians do not see Vladimir Putin as a Tsar anymore—as a leader somehow floating above the fray. Instead, people increasingly see him as a bureaucrat, a part of the government, and are holding him responsible for unpopular social policies.

If Kolesnikov’s read is right, a different set of polls becomes relevant. The respected independent pollster Levada reported that 53 percent of the public wants the government to resign, up 20 points from a month ago. Price increases and falling incomes were cited as prime causes of the discontent.

In a normal country, bad polling often leads to policy adjustments. In Russia, we don’t even know if these polls are reaching Putin. This could be the reason for the accelerating pace at which WCIOM is releasing its findings these days.

At the same time, it’s also well established that Putin doesn’t like to be seen to be caving under pressure, and never walks back his decisions. Russians joke that Alexey Navalny’s anti-corruption exposés have served as a kind of job guarantee for Putin’s most corrupt cronies, since Putin can’t bring himself to appear weak before the opposition. So Medvedev’s job is probably safe for the time being, but not only for face-saving reasons; he remains the most likely eventual successor to Putin, in no small part because he is so reliably loyal.

Still, it’s important not to underestimate the gravity of the situation. A recent article in the Kremlin-aligned newspaper MK, titled “When to Expect the Government to Resign,” sounded an ominous note for Putin. “Ukraine and Trump” are no longer working as distractions, the author argued. Living standards need to improve, and the people who have been running the show for years are not up to the job. “A simple change of the facade will not be enough,” he said. “The very policies of the state must change, and those policies must be run by a new, effective governing team.” Such an adjustment could not happen immediately, the author acknowledged, but continuing on the present course would be unwise. After all, he concluded, we mustn’t forget Pyotr Stolypin and the perilous path Russia’s Tsar Nicholas II chose at the beginning of the last century.


1. The “trust” rating differs from traditional approval ratings, but the picture is not that rosy there, either: 66 percent of Russians approved of Putin in December 2018—a slight recovery from a jarring fall in the summer of 2018, when Putin’s pension reform plans were announced and approval dropped from 78 percent to 64 percent in just two weeks.




Published on January 26, 2019 12:33

January 25, 2019

A Remedy for Government Shutdowns: “No Play, No Pay.”

As we head into the second month of what is now the longest government shutdown in U.S. history, we have a better sense of how the government got into this mess than of how to resolve it or prevent it from happening again. This political crisis has exposed an institutional flaw: namely, the personal risks associated with failing to fund the government are not properly aligned with the responsibilities for reopening it.

We are mired in what political scientists call the gridlock interval, a zone of inaction in which every move toward resolution is blocked by a critical player. Efforts by the House Democrats to reopen the government are blocked by the President, and the House in turn is blocking the President’s demands for $5.6 billion to build out the wall on the Mexican border. Any new legislative solution this year from the Senate would require Democratic votes to avoid a filibuster. And without Donald Trump’s approval, anything that managed to make it out of Congress would need two-thirds of both houses to override the President’s likely veto. To riff on an old song, “clowns to the left of me, jokers to the right, here I am, stuck in the gridlock interval with you.”

We have seen this act before, including twice earlier in the Trump Presidency, when the Republicans had trifecta control of the government. Those deadlocks ended quickly. By comparison, shutdowns under divided government are harder to resolve, especially in an era of heightened partisan polarization and tight electoral contestation. Moreover, various other factors are contributing to the current stalemate, such as a President who cannot afford to alienate his base given the threat of impeachment, the far right’s antipathy to “deep state” government workers, and an energized progressive faction in the Democratic Party caucus. All of this makes it hard to find common ground.

But in the end, the government will reopen because outside pressures will build up inexorably. All of the Administration’s various tactical moves to shield the public and the economy from the consequences of disrupted government services will fail at some point. Government workers are already pushing back against efforts to make them work without pay. Stories about the hardship that these 800,000 government workers face are multiplying daily. The inconvenience of not receiving Federal tax refunds or passing through airport security in less than two hours will fuel more public discontent. Adverse economic effects may even rouse the stock market to send strong negative signals. President Trump’s job approval numbers are already worsening in ways even he cannot deny. This will all translate into growing pressure on the President to negotiate in good faith as opposed to laying out unconditional demands.

While I am optimistic that some compromise will emerge in time, I am less hopeful that we can avoid repeated cycles of fiscal brinksmanship over the next two years. What bothers me is not the use of crisis to induce compromise, but the displacement of the consequences onto people who do not have a role in resolving the dispute.

There is a clear disconnect between responsibility for this crisis and its consequences. Congressional salaries are constitutionally protected from the regular appropriations process. As a consequence, the shutdown does not affect members’ personal finances. This means that the country and affected government workers have to wait for the political heat and/or economic damage to build up to a critical level before Congress will act. At the moment, voters blame the President first and foremost, the Democrats to a lesser degree, and the Senate Republicans hardly at all. The latter is a shame, as the Senate could be a key actor in ending this shutdown given its historical record of bipartisan deals.

While people in Washington sometimes talk the talk of states as laboratories, they rarely look outside the contours of the capital for possible solutions, especially from California. But California offers fertile ground for possible reforms precisely because it has had to grapple with severe partisan polarization for decades and has a user-friendly direct democracy system that enables bold (if sometimes unwise) reform proposals.

Prior to 2010, California was prone to precisely the kinds of government shutdowns that the Federal government is now experiencing. Between fiscal year 2000-2001 and 2010-2011, only two budgets were enacted by July 1, the start of the new fiscal year. As with the current Federal crisis, private sector contractors, government workers, and those dependent on government services suffered while the state legislature deadlocked over the budget.

One of the problems behind this was the requirement of a supermajority vote to approve the California state budget. This meant that the minority party could hold out for a better deal, much as the cloture vote functions as a legislative hurdle in the U.S. Senate. But the other problem was that legislators were entitled to back pay when the budget was finally resolved.

In 2010, the voters passed Proposition 25, changing both features. It changed the budget threshold to a majority vote (as it is in the U.S. Congress) and it stipulated that legislators would forfeit their pay and per diem reimbursements for the period of time it took to pass the budget beyond July 1. So did this work? We had a test the very next year.

The Democrats passed a budget by a majority vote without input from the Republicans or the approval of newly elected Governor Jerry Brown. Brown then vetoed parts of the budget that caused revenues and expenses to become unbalanced, in violation of the state constitution. The State Controller, John Chiang, ruled that since the budget was not balanced, it was not finished. He declared that legislators would forfeit $400 a day plus their per diem expenses until they fixed the problem. The state legislature then made the necessary changes, and the budget passed on time. There have been no missed budget deadlines since.

Given the state of U.S. politics, we should anticipate that the budget and debt limit negotiations will continue to be polarizing and contentious. It is easy to take symbolic stands when there are no personal consequences. It is harder when you have to explain to your family why you have given up money for some political cause. In short, one problem in the current shutdown is that members of Congress do not have enough skin in the game. The electoral signals are too slow to develop, and in the meantime, other people endure the consequences of the choices that Congress and the President have made.

Even if there are too many wealthy members of Congress (particularly in the U.S. Senate) for financial incentives to work as well in D.C. as they have in California, the idea of sharing the risk of a shutdown accords with widely held norms of fair play and shared sacrifice. It would of course take a constitutional amendment to make this happen, which is never an easy task. But the last amendment to pass was the 27th, which dealt with Congressional pay. Perhaps a 28th Amendment is the remedy for what ails Congress right now.



Published on January 25, 2019 08:34

Checking the Fact Checkers

The Lifespan of a Fact

Directed by Leigh Silverman

Studio 54, New York, NY


The attempted strangling that takes place in the latter half of The Lifespan of a Fact is one of the most convincing violent escalations I’ve seen on stage. On paper, it might sound implausible. The two men involved are an essayist and a fact-checker, and the dispute that has sent the essayist into a semi-murderous rage is the fact-checker’s steady patter of questions: Can a moon that is only a waxing crescent (12 percent illuminated) be accurately described as half-full? How long can a woman live in Las Vegas before it’s inaccurate to describe her as being “from Mississippi”?

Sometimes, as the strangler in this three-hander would argue, it’s hard to get a sense of the scene from a simple recitation of the facts.

The Lifespan of a Fact is based on an implausible true story. Essayist John D’Agata wrote a piece for Harper’s that was rejected due to his loose approach to facts in the service of story. The essay “What Happens There” was ultimately published in The Believer—once it had been given a careful fact check by Jim Fingal.

D’Agata and Fingal swore at each other, argued about the nature of journalism, and, ultimately, published their correspondence as a book, The Lifespan of a Fact, which lays out their argument Talmudically: their back-and-forth surrounds the essay that spurred it, which sometimes progresses at the rate of a sentence per page as they litigate in the margins.

Playwrights Jeremy Kareken, David Murrell, and Gordon Farrell adapted this brawl into a Broadway play, adding an editor, Emily (Cherry Jones), to the mix. Jim, a stringy Harvard graduate, is played with a vibrating anger by Daniel Radcliffe, while Bobby Cannavale brings a muscular presence to John.



Pictured L to R: Daniel Radcliffe, Cherry Jones, and Bobby Cannavale
(Photograph by Peter Cunningham, Courtesy of Polk & Co.)


From the beginning of the show, it is clear that the problem at the heart of the play isn’t limited to writers and fact-checkers. As Jim begins his email correspondence with John, he types, erases, and edits in front of the audience. (The well-integrated projection design is by Lucy Mackinnon.) Jim refers to the piece as an “article”; John angrily corrects him—he writes “essays.”

Words matter. The audience is drawn into Jim’s role, if not necessarily onto his side, as we play fact-checker from the sidelines. The scenic design for their final confrontation in John’s house makes the contradictory points of view tangible. Mimi Lien’s trapezoidal frame and sloping wall make the house resemble an Ames room, a space built to distort our judgment through tricks of perspective.

When both men summarize Emily’s expectations for their work, their language diverges again. John the writer tells Jim, “She knows I’m not beholden to every detail,” as if preemptively excusing any factual slip-ups. Jim the fact-checker pushes back, saying that Emily warned him that John “take[s] a few liberties.” The audience can confirm only Jim’s claim (we just saw Emily say this in the preceding scene), but we can’t disconfirm John’s. She may have made both comments to both men, and either, both, or neither may be an accurate impression of her view of John, or an accurate description of John himself.

The show resembles Tom Stoppard’s Arcadia, a play that makes high drama of epistemological uncertainty. In Stoppard’s classic, the duelists are historians poring over primary sources, trying to intuit the whole from fragments. The audience can see what the academics cannot, following the action in the past as it plays out in parallel in the present.

The historians of Arcadia scour the historical record, looking for, as the character Hannah Jarvis puts it, a “peg” to hang their theories of literature on. The peg is in fact a person, with a richer, more complicated life than their pet theories would suggest. But that’s not always a concern of the historians, or of John in Lifespan.

We get no glimpse behind the curtain in Lifespan. The subject of John’s piece, Levi Presley, is dead. This teen’s suicide is the lens through which John is telling the story of modern Las Vegas. John never met Levi (though he’s not above implying he might have—spinning a call he fielded during his volunteer work on a suicide hotline as not provably not from Levi), but he recognizes in the boy the chance to tell a story he’d already wanted to share.

In the original book, D’Agata makes the case for his methods to Fingal, arguing that his art is impressionistic. Merely sticking to the facts occludes the truth he’s trying to tell. D’Agata writes:


Numbers and stats can only go so far in illustrating who a person is or what a community is about. At some point, we must as writers leap into the skin of a person or a community in an attempt to embody them. That’s obviously an incredibly violent procedure, but I think that unless we’re willing to do that as writers (and go along for that ride as readers), then we’re not actually doing our job.

It’s the question of who experiences the violence of that procedure that brings Emily up short in the play, and seems to give Fingal some of his biggest qualms. In the book, Fingal objects that Levi may not be able to bear the weight of the argument D’Agata wants to make. After all, he’s “not a cultural figure or an icon whose life is for the taking and can be radically manipulated and reinterpreted.”

D’Agata admits that his essay is really about an idea, not Levi, but asks what he should have done instead. Does Fingal wish D’Agata had “completely made up a suicide victim so that I could use him however I wished?”

In the play and in the book, it’s clear he was willing to rescript another suicide victim’s death to suit his story. In the opening paragraphs of his essay, D’Agata lists the other deaths that happened on the same day as Levi’s jump, and falsely states there was a suicide by hanging, when, in fact, the other victim also jumped to her death. D’Agata wants Levi’s death to not be cluttered by other similar deaths that aren’t part of his story, but this is the lie that gets the biggest reaction from Emily in the play. “She is as dead as Levi and you pissed on her,” Emily tells him coldly.

The playwrights’ addition of Emily adds a third perspective to the folie à deux of the original book. Emily is the voice of the publication, torn between profit and prestige. She is willing to run the article if it is approximately correct. (She suggests shooting for 90 percent true, and apologizing for remaining gaps, if anyone happens to notice.)

Her pragmatism appalls Jim, who tends to carry the sympathies of the audience throughout the show. Radcliffe’s Jim is so fixated as to be completely unselfconscious, never getting the jokes at his own expense. At the performance I attended, the audience broke into spontaneous applause at his deepest moment of pedantry: Jim pulls out a posterboard reconstruction of a traffic jam mentioned in the article, arguing that the street is too wide for the number of cars cited to cause a serious snarl. This moment of high-school science fair sincerity impresses Emily less than it did the audience.

Jim does have to handle the only moment in the play that rang false to me. His character delivers a too-topical speech, making the case that fact checking is the only possible rebuttal to cries of “fake news.” John’s smallest elision, he suggests, gives fuel to conspiracy theorists like those who accuse shooting victims of being crisis actors.

The strange thing is, if the conspiracy theorists came to see the show at Studio 54, the character who would validate their fears is Emily, not John. She wants to see the essay published, not just because her backup piece is the fluffy-sounding “Congressional Spouses and the Burdens They Bear,” but because she thinks John’s essay is the right story at the right time. It’s the kind of narrative, she argues, that has the potential to change its readers and the world. And if the change is desirable, does it matter too much if the story is true?

In Emily’s telling, writing is a tool for shaping readers. The art is a means to an end. Her view has sympathizers off stage. Boris Kachka, the New York books editor, argued that books don’t need to be read to be important. In a Columbia Journalism Review piece about the rising influence of book reviews, he said, “You can have a blog post that at least draws people’s attention to the book. Maybe they’ll read it, maybe they won’t. But at least the ideas from the book will filter through into the conversation. I think it’s important to get those ideas in, so books can have an influence beyond their readership, whatever it might be.”

Jim would bristle at the deployment of a noble lie, John at the dismissal of his artistry as a mere vehicle for insinuating an idea into the mind of the reader.

The ultimate justification for John’s style is different onstage than in the book. The playwrights give him new ammunition, having John tell the story of his mother’s death (there’s no indication in the book that his mother has died). He makes the case that following the verifiable facts would lead away from the truth. The rules about calling time of death tell you less than John can about being with her and seeing her go.

John’s attack on what Edmund Burke would call the tyranny of “sophisters, economists, and calculators” is his most sympathetic moment. Who, on the phone with their insurer or interacting with any other piece of precise bureaucracy, hasn’t wanted to cry out that their interlocutor has all of the facts and none of the truth? If John doesn’t win out, it’s because it’s not clear there really is an underlying truth to his piece, let alone one that would justify the numerous distortions in his essay.

The argument John is making is best defended by one of Jim Fingal’s marginal comments in the original book. Fingal is patiently adding context to D’Agata’s overbroad tour of theories of suicide throughout human history.

While he concedes that D’Agata is technically correct that “The Talmud forbids even mourning [suicide] victims,” Fingal piles on marginal citations to prove that D’Agata isn’t telling the whole story. Fingal notes, “The severity of this punishment caused rabbis of the time to consider a self-inflicted death as only that which was announced beforehand and carried out in front of eyewitnesses.”

In other words, the rabbis understood that asking for a sufficiently severe fact-check would render a claim unprovable. For them, this precision was a way of sneaking in mercy as a technicality. Journalists like D’Agata haven’t proven themselves worthy of the same benefit of the doubt.



Published on January 25, 2019 08:29

January 24, 2019

U.S. Strategy Towards Afghanistan And (The Rest Of) Central Asia

From Europe to Asia, everything is in motion. Russia’s growing weakness as a state tempts it more than ever to employ its refurbished military in risky adventures. China faces an unfamiliar fragility at home and pushback to its policies abroad. India is rising but must still make up for decades of clumsy domestic policies. Pakistan has a growing middle class but is failing nonetheless. In Afghanistan, a talented new generation is rising but solutions to decades of turmoil require constant replenishing. The people of Iran are once again flirting with revolution. Turkey is lurching towards an Islamic and neo-Ottoman identity, and has in the process upended most conventional thinking about its strategic importance. The European Union’s process-driven raison d’être appeals to fewer and fewer citizens of the nations it hoped to homogenize. And the Middle East continues to breed the pathologies that have characterized it for a century.

Only the most wooden strategist would still try to characterize this vast region in terms of traditional balances of power or spheres of influence. On the contrary, its dominant feature is a still amorphous but general realignment, the likely outcome of which will be new and unprecedented alliances, relationships, and transactional tradeoffs. Within a few years the Eurasia that will emerge from this churning will be unrecognizable. Shaping the geopolitics of this region into landscapes that affirm long-term American strengths will require thinking and actions that transcend our conventional strategic paradigms.

This rethinking might usefully begin by assessing U.S. objectives in the geographic heart of the Eurasian continent, Central Asia, through a different set of strategic optics. As part of a larger shaping strategy, the U.S. could benefit from an approach that envisions strategy outward from Central Asia rather than through the traditional and exclusive analytical lenses of Russian or Chinese interests. Here, on the vast territory between real or imagined modern empires, lies a dynamic region with historical and cultural connections in all directions and deep ties to all the major powers and problematic regimes on its periphery. It is also the only region on earth whose neighbors and near-neighbors include four and possibly five nuclear powers, as well as NATO member Turkey.

Central Asian leaders today consider Afghanistan an inseparable part of their region. Inevitably, Afghanistan figures prominently in the political, economic, and security planning of all the states that surround it. Central Asians find it hard to imagine their region as a zone of stable states without Afghanistan as an integral part of it. What they already share is of vital importance, namely common values, cultures, and histories. Moreover, their economies are fast becoming interwoven.

Any American policy that seeks to lessen or withdraw U.S. support from Afghanistan is bound to have a negative impact on all the other states of Central Asia. The timing for such a move could not be worse, for it would occur precisely at the moment when Central Asia is successfully evolving into a more stable, prosperous, open, and integrated world region. A U.S. withdrawal from Afghanistan would also signal to Afghanistan’s neighbors in Central Asia that America is timid and uncertain about its own interests, even at a time when Central Asians themselves increasingly support some kind of U.S. presence as a means of balancing Russia and China.

By leaving Afghanistan to its fate, the U.S. would also close off Central Asia’s access to the booming Indian subcontinent. India, a key American ally, recently signed 17 pacts with Uzbekistan, covering nearly every sector, including defense. If these regions are cut off from each other by an American withdrawal, Central Asian economies will be left ever more dependent on just Russia and China. Abandoning Afghanistan will therefore send the wrong signal at the wrong time.

Russia meanwhile seeks to draw Central Asians into the neo-colonial economic and security organizations it controls. Dreaming of an imagined past, both Iran and Turkey harbor ambitions in Central Asia, as do a number of Middle Eastern states. Further afield, Japan, South Korea and Southeast Asia, all long-term investors in the region, are seeking to expand their roles in Afghanistan and the rest of Central Asia, but are unlikely to do so in the face of a fast U.S. withdrawal.

China’s bid for economic supremacy in Central Asia is notably ambitious. It has engaged all of the Central Asian countries in its Belt and Road Initiative (BRI) linking Asia to Europe via the Caucasus, and has provided support for infrastructure projects to achieve this end. In many parts of the world BRI is encountering serious pushback, as grantees begin reading the fine print of agreements they have signed or are being asked to sign. Many recipients of Chinese aid seek a way out of the debt trap that this aid has engendered, among them Pakistan, Malaysia, Myanmar, Tanzania, Bangladesh, Djibouti, Laos, the Maldives, and Montenegro. Sri Lanka’s current political crisis is caused in no small part by its government having taken BRI money to build its port at Hambantota, then ceding the port to Chinese ownership when Sri Lanka could not repay China’s loans.

America has serious concerns about BRI’s aspirations in Southeast Asia but need not object to BRI in Central Asia, provided that Chinese loans do not become a strategic tool by which Beijing exerts control over recipients. The Pentagon has declared this aspect of BRI a direct threat to American security interests. Both Kyrgyzstan and Tajikistan are already struggling to repay BRI loans. Only if the U.S. is present in the region as a major investor and supporter of self-rule will Central Asians be able to moderate and balance China’s powerful influence.

The United States is late to the table, though it appears to be awakening to the challenge. National Security Advisor John Bolton recently traveled to the Caucasus, which forms a bridge between Europe and Central Asia. There he noted that the U.S.-Georgia relationship “is one of our highest priorities.” He was well aware of Georgia’s robust participation in U.S.-led operations in Afghanistan and Iraq. But what seized his attention in 2018 was Georgia’s new deep-water port of Anaklia on the Black Sea, a key link in the emerging corridor between Europe and China, and one with closer links (for now, at least) to Europe and America than to China.

In May of last year President Donald Trump hosted a very visible and positive meeting with Uzbekistan’s new president, Shavkat Mirziyoyev, highlighting a radical shift in America’s official attitude toward that pivotal Central Asian country. Secretary of Commerce Wilbur Ross echoed this sense of strategic opportunity in his remarks in October at a business forum held in Tashkent. The United States, he told the Uzbeks, “is committed to being a strategic partner in your growth and development, through trade, investment, and your outreach to other nations in Central Asia.”

It would be surprising if these sentiments were not reflected in the priorities of the new U.S. International Development Finance Corporation (DFC), which President Trump signed into law on October 5th. The DFC will direct more American development assistance to many of those countries that are now balking at China’s growing involvement in their economies and politics. Congressional sponsors of the DFC legislation did not mask their intention to counter BRI, albeit with smaller and more precisely focused investments. Central Asia is a particularly attractive target.

At the same time, the U.S. recognizes that China’s efforts to open east-west corridors to Europe help counterbalance pressures the Central Asians feel from Moscow. Because of this, the U.S. seeks not to exclude China from the region, which would be impossible under any circumstances, but to strengthen Central Asians’ own ability to maneuver between their two goliath neighbors, Russia and China, and thus preserve their sovereignty and that of their region. Mongolia, which is on the fringes of Central Asia and is wedged between China and Russia, has demonstrated a sophisticated capacity for such strategic balancing. Not surprisingly, Mongolia’s involvement in greater Central Asia is growing, as its lessons penetrate subtly throughout the region.

Central Asia itself is on the move. The wave of reform sweeping Uzbekistan far surpasses anything we’ve seen in other societies with Muslim majorities, and is bound in time to influence its neighbors and other Muslim countries further afield. Regional trade has surged, and the heads of state are conferring regularly on heretofore taboo topics like water and hydroelectric power. As cooperation and coordination increase, Central Asia will be better able to resist the unsettling “divide and conquer” strategies of its big neighbors, and become itself a stabilizing force across the region.

Ancient Central Asia’s emergence as a new world region has profound geopolitical significance. If it successfully resists the threat of Islamic extremism, it will have removed the chief cause that both Russia and China cite in defense of their meddling in Central Asian affairs. Religious moderation in Central Asia, which claims a heritage dating back to the tenth century, offers a strong foundation for this resistance. Meanwhile, at a recent conference in the Uzbek capital of Tashkent, the Central Asian countries not only welcomed Afghanistan as a new member of their movement but pledged to promote both domestic and foreign investment in Afghanistan, and also to expand educational opportunities there. All these measures directly support U.S. interests. Yet the Central Asians have framed them so deftly that they equally support the ends that both China and Russia profess to support.

The United States urgently needs to find its long-term role in this increasingly important region. Stability and progress in Central Asia will best come from within. Solutions imposed from without, including those from the West, will not work. The key, then, is for the U.S. to help the region strengthen its economies and societies, and to be attentive to its security needs. If the U.S. fails to enhance Central Asia’s strengths, mitigate its weaknesses, and help shape its strategic outlook, prospects of happy endings to turmoil in Iran, Russia, or even in Turkey or China, will diminish. Whatever the regional powers may be saying publicly, an autonomous and prosperous Central Asia will serve the real interests of all its neighbors and, equally, of the U.S. It will reduce extremism, undermine the pillars now supporting drug trafficking, cut back corruption, and provide a strategic shock absorber for turmoil that might result from instability in states near Central Asia. The alternative could be a sharp escalation of the regional and great-power conflicts that have generated chaos and suffering over the past generation.

The way to achieve this is for Washington fully to embrace the concept of Central Asia as a single region composed of six sovereign but collaborating states. These countries are fast creating new westward links across the Caspian Sea to the Caucasus, eastward links to China, and links via Afghanistan to the economies of South and Southeast Asia. Expect the Central Asians within the coming year to set up a regional entity similar to the Association of Southeast Asian Nations (ASEAN), a move that warrants robust American support. Having already welcomed Afghanistan as part of their region, Central Asians are starting to invest both in Afghanistan’s economy and in its human capital, by giving thousands of Afghan children modern educations. Assisting Afghanistan to become an active member of a larger, more integrated Central Asia will multiply opportunities to advance American interests in many directions.

These positive developments have all arisen from within the region itself. They are not owned or dominated by any outside power and are not against anyone. A coherent and integrated U.S. strategy for Central Asia would encourage and help shape these positive trends, while supporting America’s interests on many fronts.



Published on January 24, 2019 09:04

The Catholic Invention of Representative Government

A long line of research, stretching back to the German sociologist Max Weber’s seminal work, identifies Protestantism as the sledgehammer that broke down autocratic barriers, giving rise to modern liberal societies. A good example is work by American political scientist Robert Woodberry, which demonstrates that Protestants pioneered a series of innovations that eased the advent of modern representative democracy, including religious pluralism, voluntary associations, printing, and mass education. More generally, the Weberian notion of Protestantism as the midwife of modernity received a great deal of attention during the 500th anniversary of the Protestant Reformation in 2017. Often as an accompaniment to these views, writers like Samuel Huntington have portrayed either the Catholic Church itself or aspects of Catholic culture as historical impediments to modern liberalism and modern democratization. But the story about the origins of our political institutions, and the way religious institutions affected it, is much more interesting and complicated than implied by this conventional narrative. In fact, modern representative democracy is well-nigh unthinkable without constitutionalist practices and doctrines pioneered by the medieval Catholic Church.

It is perfectly correct to say that many 18th- and 19th-century leaders of the Catholic Church—including several Popes—were openly hostile to liberalism and democracy, which they associated with the anti-clericalism of the French Revolution. The best example is probably Pius IX’s Syllabus of Errors from 1864, which openly denounced modern liberalism. But the political conservatism of the Catholic Church up until the Second Vatican Council (1962-65) does not alter the fact that modern representative government owes its origins to Catholic innovations that grew during the period from approximately 1100 to 1500. This thesis was first defended by the American medievalist Brian Tierney in 1955; it has been corroborated by several generations of later historical research, and it is today part of the consensus among medieval historians.

If this sounds surprising, it is because students of democracy have largely ignored the Catholic origins of these core elements of representative government. However, this story is worth revisiting for several reasons. First, it helps explain why representative government (and later modern democracy) came into being in the Latin West and not elsewhere. Second, it testifies to the intimate historical connection between religious institutions and teachings and politics. Third, it shows how quickly norms and institutions could diffuse from the religious to the lay sphere due to what Tierney termed the numerous “areas of interaction” in medieval and early modern Europe.

This tangled relationship holds a lesson for an age in which religion and politics are again interacting in ways that secularists had not anticipated one or two generations ago, and that many are uncomfortable about. We have recently seen several examples of the political flexibility of religion. One of the more spectacular is how, since the 1970s, many North American Protestant churches have moved from quietly bolstering liberal political principles to vocally supporting conservative ones—whereas mainline Protestants have, broadly speaking, moved in the opposite direction. A more general example is how political Islamism—inside and outside the Middle East—has been leveraged as an attack on democracy and modernity in a way that resembles what happened in certain quarters of the Catholic Church after the French Revolution.

To understand these present-day political uses of religion, we first need to jettison the idea that religious doctrines are inherently supportive of certain political principles. As the English sociologist John A. Hall puts it when describing the great universalist world religions, “These belief-systems are loose and baggy monsters, full of saving clauses and alternatives that can be brought by an interested group when occasion demands.” The story about the Catholic invention of representative government reminds us that religious doctrines are multivocal: Catholicism contributed both to constitutionalist theories and to theories about absolute monarchy, just as Protestantism was later to do.

To see how this is so, let’s turn all the way back to 1414. In this year, a great church council met in the south German city of Constance to solve what we now call the “Western Schism.” Since 1378, there had been two rival Popes, one residing in Avignon in southern France and one in Rome. Indeed, for five years up until the gathering at Constance there had been three rival Popes, as a prior council in Pisa in 1409 had added to the confusion by appointing a third. Led by the great theologian and chancellor of the University of Paris, Jean (or John) Gerson, the Council of Constance deposed two Popes and persuaded the third to resign. The council then appointed a new Pope, Martin V, who was recognized throughout the Latin west. To justify these acts, the council—which sat until 1418—passed the decree Haec sancta (1415), declaring church councils to be superior to Popes in matters of faith, unity, and reform. The decree Frequens (1417) further required that future church councils should be called according to a fixed schedule. These decrees were justified by the theological work of a string of “conciliarists,” including Gerson himself. Conciliarists held that God had intended his church to be led not by infallible Popes but by the Christian community via broad councils with representatives from ecclesiastical institutions across Western and Central Europe. They thereby formulated the first systematic theory about representative government.

This was only fitting, as the core practices of medieval representative institutions or parliaments—political representation and political consent—had been developed and put in practice within the Catholic Church in the preceding centuries. Political representation meant that a corporate group such as a town or a cathedral chapter appointed an agent (a proctor), gave him full powers to act on its behalf, and sent him to a council or assembly, the decisions of which were binding for this group. Political consent meant that towns, cathedral chapters, or other corporate groups had to be consulted if decisions made in a council or assembly affected, for example, their property rights (the best example is taxation). The combination of representation and consent made medieval assemblies a place where taxpayers could be committed to cough up, often against a promise that the rulers would respect their rights or address their grievances. We find a famous echo of this quid pro quo understanding in the American revolutionary slogan, “no taxation without representation.”

These practices, which in altered form remain the core principles of representative democracy even today, are commonly associated with the Magna Carta of 1215 and the English parliament in Westminster that developed in the 13th and 14th centuries. The first English summoning of townsmen as representatives occurred at Simon de Montfort’s famous parliament in London on January 20, 1265. In 1295—convening what is today known as the “Model Parliament”—King Edward I inserted into the summons to parliament the clause that signified political consent to the medieval mind: quod omnes tangit ab omnibus approbetur (“that which affects all people must be approved by all people”).

However, the story of representation and consent does not begin in the British Isles with Magna Carta, Simon de Montfort, or Edward I. It begins in Rome. The first documented use of the Roman Law concept of representation at councils or assemblies thus dates to the pontificate of the great lawyer-Pope Innocent III, between 1198 and 1216. The prime example is Innocent’s summons for the Fourth Lateran Council, which went out in April 1213. The council, which met in November 1215, was one of the most important Church Councils of the high middle ages. More than 400 bishops from dioceses all over Europe attended, as did abbots, deans, and even agents of lay rulers such as Frederick II of Sicily (the later Emperor Frederick II) and the Kings of France and England. But so did representatives of cathedral chapters from all over Western and Central Europe, based on the Roman Law notion of proctorial representation. Not long after, the notion of political consent (in the form of quod omnes tangit) also began to be used at councils within the Church. The first conclusive evidence comes from the French church council at Bourges in November 1225, where representatives of cathedral chapters rejected a tax that was designed to finance papal government.

These events marked the culmination of a gradual—and fascinating—process. The principles of representation and political consent had been formulated via an extremely flexible interpretation of the Roman Law that had been revived in the 11th and 12th centuries. According to tradition, the medieval study of Roman Law began in earnest when a copy of the sixth-century Byzantine Emperor Justinian’s Law Code, the Digest, was found in an Italian monastery in 1070. In the subsequent centuries, lawyers would study Roman Law in new centers of learning such as the famous law school of Bologna in Northern Italy.

Roman public law had nothing to say about representation and consent; both notions were found in Roman private law. The principle of representation hailed from a clause to the effect that a private corporation could appoint an agent who could negotiate on its behalf with full powers. Political consent was based on an otherwise unremarkable section of Roman private law that concerned co-guardianship. This included the aforementioned expression quod omnes tangit ab omnibus approbetur (“that which affects all people must be approved by all people”).

How could these private law clauses transform into political principles used for broader communities at councils or assemblies? The first step in this process was that church legists started treating ecclesiastical institutions such as monasteries and cathedral chapters as corporations in the Roman Law sense. This was useful because the church centralization that had begun in the late 11th century raised the problem of how to govern an international organization that spanned an area from Trondheim to Palermo and from Dublin to Riga—in a period where long-distance travel was extremely cumbersome. Canon Law, that is, the law of the church, had to be constantly interpreted in ecclesiastical courts to determine the rights of, for example, cathedral chapters and monasteries. Often this would occur in Rome, far away from the Franco-German heartlands of Latin Christendom. In this situation, the use of representatives with full powers provided a great advantage, as this meant that distant groups and institutions could be present in court or councils, and that they would be bound to recognize the decisions made there.

This process was well under way by the late 12th century. The next step was to construe the entire Catholic Church as a corporation in the Roman Law sense. This meant that representation and political consent could be used at general Church councils, which could therefore make decisions that bound everyone. As American legal scholar Harold J. Berman puts it in his magisterial Law and Revolution: The Formation of the Western Legal Tradition, church councils such as Innocent’s Fourth Lateran of 1215 and the Council of Bourges in 1225 were thus the first medieval parliaments.

Once this leap from legal delegation to constitutional principle had been made, the practices of representation and consent spread like wildfire across the Latin West. This was possible because Catholic Europe was in many ways a borderless society around 1200, and because the primitive lay administrations of the day were mainly staffed by churchmen, who had a monopoly on education until universities (themselves developed within the church) secularized in the late middle ages. The use of clergy in lay government facilitated the spread of administrative and political models developed within the church. This was further eased by the fact that revived Roman Law and Church Law principally applied in all of the Latin west.

It was due to these “areas of interaction” that representation and consent could quickly leap to lay politics. Medieval parliaments soon cropped up across Western and Central Europe. The Iberian Peninsula and England led the way, followed by France, Hungary, and a number of different German states. This marked a completely new regime form in which monarchs would rule together with elites via assemblies (the model that has become known as “king-in-parliament” in English). As the British historian Lord Acton put it, with his penchant for exaggerating to make a point, “Representative government, which was unknown to the ancients, was almost universal” in Europe by the late middle ages.

However, the development of a corresponding ideology or theory of representative government lagged behind. Medieval political theory was lofty rather than empirical. It revolved around the writings of Aristotle, which centered on the pure regimes (democracy, aristocracy, and monarchy) as well as the “mixed regime” that combined elements from several of these, but had nothing to say about representative institutions, which did not exist in Aristotle’s day.

It therefore fell to churchmen to provide the theoretical defense of representative government, just as they had earlier developed the practices of representation and consent. As mentioned earlier in this article, the trigger for this was the “Western Schism” that began in 1378. The papacy had gone from strength to strength in the high middle ages and had successfully humbled the greatest secular rulers of Latin Christendom, the Holy Roman Emperors of Germany, during the 11th- and 12th-century “crisis between church and state.” But in the early 14th century the Popes had overplayed their hand in an attempt to similarly bring to heel the French kings. The result was the so-called Babylonian Captivity from 1309 to 1376, in which the Popes resided not in Rome but in the southern French city of Avignon. In 1376, Pope Gregory XI finally decided to return to Rome, where he arrived the next year. But Gregory died in 1378 before this move had been consolidated. The Italian party within the church elected Urban VI, who stayed in Rome, whereas the French cardinals elected Clement VII, who took up residence at Avignon.

There had been several earlier instances where there had been more than one Pope. What was unusual this time was that the two camps developed into rival lineages, with new Popes succeeding deceased Popes in both Avignon and Rome, supported by different parts of the Church and different coalitions of secular monarchs. The result was an intolerable situation in which the Church—a community that is by definition “catholic,” namely, of universal reach—was torn apart not only by a vicious struggle for power but also by constitutional crisis and administrative disorder, with dire spiritual effects on believers.

As often happens, chaos on the ground had creative consequences for political theory. Theologians now formalized ideas about conciliar government of the Church that had been germinating for centuries. I have so far focused on Roman Law, but this body of legal rules was constantly cross-fertilized by notions drawn from the other great legal system of the middle ages, Canon Law, or the law of the Church. The most important compilation of Church Law had been accomplished by the monk Gratian around 1140. This compilation—popularly known as the Decretum—included a clause to the effect that a heretic Pope could be deposed. Fused with the Roman Law principles of representation and consent, this provided an argument that implied the church should be governed not by its papal head but by broad councils embodying the Body of Christ—that is, by the community of believers or at least by the those with positions within the Church.

Conciliarists had a venerable tradition to draw on. In the first centuries of the Christian era, bishops had been each other’s equals, and church councils had been used to legislate on important matters of faith and church organization. Perhaps the two most famous are the councils of Nicaea and Constantinople in the fourth century. The conciliar mode of governance of the early church had been preserved in several of the texts that were included in Gratian’s Decretum. Conciliarists also drew on certain passages of scripture, including Galatians 2:11-15, where Paul confronts Peter at Antioch for insisting that gentiles should follow Jewish customs such as circumcision to enter the Christian community. This episode was interpreted by Gerson and other conciliarists as an example of the community of believers (represented by Paul) correcting the papal head of the church (represented by Peter).

The conciliarist position crystallized after the beginning of the Schism in 1378. The hotbed of intellectual development was the famous Parisian Faculty of Theology at the Sorbonne, hence the common name of “Sorbonnists” for conciliarists. Most prominent among these was Jean Gerson. But Gerson was to be followed by a number of other Sorbonnists, including the 16th-century Scottish theologian John Major (or Mair), to whom we shall return.

Some conciliarists went further down the path of radical political thinking than others. But they all subscribed to the position that the Pope was not an absolute ruler; rather, he was seen as a form of Prime Minister subject to constitutional limits. These limits would be enforced by representative church councils that could restrain and even correct miscreant Popes. This is where we find the first systematic defense of representative government, which would soon make its way into secular political theory.

Conciliarists were also among the first to theorize another important prerequisite of modern representative government, namely the division of society into a secular and a religious sphere. This, too, had been a reality on the ground for quite a while, as the 11th-century “crisis between church and state” had undermined the religious pretensions of Emperors, Kings, and Princes. But theologians and political theorists had remained loyal to the older notion of a universal Christendom, headed by Pope and Emperor in unison. High papalists continued to defend this position even after the 15th century. But the conciliarists’ notion of two separate realms, each with its own political institutions, was set to triumph and to take Europe in a different direction than the many civilizations where we do not find a clear distinction between religion and politics, including much Orthodox Christian and Islamic thinking down to the present age.

Conciliarism climaxed at the Council of Basel (1431-49), which was called in keeping with the timetable set out in the decree Frequens at Constance in 1417. But it is also fair to say that it was at Basel that conciliarism suffered its first major defeat at the hands of the Pope, who in 1438 called a rival council at Ferrara (later moved to Florence) to delegitimize Basel. From then on, the high papalists were in the driver’s seat in church governance.

While conciliarism was never to regain the dominance it had exerted at the Council of Constance and in the first phases of the Council of Basel, conciliar ideas, as the historian of ideas Francis Oakley has convincingly shown, would have an important afterlife north of the Alps for centuries after the spectacle at Basel. As late as 1818, the English historian Henry Hallam could note that the conciliar “Whig principles of the Catholic Church . . . [are] embraced by almost all laymen and the major part of ecclesiastics on this side of the Alps.” Not until the proclamation of the twin principles of papal primacy and infallibility at the First Vatican Council in 1870 was conciliarism conclusively defeated within the Church.

This provides us with an interesting connection to the literature on Protestantism and democracy. As Francis Fukuyama has emphasized, the teachings of the two key Protestant figures, Martin Luther and John Calvin, were “anything but liberal in the sense we understand that doctrine today.” Both Luther and Calvin held that Christians had no right to rebel against lay authority. Indeed, they held that resistance against the powers-that-be was sinful, because rulers’ actions were also part of God’s design. Bad rulers would simply be doing God’s work in the form of just punishment.

Persecuted Protestant minorities in the Low Countries, France, and the British Isles in the 16th and 17th centuries came to find this teaching of non-resistance intolerable. Unlike the dynastic wars that had hitherto ravaged Europe, the religious wars unleashed by the Reformation and the Counterreformation were life-and-death struggles—comparable to the way the Crusades had been fought in previous centuries within and beyond Europe. It was in these circumstances that persecuted French Calvinists formulated theories about the right to rebel against unjust government. This happened in particular during the protracted civil infighting from 1562 to 1598 that we know as the French Wars of Religion. Resistance theories spread to other Protestants and to Catholics, and they were integrated into modern liberalism, for example, through the work of John Locke. Later, they were to provide important intellectual ammunition for the American independence struggle, in turn infusing the political precepts of the Founding Fathers of the United States.

As Francis Fukuyama points out, modern liberalism—including the liberal argument for representative government—hence emerged “out of sectarian conflict.” But the part of the story left out here is that the new liberal theories of resistance against unjust government mark a return to medieval ideas in general and to conciliar ideas in particular. As Oakley puts it in The Watershed of Modern Politics, with the resistance theories, Protestantism “began to find its way back to the familiar and broader medieval channel from which it had earlier departed.” More particularly, historians have documented how Sorbonnists such as John Major directly influenced some of the Protestants who would develop resistance theories.

The volte-face from Luther’s and Calvin’s principle of non-resistance to 16th- and 17th-century Protestant theories of resistance is further evidence that religions are multivocal. They contain different teachings that, depending on the circumstances, can be used to defend both rule by one (or a few) and rule by many—or absolutism and representative government, if you prefer. This was also the case for Catholicism. Placed on the defensive by the conciliarists, the 15th-century papacy tried to enlist secular support—and to divide the supporters of conciliarism—by construing conciliar ideas about representative government as a threat to papal and monarchical rule alike. Meanwhile, papalist theologians further elaborated already existing theories of papal primacy with respect to doctrine, legislation, and jurisdiction.

High papalists here resorted to the very same sources of authority as the conciliarists. Justinian’s Roman Law dated from an age when the Emperor was the autocrat, placed above the law, and Roman public law therefore contained clauses such as “what has pleased the Prince has the force of law.” The notion that the Pope possessed absolute jurisdictional powers within the Church was based on the fourth-century idea of the Petrine Commission—that is, the Pope as the heir of St. Peter (later as the Vicar of Christ). This, in turn, was justified by the famous formulation in Matthew 16:18-19: “Thou art Peter, and upon this rock I will build my church . . . whatever you bind on earth will be bound in heaven, and whatever you loose on earth will be loosed in heaven.” These ideas had a long and prominent place in Canon Law, and they could easily be enlisted to defend the notion of the Pope as the supreme ruler of the church. As Francis Oakley has noted, the obvious parallel between Popes and Kings meant that these theories of absolute monarchy could quickly diffuse from the church to lay circles, just as practices and doctrines of representative government had earlier done.

The historical irony is glaring. Among Protestants, constitutionalist currents had come to dominate on the eve of the birth of modern democracy after 1800; among Catholics, the situation was the exact opposite, as these ideas were on the verge of defeat. This defeat was then sealed by the anti-clerical stance of most of the French revolutionaries of 1789 and the fact that this revolution dealt a deathblow to the most important bastion of conciliarism, namely the French Catholic Church (also known as the “Gallican” Church).

The passing away of conciliarism at the very moment when modern democratization began has led students of democratization to ignore an important historical lesson: Representative democracy is all but inconceivable without the 12th- and 13th-century Catholic practices of representation and consent and the 15th-century conciliar doctrines about representative government. This fascinating story remains relevant not only for those who wish to understand the origins of our political institutions; it also sheds light on current interactions between religion and politics. In that sense, it is a story worth revisiting for those who are interested in the political dynamics of the 21st century.


See Harold J. Berman, Law and Revolution: The Formation of the Western Legal Tradition (Harvard University Press, 1983); Antony Black, Political Thought in Europe 1250-1450 (Cambridge University Press, 1992); Richard Kay, The Council of Bourges, 1225: A Documentary History (Ashgate, 2002); Francis Oakley, The Conciliarist Tradition: Constitutionalism in the Catholic Church 1300-1870 (Oxford University Press, 2003); Francis Oakley, The Mortgage of the Past: Reshaping the Ancient Political Inheritance (1050-1300) (Yale University Press, 2012); Francis Oakley, The Watershed of Modern Politics: Law, Virtue, Kingship, and Consent (1300-1650) (Yale University Press, 2015).



The post The Catholic Invention of Representative Government appeared first on The American Interest.

Published on January 24, 2019 08:53

January 23, 2019

Repression Is Contagious

As 2019 begins, government campaigns around the world to stifle free expression by independent and critical voices are proceeding apace. This is happening not only in China and Russia, whose governments have lately taken to extending their internal repression across borders, in semi-pariah states like Nicaragua and Burma, and in autocracies transactionally aligned with the West, such as Egypt, Saudi Arabia, and Azerbaijan—but now even inside the West, in Poland, Hungary, and Spain. Official assaults on free speech are widespread and widening. The world has not seen such grim prospects for political and artistic expression globally since the end of the Cold War, which of course roughly coincided with the fall of Apartheid and the acceleration of what Samuel Huntington had described as “the third wave” of global democratic advancement. The early 1990s was a hopeful time; a quarter century later, not so much. Happy New Year, indeed…

“Campaigns” and “assaults” are in the plural above because these are not necessarily parts of a single coordinated campaign by anything resembling some kind of Authoritarian Internationale. They are nonetheless of a kind and mutually reinforcing. We may be seeing an illustration of what scholars such as the British political scientist Laurence Whitehead have described as a “contagion effect” in international politics. It works in both directions: for instance, military coups d’état proliferated across West Africa in the 1960s; later, communist regimes in central and eastern Europe fell like dominoes after 1989, yielding to democratic forces. When a wave of coups in Africa returned in the 1980s, the term contagion reappeared in The New York Times and elsewhere.

The abortive “Arab Spring” of 2011 also appeared to be such a phenomenon—and indeed it was, even as it boomeranged. When the momentum of self-reinforcing positive developments (i.e., the fleeing of longtime dictators in the face of massive popular demonstrations) slowed, a return wave of violence and repression spread quickly as regimes doubled down on violence to regain the upper hand against their peoples, in Syria, Libya, Bahrain, and Egypt. Undeniably, local case-specific events in various countries directly inspired action—and reaction—in other Arab states. But political actors also learn from their neighbors, and what they see happening around the globe, and they tend to adapt it to their own circumstances.

This is why, even though most responsible actors around the world have come to discount the Trump factor in U.S. politics as an aberration that will be mostly corrected when his time concludes, the American example still remains important. Donald Trump’s relentless campaign against free expression he does not like—from the professional journalism he fears, to civic protest by football players genuflecting in opposition to police violence against unarmed black men, to those who would organize a rally on the National Mall—puts wind in the sails of those elsewhere who seek to discredit, harass and even imprison or murder truth-tellers.

In a telling illustration of the current tenor of official discourse in Washington, Secretary of State Mike Pompeo’s recent critique in Cairo of Barack Obama’s missteps in handling the Arab Spring, falsely advertised in advance as an articulation of Trump Administration policy, was notable in that it was delivered on the 100th day after Jamal Khashoggi’s grisly murder at the hands of the Saudi state. On the day a high-level bipartisan memorial for the Saudi journalist was underway in the Capitol, Pompeo neglected to even mention the episode. He did, however, find time to speculate that General al-Sisi might well “unleash the creative energy of Egypt’s people . . . and promote a free and open exchange of ideas.” Meanwhile, Egyptian songwriter Galal El-Behairy is imprisoned for his lyrics, short-story writer and essayist Ibrahim al-Husseini languishes in pretrial detention, and thousands of those who participated in nonviolent sit-ins against Sisi’s military takeover in 2013 remain incarcerated.

Seemingly oblivious to the conspicuous, festering wound on the country’s face that is the Khashoggi murder, Saudi King Salman and his favorite son Mohammed continue to show the world that none of the values enumerated in our First Amendment shall be tolerated in their Kingdom. Not freedom of religion, not freedom of assembly, not freedom of speech—and certainly not for the women who have advocated for the small liberties the Saudi royals have belatedly started to grant, like the opportunity for women to drive a car. Dr. Eman Al-Nafjan, a linguistics professor, contributing opinion writer at The Guardian and The New York Times, and the engine of the October 26 Driving campaign in 2013, is now imprisoned and smeared by the state as a traitor. Dr. Hatoon Alfassi, a distinguished professor of women’s history and recipient of numerous awards for her role in women’s rights advocacy, has been imprisoned since last June without trial. Or consider Loujain al-Hathloul, also a prominent women’s rights activist, who has languished in prison since last March, when she and her husband, the comedian Fahad Albutairi, were both grabbed, blindfolded, and taken to Saudi Arabia—Fahad from Jordan, Loujain from the UAE. Ten months later, Loujain is still in jail; her father reports she has been brutally tortured. Albutairi’s family has no idea where he is.

The panoply of repression of women can be summed up in the bizarre Saudi “guardianship system,” which obliges every woman, no matter her age, to have permission from a male guardian to travel, to go to school, to work, to marry, to divorce, even to open a bank account. As the imprisoned driving advocates learned, even the most innocuous expression of opinion is severely punished.

The contagion is spreading in Azerbaijan, where a distinctly less royal family governs well into its third decade. The latest innovation in repression here involves waiting until an anti-corruption crusader, a blogger, or a journalist nears the end of his or her term in prison (usually on trumped-up charges in the first place). A month or two before release is due, the person is charged with a new crime, supposedly committed while in prison, such as hiding a knife in one’s cell or “distributing medicine illegally.” Following a hasty trial, imprisonment is extended another few months or years. This could, of course, go on indefinitely, and thereby become an unacknowledged life sentence.

The list of journalists being harassed by the government of the aspiring president-for-life, Ilham Aliyev, is only growing. It includes Khadija Ismailova: out of jail now but on short-leash parole, she has in recent weeks been stalked by state agents at her residence and threatened with reincarceration. Blogger Mehman Huseynov has been in prison since March 2017 for “defaming an entire police station” after complaining about his mistreatment following an earlier arrest; on December 26 he was charged with “resisting a representative of the authorities,” for which he could be sentenced to seven more years in prison. Though that charge was dropped in recent days, Huseynov remains in jail and could still be charged anew. In another cruel extension of the campaign against free expression, charges have been pending for three years now against Akram Aylisli, an 81-year-old novelist. Lauded as a hero of the nation during Soviet times and later, Aylisli even served a five-year term as a member of parliament for Aliyev’s governing party (which of course is another kind of honorific, as the legislature is politically inert). His offenses include a fictional portrayal of an unnamed despot resembling the current president’s late father, and sympathetic portrayals of ethnic Armenians. These “crimes” led to an officially sanctioned book-burning campaign when one of his novels was published in 2013. Aylisli has been living under house arrest for three years awaiting trial on a charge of allegedly assaulting an airport border control officer in 2016, when he was stopped from boarding a plane to Italy for a literary festival.

In Burma, an appeals court this month rejected the appeal of Reuters reporters Wa Lone and Kyaw Soe Oo against their conviction; the two were sentenced in September, after a nine-month trial, to seven years in prison for their reporting on atrocities committed by the Burmese military against the Muslim Rohingya people. In a stance that has dismayed Aung San Suu Kyi’s dwindling number of admirers worldwide, the de facto leader of the civilian portion of what remains fundamentally a military government has responded publicly—and in private, to entreaties from longtime friends in the international community—with angry denunciations of the journalists as “traitors.”

In Nicaragua, a months-long campaign of violence by the Sandinista government of Daniel Ortega against peaceful protestors—principally students and pensioners complaining about cutbacks in their benefits—in December escalated to seizures of independent media outlets reporting on the crisis, as well as closures of civil society organizations. Armed police in mid-December raided the Managua office of the privately owned Confidencial and its sister television programs, Esta Noche and Esta Semana. A week later, another privately owned cable and internet news station in Managua, called 100% Noticias, was ordered off the air, and the channel’s director, Miguel Mora, and its news director, Lucía Pineda Ubau, were arrested. Venezuela and Cuba remain, well, Venezuela and Cuba. Cuba’s innovation in repression is Decree 349, which punishes artists for delving into an expanded list of forbidden subject areas, and those protesting the new restrictions have already been arrested.

Readers of these pages are familiar with the ongoing sagas of Hungary and Poland, once-promising leaders in post-Communist democratic transition, where the current leaders have made U-turns toward autocracy, corruption, and systematic (and often rather novel) attacks on independent and professional journalism seeking to hold these leaders accountable to their publics. In a truly innovative maneuver to eliminate critical voices, in late November Prime Minister Viktor Orban orchestrated the simultaneous “donation” of hundreds of private Hungarian news outlets by their owners to a central holding company run by his cronies, in a move The New York Times reported was “unprecedented within the European Union.” Orban’s assault on the full range of democratic institutions has clearly been strengthened by the rise of Donald Trump. As the Washington Post reports this week, Hungary’s increasingly autocratic leader said Trump represents “permission” from “the highest position in the world.”

In Poland, meanwhile, independent judges continue to defend the rights of news media to report on official shenanigans, such as Gazeta Wyborcza’s November 13 report about a complex scandal involving the chief bank regulator. American Ambassador to Warsaw Georgette Mosbacher has been outspoken in opposition to the government’s harassment of journalists at TVN, an independent television station owned by National Geographic, but it is unclear whether she would do as much for non-American outlets, or speak in favor of a general right to free expression—given the Administration for which she works.

Freedom of expression is also under threat at the western end of Europe, as Catalonia’s October 2017 referendum on independence—effectively a large-scale act of civil disobedience because Madrid did not consent to the electoral exercise—is being criminally prosecuted as “rebellion.” Even in democracies, as we Americans well know, challenges to free expression are real—and must be acknowledged and addressed. But in Spain, Oriol Junqueras, former vice president of the Catalan regional government, is set to go on trial this month on charges of “rebellion, sedition, and misuse of public funds”—for his central role in organizing the ballot. Eleven others will also be on trial with Junqueras in televised proceedings in Spain’s Supreme Court, including civil society activists Jordi Cuixart and Jordi Sànchez, who face charges for their advocacy of the referendum. Another dozen indicted persons, including the former president of the Catalan regional government, Carles Puigdemont, fled to European countries—all of which have thus far declined to extradite any of the accused back to Spain for trial. Regardless of one’s opinion on Catalan independence, what’s at stake here is the right to peacefully dissent and to advocate for political alternatives. “In a democracy,” said Junqueras in a recent interview from prison, “no one should end up in jail for putting out ballot boxes.”

Far and wide, east and west, 2019 is shaping up to be a rather daunting time for those who would speak their minds when their governments don’t want to hear (or don’t want others to hear) what people have to say. Countries—and a world—in which the right to free expression is protected and advanced do not arise spontaneously, nor is their arrival inexorable, as Damir Marusic and Bob Kagan recently reminded us. Kagan’s point in his recent book, The Jungle Grows Back, is that the construction of the much-beloved, much-despised Global Liberal World Order was never inevitable but circumstantial—entirely contingent on who won WWII and on the insight and persuasiveness of the few dozen Americans and their allies who thereafter rebuilt the world.

The center of gravity in defending and advancing democratic values has now moved outside of governments and official channels, as the anti-democrats and those who are rejecting long-established norms about free expression are moving into the halls of power. Acolytes of Donald Trump—specifically emboldened to attack the press and other critics—are coming into office from the Philippines to Brazil.

To Jair Bolsonaro, the new president of Brazil, Donald Trump is a role model—proof that incendiary comments, a history of trafficking in conspiracy theories, and assaults on the news media are permitted in democratic countries. “What happens in the U.S. inevitably influences us,” said Sergio Dávila, the executive editor of Folha de S. Paulo, a newspaper that has been a target of Bolsonaro’s ire.

A very dangerous, highly contagious virus is sweeping the world. It is time to develop a vaccine—and government funding will not be available for the research any time soon.


The post Repression Is Contagious appeared first on The American Interest.

Published on January 23, 2019 10:07

“Liberal Hegemony” Is a Straw Man

The Great Delusion: Liberal Dreams and International Realities

John Mearsheimer

Yale University Press, 2018, 328 pp., $30

 

The Hell of Good Intentions: America’s Foreign Policy Elite and the Decline of U.S. Primacy

Stephen Walt

Farrar, Straus, and Giroux, 2018, 400 pp., $28

 

Republic in Peril: American Empire and the Liberal Tradition

David C. Hendrickson

Oxford University Press, 2017, 304 pp., $34.95

 

Restraint: A New Foundation for U.S. Grand Strategy

Barry R. Posen

Cornell University Press, 2014, 256 pp., $29.95


Hegemony is a funny kind of word. Its dictionary definition includes “leadership”—with that term’s neutral or benign connotations—but also the more fraught notion of “dominance.” Its usage in sociology and political science tilts decidedly toward the latter, emphasizing how groups and states employ power and coercion to control others.

Hence, the term “liberal hegemony” is an even stranger construction, marrying “dominance” with notions of liberté, égalité, and fraternité. In practice, it refers to America’s supposed commitment to using its unique power to bend the world toward its preferred ideology of democracy, free markets, and human rights. Being described as hegemonic is not usually a compliment, and indeed, “liberal hegemony” is mostly (though not always) employed by critics of American foreign policy—both liberals and otherwise. Christopher Coyne and Abigail Blanco, for example, argue that implementing a strategy of liberal hegemony “requires, attracts, and reinforces a mentality fundamentally at odds with liberal values.” From a very different vantage point, Russian proto-fascist Alexander Dugin has asserted more expansively, “if you are for a global liberal hegemony, then you are an enemy.”

New Assaults on Fortress Liberal Hegemony

Today, the claim that liberal hegemony has been the centerpiece of U.S. grand strategy lies at the heart of no fewer than four recent books by American academics. Barry Posen of MIT led the way in 2014 with Restraint: A New Foundation for U.S. Grand Strategy. Then 2017 saw the release of Republic in Peril: American Empire and the Liberal Tradition by David C. Hendrickson of Colorado College, followed in 2018 by The Great Delusion: Liberal Dreams and International Realities by John Mearsheimer of the University of Chicago and The Hell of Good Intentions: America’s Foreign Policy Elite and the Decline of U.S. Primacy by Stephen Walt of Harvard University. These latter two feature the most strident arguments of the group.

The books are distinct in style and focus, but they all feature four common themes. First is the one already noted: that America’s bipartisan grand strategy from the collapse of the Soviet Union until Donald Trump’s presidency is best described as “liberal hegemony.” Second, this strategy is responsible for a track record of dismal foreign policy failures. Third, the ideology and culture of America’s foreign policy institutions have prevented honest and effective remediation of these strategic errors. And fourth, Washington’s adoption of a more restrained—and, for all but Hendrickson, also more “realist”—grand strategy would better serve both U.S. interests and global security.

Mearsheimer’s definition of liberal hegemony—which could serve for all of the authors—is “an ambitious strategy in which a state aims to turn as many countries as possible into liberal democracies like itself while also promoting an open international economy and building international institutions.” While these tendencies have existed throughout modern U.S. history, it was the end of the Cold War and the “unipolar moment” that the authors believe freed American leaders to pursue liberal hegemony in earnest.

In practice, Walt explains, this pursuit “involved (1) preserving U.S. primacy, especially in the military sphere; (2) expanding the U.S. sphere of influence; and (3) promoting liberal norms of democracy and human rights.” This is an accurate enough description of U.S. foreign policy, but do these features add up to “liberal hegemony”?

Critique of the Critique

The four books advance serious arguments, well constructed by serious people. But their common diagnostic foundation is too weak to bear the weight of their critiques. Indeed, the strategy of liberal hegemony, as postulated by Mearsheimer and Walt especially, is a straw man, much more easily demolished than the complicated reality behind making and executing American foreign policy. One need not be a card-carrying member of The Blob to take note of the following four problems with the overall critique they advance.

The evidence is much more mixed than the critique suggests.

To characterize U.S. foreign policy over 25 years as a bipartisan juggernaut free from serious strategic dissent requires sweeping a lot under the carpet. Even the empirical centerpiece of the argument that liberal hegemony failed—the Iraq War—is more complex than is presented by these scholars. To be sure, a vision of liberal international order and an expansive view of American power were both crucial to President Bush’s decisions about and execution of that war. But even many influential supporters of the war were neither as sure of its righteousness nor as confident of its feasibility as the likes of Dick Cheney and Paul Wolfowitz. While many academic realists opposed the war, supporters came from diverse backgrounds and motivations. Painting, say, Bill Kristol, Ken Pollack, and Christopher Hitchens with the same brush of ideological American grandiosity on this issue is simply ahistorical.

The authors also give too little weight to obvious counter-examples to American missionary commitment to liberal values. The enduring strength of America’s relationships with Saudi Arabia and other repressive Arab monarchies—even after the Arab Spring, but especially before—is no mere asterisk to a description of America’s evangelism of democracy and human rights. By the same token, if Washington is dedicated to remaking the world in its image, how does one explain its traditional delicate balancing approach to India and Pakistan?

Even if one is willing to tolerate these major caveats to the liberal hegemony critique, there is another evidentiary problem. Walt and Mearsheimer direct much of their disdain at the “liberal” half of the purported strategy. A central claim common to the books is that America is more likely to use force and less likely to respect sovereignty under this strategy: first, because its conception of its “interests” is more universal and less negotiable than that of other states; and second, because liberalism blinds Americans to the salience of nationalism and other identity-based loyalties.

Yet most of the evidence they marshal about U.S. foreign policy failures has much more to do with the “hegemony” half of the equation. It is mostly Washington’s ambition for leadership and resulting over-extension that they criticize, whether in the form of military interventions or attempts to expand the remit of international institutions and alliances. There is no doubt that such activism has frequently prompted pushback, and the liberalism associated with U.S. policy has certainly shaped the nature of America’s pursuit of primacy. But, as Hendrickson clearly appreciates, it is far from clear that liberalism is itself a crucial reason for the widespread resistance to America’s role in the world. Even Walt quotes the historian Timothy Garton Ash as saying “the problem with American power is not that it is American. The problem is simply the power.” There is wisdom in the comment, but Walt’s arguments remain untempered by it.

Moreover, some of the liberal hegemony critics deem even notable examples of pragmatic and non-ideological American restraint as too interventionist. Presidents Bush and Obama were harshly criticized for not exerting more American power in response to conflicts in Georgia, Syria, and Ukraine. But for Walt and Mearsheimer, at least, their responses are, somehow, simply more examples of misguided activism.

The critique ignores or disdains the agency and aspirations of small-state populations.

One of the authors’ principal shared grievances against post-Cold War U.S. policy is the expansion of NATO, which they consider needlessly and blatantly provocative to a Russia that might otherwise have found its way to a productive security relationship with the rest of Europe. There is a good case for this argument; indeed, this view was long held by many members of the so-called foreign policy elite that Walt and Mearsheimer dismiss as monolithic and incapable of nuance.

That said, telling the story of NATO expansion as a parable of unbridled American hegemony requires ignoring not only the pro-expansion policies of other allies in “old Europe,” but also the fervent aspirations of the former Warsaw Pact states clamoring for greater freedom and security. This “pull” from Eastern and Central Europe was at least as important a driver of NATO expansion as the “push” from Washington. But there is little room in the liberal hegemony critique for the agency and aspirations of smaller states.

In a similar vein, Mearsheimer and Hendrickson are very critical of U.S. policy toward Ukraine and its conflict with Russia over the past decade. Mearsheimer goes so far as to say that, “The United States and its European allies are mainly responsible for the crisis” prevailing there since 2014. The logic appears to be that Moscow cares a lot about its influence over Ukraine, while Ukraine is peripheral to U.S. vital interests; therefore Ukrainian interests ought to be irrelevant to the United States. This is a defensible but debatable premise. To infer from it that the United States and Europe are primarily to blame for the current troubles in Ukraine requires an extraordinarily narrow interpretation of the region’s history and politics.

Contrast that analysis with a recent article by Michael Mazarr and Michael Kofman. While advancing a similar critique of U.S. policy toward Russia (and U.S. strategy in general), they still manage to acknowledge that “Russia’s historic strategy for attaining security at the expense of others, its paranoid and narrow strategic culture, and its elite-driven decision-making process all constitute the real nub of the problem.” No one would argue that this fact inoculates Washington from criticism. But to elide it, as Mearsheimer does, is truly to miss the point that the pervasiveness of America’s power, both hard and soft, is enabled by many willing partners with independent interests of their own. Russia and China cannot make the same claim. The liberal hegemony critique makes no allowance for such fundamental differences in the attractiveness of power among the “great powers” or the rest of the world.

The critique devalues the moral dimensions of statecraft.

Advocates of liberalism in foreign policy typically identify two distinct advantages: strategic and moral. Strategically, liberalism supposedly serves American interests best in the long run because democracies tend not to fight with each other and free markets are best at generating wealth. But also, advocates point out, freedom and the rule of law are morally superior to their alternatives.

The liberal hegemony critics mount detailed arguments against the strategic rationale; these are best adjudicated elsewhere. On the moral front, their rebuttals are simpler and boil down to the claim that too much American ambition yields unnecessary conflict and instability, which in turn outweigh the benefits of spreading liberal values. But here again, the critics’ reasonable concerns about overstretch are undermined by their rickety framework. Their conflation of the separate notions of liberalism and hegemony causes them to throw the liberal baby out with the hegemonic bathwater.

For example, Posen and Mearsheimer both argue that a key reason for the supposed failures of liberal hegemony is that (in the latter’s words) “nationalism and realism almost always trump liberalism.” As a matter of history, few would deny that nationalism and realism have been more powerful forces than liberalism. But does this justify a normative recommendation to jettison liberalism entirely in foreign policy?

Similarly, Mearsheimer complains that “liberalism makes diplomacy harder” by shrinking the range of acceptable compromises available to states that care about values as well as self-interest. Again, the logic here is sound, but is making diplomacy easier really a worthy trade for abandoning any sense that the powerful ought to be concerned with justice? Imagine what the world today might look like had every powerful state since the dawn of the Westphalian era followed the critics’ advice to abjure values in foreign policy in surrender to the countervailing tides of nationalism and realism.

From a moral perspective, the liberal hegemony critique has nothing to offer except the more modest level of responsibility that comes with being less ambitious. And being less ambitious does not actually demand or require relinquishing liberal values, even in foreign policy. As Emile Simpson put it recently, “Accommodating others does not mean giving up your own values; it just means recognizing their proper limits, on a case by case basis.” Hendrickson seems to be alone among the four authors in allowing for such an approach.

The critique caricatures the world of “elites.”

This sort of all-or-nothing moral attitude toward liberalism leads some of the critics into a final intellectual cul-de-sac: gross caricatures of the beliefs and motivations of an indistinct but apparently large and powerful American “foreign policy elite.” Often, the arguments seem to confuse pervasive advocacy of liberal values in the foreign policy community—for which there is ample evidence—with an inability to consider other issues like national interest and balance of power—for which evidence is scant. These values can coexist and be balanced with each other, as they have been throughout history.

All four authors find fault with too much consensus on wrongheaded ideas among this professional community, or the “official mind,” as Hendrickson puts it. However, it is Walt and Mearsheimer who prove powerless to resist the siren song of sweeping generalizations. Both of their books are shot through with reasonable claims buried beneath unsupportable categorical statements, often sarcastically denigrating opposing points of view. Consider a few examples (emphasis added):



“. . . the foreign policy community believes spreading liberal values is both essential for U.S. security and easy to do.”
“Advocates of liberal hegemony believed that its blessings would be apparent to nearly everyone and that America’s noble aims would not be doubted.”
“. . . proponents of liberal hegemony assumed that the United States could pursue this ambitious global strategy without triggering serious opposition.”
“Many in the West, especially among foreign policy elites, consider liberal hegemony a wise policy that states should axiomatically adopt.”
“Liberal states have a crusader mentality hardwired into them that is hard to restrain.”

Such statements attribute unreasonable views to faceless, naïve-sounding groups with no specific quotations to support them. Who ever said or thought these things supposedly so central to liberal hegemony? The authors do not say.

Mearsheimer seems particularly emotional about Washington’s role in precipitating Russian aggression in Ukraine, which he believes “anyone with a rudimentary understanding of geopolitics should have seen . . . coming.” He might consider that anyone with a rudimentary understanding of actual foreign policy decision-making would appreciate that most choices involve balancing risks with imperfect options. When things go badly, this is not dispositive evidence of cluelessness, or even of surprise.

He continues with the rather extraordinary assertion that, “Western elites were surprised by events in Ukraine because most of them have a flawed understanding of international politics. They believe that realism and geopolitics have little relevance in the 21st century and that a ‘Europe whole and free’ can be constructed entirely on the basis of liberal principles.” Needless to say, no “Western elites”—much less “most of them”—are quoted expressing any such foolishness.

This kind of argumentation is known as the fallacy of the straw man. It is a tool of polemics, not analysis. Walt’s and Mearsheimer’s readers should be on guard accordingly.

Rescuing Restraint from its Advocates

None of the above should be taken as a thoroughgoing defense of liberalism, hegemony, or American foreign policy in general. It is true that U.S. policymakers have often overestimated their ability to transform the politics of other countries, miscalculating the relative appeal of nationalist, tribal, and liberal forces. It is true that American advocacy of liberalism sometimes generates unwelcome (and unanticipated) backlashes. And it is certainly true that there are structural impediments to radical change embedded in American national security policy institutions.

But all these facts still do not support a depiction of U.S. foreign policy as a single-minded ideological crusade waged by an unreflective and monolithic elite. And labeling these features of U.S. policy “liberal hegemony” simultaneously gives Washington too much and too little credit: too much credit for coherence and consistency; and too little for sustaining diversity and debate over the proper priorities for America’s role in the world. As a diagnosis of what ails American grand strategy, liberal hegemony is at best a gross oversimplification.

As for prescription, each of the authors calls for a more restrained strategy, one that retains great power but trims America’s pretensions to dominance and its commitments to global security. Washington should take their warnings of unrealistic American ambitions to heart. But the case for greater restraint does not depend on a wholesale philosophical repudiation of liberal values in foreign policy. It depends most of all on a more careful calibration of goals with available resources, capability, and will; that is to say, it depends on matching ends to means.

Beyond that, academics and practitioners in the so-called Blob would be better served thinking of power and principle, liberalism and realism, balance-of-power politics and moralism, as dialectics in need of constant balance rather than as incompatible poles. We the People would be better served, as well.


The post “Liberal Hegemony” Is a Straw Man appeared first on The American Interest.

Published on January 23, 2019 08:39

January 22, 2019

Chronicling Churchill

Relevant Reading:

Churchill: Walking with Destiny

Andrew Roberts

Viking, 2018, 1,152 pp., $40

Is This the Best One-Volume Biography of Churchill Yet Written?

Richard Aldous in the New York Times

Churchill in All His Complexity

Jeffrey Herf in The American Interest

Churchill’s Canvases

Carolyn Stewart in The American Interest


Richard Aldous: Congratulations on the new book. You have a great quote from Churchill near the beginning saying that, “Far too much has been written about me,” and that was in the 1920s. What made you want to add your own biography of Britain’s wartime Prime Minister?

Andrew Roberts: Well, yes, you’re right. There have been 1,009 biographies of Winston Churchill; as you can imagine, it’s fairly hubristic to want to add anything at all to that avalanche of biography. This, by the way, is the fifth book that I’ll have written with Churchill in the title or the subtitle, so he’s been with me for a long time, ever since I started off as a historian 30 years ago. But actually in the last ten years, in particular, there has been an absolute cornucopia of new sources of Churchill.

At Churchill College, Cambridge, no fewer than 41 major sets of papers have been deposited since the last big biography of him. The Queen allowed me to be the first Churchill biographer to use her father’s wartime diaries. King George VI met Churchill every Tuesday of the Second World War, and Churchill trusted him with all the major secrets of the war, which the King then dutifully wrote down in his diary, so that was also an extremely useful and important contemporary source.

But we’ve also got the diaries of Ivan Maisky, the Soviet ambassador. We’ve got the verbatim accounts of the War Cabinet, and Pamela Harriman’s love letters. All of these have come out in the last decade, so there’s something on pretty much every page of this biography of mine which has never appeared in a Churchill biography before.

RA: You do a wonderful job of evoking Churchill’s character and as you point out, that character partly comes from the sublime self-confidence—self-reliance, you say—that comes from someone who knew where he came from and who he was. Tell us a bit about that.

AR: Yes. Well, I think quite a lot of people will criticize that today as having a sense of entitlement, which he most certainly did. He very much came from the apex of the Victorian aristocracy at a time when the Victorian aristocracy were pretty much at the top of the world. His grandfather was the Duke of Marlborough. He was born in a palace, and not just any old palace, Blenheim Palace, a palace that the royals envy. And so this was certainly a man who knew where he was in life, and the answer was at the top, which meant that he could, and indeed did very often, disregard the views of other people. And of course very often that was a bad thing in politics; you can’t disregard the views of the voters for very long. But nonetheless, when it came to the 1930s, when he was being attacked violently for his views warning against Nazis and Adolf Hitler, this sublime self-confidence in which he didn’t really care what anybody else thought of him became extremely useful.

RA: I suppose, in some ways, he’s what we would describe now as a damaged personality, isn’t he? I mean he’s got these terrible, selfish parents, an awful relationship with his father, a patchy education, they’re always struggling for money. And so in some ways, as you point out, he has this cold-blooded determination to become a hero and what he would describe as a great man.

AR: That’s right, yes. He was very much brought up in the great man theory of history when he was being taught at Harrow. He, as you say, had a pretty patchy education. This was partly due to the fact that he was horribly physically abused as a child, beaten in the most sadistic way by his headmaster at his prep school. His parents never visited him. This was allowed to go on because his parents showed no interest in him at all. His father, of course, was an aloof, disdainful, but very successful Victorian politician, became Chancellor of the Exchequer, and his mother was a fairly vacuous socialite who just went from party to party and—

RA: And some of the letters that she sends him at school are absolutely hair-raising.

AR: Yeah. Well, they’re heartbreaking, aren’t they? They’re heartbreaking in that she should demand that he write when she only wrote to him six times, and he wrote to her 76 times. And some of his letters to her are absolutely heartbreaking, obviously really asking for love and attention and not really getting either. You’ve got this parental thing. The person he was closest to was his nanny, who also featured in the book, a very important part of his early life. And then, of course, he goes off to various wars around the world, five campaigns on four continents, very much as a way of trying to make a name for himself before going into politics.

RA: One of the older biographies of Churchill by Robert Rhodes James has the subtitle A Study in Failure, with the idea that, had he died in 1939, before he became Prime Minister the next year, he would have been remembered primarily as a failure. Do you agree with that?

AR: Not at all, no, although of course it’s a brilliant sort of counterintuitive subtitle to give a book. I think that Robert Rhodes James, who I knew quite well, is not giving Churchill enough credit for having been the First Lord of the Admiralty who readied the fleet for the Great War, and made sure that it was ready for the war when it broke out in 1914. He underplays his role in creating a welfare state in Britain and relieving the appalling, grinding, wretched lot of the working classes in the period before the First World War as well.

He’s rather dismissive, and I think wrongly, of the period when Churchill was Chancellor of the Exchequer and presented five budgets. There are lots of great things that Churchill did which mean that had he died in 1939, he would nonetheless have been thought of as a much more substantial figure than simply a failure. He did have failures, and I mean several appalling failures, especially whilst he was Chancellor of the Exchequer, taking Britain back onto the Gold Standard.

RA: And as you describe quite movingly in the book, he was almost haunted by going around the country and how people would shout “Dardanelles” at him. Tell us about that and how deeply he felt that particular failure in the First World War.

AR: Yes. The Gallipoli campaign, which sprang from the failure of the Royal Navy to get through the Dardanelles Straits and thereby threaten Constantinople, modern-day Istanbul, and take the Turks out of the Great War, was a brilliant idea that failed dismally in its implementation. That failure, which ultimately led to the killing or wounding of 157,000 Allied soldiers, was blamed on Churchill and pretty much Churchill alone, even though other people should be blamed for it too, primarily Herbert Kitchener, the Secretary of State for War, who shilly-shallied terribly at the time of the campaign.

And so, this was hung around his neck and people shouted, “What about the Dardanelles?” at him, all the way through the twenties and into the 1930s at public meetings. So yes, that’s another disaster which would support the thesis of Robert Rhodes James. However, of course he learnt from this catastrophe, and never once in the Second World War did he overrule the Chiefs of Staff. When the Chiefs of Staff all agreed on something he could rant and rave against it, but he never actually used the constitutional powers that he had as Minister of Defense as well as Prime Minister in order to overrule them.

RA: One of the themes in the book is the way in which Churchill very often responds to these failures by writing about them, or writing about them in the context of history: his history of the First World War, of his own childhood and youth, his great four-volume biography of Marlborough, and so on. How important is Churchill the writer to Churchill the statesman?

AR: Very important indeed. He had to write books because he was broke all the time. He didn’t actually get out of the red until he was in his early 70s, and signed the book contract for his history of the Second World War. Before that, because he employed 14 servants and lived very high on the hog—and didn’t always have the money to pay for it—he was forced to write lots of books, and thank God he did, because they’re wonderful and most of them absolutely superb, and he won the Nobel Prize for Literature for them in 1953. But also, they let us historians see into Churchill’s mind, as well as anything else that he did, including I would say his public speeches.

So they are invaluable, and also it was tremendously useful for him as a historian to be able to therefore put Adolf Hitler into the long continuum of British history. He explains how you couldn’t allow somebody in charge of Germany to obtain hegemony on the continent. Britain’s historical task was to prevent that from ever happening. Not just the Germans but anyone: Philip II of Spain, Louis XIV of France, Napoleon of course, and then the Kaiser. What he saw was Adolf Hitler trying to do the same thing as Britain had prevented each of those four from doing before in history.

RA: There’s the great moral courage that he shows, particularly during the Munich Crisis in 1938 and more generally in the way that he stands up to Hitler. Some historians have been critical of Churchill in terms of his own views. Some have even claimed that he himself is anti-Semitic. How did you deal with that very delicate subject in the book?

AR: Well, I go straight for it, as you can imagine, and make an important thing of it. His Philo-Semitism is extremely important to him. He had grown up with Jews, his father liked Jews, he considered Jews to have given the Western world their ethics. He was a supporter of the Balfour Declaration. He represented Manchester Northwest, a very heavily Jewish constituency, and he was a Zionist. The idea that he was anti-Semitic all spreads from a paragraph in a 1920 article in the Sunday Dispatch and of course it’s been wildly torn out of context by his detractors.

When you see it in context, which I do in the book, you recognize that he’s not being anti-Semitic about the whole Jewish race, just simply those Russian Jews who supported the Russian Revolution, which he hated of course. People like Trotsky.

RA: Some historians have tried to put that in the context of what they see as Churchill’s broader racism, for example his comments about India. But you actually take quite a hard line on that in the book and push back very firmly.

AR: Well, yes. You see, he was born in 1874, whilst Charles Darwin was still alive and the neo-Darwinist attitude towards race was supreme. It was considered at the time a scientific fact that there were hierarchies of race, and we consider it quite rightly today to be a completely obscene and ludicrous idea. But in those days it was considered to be scientific fact, and so I don’t think you can wrench, in a kind of politically correct way, the views of Winston Churchill so much out of their time as to criticize him for something that everybody else was guilty of too, and which would not have made any sense to him at all. It’s a bit like, I don’t know, criticizing Oliver Cromwell for not supporting socialized medicine.

RA: It is one of the things that you draw out very nicely in the book, this sense that Churchill really is a Victorian and that by the time he becomes Prime Minister in 1940, it is just by the skin of his teeth. It’s the very last moment that he could have achieved this kind of high office.

AR: That’s right, yes. I mean if anything actually you almost see him as a pre-Victorian character, as a Regency figure. Something of a rake, of course, when it came to his indulgences and his attitude toward his creditors. He was somebody who wore his heart on his sleeve. He was driven by his passions and his emotions. He burst into tears 50 times during the Second World War, for example. So he’s not the buttoned-up late Victorian aristocrat of many of his class and background. He’s a much more interesting and complicated figure than that.

And as you say, it took a world war for him to become Prime Minister. The British establishment didn’t trust him, understandably on one level in that he had made lots of errors of judgment: he supported the wrong side in the abdication crisis, for example; he’d opposed women’s suffrage. But as I say, the great thing was that he did learn from each of his mistakes.

RA: In reading the book, that’s the thing that comes across the strongest to me. Yes, there is this sense of destiny, which you talk about and Churchill was very strong on himself, but you make a compelling case that experience is almost as important. By 1940 he is battle-hardened, he’s had these successes, he’s had these catastrophic failures, but whatever we think about him in 1940 he’s ready and he’s learnt from those successes and those failures.

AR: Well, that’s right. I mean he put it, as usual, far better than anybody else possibly could, in the last paragraph of the first volume of his war memoirs, The Gathering Storm. He said of the day that he became Prime Minister, “I felt as if I were walking with destiny and that all my past life had been but a preparation for this hour and for this trial.” And that seems exactly what it was, because he had been First Lord of the Admiralty twice in two world wars. He had been Minister of Munitions when he was in charge of two and a half million factory workers turning out war material. He had been Home Secretary, and Chancellor of the Exchequer, and Minister for the Colonies, so this was a man who really had spent his past life preparing precisely for what was to break on the 10th of May 1940, the day he became Prime Minister. Earlier that same day, of course, Hitler unleashed blitzkrieg on the West and invaded Holland and Belgium and Luxembourg.

RA: And the thing you say that is his single most important contribution is really not even that he stopped the German invasion. It’s that he stopped the British government from making peace with Hitler.

AR: That’s right. At the key moment in May 1940, Lord Halifax, the Foreign Secretary, whose biography I’ve also written (admittedly 30 years ago), wanted to come to some kind of arrangement whereby the British Expeditionary Force would be allowed back from Dunkirk, the Germans would be allowed to keep the continent, and we would have peace. And the peace would of course allow the British to retain their Empire. And that is a crucial moment in history, a turning point in history where history failed to turn, thank God, because it was essential, not least to keep the Americans feeling positive about Britain, that we should continue the struggle.

RA: And of course his rhetoric is very important in this: the famous “blood, toil, tears, and sweat” and those kinds of speeches. But you also make the point that because he’s a British icon we often forget that he’s half American, and that half-American side is important because it makes him, in your words, “thrusting,” in a way that is almost un-British and at odds with the cult of the amateur. Essentially, he’s not afraid to upset the applecart, is he?

AR: No, not at all. And of course this makes him very unpopular. If you look at many of the criticisms of Churchill by British establishment figures throughout his life, the fact that he’s half American, the fact that his mother was born in Brooklyn, is always held against him. It’s never far from the surface, this racial element; in fact some people call him a half-breed American. And therefore they think of him, because his grandfather was also a stock market speculator, as being just an adventurer and wholly untrustworthy.

You get a very strong sense of the anti-Americanism, especially amongst the British governing class in the 1930s. I’m afraid Chamberlain, and indeed Halifax, were prey to this attitude. However, the flip side comes when Churchill becomes Prime Minister and is able to go off to Washington and speak, and make remarks about being half American, which work enormously to his advantage.

RA: Now what do you make of the relationship with Roosevelt and indeed the one with Stalin? Different historians have said at various times he’s perhaps too much in thrall to both of these leaders. What do you make of that?

AR: I don’t think that’s true. I think that he was constantly and heavily conscious of Britain’s declining power in the world vis-à-vis the United States in terms of money and material, and of course Russia in terms of postwar great-power status. And so you see him, especially in late 1943 and early 1944, highly conscious of being the person in between these two coming superpowers, knowing that Britain wasn’t going to be a superpower after the war.

In the calendar year 1944, when Britain produced 28,000 war planes and the Germans and Russians produced 40,000, the United States produced 98,000 war planes—almost as many as the rest of the world put together. So of course we weren’t ultimately going to have the final say on things like when D-Day was going to take place, and who was going to command it.

RA: He loses the general election in 1945, and then, I suppose as a fitting coda, he makes this wonderful speech with Truman where he defines the notion of the Iron Curtain. But then he comes back as Prime Minister from 1951 to 1955. You’ve been very critical of that government in your earlier work. What did you make of it in this biography of Churchill?

AR: Yes, you’re quite right. Back in 1994 I wrote a book called Eminent Churchillians in which I looked at the naval policies and some of the financial policies of that ’51 to ’55 government, and saw it really as a time when Britain was just treading water, unable to find a role in the world, and largely blamed it on Churchill (and some of his ministers, of course). I think on re-examining it 25 years later, I’m less harsh on him, although I still don’t see it as a great shining moment in British history by any means. But nonetheless, I think that he was taking on a long period of austerity. He had a few signal successes like building a million council houses, and also abolishing rationing. And maybe as a young man I was expecting too much of the old man.

RA: It’s one of the fascinating things about reading this book, that as you say you’ve been working on Churchill now for 30 years. The biography of Halifax, Eminent Churchillians, the books that you’ve written on strategy in the Second World War, and the biography of Salisbury immediately spring to mind. How do you feel over that 30-year period that your general view of Churchill has evolved and changed?

AR: Well, I started off as a bit of an enfant terrible, and I don’t think of myself as one now. It’s rather difficult in one’s mid-fifties to think of oneself as an enfant anything! But yes, I don’t see the 1950s government through the lens of the 1990s any longer, obviously. I think I’m much more understanding. But I’m highly critical of the way in which he kept holding on to power, when he should have handed it over to Anthony Eden, either early on in the Ministry, or at the time of the Coronation in June 1953, or the following month when he had a debilitating stroke and it was kept secret from the people. And he didn’t even resign at his 80th birthday in November 1954. Each of these would have been a perfectly reasonable and honorable moment to go. But instead, the acquisition of and hanging on to power was such a part of his DNA over half a century in public office that he simply had to have his fingers prised from it.

RA: That’s one of the wonderful things, surely, in working on these kinds of characters over a lifetime of scholarship—that they constantly change, and you change. If you were still thinking the same things now as you were thinking in the 1990s, that would be a bad sign for you as a historian, not a good one, wouldn’t it?

AR: That’s an interesting way of putting it. Yes, I suppose so. I mean, I’m a Thatcherite Tory and I don’t think my overall political principles have changed terribly much since I left Cambridge University, frankly. But that doesn’t mean that I’m not able to change with the times. Toryism has been changing with the times for 300 years.

RA: And why do you think it is that Churchill remains so popular, on one level more popular than ever? Your own book sits very happily on the New York Times bestseller list. There’s a real sense that he’s somebody who still has resonance, including all the way to the White House; I think President Trump has the famous Epstein bust of Churchill in the Oval Office.

AR: Well, I like the idea of it sitting happily on the bestseller list! I’m certainly very happy, I can tell you that, Richard. But I think one of the reasons, as well as the Second World War being an absolutely central period in the history of civilization, whose leaders are very important and always will be, is that he stands for a number of things that people will always be looking for in politics. This thing that we discussed earlier about learning from his mistakes is very important in politicians. The idea of him having moral and physical courage is a very attractive one as well. I think his ability to spot the threat before the First World War, to spot the threat before the Second World War, and to spot the Soviet threat at the beginning of the Cold War shows that he had a Themistoclean foresight, which is also of course a very important attribute in politics.

And then one of the last great attributes that he showed was his eloquence, and his speeches, of which he gave some 8,000 pages over the course of his public life. They bear rereading now, and some of them are as good as Shakespeare, in my opinion. They will live for as long as the English language lives.

RA: You make the point very persuasively that politics was his life, but that it’s defined in the very broadest possible way: as you say, he was a writer thinking in historical context. Britain is going through some political turmoil at the moment with Brexit. It’s very often said that there’s a lack of leadership, and perhaps that comes because politicians don’t really have the kind of experience that Churchill had. They don’t have the hinterland, the depth of background, and so on.

Do you think that’s a fair criticism, or do you think that it’s just a case of different politicians for a different time?

AR: Well, as I said earlier, I don’t think he’d have become Prime Minister in peacetime. And thank God we’re nowhere near wartime, so one can’t expect Churchills to be popping up in our politics for that reason. Also, I don’t know how well he would have done in modern-day politics. He wasn’t an alcoholic, but he drank a lot. He wasn’t a depressive, but he did get depressed. He would make jokes that today are considered completely unacceptable. Even funny jokes can rebound on you very badly in politics today, it strikes me.

I don’t know how great he’d have been on social media. I think he’d have been pretty good on Twitter, because many of his putdowns are brilliant and very sharp, and they could fit into 280 characters or fewer. But I don’t think today you could get away with having made a mistake like the Dardanelles catastrophe and then come back from it. I think it would be pretty nigh impossible.

But nonetheless, can leaders learn from him today? Of course they can. The more the merrier, especially this idea of learning from mistakes. I think that would be very helpful. There was also a very modest side to him, where he didn’t grab as much of the glory as he possibly could for all of his successes. That’s a very attractive feature in a politician and one that of course we see relatively little of nowadays.

So yes, I think he still stands as a role model for politicians, but also for us ordinary people. Many of his maxims about how to work, how to live, and how to look at life really do stand the test of time.


The post Chronicling Churchill appeared first on The American Interest.

Published on January 22, 2019 09:51

Churchill in All His Complexity

Churchill: Walking with Destiny

Andrew Roberts

Viking, 2018, 1,152 pp., $40



Anyone writing a biography of Winston Churchill faces a daunting task. The basic story and its many details have been superbly presented before, most extensively in Martin Gilbert’s eight volumes (several with two or three parts) published from 1966 to 1988, William Manchester’s three volumes, the first two of which appeared in the 1980s, and Roy Jenkins’s astute study published in 2002. Gilbert, Manchester, Jenkins, and, of course, Churchill himself in The Gathering Storm, have told the now-familiar story of the voice in the wilderness who warned of the Nazi threat, rejected the policies of appeasement in the 1930s, led Britain in 1940 when it fought on alone, prevented Hitler’s early victory, and made possible his eventual defeat. Walking with Destiny, however, rises to the challenge by drawing on an impressive range of Churchill’s vast output of journalism, selections from his 37 published books, and scores of speeches in the House of Commons and elsewhere—as well as on more recently opened private correspondence, diaries, and memoirs of Churchill’s contemporaries, including those of Churchill’s long-standing secretary, John Colville; Alan Brooke, Field Marshal and Chief of the Imperial General Staff; and the remarkable diaries of Ivan Maisky, the Soviet Ambassador to London. The result is a complex and compelling depiction of one of the most important political leaders of the 20th century, one sure to enlighten and provoke both those familiar with Churchill and those who may know little beyond, as a recent film puts it, Britain’s darkest hour.

One of the distinctive contributions of Roberts’ book is that it manages to speak to our own times. It avoids hagiography and bitter revisionism yet shows full awareness of the intellectual and political currents that distance us from Churchill’s passionate belief in the fundamental goodness and greatness of the British Empire. Roberts confronts a central paradox of Churchill that figures less prominently in the previous biographies: The same political leader who expressed profound belief in the goodness and value of British imperialism was the political leader who raised the early warnings about the danger of Nazism, prevented a Nazi victory in 1940, played a major role in shaping the strategy that defeated Hitler and the Nazis in 1945, and “saved liberty.” Roberts makes a compelling case that Churchill’s fight against Nazism was, in his own mind and heart, not at cross purposes with his belief in British imperialism. For Churchill, Roberts reminds us, the British Empire stood for fundamentally liberal values and for spreading those values to places such as India, where he believed illiberal values predominated. Roberts does not seek a retrospective justification of British imperialism. Rather, Walking with Destiny offers an explanation of the ways in which Churchill’s life before the finest hour made it possible for him to fight Nazism, support liberal democracy, deepen the link to the United States, and ally with the Soviet Union—all of which were fueled by Churchill’s deep hatred of Hitler and of his dictatorship, anti-Semitism, and racism.

These are not the only paradoxes Roberts delves into expertly. He reminds us that Churchill was “the last aristocrat to rule Britain,” a fact that stands in apparent conflict with “his image as savior of democracy.” Yet, he concludes, “had it not been for the unconquerable self-confidence of his caste background he might well have tailored his message to his political circumstances during the 1930s, rather than treating such an idea with disdain.”

The value of experience is another major theme for Roberts, and one that should resonate in the United States, Britain, and Europe, where publics are experimenting with politicians who lack such seasoning. Churchill brought vast experience as a journalist, soldier, cabinet minister, and member of the House of Commons, all of which he acquired before becoming Prime Minister at the age of 65.

Walking with Destiny explores Churchill’s mistakes as well. He opposed the vote for women before World War I, continued the Gallipoli operation in 1915 past the point when success was likely, deployed the paramilitary Black and Tans in Ireland, and failed to appreciate the military capacity of the Japanese, among other errors. Yet Roberts’ Churchill is also a person who, perhaps because of that aristocratically nurtured self-confidence, was able to learn from his mistakes. That element of Walking with Destiny also resonates in today’s politics, when admitting error is often regarded as a fatal political blunder. Churchill developed the wisdom to know the limits of his own knowledge and had the self-confidence to surround himself with competent people such as Field Marshal Alan Brooke, whose diary recorded his criticisms of, and intense arguments with, Churchill. Despite the centrality of Churchill’s battle against appeasement and the stirring leadership he articulated in his House of Commons oration of May 1940 (“I have nothing to offer but blood, toil, tears and sweat”), Roberts makes a compelling case that Churchill was not merely “the first significant political figure to spot the twin totalitarian dangers of Communism and Nazism, and to point out the best ways of dealing with them,” but also “the quintessential fox, who knew and did many things, not a hedgehog.” Those “many things” included making unprecedented use of the cryptographic breakthroughs that were crucial in the Battle of the Atlantic, the Battle of Britain, and, as Richard Breitman showed some years ago, in revealing the Einsatzgruppen murders in 1941. He was also a knowledgeable advocate of air power, long before the Battle of Britain. His fascination with science led him to grasp the military applications of nuclear fission. His early “writing about Islamic fundamentalism prepared him for the fanaticism of the Nazis,” while “his prescient, accurate analysis of Bolshevism laid the ground for his Iron Curtain speech.”

Walking with Destiny pays considerable attention to Churchill’s command of the English language, a skill refined in dozens of books, hundreds of speeches, and decades of journalism and historical writing. In Roberts’ view, that facility with language was indispensable to his fight against appeasement and to his speeches, which rallied the country in 1940 and after. “Above all,” writes Roberts, Churchill’s experience as First Lord of the Admiralty and in World War I, when he prepared the Navy, shared responsibility for the debacle at Gallipoli and the Dardanelles, spent time in the trenches in France, and served as Minister of Munitions, “gave him vital insights that he put to use in the Second World War.” Experience, learning, curiosity, detailed knowledge of the technical issues involved in both modern war and diplomacy, a fierce will, and a broad and deep grasp of history: all of these gave Churchill the wherewithal to rise to the threat Hitler’s Germany posed to democratic civilization.

Many readers in this country will be familiar with the remarkable correspondence between Churchill and Franklin Roosevelt, available in a three-volume set edited by Warren Kimball. Roberts examines their exchange in June and July 1940, as Churchill presented the dire consequences for the United States of a British defeat that would place the British Navy at Hitler’s disposal and thus pose an imminent threat to the continental United States. Roberts offers a detailed examination of just how serious the possibility was that, especially in the absence of support from Roosevelt, political figures in the Conservative Party would have sought a negotiated settlement with Hitler. The historians of Nazi Germany during World War II and the Holocaust have documented in enormous detail the centrality of the drive for “living space” and the race war on the Eastern Front. Churchill at the time grasped the fanaticism and evil at the core of Hitler’s policy. He understood that, were Hitler to succeed in his war in the East, he would then turn against an even more isolated Britain. Roberts makes excellent use of the fascinating three-volume diary of Ivan Maisky, the Soviet Ambassador to London before and during World War II, to fill in details of the development of Churchill’s view that Nazism was a greater threat than Communism. This view led the famous anti-communist to offer an alliance with the Soviet Union immediately following the Nazi invasion of June 22, 1941 and fueled his determination to aid the Soviet Union in the following four years.

Walking with Destiny is a tour de force when it comes to military strategy and the domestic politics of 1940. “The important point about Churchill in 1940 is not that he stopped a German invasion that year, but that he stopped the British Government from making peace.” If [Lord] Halifax had been the British Prime Minister at that time, he would at least have wanted to discover what Hitler’s terms for a negotiated peace “might be.” Roberts argues that “they would probably have been very reasonable,” as he “simply could not see how Britain could possibly win once driven off the continent, when France was about to fall, the Soviet Union was a German ally, Italy was about to become another, and the United States was in no mood to declare war on Germany. Halifax was merely a rationalist when the need was for a stubborn, emotional romantic.” Churchill understood that, if the Germans destroyed the Soviet Union and gained control of a blockade-proof continental empire, such a development “would have soon afterwards spelt disaster for Britain” and “destroyed the credibility with the Americans, as well as looking bad in history.” Roberts elucidates a central message that Churchill conveyed in his letters to Roosevelt and in the famous speeches of the summer of 1940: the point for Britain was not to lose in the short run and, by denying Hitler another quick victory, to make possible victory in the longer run.

It is understandable that the hero of the finest hour became a hero to American foreign policy conservatives, and that American liberals in the post-Vietnam era and the years of the wars in Afghanistan and Iraq would look elsewhere. In reminding us of the militant centrism at the core of Churchill’s politics—the mix of defense of the market economy, liberal democracy, individual rights, and even the 20th-century welfare state—Roberts has written a book for center-left and center-right liberals as well. For Churchill was indeed a strong supporter of the welfare state. According to Roberts, the Prime Minister responded to the publication of William Beveridge’s report on social policy in 1942 as follows:


“You [Beveridge] must rank me and my colleagues as strong partisans of national compulsory insurance for all classes for all purposes from cradle to grave,” and, [he] added, . . . everyone must work, “whether they come from the ancient aristocracy or the modern plutocracy, or the ordinary type of pub-crawler.” He had no compunction in saying, “We must establish on broad and solid foundations a National Health Service. Here let me say that there is no finer investment for any community than putting milk into babies. Healthy citizens are the greatest asset any country can have.” Just as radically, Churchill promised, “No one who can take advantage of a higher education should be denied this chance. You cannot conduct a modern community except with an adequate supply of persons upon whose education, whether humane, technical, or scientific, much time and money have been spent.”

Roberts places these remarkable statements, so unlike anything to come out of the core of the Conservative Party, into Churchill’s long-standing political identity as a “Tory Democrat.” He was both a fierce opponent of socialism (and of course communism) and a supporter of those elements of the welfare state that contemporary conservatives have wasted so many decades trying to dismantle.

There are those who have read Churchill as advocating a Britain apart from Europe. Sadly, that is the way Margaret Thatcher and her successors in the Europhobic wing of the Conservative Party read him as they advanced Brexit. Yet one can see similarities between the Brexiteers of today and the Conservative Party members against whom Churchill rebelled in the 1930s in favor of a worldly and, yes, cosmopolitan British patriotism. The figure who emerges from Roberts’s pages strikes me as one who would have realized how much Britain had to gain from being embedded in the European Union, as well as how much the EU had to gain by including the fount of liberalism and parliamentary government among its members. As his resistance to Indian independence after World War II made clear, he was not happy about the end of the British Empire, but this work presents a man who faced facts and the problems of the present and future.

In short, Roberts has given us a great gift. He presents a Churchill in all of his complexity: man of letters, supporter of the natural sciences, fierce critic of socialism and communism who nevertheless allied with them both when needed. At the same time, he was a supporter of the British Empire who became the most prominent and enduring antifascist and defender of liberal democracy in his time. What makes this book essential for those who care about reviving and defending liberal democracy in our time is that it reminds us that, even at moments when old hatreds burn bright and few are willing to swim against the current, it is still possible for great leaders to emerge. Pessimists say that the inanities of social media, the 24-hour news cycle, and the cult of celebrity have made it impossible for a figure like Churchill to emerge. Andrew Roberts seems to suggest otherwise; I hope he is right. I hope that the qualities of courage, experience, and conviction evident in Churchill: Walking with Destiny are present not in one great man or woman but in many of our fellow citizens.


The post Churchill in All His Complexity appeared first on The American Interest.

Published on January 22, 2019 08:39

January 21, 2019

Nathan Glazer (1923-2019)

The intellectual world will be a much poorer place with the death of Nathan Glazer. I had the good fortune to get to know Nat (as he was known) when I began working on the editorial staff of The Public Interest in its last decade of publication. Nat served as the journal’s co-editor with Irving Kristol, who founded the journal in 1965 with Daniel Bell. For me, it’s hard to think of Nat without thinking of Irving and the editorial partnership that they forged over many decades. Irving was known, of course, as the father of neoconservatism, while Nat was, by the time I met him in the 1990s, a one-time neoconservative who had made a partial return to his liberal roots. Yet together these two very different intellectuals edited one of the nation’s most important and influential policy journals.

The offices of The Public Interest were located in Washington, DC, though the journal’s original headquarters were in New York City. Irving was a daily presence, coming into the office every day to review manuscripts, solicit new pieces, and, of course, talk by phone with his co-editor Nat. Although we younger editors didn’t see much of Nat, who lived in Cambridge, Massachusetts, and taught at Harvard, he too was a regular presence. He wrote frequently for the journal, and he seemed to know countless scholars whom he could draw upon to fill the pages of The Public Interest. It’s not as easy as it may seem to fill 128 pages (the journal’s typical length) with quality, original work. So, whenever it looked like we might be coming up short, Irving would have us get Nat on the telephone. Nat, as he would tell us, always had manuscripts in hand from his colleagues that were ready to go, or an original piece of his own in the back of one of his desk drawers. He never failed us; we always made it to publication on time with a full issue.

It seems instructive to ask, especially in these polarized times, how Nat and Irving succeeded in such a long and fruitful intellectual partnership, notwithstanding their political and philosophic differences. How did they do it without coming to blows? I would begin by noting that they were good friends and in fact not so far apart as one might suppose. If Irving was a critic of liberalism, it was because he appreciated liberalism’s strengths and wished to protect it from its worst inclinations. Similarly, if Nat was a liberal, he was not, I think it is fair to say, a progressive. Nat was skeptical of what social policy could achieve, and wary of its unintended consequences for individuals, families, and society. When it came to social policy, he was a follower of the Hippocratic Oath—that first, one must do no harm.

Nat and Irving had something else in common that made their editorial collaboration run seemingly without friction. They were interested in ideas that were well expressed and true to social reality. They were not fearful of those with whom they might disagree, nor did they believe in monologues. They believed in meeting the other side’s best arguments head on, as well as engaging with each other when they saw things differently (as they often did). They shared a faith in the respectful exchange of ideas and in the intellectual endeavor of getting to the bottom of things.

Yet if Nat was modest when it came to devising policy solutions, he was fearless when it came to offering his opinions on such contentious topics as affirmative action, immigration and assimilation, multiculturalism, monuments and public art, or the serious problem of inequality in American society. I should add, though, for the benefit of the born-digital generation, that when Nat offered his opinion on a hot-button issue, it was nothing like the tweets and blogs that we daily consume from today’s “thought leaders,” as they are known. Nat came to his opinions, which he most often expressed in long essays of 8,000 words or more, through rigorous research and deep reflection, always adhering to the facts of the real world, not to the world as we might wish it to be. This was another intellectual quality that he shared with his co-editor Irving, and another factor in what made their editorial collaboration a success.

Nathan Glazer was a social scientist who understood the limits of social science and its child, social policy. He was a committed liberal who nevertheless eschewed ideological thinking of any sort. He had, as a prominent editor once remarked to me, a genuinely interesting mind. He saw more capaciously than most of us ever will, and he was able to make sense of what he saw and then write about it in such a way (as Irving might have said) that all of us understood the world around us a little better than we did before.

This was Nat’s invaluable service to the nation’s intellectual life, and one that will be greatly missed.


The post Nathan Glazer (1923-2019) appeared first on The American Interest.

Published on January 21, 2019 14:56
