Peter L. Berger's Blog

May 15, 2018

America the Erratic

President Donald Trump’s decision to withdraw from the JCPOA (also known as the Iran Deal) last Tuesday has set off a furious round of speculation among pundits and experts alike as to what the impact of the move would be. There are many variables in play. Our European allies are united in dismay, while Israel, the UAE and the Saudis greeted the announcement with glee. The Iranians are furious, but their economy is in terrible shape, in no small part due to their being stretched thin in Syria. Russia stood to benefit from the opening Transatlantic rift and the spike in oil prices on the one hand; on the other, it faced the prospect of being caught in an open conflict between Iran and Israel in Syria, with the fate of its precious proxy Assad, in whom it has invested much blood and treasure, dangling in the balance. On the sidelines of all this sits President Trump himself, by some accounts reveling in his own unpredictability.

I’ll leave specific prognostication as to how this will all pan out to hardier, more expert souls. Still, it’s interesting to observe the decision to abandon the JCPOA in a wider historical context. I would argue that it represents the solidification of a new role for the United States on the world stage—that of an erratic actor. This is not simply the result of the ascendancy of President Trump to the highest office, or the product of any kind of Trumpist ideology, however defined. President Trump is himself merely the fullest embodiment of trends and sentiments that have been brewing in the United States for quite some time. America is becoming more of a “normal” country, shedding its sense of exceptionalism and mission—a tendency that had reached its apex in the long decade following the end of the Cold War (roughly 1989-2003).

But a “normal” country—especially a rambunctious democracy—that is still the world’s predominant power is perhaps not something to celebrate. Over the next few years, America’s role in the world will have less to do with any discernible set of overarching values or ideology. Its behavior will be the product of an increasingly (small-d) democratized and hotly contested domestic politics. This is not to say that its behavior will be impossible to predict for outsiders. On the contrary, Walter Russell Mead’s “schools” of foreign policy thinking—the so-called “Jacksonian,” “Jeffersonian,” “Wilsonian,” “Hamiltonian” factions—are mostly alive and well. What has changed inside America is the possibility of meaningfully reconciling these visions and shaping a longer-term approach to the world. Foreign policy is becoming partisan bloodsport. The result is a lack of coherent purpose, and the growing perception, among allies and antagonists both, that America is inherently unreliable.

It is of course true that the United States has never had a completely stable approach to the world. Stephen Sestanovich’s book-length observation, that America has ping-ponged between “maximalism” and “retrenchment” since at least 1945, is one of the more elegant distillations of a phenomenon that many Cold War historians have written about through the years. The ultimate goals of the United States in the Cold War, however, were easily discernible and comprehensible, even if American foreign policy elites bitterly squabbled over how best to pursue them. At the commanding heights of academia and government, people like George Kennan understood the Cold War as a continuation and adaptation of Alfred Thayer Mahan’s turn-of-the-century maritime grand strategy of hegemony-suppression.

It may have muddled and staggered its way to victory, but throughout the Cold War the United States was broadly pulling in one direction. Its strategic choices were framed by a consensus that broadly articulated both positive values (“democracy,” “freedom”), and an enemy that embodied their opposite (the Soviet Union). The overnight disappearance of the Soviet Union in 1991 was seen as a triumph, but it also kicked off a process which is best described as a kind of somnambulant drift in foreign policy. Planners and strategists strained to apply the principles that had guided the United States for almost a century to a world that had lost a key organizing narrative.

The year 1992 saw the first such effort: Paul Wolfowitz, working for Secretary of Defense Dick Cheney, oversaw the drafting of a strategy document meant to chart a course for the country in the coming decade. It quickly leaked to the New York Times and kicked up an unholy fuss in the media. Seen in the broader context of the history of American grand strategy, the document was just a sharply worded restatement of Mahan’s basic principles: No competing hegemonic power should be allowed to emerge anywhere in the world. But since it leaked in the aftermath of the first Gulf War, which was at the time taken as a model for how problems like Saddam Hussein could be dealt with, Wolfowitz’s document was seen as a breathtaking display of American arrogance—a rude attempt to disrupt a rousing rendition of “Kumbaya” just as it was gaining momentum.

Amid public outcry, the draft was quietly buried and rewritten into something more anodyne by Cheney’s staff. But even at its most fiery, the Wolfowitz document didn’t resolve the crisis at the heart of American foreign policy in the shadow of the Cold War. In 1992, bien pensant opinion was horrified by the idea that some among America’s foreign policy elites harbored what appeared to be secret imperialist ambitions. In retrospect, the real problem was that even at its most ambitious, the United States felt the absence of the Soviet Union like a phantom limb.[2]

In Sestanovich’s telling, both George H.W. Bush and Bill Clinton were “hybrid” Presidents: Bush showed “maximalist” verve in his handling of German unification and in prosecuting intervention in Panama and Kuwait, but settled into a retrenchment mindset by the end of his term; Clinton started out as an inward-looking “domestic” leader who by his second term was almost frenetically engaged abroad. And while this was a departure of sorts from Cold War precedent (wherein presidencies tended to stick to one disposition), Sestanovich’s observed oscillations belie a deeper consistency: Bill Clinton’s foreign policy team spent its eight years operationalizing a softer, more idealistic articulation of America’s role in the world that, stripped of its particulars, only represented a shift in tone.

The old project of hegemony-suppression—first articulated by Mahan, then Kennan, and then Wolfowitz—was now being legitimized through multilateralism and globalization, with America’s preeminence and leadership in global institutions always assured. Clinton’s trade negotiator, Charlene Barshefsky, quipped that economic globalization was making military alliances obsolete. Nevertheless (or as a result?), NATO was expanded eastward, subtly transformed from a collective security alliance into a vehicle for the consolidation of Western values, influence, and ways of doing things. Continuing strikes and the imposition of no-fly zones in Iraq were justified by pointing to existing UN resolutions. For the Kosovo intervention, Clinton didn’t even bother with such niceties. On his way out of office, he was echoing his Secretary of State Madeleine Albright’s memorable phrase at every turn: America was the “indispensable nation.”

And yet, still, this palpable lack of overarching mission hung over everything. Albright’s argument to Colin Powell for intervention in Bosnia—“What’s the point of having this superb military you’re always talking about if we can’t use it?”—captured the decade perfectly. Clinton’s time in office was characterized by a quest for meaning and direction. With the Soviet Union gone, Saddam Hussein and Slobodan Milosevic would have to do.

By the time Saudi hijackers smashed passenger jets into the Twin Towers in 2001, both Dick Cheney and Paul Wolfowitz were back in government. And with the National Security Strategy that quickly followed in 2002, the “Global War on Terror” was born. The document was yet another effort to set a firm course for a spiritually rudderless American foreign policy, channeling both the democratic triumphalism of the Clinton years, and the Mahanian spirit of the 1992 draft. The role previously left vacant by the Soviet Union, however, was at last filled—by “shadowy networks of individuals [that could] bring great chaos and suffering to our shores for less than it costs to purchase a single tank.”

Less than a year later, scores of American tanks were rolling into Iraq to overthrow Saddam Hussein’s regime.

Meanwhile, in Chicago, a month after Bush’s National Security Strategy was published and a day after a resolution authorizing the use of military force in Iraq was introduced in Congress, a young state senator spoke at an antiwar rally. Conceding that Saddam Hussein was a vile, degenerate dictator, and that “the world, and the Iraqi people, would be better off without him,” he went on to eloquently—and as it turned out presciently—outline the horrific consequences that intervention would bring. “I am not opposed to all wars,” he intoned. “I’m opposed to dumb wars.”

And in New York City, an even younger MFA student who had watched the Twin Towers go down decided he was in the wrong line of work. He became former House Foreign Affairs Committee chairman Lee Hamilton’s right-hand man, first at the Wilson Center in Washington, DC, then serving as his staffer on the 9/11 Commission. Finally, at the Iraq Study Group, he worked on finding a way to extricate the United States from its ugly, unnecessary war “with dignity.”

President Barack Obama and his Deputy National Security Advisor for Strategic Communications Ben Rhodes represented a rising generation shaped by the trauma of Iraq. We won’t have a fuller picture of the dynamics inside the Obama presidency until more memoirs come out, but what we have already from journalistic accounts and interviews suggests an important break with the past. Obama spent three months in 2009 fighting his entire team of senior national security advisors, who were advocating an expansion of America’s commitment to the war in Afghanistan. He ended up agreeing to send more troops, but set a hard timeline for eventual withdrawal: all the extra troops would be home before 2012. A similar scene unfolded over calls for intervention in Libya, with Obama ultimately—grudgingly—agreeing to “lead from behind” as a Western coalition toppled Muammar Qaddafi. Finally, in Syria, Obama triumphed over his team of advisors and foreign policy elite opinion more generally—“the blob,” as Rhodes contemptuously called it—and managed to keep America’s involvement in the conflict at a strict minimum.

The rejection of “dumb” wars is not the full story of the Obama presidency. He assiduously pursued the Iran Deal as a thinly veiled strategy for disengaging America more fully from the Greater Middle East in the medium term. He sought to hand over responsibility for Russia (a “regional power” acting out of weakness) to the Europeans and paid lip service to a “pivot to Asia,” which he ended up not properly funding. Sestanovich is not wrong to identify Obama as a classic full-throated “retrenchment” President in the mold of Eisenhower and Nixon. But thinking of Obama in only these terms misses an important sea change that occurred on his watch.

Throughout the Cold War, all administrations were constrained as to how they repudiated the legacies of their predecessors. The restraining factor was the existence of the Soviet Union. Between the Soviets’ unexpected disappearance and 9/11, there had not been any need for repudiation; George H. W. Bush’s and Bill Clinton’s foreign interventions were on the whole successful, cheap, and as a result popular. Voters were happy, and they gave their blessing to whatever it was the elites said they were up to. George W. Bush may have come into office complaining about Clinton’s hyperactivity on the world stage, but he was mostly on board with the broad premises of American indispensability and the universality of American values that ten years of sleepwalking had coughed up as a purpose. The fact that 9/11 prompted such a hard pivot on the part of Bush, from a domestic-focused “compassionate conservative” to a man deeply animated by America’s missionary spirit, illustrates how little he differed from his predecessors.

Iraq was the first great post-Cold War catastrophe, and it prompted a retrenchment—just like Korea and Vietnam did before. But without the strictures of the Cold War, it also meant that there was no inherent legitimacy in the alternative policies proposed by the retrenching President. One can agree or disagree with the choices Obama made, but one can no longer point with the same certainty to the rationale for one’s judgment. There is still no larger framework against which we can try to measure the merit of this or that policy, since America still has no defined foreign policy goal in the world that is seen as obviously legitimate by all its citizens.

Appeals to humanitarianism, prestige, regional influence, and broader credibility all confronted Obama on Syria. Charges that he was selling out valuable allies in the Middle East and making existing messes more acute hounded him on the Iran Deal. But Obama shrugged, soldiered on, and notably neither profited nor paid a price for his policies. With a foot out the door, his national support on foreign policy was about even, splitting neatly along partisan lines. Politics, at the end of the day, is all that matters in today’s America.

When I saw Donald Trump give a stump speech in October 2016, I remember noting the venom with which he attacked America’s elites for their stupid handling of foreign policy. Of course he directly attacked Hillary Clinton and her proposals, but he linked her record to the longer, sad history of America’s involvement in the Middle East. And more than the attack itself, what impressed me was how well it played with the crowd, many of whom were veterans or families of veterans of the failed wars he was denouncing. The tone was harsher, the rhetoric rougher. But the substance reminded me of Obama. Since then, there have been more parallels. Trump, too, was in the end grudgingly bullied into a troop increase in Afghanistan by his advisors. And like Obama, he has continued to insist that America’s involvement in Syria remain marginal at most.

But the differences are more instructive, with the Iran Deal a case in point. Partisans of each President accuse the other of abandoning allies. Which allies, though: European or Middle Eastern? Both Presidents promised to “not do stupid shit.” But what’s more stupid: risking a preemptive war with Iran today that could easily spiral into a regional conflagration, or gambling on forcing a regional realignment, which may eventually include a nuclear-armed Iran, in hopes of disengaging the United States from the Middle East?

These are admittedly oversimplifications, but they are meant to illustrate a point: Without a larger narrative binding the United States to a single mission, foreign policy disagreements have degenerated into partisan squabbles that are impossible to adjudicate. Experts, after all, can be found arguing both sides of any of these issues. And anyway, experts are unlikely to be of much help, as they are correctly blamed for engineering and abetting the status quo that both Obama and Trump, each in his own way, got elected by publicly repudiating.

Much of this broader Obama/Trump dynamic was eloquently elaborated by Adam Garfinkle a few weeks ago. “The fact that these two Presidents have come from two different places along the American political spectrum,” he wrote, “suggests that they may well represent a new normal.” I think that’s very probably true. But at the same time, that “new normal” does not necessarily portend the emergence of a new consensus for isolationism. Rather we may be headed for an exceptionally cacophonous, democratic period, where matters of national security are debated in increasingly partisan terms.

And by partisan, I mean arbitrary. Consider the domestic parallel: Even though the ineluctably growing budget deficit represents a threat to the country’s long-term future, the fact that a reckoning can be put off beyond the time horizon of a single election means that both parties cavalierly wield it as a cudgel against their opponents and then pursue their own priorities while in power. Experts are called in for backup in either circumstance, but it doesn’t really matter. Overall, the political stakes are too low for consistency, and the costs of being unserious, if they exist at all, are outweighed by short-term political benefits.

To state the obvious, this all could come to an end if and when a suitable external challenge arises, creating a unifying narrative for voters to coalesce around and disciplining politicians and policymakers into a more hard-headed analysis of what constitutes the national interest. Many these days look at China and imagine, even hope, that something like that will happen, maybe sooner rather than later.

At the same time, perhaps it’s best not to get too optimistic about “definitional” struggles solving our problems for us. History doesn’t usually work quite so tidily, for one—it rhymes, not repeats. And let’s not forget that it was enthusiasm for a “definitional” struggle that gave us the Iraq War, which is no small part of why our foreign policy debate is the way it is. Perhaps we should try harder, hoping against hope, to forge a consensus on our own terms.


[1] Adam Garfinkle, Walter Russell Mead, Walter McDougall, and Jim Kurth have been some of the most prominent writers to analyze the Protestant roots of America’s deeply felt need for a moral opposite. See Garfinkle, “Can Americans Count to Three?,” The American Interest, March 9, 2018, and Kurth, “The Protestant Deformation,” The American Interest, Winter 2005, for shorter treatments. Mead’s God and Gold (2008) and McDougall’s The Tragedy of U.S. Foreign Policy (2016) are vital book-length investigations.

[2] I’m far from the first person to remark on this tendency. Hal Brands’ From Berlin to Baghdad (2008) and Michael Mandelbaum’s Mission Failure (2016) are but two standouts.



The post America the Erratic appeared first on The American Interest.

Published on May 15, 2018 08:43

Japan’s North Korean Anxieties

The pace of changing dynamics on the Korean peninsula has been nothing short of astounding. After North Korean leader Kim Jong-un seemed to open a diplomatic door during his New Year’s address earlier this year, stating an interest in participating in the Pyeongchang Winter Olympics, the pro-engagement government of South Korean President Moon Jae-in aggressively pushed forward an ambitious diplomatic engagement strategy with Pyongyang. The result has been a flurry of summitry, including an inter-Korean meeting last month and a looming meeting between U.S. President Donald Trump and Kim, now slated for mid-June in Singapore. In addition, China has hosted Kim for two summit meetings in less than a month, marking his first trips outside of North Korea since assuming power. Russia and Japan are also exploring summits with Kim, not wanting to be left out.

Indeed, developments on the Korean peninsula are now moving too fast for Tokyo’s liking, even though it has supported the pressure tactics that contributed to the current breakthrough. Over the past year, the administration of Japan’s Prime Minister Shinzo Abe has been the Trump Administration’s closest ally on North Korea, acting in lockstep with Washington through its support of a bolstered “maximum pressure” campaign. That policy—including a hard-hitting sanctions regime, increased deterrence activities, and a more credible threat of military force—has certainly created unease in Pyongyang. The sanctions are biting the North as never before, especially those implemented by China on sectors such as coal and textiles. It is also true that Kim is likely cautious about the volatility of the Trump White House and its seemingly casual rhetoric over the past year about the potential use of military force on the Korean peninsula. Pyongyang was also likely concerned about the influx of hawks, such as new National Security Advisor John Bolton and Secretary of State Mike Pompeo, into Trump’s national security team.

But these explanations are insufficient and too U.S.-centric to explain why Kim is coming to the table. While sanctions, and particularly China’s tougher approach to Pyongyang, have made an impact, the real drivers are on the Korean peninsula itself. The Moon Administration in South Korea has staked its legitimacy on an ambitious gamble that now is the right moment for inter-Korean engagement and peace talks. Kim has rhetorically confirmed that he shares these intentions, even though his motivations are different. Peace talks are happening now because Pyongyang feels it has reached a critical stage in the development of its nuclear weapons program. Kim is now able to achieve what his father and grandfather always dreamed of: talking with the United States directly, nuclear-weapons state to nuclear-weapons state.

Unfortunately, this breakthrough constitutes a potential nightmare scenario for Tokyo. In the wake of swift changes on the peninsula, Japan has been forced to adjust to the sudden embrace of high-level summitry. This has prompted a new, high-level diplomatic push on Japan’s part, with two main aims.

The first aim is to mitigate risks that these fast-moving developments would threaten Japan’s security interests, as seen in Abe’s summit with Trump at Mar-A-Lago in April and a host of shuttle-diplomacy efforts by Japan’s Foreign and Defense Ministers over the past few months. Abe is also looking to host a further discussion with Trump on the sidelines of the G7 Leaders’ Summit in Canada next month, just days before the Kim-Trump meeting in Singapore.

Tokyo’s principal concern—which it continues to telegraph to the Trump Administration at multiple levels—is that a “deal” should not be made with Kim if it focuses narrowly on rolling back Pyongyang’s intercontinental ballistic missile (ICBM) program. Such an action might be tempting for Trump, who could try to frame it as a “win,” but it would serve to isolate Japan (and South Korea), which would continue to be threatened by Pyongyang’s cache of medium- and intermediate-range missiles that could be tipped with nuclear warheads. This move to “decouple” the United States from its East Asian allies has long been a strategy of Pyongyang’s, and Tokyo is wary of Trump falling into the trap. The Abe Administration has also emphasized the need to broaden talks beyond the nuclear weapons program to include the North’s stock of chemical and biological weapons. Finally, Japan remains concerned about the broken-record legacy of negotiations with the North over the past two decades and insists that the United States and its allies cannot repeat the mistakes of the past.

Japan has also taken this approach in talks with its East Asian neighbors, pushing for concrete measures from the North towards denuclearization. Earlier this month, Japan hosted a Trilateral Summit with Chinese Premier Li Keqiang and South Korean President Moon Jae-in. Abe took the opportunity to stress the need for “complete, verifiable and irreversible disarmament” of Pyongyang’s nuclear weapons program. Seoul and Beijing, however, balked at the idea of including that in a joint statement released at the end of the meeting, favoring the softer language of “complete denuclearization,” which aligns with the wording from the Panmunjom Declaration released after the inter-Korean summit and provides considerable wiggle room for Kim.

The second important aim for the Abe government is to resolve the long-standing issue of its abducted citizens. During the 1970s and 1980s, 17 Japanese nationals were allegedly kidnapped off the west coast of Japan and other areas around the world by North Korean agents and brought to live in North Korea. In 2002, after then-Prime Minister Junichiro Koizumi visited Pyongyang, five Japanese citizens were returned to much fanfare in Tokyo. The shine quickly wore off in Japan, however, when North Korea claimed that the remaining suspected abductees were dead, missing, or had never been taken to begin with. Koizumi visited North Korea once more in 2004, but Pyongyang insisted that the issue was closed. Abe, likewise, failed to break the stalemate during his first stint in office, from 2006 to 2007.

Despite deep reservations about North Korean pledges regarding denuclearization, Abe feels he has a unique opportunity now to pressure the Kim regime to address the abductions. Indeed, the Japanese government has found a receptive partner with the Trump Administration. Trump met with the families of the abductees during his trip to Japan last year, and also raised the issue during his address at the United Nations General Assembly. Abe also secured a commitment from Trump during their last meeting at Mar-a-Lago that the United States would directly address the issue during the summit with Kim.

Japan will continue to seek balance in its two-track approach to North Korea, focusing on the security risks and the abductions issue over the coming months. The primary worry for Tokyo, heightened by rapid developments of late, is that its views may be marginalized in any diplomatic deal with the Kim regime. As the world gears up for all the inherent uncertainties of a Trump-Kim summit, expect the Abe government to pay constant attention to alliance management as it seeks to keep its voice heard.


The post Japan’s North Korean Anxieties appeared first on The American Interest.

Published on May 15, 2018 08:10

May 14, 2018

Beyond the Factory

Behemoth: A History of the Factory and the Making of the Modern World

Joshua B. Freeman

W.W. Norton, 2018, 427 pp., $27.95



Since 2008, one question has obsessed political elites in many rich capitalistic countries: Has capitalism failed as a form of economic and social order? Has it not produced the goods, or has it produced too many goods, or, more politically relevant, has it produced the sense of a good society? And if it has failed, what might, should, or will come after it?

Any number of abstractions, appearing everywhere in the media, academia, and politics, are now used to describe capitalism as a discredited economic and social order. More in Europe than in the United States, “capitalism” is prominent in the lexicon of ignominy, as are “neoliberalism” and “market fundamentalism.” These abstractions are all difficult, maybe impossible, to define, not least because they all reify a target that is constantly changing. They have nevertheless yielded a debate about the modern world that has morphed into a neo-scholastic exercise of eviscerating exegesis. Hillary Clinton recently speculated that she may have failed because she was a capitalist.

Joshua Freeman’s refreshing book, his fourth, is a good indication that there is another way to approach today’s uncertainties. That involves looking at the past. Think of how lived experience changes, rather than ruminate on recondite terminology. Instead of asking whether capitalism began in the 18th century, or in late medieval Florence, or whether ancient Greece or Rome were in some ways capitalistic (we can probably agree that Stone Age people were not), Freeman knows how to begin: when something dramatically transformative happened. Behemoth starts, as all good history books should, with a date: 1721. “In 1721 […] the first successful example of a factory, as we use the term today, was built on an island in the River Derwent.” It was a silk mill, built by two half-brothers, John and Thomas Lombe of Derby. The factory was quite distinct from other buildings: the church, the palace, the theater, the bathhouse, the dormitory, the lecture hall, the courtyard, the prison, or the city hall. It established the modern world.

Freeman, a professor of history at Queens College, City University of New York (CUNY), and the CUNY Graduate Center, traces the story of the factory as manufacturing techniques and the objects being produced changed: from silk weaving to cotton and other textiles, and then to iron and steel, then to the creation of complex machinery such as the automobile or the aircraft, and then to electronics assembly plants. The geographical focus shifts from England to the young United States, then to the Soviet Union, and ends up in China. The factories appeared in countries that were remaking themselves and dreaming of a new society, of which the factory appeared to be the facilitator. Freeman could easily have also included Germany in his industrial grand tour, where after 1871 the Krupp factory at first seemed to embody the dynamism of the young German Empire.

Each version and incarnation of a factory in the different geographic locations produced an initial utopia. The agglomeration of large numbers of people in large rooms was not just an exercise in increasing economic efficiency; it might make the people involved in production better people. A dream of human liberation became associated with the factory before it ever generated bleak visions of a new bondage for the sake of Mammon and profit. The factory even became an aesthetic object, designed by prominent architects and engineers experimenting with new ways of organizing social space. Companies regularly included an imposing lithograph of their production facilities on their corporate stationery.

It is also clear that the most obvious economic explanation of the rise of the factory—the exploitation of economies of scale—is not a very good account of why the factories were built. In many cases, there were penalties for being too big; especially before the advent of electric power made it possible to distribute energy more widely over a large area, large factories depended on complicated systems of cranks and shafts linking machinery to a central source of power, at first usually water and later engines driven by coal. Sometimes factories were built in order to control workers. More often they were designed to ensure that production secrets were kept and patents not abused. In some cases, factories became merely prestige projects of display for a status-conscious aristocracy.

Initially, factories appeared as objects of admiration, attracting tourism to England. Alexis de Tocqueville thought of the mills of Manchester as akin to “huge palaces”; another observer likened them to Egyptian obelisks. Some observers even seem to have been enchanted by child workers, and to have thought that the new forms of physical activity constituted a good. Andrew Ure, a Scottish physician and business theorist (a striking combination of expertise!), commented on how “the work of these lively elves seemed to resemble a sport, in which habit gave them a pleasing dexterity.”

One of the attractive consequences of Freeman’s geographic trajectory is that he can explain how, in each case, the reality of factory life quickly overshadowed the utopian dream. A dystopia soon took hold: overworked, brutalized, enslaved workers, often vulnerable women and children, in the textile mills. Government commissions and poets like William Blake and social critics like Marx and Engels graphically depicted the excesses, the abuses, and the human degradation.

Each time the factory was reinvented in a new location, a different story could be told about why this time things were different—how a better world was about to be established. Initially, the New England mills looked like both a strengthening of the commercial sinews of the young republic and liberation for the farm girls who went into the factories. Freeman quotes a mill worker, Harriet Robinson, as she recalls how repressed and depressed girls were transformed by work and money:


After their first pay-day came, and they felt the jingle of silver in their pocket and had begun to feel its mercurial influence, their bowed heads were lifted, their necks seemed braced with steel, they looked you in the face, sang blithely among their looms or frames, and walked with elastic step to and from their work.


The early commentators realized how transitory the initial idyll might be, and began to suspect that as manufacturing spread, conditions might deteriorate. They did.

Each new utopia produced the same cycle. First there was enthusiasm about the size and the transformative capacity when a new production technique brought a new aesthetic: when the steel factory came, or when Ford worked with the assembly line. Then came disillusion, radicalization, and the mobilization of organized labor avowedly to tame the monster.

Some taming occurred, to be sure, but the monster then moved geographically to inhabit a world organized quite differently. That is when Freeman transports the reader to Russia, where an American architect, Albert Kahn, brought the most transformative design for rationalized mass-assembly production to Soviet tractor factories, first in Stalingrad and then beyond. Kahn described himself as a doctor dealing with his Russian patient. There was the same foreign tourism, the same over-enthusiastic accounts, complemented now by films and photos. The photographer Margaret Bourke-White explained that she could provide an “exultant picture” for a world waiting to be impressed by the Soviet achievement.

With the shift to China, and the colossal breakneck transformation of the Chinese economy since the 1980s, the story takes on yet another twist. There is first of all the relatively simple matter of scaling up. Production became even larger, more concentrated, and more globalized. And again, Freeman neatly presents both the utopia, of illiterate rural farmers learning and civilizing themselves, and the dystopia of mass disenchantment, worker suicides, and labor protest in Foxconn City (Longhua Science and Technology Park in Shenzhen).

But he notes an interesting change. The giant factories are no longer intended to be icons of design, they aren’t objects of touristic pilgrimage, and they do not figure prominently in the advertising and marketing of the companies for which they produce. They are silent and invisible. That is a solution to the problem of industrial espionage, which is ironic in a way since these factories have themselves benefitted from a good deal of industrial espionage.

As at the beginning of the factory revolution, Freeman insists that there is no strictly economic need for the modern version of gigantism. There may even be declining marginal productivities in large factories. But the exercise is fundamentally about control—the control of people, but also the control of processes and ideas.

And so we are back to capitalism. Foxconn produces for Apple, Dell, and Hewlett-Packard. I am probably typing this article on one of its products, but I can’t really know. There is nothing on the laptop to tell me where it was actually made, or assembled, and who made or assembled it. And it isn’t clear who owns Foxconn, either. Behemoth concludes with the observation that the second-largest holder of stock in the majority owner of Foxconn, Hon Hai Precision Industry, is the Vanguard Group (which specializes in the management of retirement funds). In short, I may even partly “own” the company that produces my computer, but I still don’t know anything about its production, and I certainly can’t influence it.

Freeman’s is a work of history, so there is little speculation about such conundrums, and even less speculation about the future. But Behemoth certainly invites and stimulates such thoughts.

First, production is coming back to rich countries, and global supply chains are being cut in the name of rapid response to changing tastes and markets. But much of the onshoring does not involve an extensive hiring of new labor—at least not yet.[1] The factory as a location for very large numbers of people may thus be largely an artifact of the past. Instead, the modern factory is a collection of machines—robots, in common parlance. They can be visited too; modern automobile producers in Japan and Germany, for example, like to show off their automated production facilities as a way of advertising their products.

Second, despite the incipient trend of returning production, service industries are still gaining at the expense of traditional manufacturing. The vast campuses, with giant open-plan offices, are the modern equivalent of 19th- and early 20th-century factories. They are built by the most iconic enterprises, but they are factories that no longer produce anything physical at all. The 430,000-square-foot, Frank Gehry-designed headquarters for Facebook in Menlo Park, with 3,000 or so engineers within, is usually thought to be the largest open-plan office in the world. Amazon and Apple are looking for giant new campuses and staging the competition as a race between cities that want to prosper and shine. Capitalism now wants to reinvent the environment—this time, presumably, for the better.

The modern worker halls can be made to seem just as liberating as the old factories, but ultimately they are just as confining, if not wholly monstrous. They produce work-related stress, anomie, and other forms of psychic damage. The mega-offices are supposed to offer what artificial intelligence cannot, at least yet: creative interaction. Modern employers create coffeehouses as niches for that interaction. The water cooler in the corner is just not enough anymore. But everything else, all the routine processing, is left to machines.

The old question—in this year of Karl Marx’s 200th birthday—about the economies of scale continues to apply, very forcefully, to this world. Are giant offices really the best way of assembling and stimulating human creativity? Aren’t some of the other buildings with which Freeman starts his accounts, the pre-modern ones like the university quad, the church, and the city hall—and of course the tavern and the coffeehouse—just as effective if not more so for creative purposes? Is the postmodern world therefore likely to take us back to the social architecture of the premodern? Or is the enduring need for control too strong for that? Freeman doesn’t offer a conjecture, but he does at least stimulate the question.

[1] Informed speculation that it might yet do so can be found in William B. Bonvillian and Peter L. Singer, “What Economists Don’t Know About Manufacturing,” The American Interest (May/June 2018), and in their book, Advanced Manufacturing: The New American Innovation Policies.


The post Beyond the Factory appeared first on The American Interest.

Published on May 14, 2018 08:35

May 11, 2018

The Meaning of Withdrawal

The deed has been done to the deal: On Tuesday, as by now everyone above the age of about six knows, President Trump withdrew the United States from the so-called Iran deal—formally, the Joint Comprehensive Plan of Action (JCPOA). Before and especially since then, enough electronic ink has been spilled in efforts either to explain or to spin what has happened to fill a virtual ocean basin. The riptides are treacherous; none but the intrepid should dare to enter. I, however, am intrepid, and with any luck at all, so are you, dear reader.

Let’s dive right in, first by posing the seven key questions:



Was the decision a mistake, or not?
What broader context do we need to understand and parse the commentary thus far on the decision?
Mistake or not, does it mean that the United States is on the road to war with Iran, a war that almost certainly would drag in other parties (Israel, Lebanon courtesy of Hezbollah, Saudi Arabia most likely, the United Arab Emirates by extension, maybe Bahrain by extended extension, and so on)?
Does it represent a new breach too far to repair with America’s closest European allies?
Does it complicate the effort to drive a nuclear deal with North Korea, as the elite media and associated chatterati assume?
Does it suggest or imply that the Trump Administration now actually has an Iran policy and perhaps even a broader policy for the region?
What does the decision tell us about the Administration’s broader strategy compass and likely future trajectory, not just in foreign and national security policy but also for domestic U.S. politics?

Now let’s turn to answers. Novices in these treacherous swells, bring your inflatable water-wings. You might need them.

Mistake, or Not?

If one thinks that the deal, whatever its several birth defects, was working to limit and slow the danger of an Iranian nuclear breakout, then leaving the deal is a mistake unless one has in mind some better way of limiting that danger. And it had better be a lot better, because the diplomatic costs of the decision are real: the main two being the dissing of America’s best European allies—with whom we still have much important business to conduct in tandem that we cannot achieve unilaterally, not least with regard to Russia—and undermining the U.S. government’s reputation for consistency and predictability worldwide.

Well, which is it? If the Trump Administration has a better way in mind to limit the dangers posed by an Iranian nuclear breakout, what might it be? There are only two generic possibilities.

The first is the use of force, at one level or another, to seriously degrade if not entirely destroy the Iranian nuclear infrastructure—if not also to try to destroy the regime. If one thinks that the use of force is inevitable on account of the short fuses built into the deal, a reasonable case can be made that doing it sooner is better than doing it later, especially if you don’t trust your successors to bite the bullet and do what is required. As Lord Vansittart said, “it is usually sound to do at once what you have to do ultimately.”

The counter to this advice is some form of Micawberism: waiting, hoping against hope, for “something to turn up” that will enable one to wriggle out of doing what one really doesn’t want to do. That was essentially the Obama Administration’s choice: the regime would moderate; or by showing that the United States did not seek regime change, the mullahs would decide to forgo the expense and danger of building a nuclear arsenal; or something else useful would happen. To its considerable credit, however, the Obama Administration recognized the dilemma for what it was, and did not make the disastrous analytical error, popular in some circles, of simply assuming that an Iranian breakout would be no big deal since deterrence would inevitably and unerringly work despite circumstances and a context vastly different from that of Cold War U.S.-Soviet deterrence.

Not all forms of Micawberism are illusory; it depends on the case at hand. Sometimes something does turn up: The English fleet was in the fight of its life in the summer of 1588, and the storms that followed wrecked much of the Spanish Armada on its voyage home. Besides, it was not irresponsible to resist unleashing another spasm of American violence in the heart of the Middle East at a time when we were still engaged in two other shooting wars, and when the political carrying capacity for a third was decidedly low—certainly in the President’s own party but broadly in the nation as well. A can-kicking exercise can become quite appealing under such circumstances.

The second generic possibility is the re-establishment of the sanctions regime and its further tightening, putatively to be used to renegotiate the deal to get a better outcome. This idea, whose origin rests in arguments used to oppose the deal in 2015, is either naive or disingenuous. It took a long time and a lot of effort over more than one administration to strengthen the sanctions regime to the point that, combined with the generic incompetence of the Iranian regime’s economic management techniques, it inflicted real pain. Having just dissed our own partners in that effort, it is highly unlikely that the Trump Administration can resurrect a similarly draconian sanctions regime—unless of course the Iranians in due course engage in behavior roughly as chest-thumping as that of the Trump Administration. That’s possible, but unlikely anytime soon.

The notion that a more painful sanctions regime could be used after the fact to renegotiate a better deal depends on the premise that the Iranian regime would agree to do something that runs hard against the grain of its own interests. The deal as signed was strategically ambiguous, and had to be in order to be concluded. For both sides it constituted to some extent a wager against an indeterminate future. Under such circumstances, neither side gets a do-over. To insist on one exemplifies bad faith, and to succumb to such an insistence amounts to humiliation under duress. That sort of thing doesn’t happen except under very extreme circumstances—so extreme that if it takes an American knife to the Iranian throat to get them to agree, one can take it for granted that the terms of such a forced agreement will not be honored, and so one might as well just slide the knife downward and have done with it.

In sum, whether the decision to withdraw was a mistake or not isn’t as simple a question to answer as many commentators seem to think. And that is precisely because the situation created by the Iran deal, in all its necessary ambiguity, has not surprisingly been a mixed one.

On the one hand, the deal has slowed Iranian momentum toward a nuclear breakout, even if it has not stopped a range of critical ancillary activities other than those represented by spinning centrifuges. But on the other, the deal essentially codified and in effect blessed Iran’s status as an incipient nuclear power, and represented a major concession from earlier U.S. insistence that the weapons program be abandoned in toto as opposed to partially and temporarily frozen.

Also, on the one hand, the deal has not moderated Iran’s regional and wider international behavior, as the Obama Administration promised, albeit with a wink and a nod and a false demurral. The sequestered Iranian assets thawed by the deal have mostly been used for military and proxy expeditionary support in Syria, Yemen, and elsewhere—in every case directed against the interests of the United States and an assortment of allies and partners in the Middle East. But on the other hand, that very fact has stimulated a good deal of popular ill will toward the regime and has contributed to an increasingly fissiparous optic between the Rouhani government on the one side and the Supreme Leader/IRGC phalanx on the other—as illustrated in recent days by Rouhani’s extraordinary public statement about the government’s opposition to the latter’s banning of the “Telegram” app.

Where this spinning wheel will stop no one knows. That’s why, as noted above, the best way to characterize the deal from the perspective of the protagonists is as a wager against the future. If the deal ends up teasing the clerical leadership and its domestic allies into a level of overreach that eventually tanks the regime, and if it is ultimately replaced by a more normal, post-revolutionary regime, that will be an excellent outcome from the U.S. perspective. In turn, it would make what happened on Tuesday seem in retrospect to have been of minor importance, except if it could be demonstrated that U.S. withdrawal from the deal accelerated a wave of fatal Iranian hubris. That is obviously a very different outcome from the existing regime becoming more moderate; but it is an outcome that will have required, at least for its first few acts, a good cop/bad cop tandem that neither the Obama Administration nor the Trump Administration could either imagine or admit.

But we don’t know if this will happen, and even if it does we don’t know when—before or after a disastrous catalytic regional war occurs. I would have advised against withdrawal, because the prospect of war—especially a wider catalytic war in the region—bears too many imponderable dangers to risk. It would have been wiser to insist not on a renegotiated deal but on a follow-on agreement. Our allies would readily have supported that demand. If force had eventually proved necessary anyway, they would likely have been with us. Now they most likely will not be.

That said, it’s too soon to know with any assurance whether the withdrawal was a mistake or not. It, too, like the deal itself, represents a wager against the future. All consequential decisions in foreign and national security do.

Parsing the Commentary

The U.S. debate over the Iran deal was one of the most profoundly dishonest debates in our history. It illustrated the politicized rancor that characterizes American politics writ large these days, and demonstrated the truth of the saying, usually attributed to the late Christopher Hitchens, that partisanship makes people stupid. And the stupidity, let loose to roam the nation at large, produces a level of general understanding about complex issues roughly analogous to a coconut’s grasp of differential calculus. The Iran deal debate was no exception.

The Obama White House line was that “no deal is better than a bad deal.” What it really believed is that almost any deal was better than no deal, because the only responsible alternative to a deal—with the Iranians racing pell-mell toward breakout—was to resort once again to force. It also dissembled about what kind of war it wanted to avoid by suggesting that a war with Iran would be another Iraq-scale ground war, complete with calamitous occupation. No serious person, anywhere, imagined a military campaign against Iran looking like that. Yes, they spun and spun well beyond the threshold of lying, as Ben Rhodes’s infamous NYT profile by David Samuels revealed.

Equally disingenuous was most of the opposition to the deal. A few opponents came clean and confessed that they preferred a military campaign to a deal—and in doing so they expressed the secret hopes of the Sunni Arab state elites and of many Israelis as well. But most insisted that there was some third way—a better deal backed by the aforementioned hyper-charged, new and improved sanctions regime. Few of the people who made that case actually believed it, but it seemed politically useful to make it anyway in opposition to those who were making equally dubious but politically opposite claims. Don’t forget: Most of these folks, on both sides, were trained as lawyers, and so felt a lot more comfortable than they should have declaiming about things they knew to be eyewash.

Now, in the aftermath of Tuesday’s withdrawal, we are witness to the second coming of all this hogwash, repurposed for a different political context. When supporters of the deal tell you it was the best thing since sliced bread, and that, for example, the verification safeguards of the deal are airtight, don’t believe them. As to the latter, if you actually read the JCPOA, you will see that on-demand IAEA inspections are limited to certain kinds of sites and, more important, require approval by the deal’s Joint Commission, on which all of the P5+1 sit, to go forward. That group includes Russia, which would act as Iran’s lawyer now as it once did for Saddam Hussein in Iraq under roughly similar circumstances—all on behalf of frustrating U.S. policy. Who believes that under current political circumstances the Russian government would help facilitate on-demand spot IAEA inspections of Iranian facilities? If that’s you, I have a bridge in Brooklyn I’d like to sell you.

When supporters of withdrawal tell you that the decision does not make war more likely, and that they really just want to get a better deal via a strengthened sanctions regime, don’t believe that either. I wish I had another bridge to sell.

Does Withdrawal Mean War with Iran?

It might, but war is not inevitable.

It might because the Iranians might react to petulance with petulance. They might elect to restart their program and dare us to do something about it. You don’t need a crystal ball to see where that kind of dynamic might lead.

But the Iranians may choose another tack. They might decide to play aggrieved party, milking the U.S. withdrawal for all the sympathy it can haul. And the world being the way it is, and Trump being the way he is, that’s several supertankers’ worth of sympathy.

More specifically, the Iranians might stay within the constraints of the deal because the other members of the P5+1—Russia, China, Britain, France, and Germany—have not withdrawn. Remember that the Iran deal is not a treaty or even an executive agreement between the sides. It is an odd duck, though not a unique one. It is, in essence, what its name suggests it is: a joint declaration. In other words, one side says we will do X, Y, and Z at the same time that the other side says it will do A, B, and C. The deal would obviously break down entirely if the Iranian side withdrew, but it does not break down merely because the P5+1 has become the P4+1.

Now, in Europe the P5+1 is often called the E3+3. Do the Europeans do this just to be cute? Not exactly. The P5 are the permanent members of the UN Security Council, and that mattered when these negotiations got going because the demands arrayed against Iran were couched in the language of several fairly useful UNSC resolutions. But Germany is not a permanent member of the UN Security Council: It’s the +1. For manifest political reasons having to do with the vicissitudes of the European Union, the Europeans prefer to see themselves as working in unison: hence E3 and, by default, +3—now +2.

If this seems silly, the political meaning of it isn’t silly at all. The E3 are hoping that, together, they have enough leverage to keep the Iranians from walking away from the deal. The Iranians know that, and so they expect to be offered something for letting the E3 succeed in their aspiration, temporarily at least. What might the coin of that offer look like?

It could be economic, and the JCPOA specifies some carrots for the Iranians that could bear elaboration. But it could also transcend the four corners of the JCPOA document to include certain European dispositions toward Syria, or Yemen, or Bahrain, or Qatar, or the Palestinians and Israelis. Use your imagination, and remember: EU policies toward various Middle Eastern actors and contingencies have been shown over the years to be remarkably flexible, much more flexible in many regards than a succession of U.S. administrations have thought helpful or wise.

Note finally on this point that the Iranians and the Russians are in cahoots with each other in Syria, and the Russians can be counted on to encourage the Iranians to harvest European largesse, because that largesse doubles as an enormous wedge to drive between the United States and its key European NATO allies. The Russian leadership loves such wedges. If you like a metaphor, think lettuce wedges . . . with Russian dressing.

In sum, the slouch to war can be avoided through the shifting of the U.S.-European-Iranian triangle in such a way that the American withdrawal appears as an act of self-exclusion; or, put a bit differently, as “a tale told by an idiot, full of sound and fury, signifying nothing.” But the price of American self-exclusion is not trivial: the ceding of leverage and influence with respect to U.S. interests and allies in the Middle East. The Europeans, having after all been pushed away from us, play the couriers; the Russians are the recipients.

A Transatlantic Breach Too Far?

It could be, at least for a while.

There is a history here. First came the U.S. withdrawal from the TPP, with implications as well for T-TIP; then came the withdrawal from the Paris climate accord; along the way was the Brussels Summit at which President Trump refused to explicitly endorse Article V of the NATO Treaty; then the “easy to win a trade war” remark and the tariffs—and now this.

But not just this: Mark the way of this. Emmanuel Macron comes to the United States, and we all know his view of the Iran deal. He puts it to Trump; Trump smiles and is cordial. Angela Merkel follows, with the same view. Trump harrumphs, and she goes home. And then Trump ignores them both, doing it even sooner than the May 12 deadline requires, so that no one can miss the intended humiliation. It’s reminiscent of how Trump handled Mitt Romney before the inauguration, dangling the State Department job before this prominent member of the establishment, the Republican Party establishment at that, before humiliating him as well.

The press in the United States and in Europe is now referring to this as a “snub.” It goes much deeper than that. It is personal, because Trump makes everything personal. It is hard to avoid the conclusion that Trump really does ultimately support Le Pen in France, the AfD in Germany, and the likes of Nigel Farage in Britain. How comfortable AfD types would have felt in Charlottesville this past summer, among what Trump called some “fine people.” Just as the vast majority of what seems to be foreign policy in the Trump Administration is just signaling for domestic political purposes in Trump’s quest to realign American politics, so his manipulations of NATO-European leaders seem tailored to encourage certain political outcomes in those countries. (So Theresa May was smart not to come to Washington in recent weeks.) To the extent there is a “nationalist internationale” reminiscent of its 1930s’ fascist forerunner, Trump seems to be aware of and subtly supportive of it.

Absent an in-your-face threat to a NATO member—and perhaps even then—ever fewer Europeans believe the U.S. government would come to that member’s assistance or would try to rally the alliance as a whole to do so. Alliances thrive on the predictability inherent in trust. Where there is no trust, there can be no predictability, and where there is no predictability there is no deterrence. And right now there is little trust, perhaps even none at all, across the pond. Another conventionalized speech like the one the President gave in Warsaw this past July, and even another act of symbolic solidarity in expelling some Russian diplomats, can’t fix this now. We are below and beyond that.

Of course, there have been rough spots in the Transatlantic relationship before—many of them, in fact. But in Cold War times the specter of Soviet power and pretension worked as a great salve to heal all wounds. That specter is no more; Russia today is a pale shadow of that specter in west European eyes. Russia, they think, is “new” Europe’s problem, which “new” European governments anyway exaggerate. “New” European elites resent the cavalier attitude of their wealthier EU associates toward what they consider existential security concerns. And they worry that in due course Germany, its Russlandverstehers in the lead, will create Rapallo 2.0 at their expense.

But this has nothing to do with Iran, right? Not right. Just as the E3 fleeing Washington on account of differences over the Iran deal provides a wedge for the Russian leadership to harm the Transatlantic connection, so that wedge deepens divisions within the European Union. The Polish, Czech, and Hungarian governments never had any access to the P5+1 negotiations, yet the deterioration of the P5+1 into the P4+1 materially harms the two institutional arrangements on which these newly independent countries depend most: NATO and the European Union. All the more so, just by the way, for the Estonian, Latvian, and Lithuanian governments.

And if that were not enough, on the other geographic flank of the alliance the Turkish government is now far more beholden to the Russians than to us. That is largely owed to a combination of Turkish mistakes and bad luck going back several years now, but Russian skill and American errors have helped shape the current situation as well.

Dean Acheson once said (at least on this occasion not meaning to be witty) that “things are not always as they seem, but sometimes they are.” Well, if NATO seems to be bleeding in the gutter right now, that’s because it is. Now, some will immediately object that if you look not at “high politics” but at budgets and improvements “on the ground” things look vastly better than they have in years. There’s a fair bit more money flowing to necessary tasks, even in Europe for a change. That’s both true and irrelevant. At staff levels things are indeed improved and improving. NATO careerists on both sides of the ocean are happy and methodically dismiss brow-furrowed diagnoses of trouble. But capabilities don’t mean much if there is no political will to use them.

What our smiling colleagues seem to miss is that the United States has two policies toward Russia: one, the government’s policy, toward that country, and a second generally contradictory policy, Trump’s policy, toward its leader. (Vignette interlude: Imagine Admiral Mike Rogers, the recently retired Director of NSA, showing the President in the Oval Office unredacted sigint intelligence proving Russian government interference in the November 2016 presidential election, and the President looking back at Admiral Rogers and saying simply, “Well, I have a different view.”) Same for NATO-Europe: The U.S. government has one set of policies toward our allies, and the President has a different and often contradictory policy toward their leaders. Secretary of Defense Mattis is clearly on record as advising against withdrawal from the Iran deal, but the President did it anyway.

It is hard to blame anyone, say, at the Marshall Center in Garmisch, Germany, for not fully taking in this strangeness, because it is indeed very strange. It is an affront to experience and logic alike. But should push come to shove, which policy will prevail? Short of something like a military coup, the President’s will.

And a Prospective Nuclear Deal with North Korea?

The bumper sticker visible on the media and think-tank street is that if the U.S. government walks away from a pledge made by a previous Administration, then the future credibility of all Executive Branch international commitments will be degraded accordingly. This is true in general terms, but specific negotiations are not conducted in general terms. Just because a President, any President, thinks an element of the U.S. security architecture is not in the national interest doesn’t mean that all aspects of that architecture are called into question. During the Bush 43 Administration the U.S. government withdrew from the ABM Treaty. I had hoped that withdrawal would have taken place much earlier, for circumstances and technology had changed dramatically since the thing was signed in 1972. By the time we withdrew, it hardly made any difference: The Cold War was over, the strategic nuclear competition had thus been rendered vastly less salient, and so the practical outcome of the withdrawal was very minor.

The point is that agreements are kept or not kept because they serve the interests of the signatories. So if—and this is highly speculative, of course—the U.S. and North Korean governments manage to reach an agreement that both sides believe to be in their interests, the staying power of the agreement will hinge on those paired beliefs. It will not hinge on the terms of other agreements, whether still in force or not. And if some day one side or the other changes its mind about the ongoing utility of a U.S.-North Korean agreement, as happened with the Iran agreement (for good reasons or not isn’t the point here), then the agreement is liable to lapse.

That is just the nature of international agreements among sovereign states. Agreements entered into freely—in the context of perceptions of relative power and leverage outside the negotiating room, which are always present to shape negotiating outcomes—are just that. And just as governments enter such arrangements freely, they can withdraw from them the same way simply by paying some (small or not-so-small) reputational price for doing so. That’s different from situations, as mooted above, in which one side signs a piece of paper at the point of a bayonet, in which case coercion rather than free will seals the deal. Again, such arrangements will be subverted at first opportunity by the coerced party, and so really are not worth much qua agreement.

No one today imagines a U.S.-North Korea agreement being signed under extreme coercion, nearly amounting to a situation of unconditional surrender. So if a deal delivers something that both sides value, it will stick. There is no such thing as a reputational virus that floats through the air, necessarily contaminating one negotiating experience with another.

Such a deal is theoretically possible—basically, a nuclear stand-down from North Korea in return for diplomatic normalization and a formal pledge of no regime change from us—but whether it will actually come about is impossible to say. I doubt that President Trump fully grasps the relentlessly complex geopolitics of East Asia, or cares to grasp them. He is, in my view, capable of getting fleeced in a negotiation so long as he thinks he can present the outcome as a “win” as seen through the prism of U.S. domestic politics. If that happens, we can perhaps look forward to a Democratic President one day disowning a Korea deal.

So Is There Now an Iran Policy?

I doubt it, but it depends on what we mean by “policy.” Sometimes policy means an integrated, well-thought-out plan that matches objectives to means, or at least tries to. Sometimes policy is used to mean a more general set of orientations to specific problems, one that does not necessarily include a lot of dot connecting from problem to problem within regions, let alone among them. And sometimes policy refers only to certain instincts or intuitions, often not explicitly articulated, that may or may not congeal into something more over time if a President has to revisit the same problem area repeatedly during his tenure.

The latter describes fairly well the Obama Administration’s adventure in Syria policy. That policy area was in due course connected to a larger picture, one defined by the tantalizing payoff of a diplomatic breakthrough over Iran afforded by a nuclear deal. The gist was: Make a deal with Iran, bring it in from the cold as Nixon brought in China, and create a new regional balance of power that would allow the United States to reduce its presence in a part of the world in which it was “over-invested.” If the Saudis et al. didn’t like it, too bad. But a policy in the first sense—a truly integrated policy on a regional level, let alone on a fully global level—the Obama Administration, in my view, did not attain. Not “doing stupid shit” does not require that kind of effort. Both its admirers and detractors exaggerated the Administration’s policy sophistication the better to support or criticize it.

And so what of the Trump Administration? Well, if one goes back and rehearses the logic already laid out here, one could conclude that the Trump Administration does indeed have an Iran policy: In simple form, it would be that Iran is an implacable and dangerous adversary that needs to be thwacked good and hard. In that scenario, the withdrawal from the deal is meant to catalyze a sequence of events that gives the Administration an opportunity to attack Iran and, at the same time, to roll back Iranian forward positions in Syria and elsewhere in the region.

Like the idea or not, there is a logic to it. If it exists—and the replacement of Rex Tillerson with Mike Pompeo and of H.R. McMaster with John Bolton has been read as the building of a “war cabinet” in preparation for a fight—it is a policy at least as coherent as the Obama policy toward Iran.

But I doubt it exists. If there were such a policy—I mean a real policy, not a Pinocchio policy—then how to explain Trump’s manifest urge to get the United States entirely withdrawn from Syria, an act that would constitute an unalloyed gift to the mullahs? Yes, he walked that back after someone explained to him the implications, but there’s no mistaking his basic isolationist/unilateralist instincts here. Just as he disdains in his personal life being bound to any inconvenient rules or norms, so he yearns to break U.S. policy free of constraints and obligations in general. His policies are projections of his personality in a purer form than any President heretofore.

Until there is evidence of more coherence and integration on the policy level, Trump’s withdrawal from the Iran deal strikes me as yet another example of his affinity for one-off in-your-face displays of petulance. He likes to flip off buttoned-down types. It draws attention, and if Donald Trump knows how to do anything really well, that’s it. He probably has no idea what comes next, and no idea of how to get an idea about it. That’s one reason why war is entirely possible but not inevitable: it is not already intended or planned. Vansittart again, this time speaking about Kaiser Wilhelm, but eerily applicable to Donald Trump: “The Kaiser . . . may not have liked war, though he liked doing everything that led to it.”

And What Next?

Everything has a context, without which nothing is fully intelligible. And that certainly goes for the decision to withdraw from the Iran deal. So what is the context that matters?

As noted, we live in a highly partisan, and hence publicly very stupid, environment right now. It makes having serious and honest conversations about consequential decisions much more difficult, and the fact that the American political class has let the formidable powers of deep literacy slip away, along with the sustained reading habits that nourish them, doesn’t help.

But it also doesn’t help to exaggerate, at least to exaggerate the wrong things. There have been plenty of over-the-top episodes in American politics and in our foreign policy debates over the years. The imperialist jingoism of the 1890s counts; the ravings of Father Coughlin and the America First movement of the 1930s do, too; the McCarthy witch hunts of the early Cold War era also come to mind, as does the staged moral panic of the nuclear freeze movement of the early 1980s. There have always been fringe, populist-tinged undercurrents in American politics, but because foreign and national security policy has been mainly an insulated domain of elites, the weirdness has never been decisive or even influential, except rarely.

The difference between then and now is that now the fringe is in power in the White House, and that has led to a mass breakdown in process—one suspects deliberately—which always affects policy outcomes in one way or another. The volatility in Administration personnel thus far may reflect not so much an inexperienced President who doesn’t know what his inbox looks like, but a President who is determined to master the political stage by undermining even the institutional apparatus that was created to serve his office.

President Trump is engaged in a political insurgency designed, in effect, to bring about global regime change, despite the fact that the regime he wants to change is one of mainly American design, construction, and maintenance. His war plan has two fronts: the attack on the so-called administrative and “deep” state domestically; and the attack on the institutional framework of the so-called liberal international order. So Trump may not have policies as they are conventionally understood, but he may well have a strategy of statecraft, however idiosyncratic and illiberal it may be, that combines domestic and foreign aspects into a whole. He may not know or much care what withdrawing the United States from the Iran deal will lead to in the Middle East, but he does seem to know at least in broad outline what the skein of that and related decisions, taken together, are leading to.

When a few months ago White House aide and speechwriter Stephen Miller called Trump “a political genius,” most of us scoffed. But then most of us have thought for the better part of a year that Trump, defined as the sum of his many deficiencies, would yield to more experienced management, and that therefore the arc of policy would regress to the mean. That has not happened, and the decision to leave the Iran deal sharply punctuates the point. Its larger meaning for the future may therefore inhere in that broader context.


This is not a mere decorative remark. Menahem Begin ordered the June 1981 Israeli attack on the Osirak reactor in Iraq in part because an election was near, and he feared that, if he lost, his Labor Party replacement would not be willing to do what he believed was a necessary deed.

Robert S. Vansittart, The Mist Procession (London: Hutchinson, 1958), p. 50.

For years I tried to debunk this dangerous foolishness; see, for example, my “Culture and Deterrence,” Foreign Policy Research Institute E-Note, August 25, 2006.

Mist Procession, p. 151.



The post The Meaning of Withdrawal appeared first on The American Interest.

Published on May 11, 2018 13:52

Social Justice as Distributive Justice

“Social justice”—the goal of a host of progressive projects, from opposition to mass incarceration to the #metoo campaign against sexual assault to immigration reform—combines the political imperative of egalitarian social change with the legalistic ideal of justice. The civil rights litigation of the mid-20th century is a paradigm of social justice: Law and lawyers achieved dramatic reform within the established institutions of the very political order they sought to challenge. In this, social justice movements represent our Constitutional democracy at its best, part of a remarkable political order that contains within itself the mechanisms of self-correction that can produce non-violent paths to durable social change that gain widespread acceptance by association with familiar and conventional procedures and rituals.

But some of the appeal of social justice comes from the comforting but misleading implication that political and social reform can be as orderly as the day-to-day justice that lawyers, courts, and judges typically dispense; that the victims of bigotry, intolerance, and systematic exploitation could be “made whole” as readily as the victims of, say, negligence or burglary. This is rarely so.

Worse, the legalistic conception of social justice can imply that only a discrete group of wrongdoers and their victims need be involved. But many practices contribute to inequality and injustice, not by needlessly injuring members of vulnerable groups, but instead by forcing them to bear a disproportionate share of unavoidable costs that should be more equitably spread: for instance, the tensions involved in interactions between police and minority communities, or in workplaces where some romantic encounters are welcome and others are abusive. Accordingly, a large number of today’s social justice claims involve not only demands to eliminate objectively harmful behavior but also demands to redistribute the unavoidable burdens of day-to-day social interactions—something that requires changes and sacrifices from people who don’t seem like bigots, predators, or wrongdoers. This inevitably strikes some people as unfair and a good reason to reject the demands, but from the perspective of the groups seeking social justice, it’s much fairer than the status quo, which leaves them bearing a disproportionate share of the burdens.

For instance, typical police practices—to say nothing of more aggressive “broken windows”-type policing—harm many innocent members of heavily policed communities by subjecting them to humiliating, intimidating, and potentially dangerous encounters with often-aggressive police officers. Because of the importance of crime prevention, the law typically allows police to act with impunity unless those injured by policing can prove animus, recklessness, or bad faith. This limited legal recourse doesn’t come close to covering all of the costs that even well-justified law enforcement practices impose on heavily policed communities: Typical police preventative surveillance and investigation inconveniences and intimidates many innocent people, and sometimes even diligent and fair-minded police will mistakenly detain, arrest, or use force against the innocent. Social justice movements like #blacklivesmatter don’t just demand that bigoted or abusive officers be held responsible; they also call for law enforcement (and, indirectly, taxpayers) to bear a larger share of these collateral costs—spending additional resources to avoid conflicts and mistakes and to compensate victims—instead of leaving them to fall disproportionately on minority communities.

Or consider #metoo and the campaign against sexual assault. Naturally, the effort began with unambiguous serial predators like Harvey Weinstein. But pretty quickly it began to also target less clearly culpable offenders, including some whose only transgression was failing to promptly stop harassment by others, like Representative Brenda Lawrence, who came under fire for tolerating the harassment of an aide, or Representative Elizabeth Esty, who was forced to abandon her political career for being slow to fire an abusive staff member. Here, #metoo is not simply punishing bad actors and preventing injury; it is redistributing costs that, from the perspective of employers like Lawrence or Esty, are unavoidable.

For example, preventing harassment requires employers to monitor and possibly intervene in social relationships that most adults of both sexes typically prefer to keep private; responding to harassment can require firing otherwise valued employees, who may respond with their own claims of defamation or wrongful termination. But if employers don’t bear some of the costs of preventing harassment, the targets of sexual aggression are forced to bear all of them: For instance, women regularly have to endure sexual taunts and unwarranted overtures, avoid social engagements that might inadvertently encourage lecherous men, think and rethink their wardrobes to avoid sending the “wrong signal” and, ultimately, just suck it up if all of these tactics don’t work and they are harassed or assaulted anyway. The goal of #metoo is not only to call out unambiguous predators, but also to shift some of these costs from the targets of unwanted sexual advances to their perpetrators and to those who tolerate them.

Sometimes the people asked to bear new burdens in the name of social justice have benefited from the injustices they are asked to help correct, in which cases reform seems like a form of just compensation. But this is not always so: Reform can be hard on all members of an institution, and new burdens tend to fall on those best positioned to bear them, not necessarily on those most responsible for past wrongs. That’s how a prominent anti-sexual assault activist like Elizabeth Esty found herself on the wrong side of #metoo.

Viewed in this way, social justice isn’t as neat and emotionally satisfying as courtroom justice. It’s messy and open-ended, like politics, where even well-justified policies often involve some moral arbitrariness. That doesn’t make it any less necessary, but it does help to explain why our current moment of activism is often disquieting, even for many of those who have long sought the changes it promises.


The post Social Justice as Distributive Justice appeared first on The American Interest.

Published on May 11, 2018 09:54

The Curious Case of Mr. Wang and the United Front

This week the Wilson Center, a think tank in Washington, hosted an event entitled “Chinese Influence Operations in the U.S.: Shedding Some Light on All the Heat.” Among the announced speakers was Henry Wang, President of the Center for China and Globalization, a think tank based in Beijing. What that title didn’t reveal is that Wang is also a “Standing Director of China Overseas Friendship Association of The Ministry of United Front,” according to his CV on his homepage.

After a warning letter from Senator Rubio to the Wilson Center about disclosing Wang’s United Front affiliation, and reporting by Bethany Allen-Ebrahimian, a journalist for Foreign Policy who tracks United Front activity in the United States, Wang disappeared from the guest list. That was actually a shame: With sufficient transparency about Wang’s affiliations, it would have been interesting to hear the spin from a United Front perspective about whether there is reason to worry about its operations abroad.

Instead, Wang put out a rebuttal to Foreign Policy’s article on his webpage. He glossed over the fact that a U.S. Senator had initiated the investigation into his background, instead zooming in on the press coverage. “This line of media coverage will not divert public attention away from the real challenges facing the Sino-US relationship,” he wrote, referencing recent “trade frictions” in a telling statement of his own priorities.

Wang also highlighted the independence of his Beijing think tank—a highly dubious claim in state-dominated China—by noting that CCG is included among the top 100 in the University of Pennsylvania’s global ranking of think tanks. Note that the China Institutes of Contemporary International Relations (CICIR) is also listed in the same ranking. CICIR is an acknowledged part of the Chinese Ministry of State Security: the equivalent of the CIA running a think tank as a front. Through its think tank status, CICIR has great access to policymakers in the United States and globally. Accordingly, the bigger question Wang raised by noting that CCG is internationally listed is whether the ranking should exclude completely state-run entities, or at a minimum include a warning sticker.

Why does this all matter? For most people, the United Front Work Department sounds like a relic of a distant Cold War past. Not so for General Secretary Xi Jinping, who has elevated and expanded United Front activities, seeing them as a “magic weapon” to co-opt Chinese diaspora communities, build relationships with Western enablers, and “make the foreign serve” the Chinese Communist Party (CCP) and its goals. Whereas Russia favors quick-hitting interference operations, the Chinese Communist Party builds varied and long-standing relationships. Russia wants to create disruptions inside democracies, whereas the CCP wants to change how democracies speak and think about the People’s Republic of China. The main goal is to make the world safe for continued Communist Party rule in China, which means quelling dissenting and negative voices at home and abroad.

The current effectiveness of the United Front strategy is on open display in Australia and New Zealand: two Western, democratic countries whose political, media, and business life has been penetrated by the United Front. This has led, first of all, to Beijing’s near-complete takeover of Chinese-language media in the two countries. It has also been revealed that a former Chinese army spy trainer now serves in the New Zealand Parliament and secures Chinese funding for the National Party. In 2017, a now-disgraced Australian Senator was caught reading the Party line on the South China Sea, leading to the discovery of an avalanche of red money in Australian politics. The two main Australian parties have been financially propped up by foreign money through Chinese United Front operatives. And academic freedom has come under pressure in both countries, as shown by public intellectual Clive Hamilton’s difficulties in getting his book on Chinese influence published in Australia.

Thus, the Wilson Center was right to host the event: Americans need to have a serious debate about United Front activities. It should not be about casting all Chinese influence abroad as malign. But neither should we allow the story of China to become a trademark of the Chinese Communist Party, which seeks to subsume all Chinese under its banner. Citizens of Chinese origin are an important part of democratic societies globally, with almost 5 million Chinese-Americans living in the United States alone. They have made their choice on citizenship, and no foreign power should be allowed to undo that choice of loyalty. The problem originates with the logic of the Chinese United Front, which addresses overseas Chinese as its “sons and daughters,” part of the extended family of the Chinese Communist Party. Democracies need to shield these citizens of Chinese origin, and in particular dissidents who seek refuge from the authoritarian system of the People’s Republic of China.

There has not been a comprehensive public debate about this threat since Chinagate, following revelations of Chinese illicit finance in the 1996 presidential and Congressional elections. Such a debate is long overdue. For that reason, the Hudson Institute is coming out with a report soon on the topic, based on collaboration with other think tanks, journalists, and civil society.

Uncovering the United Front strategy in action will demand continuous effort, which is why our report recommends launching a “United Front Tracker” as a joint civil society and think tank effort. The curious case of Mr. Wang underscores the need for transparency—and should raise the red flag about our own vulnerabilities to the threat.


The post The Curious Case of Mr. Wang and the United Front appeared first on The American Interest.

Published on May 11, 2018 08:51

Fifty Shades of Red

Silent Invasion: China’s Influence in Australia

by Clive Hamilton

Hardie Grant, 2018, 376 pp., $26.99


Clive Hamilton, the Australian public intellectual known for his leftist leanings and professorial demeanor, may at first seem an unlikely candidate to engage in a showdown with the Communist Party of China. With his latest book, however, he has done precisely that. In Silent Invasion: China’s Influence in Australia, Hamilton positions himself as a lonely David against Beijing’s Goliath, creating a well-documented analysis of the Chinese Communist Party’s influence operations in Australia—one that will hopefully resonate with other democracies enough to ignite meaningful pushback.

The book’s message is reinforced by its own publication history. Shortly before its scheduled release in Australia, Hamilton’s usual publisher, Allen & Unwin, suddenly backed out, citing fear of reprisals from Beijing. In the end, Hardie Grant, a publisher with more spine, stepped up to release the book. The incident only bolstered Hamilton’s argument that Australia’s academic freedom is under serious threat, eroded by CCP influence. The evidence, he suggests, is everywhere: China scholars increasingly stay away from subjects seen as sensitive from Beijing’s perspective, for fear of being put on Beijing’s no-fly list and having their visa access revoked. Chinese student associations at universities are loyal to Beijing and often directed by the Chinese embassy. And Confucius Institutes—14 of them in total in Australia—serve as vectors of influence by using Chinese-language education to indoctrinate students, “letting foreigners understand China on terms acceptable to official China.”

Hamilton’s book carefully tracks the Chinese Communist Party and its United Front Work Department in Australia. “United Front” may sound like something out of a quaint Cold War history lesson, a relic of old Comintern ambitions and the Soviet infiltration of U.S. and European peace organizations. But as Hamilton shows, the concept lived on in China. The difference today is that modern China does not appeal to communist ideology, but instead fosters fierce nationalism among Chinese communities abroad and creates financial incentives for foreigners to become corporate spies and political stooges. In other words, as Anne-Marie Brady, the China scholar who exposed the United Front’s activities in New Zealand, has argued, the goal of China’s United Front strategy is to “make the foreign serve China.”

The People’s Republic of China under General Secretary Xi has enhanced its United Front strategy, targeting both overseas Chinese communities and elites in democratic countries. The strategy is to prey on the vulnerabilities of democratic systems. In Australia, for instance, lax regulations on political donations have made it possible for Chinese benefactors to become the financial sugar daddies of the main political parties. It was not until the disgraceful downfall last year of Labor Senator Sam Dastyari, who took cues from Beijing on the South China Sea issue, that these connections were brought into the open. Huang Xiangmo, Dastyari’s donor, rightly observed in a Global Times piece that money was the “mother’s milk” of politics in Australia.

In the wake of the revelations, Prime Minister Turnbull spearheaded a political offensive on espionage, foreign agents, and foreign campaign donations, which is expected to lead to a major legislative overhaul. Meanwhile, though, China has continued to insert itself into the heart of Australian democracy. Hamilton describes how the United Front Work Department was at play in a crucial by-election in December 2017, threatening the continued majority of Turnbull’s government and the current legislation. Flipping an election and a democratic government would have been a major coup, so to speak, for the United Front in Australia.

In Hamilton’s analysis, the CCP is attempting to infiltrate Australian politics for two main reasons: first, to secure Australia as a reliable reservoir of resources to fuel China’s economy; and second, to drive a wedge between the United States and Australia. The second ambition should certainly raise eyebrows in Washington.

I would add that the larger goal is to make the world safe for continued CCP rule by muffling dissent at home and taming the debate about China in democratic societies, including in the United States. To stand up to China’s influence operations will require not only stronger legislation, but a greater understanding of the United Front on the part of policymakers and law enforcement.

Hamilton’s book—which draws on excellent, diligent Chinese-language research by Alex Joske—has already sparked a vital debate about the CCP’s influence in Australian politics, media, business, and academia. It shines a necessary spotlight on organizations with links to China’s United Front. Hamilton also highlights why Australia—which is economically reliant on China, has around 1 million citizens of varying Chinese origins, and every year receives a huge influx of Chinese students into Australian universities—is ripe for influence operations.

Hamilton knows that he will be accused of racism, particularly by pro-Beijing mouthpieces. To counter that claim, he invents his own term, “xenophobia phobia,” to describe the fear in multicultural societies of being called racist, which the CCP deftly exploits to its advantage. True to form, the Chinese embassy in Australia has already described Silent Invasion as “disinformation and racist bigotry” that showcases a “malicious anti-China mentality” and is “doomed to fall flat on its face.” This is consistent with CCP attempts to equate China with Communist one-party rule, squashing alternative accounts at home and, increasingly, abroad.

China, for example, prides itself on non-interference, yet forces democratic leaders to forgo meetings with the Dalai Lama, a Nobel Peace Prize laureate. Similarly, China put a deep freeze on relations with Norway and imposed punitive sanctions for the Nobel Committee’s award to Liu Xiaobo. Such forceful assertions of Chinese prerogatives have gradually become acceptable in the West, showing how the CCP can shape and tame our democratic choices.

The Chinese communities abroad are at the front lines of Beijing’s United Front work. The CCP has increasingly claimed them all as “sons and daughters” of the motherland. The CCP thus casts citizens of Chinese origin under a cloud of suspicion by claiming them—wrongly—as theirs. To combat this dangerous generalization, we must recognize the diversity in Chinese communities and protect dissidents and alternative voices within them, who are increasingly silenced even inside democratic societies.

One of the essential voices in this debate is Australian writer Qi Jiazhen, whom Hamilton interviewed for the book. After many years in Australia, Qi started writing about her harrowing stay in a Chinese prison as a young counter-revolutionary. Beijing has since tried to silence her through its influence over the local Chinese Writers Association in Australia, conducting a campaign of subtle intimidation to exclude her from public readings and discredit her work. Exasperated, Qi asks, “how can the Chinese Communist Party be so powerful in Australia?”

Hamilton goes beyond the Chinese community and spares no one. One chapter is cheekily titled “Beijing Bob,” a reference to Australia’s former Foreign Minister Bob Carr, who now runs a Beijing-funded think tank. Another chapter, “Friends of China,” is essentially a shooting spree directed at Australian elites across the whole political spectrum, from former Prime Ministers Hawke and Keating, portrayed as co-opted Chinese stooges, to Ross Garnaut, the former Ambassador to China who spearheaded the economic turn toward Beijing. (Garnaut’s son John, incidentally, has been instrumental in sparking the public debate on Chinese influence operations, through his investigative reporting on the matter and a short stint in government service.)

Two primary objections can be raised to Hamilton’s analysis in Silent Invasion. First, he arguably fails to distinguish between various “shades of red,” painting the Chinese threat in broad brushstrokes and failing to account for appropriate nuance. Second, and relatedly, his policy recommendations are often blunt and inflexible.

On the first count, even though Hamilton is careful about wording, he sometimes falls into the trap of equating Chineseness and the “Middle Kingdom” with the current CCP regime, thereby reinforcing the Party’s false claims to speak for all Chinese. Similarly, Hamilton’s criticism of the policy of allowing Chinese students to stay in Australia after the Tiananmen Square massacre in 1989 seems to miss another shade of red, implying that they transformed Australian universities into a hub of spy recruitment. And his casual estimates of how many Chinese-Australians serve Beijing—between 20 and 30 percent, he claims, while roughly 50 percent may be on the fence or have divided loyalties—are broad, arguably veering into dangerous “fifth column” fear-mongering. Hamilton runs up against the same difficulties faced by Western countries in the fight against terrorism: how to target a small group of fanatics while avoiding the wrongful castigation of a much larger group of democratic citizens.

As for policy recommendations, Hamilton talks about a ten-year effort to wean Australia off China. It is part of a larger debate which has also captured Washington’s attention: Does engagement with China still make sense? Nixon’s opening to China launched a decades-long bipartisan consensus in Washington in favor of engagement, but this is increasingly being questioned, including in the Trump Administration’s National Security Strategy. Hamilton does not engage in that debate directly, although his answer clearly leans toward the negative. The tricky question for policymakers is how to keep some positive aspects of engagement with China without succumbing to the fatalistic, preemptive obedience which Hamilton rightly mocks, and without perceiving China as the inevitable enemy either.

Hamilton also primarily emphasizes punitive responses to counter the United Front Work Department. In my view, a civil-rights-style protection is also needed to shield democratic citizens—particularly those of Chinese origin who may be especially susceptible to pressure from Beijing—from foreign authoritarian influence. This demands abandoning much of the hands-off attitude toward Chinese-majority communities that often prevails among Australian law enforcement. But it also demands that we not place all citizens of Chinese origin under automatic suspicion. Such an approach would be counterproductive and would play neatly into Beijing’s hands.

The foreign interference laws currently being debated in Australia’s parliament are a case in point. International human rights organizations such as Human Rights Watch, who might otherwise share the goal of curbing Chinese party-state influence aimed at intimidating dissidents abroad, find the current proposal too broad and sweeping, even potentially affecting their own ability to function in Australia.

This shows the need for finely tuned policy recommendations, where the nefarious elements of Chinese party-state influence are singled out and carefully targeted. The United States, too, needs a check-up: The last public debate and report on this issue came in the late 1990s, during the Clinton Administration’s “Chinagate” controversy. China has only grown in determination, resources, and capabilities since then, as the revelations of CCP influence in Australia prove.

Let’s hope that the sunlight shed on Chinese party-state influence operations in Australia can bolster a democratic revival locally and globally. In the years to come, we will all need to become much more color-perceptive about Beijing’s efforts to paint the world in 50 shades of red.


 My colleague Belinda Li and I attempt to do just that in an upcoming report for the Hudson Institute on how to deal with Chinese Party-state influence in American democracy.



The post Fifty Shades of Red appeared first on The American Interest.

Published on May 11, 2018 08:34

May 10, 2018

Indiana’s Gift to the International Order

Elinor Ostrom, the 2009 Nobel Laureate in economics, and her husband Vincent are not household names among foreign policy practitioners. But they should be. The cross-disciplinary research program that they built together at Indiana University, Bloomington (hence the term “Bloomington School”) offers important insights about the international order that can reinvigorate demoralized internationalists on the center-Right both in Europe and in the United States, providing them with a fresh agenda for the 21st century.

The Bloomington School, developed initially to shed light on natural resource economies and local public goods provision, takes a modest, down-to-earth approach. Institutional arrangements “that work in practice can work in theory” goes the informal “Ostrom’s Law.” Instead of providing complex accounts of how the world ought to work in theory, the Ostroms engaged primarily in field work and documented how communities solved various challenges—from policing in the town of Speedway, Indiana, through farmer-organized irrigation systems in Nepal, to the management of inshore fisheries in Nova Scotia. What the Ostroms saw were bottom-up, rules-based structures with multiple nodes of decision-making: “polycentric orders,” or “open systems that manifest enough spontaneity to be self-organizing and self-governing.”

Of course, Elinor Ostrom’s applied work is well known among experts on natural resource management and local governance. However, she and her husband saw polycentric orders as manifestations of a much more general phenomenon—self-governance—and applicable to a wide range of situations, including international affairs.

Citing Madison’s Federalist No. 51 and Tocqueville’s characterization of America as a place “where society governs itself for itself,” Vincent Ostrom argues that “aspects of polycentricity are likely to arise in all systems of social order because human beings are capable of thinking for themselves.” As a result, the goal of public policy is not to put in place ready-made solutions to social problems but simply to facilitate the emergence of diverse forms of associations—between individuals, firms, public organizations, and governments—that could respond to such problems adaptively. Such social units are “formally independent” but will choose “to take each other into account, functioning in mutually accommodating ways to achieve many different patterns of order.”

When applied to international relations, this view stands in contrast both to the atomistic view presented by neorealism and to the naive liberal internationalism that seeks to address humankind’s every ill by tools of top-down “global governance.” Western history is replete with examples of polycentric arrangements between states and state-like entities that curbed the power of local Leviathans and enabled their more or less peaceful coexistence.

“The constitution of the Holy Roman Empire,” Vincent notes, “evolved over a period of nearly a thousand years through processes of oath taking mediated through the Church amid struggles for papal and imperial supremacy. The rituals of investiture in both ecclesiastical and secular offices involved the acknowledgment of obligations to others.”

One can find many other illustrations of the same phenomena involving states and state-like entities, featuring multiple centers of power, rules, and the possibility of exit. The Hanseatic League, an outgrowth of informal commercial arrangements between traders in the Baltic Sea, became a highly effective trading bloc and a redoubtable European naval power, successfully challenging England—all without any central government or any organizational structures other than the “Kontore,” small clearinghouses providing legal services in Hanseatic cities.

Some critics (and defenders) of the euro see it as an unprecedented experiment in creating a common currency without a common government. But whether it is a good idea or not, a shared currency standard across states is not a historically novel idea. The gold standard of the 19th century acted as a polycentric, self-regulating system based on the existence of free capital flows and the convertibility of major world currencies into gold upheld by multiple governments.

The lesson for today is not that we must slavishly re-create these examples in the 21st century. The Holy Roman Empire, the Hanseatic League, and the gold standard disappeared from the surface of the Earth for good reasons. However, the ubiquity of such arrangements throughout history suggests that polycentricity is a powerful tool for rethinking and rebuilding the international order at a time when the existing platforms for international cooperation—whether they involve trade (the WTO, free-trade agreements), environmental policy (Paris Climate Accord), or security (NATO, JCPOA)—have come under attack from populist leaders.

Both internationally and locally, polycentricity encourages “contestation, innovation, and convergence toward mutually productive arrangements,” says Vincent Ostrom. In practice, successful polycentric governance requires a combination of several factors: a clear understanding of the issue it purports to tackle (in contrast with the meandering and evasively defined missions of current international organizations), rules adapted to the issue at hand and subject to change by the participants themselves, mechanisms for resolving disputes, and the possibility of exit from such arrangements.

More importantly, however, participants have to be autonomous in their own decision-making, instead of becoming building blocks in a hierarchical structure. That point matters because the interference of “global governance” in domestic politics has become a major point of contention and a source of justified criticisms by conservative scholars such as Jeremy Rabkin. In the U.S. context, new legislation requires a text agreed upon by the House, Senate, and the President. An international treaty, in contrast, only faces the scrutiny of the Senate and the President. In principle, the executive branch can sign on to various international commitments—including those with domestic ramifications—while leaving their ratification to a time when a Senate majority is sympathetic to the cause.

Complications arise in international affairs not only due to aspirational conventions, say regarding human rights (which more often than not reflect the pet liberal cause du jour), but also the more practical matters of non-tariff barriers, which blur the distinction between domestic policy and trade. Differences in regulatory rules and practices are a source of friction for businesses operating across national borders, and are therefore of interest to policymakers seeking to liberalize trade and integrate markets. At the same time, differences in domestic regulation derive from democratic processes and reflect genuine differences in voters’ preferences. In some countries, electorates are more risk-averse than in others, or place higher value on certain forms of environmental or social protection. To seek to eliminate such differences, through harmonization or mutual recognition, often creates an impression (not necessarily incorrect) that external forces are eroding democratically agreed-upon norms.

The trade-off between market integration and autonomous domestic policymaking in democracies has heavily favored the former in recent years. But the existing international order departs from the prescriptions of the “Bloomington School” in another way. The Ostroms’ view of polycentricity is a holistic one, encompassing human societies from the level of the individual up. Self-governing societies, understood as “richly nested assemblages of associations that include the diverse forms of association developed within and among units of government,” stand in sharp contrast with autocratic societies in which such forms of association, insofar as they exist, are subordinate to one dominant node of power.

Inevitably, “[s]ocieties that place substantial reliance upon polycentric patterns of order present contestable options that must necessarily challenge systems organized on autocratic principles,” writes Vincent Ostrom. Of course, “the American way […] is not the only way to achieve polycentric systems of order.” Yet, true international order is possible only among self-governing, free societies: “[t]he world cannot remain half free and half in servitude. Each is a threat to the other.”

The idea ubiquitous among defenders of multilateralism—namely that free societies and the world’s tyrannies can jointly build a lasting peaceful order—is an illusion. Instead, we must place a much greater premium on cooperation and common governance mechanisms among recognizably free and self-governing societies. That might well include building alternative fora (think a UN, but only for liberal democracies) and enforcing the rules of the game within structures such as the WTO, NATO, or the European Union, which ought to discriminate between their members on the basis of their fidelity to these organizations’ basic values.

Most critically, rebuilding the international order along polycentric lines requires conservatives and classical liberals to become comfortable with the idea of self-governing nations being nested in a larger cooperative order, as opposed to seeing them as solipsistic atoms floating in an anarchic international system. Taking that step would be not an aberration but a natural extension of those two intellectual traditions. As Vincent Ostrom puts it, “when we contemplate how the principles of polycentricity might apply to the whole system of human affairs, we are exploring the fuller implications of the American experiment.”


The post Indiana’s Gift to the International Order appeared first on The American Interest.

Published on May 10, 2018 13:41

The Lasting Power of Legacy Media

After nearly 80 years, Magyar Nemzet, Hungary’s last independent national daily, closed April 11. “It was so abrupt,” says Szabolcs Toth, a former deputy editor-in-chief and columnist. “I only learned about it from the website.”


Sudden as it was, the timing—just three days after Viktor Orban’s re-election with a constitutional supermajority—was less of a surprise.


Shuttering this moderately conservative paper was but the latest step in Orban’s slow march to consolidate control of Hungarian mass media. In 2016, allies of Orban’s Fidesz party bought and then promptly shut down Magyar Nemzet’s left-leaning rival Népszabadság. Scores of outlets have declared bankruptcy in recent years, and others were brought to heel by owners who opted for the path of least resistance—collaboration with Fidesz. 


Still, Magyar Nemzet’s case was unique. Owned by mogul Lajos Simicska, Orban’s childhood friend-turned-critic, the newspaper had earned special treatment. The government had exerted pressure for at least two years, indirectly subsidizing competitors with state advertising while hindering Magyar Nemzet’s political coverage by denying its reporters access to government officials. Toth watched as pro-government publishers, and their healthier finances, lured away career journalists.


“[Fidesz] regarded us as traitors, which was more dangerous than an opponent,” he says. “They tried to ruin us. Sometimes I ask myself if it was right to ask people to stay.”


Before all this, upon taking office in 2010, Orban purged state television and radio of independent thinkers and restocked the ranks with lackeys. “The public broadcasters are now similar to how they were in the 1960s, so primitive and such low quality, opposition parties have zero chance to present their views,” Toth says. Except for a handful of websites and the magazine HVG, nearly all Hungary’s big news media are now either owned by Fidesz loyalists or state-run. This was a major factor in the OSCE’s assessment that the April elections took place in an atmosphere of rampant “media bias,” in which Fidesz campaign tactics “limited space for substantive debate and diminished voters’ ability to make an informed choice.” 


The Orban playbook has worked well enough that, after its 2015 election victory, Poland’s ruling Law and Justice (PiS) charted what party leader Jaroslaw Kaczynski called a “Budapest in Warsaw” approach to governing. In the media realm, this meant converting public television and radio into propaganda mills, restricting reporter accreditations to parliament and (again) funneling state advertising largesse toward friendly outlets. Thus far, Polish private media, aided by a market that is four times larger than Hungary’s, is still robust, but it’s not for lack of trying by the government. Former Foreign Minister Witold Waszczykowski has accused the press of colluding with the rival center-right Civic Platform party on a “leftist program” to create a country “of cyclists and vegetarians,” and as recently as the turn of the year PiS pondered laws to curb foreign—meaning German—media owners. Kaczynski and company’s plan looks to be working thus far, as they would handily win any election held today.


In short, in both Hungary and Poland, the media have played a key role as authoritarian populist leaders gained and now maintain power. Notably, however, their success has come primarily by using traditional—not social—media. These tactics are effective because legacy media, brands that began with an offline presence, still exert exponentially more influence on political opinion pretty much everywhere—including online. In combination with larger turnouts by older voters, this means that control of legacy media, especially in the private sector, remains decisive for setting political narratives today. As the global advertising firm Ogilvy & Mather recently summarized: “The predicted demise of traditional media never materialized. In fact, traditional media is more important than ever.”


Though legacy media do harness digital distribution channels to spread content, come election time they remain the most active and influential sources of information. In Germany, during the 2017 election, broadcasters ARD and ZDF and the weeklies Die Zeit and Der Spiegel prompted the most engagement on social networks, according to a study by the Reuters Institute for the Study of Journalism at the University of Oxford. During the 2017 French presidential election, 88.43 percent of news-related tweets either originated from or discussed stories that appeared in legacy media. In the United Kingdom, a separate Reuters study defines the BBC, the Guardian, the Telegraph, and the Daily Mail as the biggest influencers on the media scene, and an August 2017 YouGov poll found that more people reported being influenced by actual print than social media during the election (23 percent versus 18 percent). In the United States, the Reuters Institute shows Breitbart’s influence pales in comparison to core media like CNN, the New York Times, and the Washington Post. It’s no wonder that Donald Trump dedicates so much time to attacking these very brands.


“When we look at the overall web, the nodes in the news networks are legacy media,” says Silvia Mayo-Vazquez, an Oxford scholar and one of the authors of the aforementioned studies. “They are the center of the flow of the audience.”


This means that not only do old guard media dominate offline communication, but they also account for disproportionate discussion online. More and more people do get news via smartphones and social media, but much of the content they interact with still comes from legacy media. Furthermore, plenty of people use legacy media to learn what is happening on social media: Think about the many hours of CNN airtime dedicated to Trump tweets. The accidental genius of the Trump communication strategy is not his use of Twitter, but rather the ability to convince legacy media that those tweets were news worth reporting.


In February 2016, in the heat of the presidential campaign, the MIT Media Lab calculated that Trump was the top election influencer, leading the metrics measuring social media influence. He also, however, led both statistical categories gauging impact on the mainstream news cycle. The report concluded Trump was a “master of both domains.” More strikingly, not only was Trump able to drive the mainstream news cycle, but in doing so he “succeeded in shaping the election agenda,” according to an analysis by the Berkman Klein Center at Harvard University. “Coverage of Trump associated with immigration, jobs, and trade was greater than that on his personal scandals,” the report noted. MediaQuant, an organization that tallies media mentions for business brands, found that Trump received $4.96 billion in free publicity during the presidential election campaign—mostly via the news churn of legacy media. “It may not be good for America, but it’s damn good for CBS,” CBS CEO and Chairman Leslie Moonves famously said of the election-related financial boon for his company.


Though it’s clear that Trump’s victory derived from his domination of narratives on- and offline, there is strong circumstantial evidence that in 2018 influencing legacy media coverage alone may still be enough. Until very recently, in Russia, Vladimir Putin largely ignored domestic web communication to focus on control of television. This works because even young people—54 percent of 18-24 year olds, and 72 percent of the overall population—use state television as their main source of news, according to a 2017 study by the Levada Center. Tempting as it is to dismiss this as a Russian anomaly explained by lower internet penetration, according to Pew Research, local television still reaches more American adults than any other news platform. At the same time, while 67 percent of Americans do receive some news from social media, just two in 10 report doing so often. Meanwhile, a paltry 6 percent of the people who get news on Twitter are over the age of 65—a demographic encompassing 50 million Americans, 53 percent of whom voted for Trump.


Social media use is no longer limited to young people, but the young are still more likely to rely on it as a primary news source. Plenty of older people heard about Trump calling Kim Jong-un “little rocket man,” but few encountered it via Twitter. Many saw it on television, and even those who did see it on Twitter may have encountered it via a legacy media tweet rather than from Trump’s account. No doubt, news consumption habits and demographics continue to evolve, but for now social media’s strongest political asset could be its ability to influence the legacy media news cycle. In the United States, the Baby Boomer generation still accounts for a larger number of voters than millennials (48.1 million voters in 2016 versus 34 million millennials) and votes at a higher rate (69 percent of eligible voters as compared to 49 percent).


Central Europe is a more extreme manifestation of this same phenomenon because these societies are aging faster than any in human history. For example, it is no surprise that more Hungarians use television to get news (72 percent) than use Facebook (64 percent), given that the number of people living there will fall 14.8 percent by 2050, according to the UN. In his election campaign, Orban actually used snail mail flyers and manual surveys to stoke fears of mythical Muslim hordes or Soros-directed conspiracies. “Low income, mostly rural voters decided this election by turning out en masse,” Toth said.  


Sound familiar?


In Poland (whose population is set to fall 15.1 percent by 2050) and Slovakia (down 8.9 percent), political control of public media is the biggest problem. “They didn’t tell us why they did it,” says Matus David, a longtime foreign news reporter for Slovak state television who was fired with three colleagues on April 27. “Every journalist now feels the same atmosphere of fear.” But Central European trends are all the more relevant as examples of naked political interest overwhelming private media too. In what remain open, export-oriented economies in the world’s largest single market, this is cause for concern among legacy media in the United States, Western Europe, and elsewhere, as authoritarian populists look to smash traditional institutions.


“Competition among [web-based] platforms to release products for publishers is helping newsrooms reach larger”—not smaller—“audiences than ever before,” a report by the Tow Center for Digital Journalism at Columbia University concluded last year. But those larger audiences no longer translate into proportionate growth in revenues. As of 2017, Facebook and Google were vacuuming up 84 percent of global digital advertising revenue (excluding China). This means that even as legacy media play an outsized role in setting political agendas, they operate in precarious financial conditions. This generally leads to market consolidation, as strong brands carrying debt are sold for cheap, often to owners with political motives. In Central Europe, this phenomenon accelerated after 2008 as foreign—again, German—media houses vacated the premises, selling to local actors amid flagging profits. Hungary ended up the most extreme, but not the only, variant. “We need an urgent discussion to repair the business model of mainstream media, because this could happen anywhere,” Toth, formerly of Magyar Nemzet, says.


In the Czech Republic, where public media maintain relative independence (for now), political actors invert the Hungarian and Polish models, and use media control as a vehicle to achieve political power. Most notably, billionaire Andrej Babis bought two daily newspapers in 2013, finished second in an election that year, and then first four years later (his ANO party won in October 2017 but has been unable to form a government thus far). Other murky business interests followed suit—some with designs on political influence of their own, the rest hedging their bets against Babis. When the powerful Penta investment group bought a chain of regional papers in 2015, cofounder Marek Dospiva referred to the purchase as an “atomic shield.”


“I am not going to beat around the bush,” he said at the time. “The fact that we own media gives us the certainty that it will be harder for anybody else to irrationally attack us.” 


In the United States, it’s easy to find similar patterns, and decades of media consolidation offer ample opportunity for abuse. The so-called Big Six media companies—Comcast, Disney, Time Warner, CBS, Viacom, and 21st Century Fox—have long accounted for an unhealthy share of media output. With Disney set to buy 21st Century Fox and AT&T eyeing Time Warner, more consolidation is ahead. While not all these firms overtly push a political cause, raising the quality of public discourse still ranks far behind boosting dividend payments as a guiding ethos, which can have an equally toxic impact on the health of the republic. “This is going to be a very good year for us,” CBS’s Moonves said in that same 2016 appearance. “Sorry. It’s a terrible thing to say. But, bring it on, Donald. Keep going.”


In local television—again, the single most important news platform for American adults—Sinclair Broadcast Group’s recent scandal had a Central European air about it. Amid revelations that the company ordered dozens of news anchors to read Trumpist talking points on air, Sinclair is seeking approval to buy Tribune Broadcasting’s television operations—a merger that could eventually encompass 233 stations, reaching 40 percent of American households.


Viktor Orban would be proud.  


The post The Lasting Power of Legacy Media appeared first on The American Interest.

Published on May 10, 2018 11:01

The One Theory to Rule Them All

Why Liberalism Failed

by Patrick J. Deneen

Yale University Press, 2018, 248 pp., $30


How bad are things? This question was the title of a 2015 post by Scott Alexander on his popular blog, Slate Star Codex. In it, Alexander, a psychiatrist, notes that his work in medicine gives him knowledge about people’s lives that others often lack. He finds that many people’s lives—even the lives of the comparatively wealthy—are in worse shape than we might suppose. Alexander:


I work in a wealthy, mostly-white college town consistently ranked one of the best places to live in the country. If there’s anywhere that you might dare hope wasn’t filled to the brim with people living hopeless lives, it would be here. But that hope is not realized. Every day I get to listen to people describe problems that would seem overwrought if they were in a novel, and made-up if they were in a thinkpiece on The Fragmentation of American Society….

This is also why I am wary whenever people start boasting about how much better we’re doing than back in the bad old days. That precise statement seems to in fact be true. But people have a bad tendency to follow it up with “And so now most people have it pretty good.” I don’t think we have any idea how many people do or don’t have it pretty good. Nobody who hasn’t read polls would intuitively guess that 40-something percent of Americans are young-Earth creationists. How should they know how many people have it pretty good or not?

I think about all of the miserable people in my psychiatric clinic. Then I multiply by ten psychiatrists in my clinic. Then I multiply by ten similarly-sized clinics in my city. Then I multiply by a thousand such cities in the United States. Then I multiply by hundreds of countries in the world, and by that time my brain has mercifully stopped being able to visualize what that signifies.


Alexander is not alone in pondering how bad things are; these days, the question seems to occupy many of us. Despite the unprecedented levels of material wealth our society enjoys, a nagging sense that things are not going so well haunts the national conversation. Some worry about economic inequality; others about a crisis of masculinity. Some fear that college students are turning fascist; others that secularist forces are attempting to destroy religion. Concerns over the rise of illiberal populism have launched a thousand think pieces. Pick your slice of society—religion, economics, politics, culture—and you will find people arguing that something has gone seriously wrong in the 21st-century West.

Patrick Deneen, a political philosophy professor at the University of Notre Dame, sees his own discipline (of course) as the key to understanding our present dysfunctions. He stands with those who think things are pretty bad, and he has his own theory about the culprit: the political philosophy known as “liberalism.”

What is liberalism? For Deneen, liberalism does not encompass only the tenets or policies associated with the Democratic Party. Rather, he refers to the more fundamental political philosophy that shaped America’s founding, and that both the American Right and Left share.

Liberalism, argues Deneen, displaced earlier classical and Christian theories of political life, ushering in the political habits and forms we now think of as natural. “Protoliberal” thinkers like Machiavelli, Descartes, Francis Bacon, and Hobbes paved the way for liberalism by engendering intellectual “revolutions.” These revolutions included defining politics down so that it is based on “realism” about human selfishness rather than “idealism” about human virtue (Machiavelli), exalting “individualistic rationality” over the power of “irrational” custom and tradition (Descartes, Hobbes), and advocating for a more domineering and extractive relationship to nature (Bacon).

These thinkers created a space for later philosophers like Locke (“the first philosopher of liberalism”) to formulate liberalism proper—specifically, the variety of liberalism known today as classical liberalism. This is the philosophy we tend to associate with American founding principles, one which embraces social contract theory, the centrality of individual rights as political guardrails, and the free market.

After classical liberalism came “progressive liberalism,” which Deneen believes flows directly from its forebear. Progressive liberalism is “inspired by figures like John Stuart Mill and John Dewey” and shares certain assumptions with classical liberalism, but extends or applies them in new ways. For example, whereas classical liberals saw nature as malleable for purposes of human enrichment, progressive liberals extend that analysis to human nature, which they see as also subject to human manipulation. For Deneen, classical liberalism and progressive liberalism today dominate our politics together, and though we tend to think of them as opposed to each other, on the fundamental level they share much in common.

Such is the (abbreviated) genealogy of liberalism; let us return to its central tenets. “The deepest commitment of liberalism,” writes Deneen, “is expressed by the name itself: liberty.” But for Deneen, it is liberalism’s theory of liberty that divides it so completely from its predecessors: It takes the concept of liberty, which he believes predates liberalism, and “colonizes” it with radically different intellectual content. Pre-liberal liberty, in Deneen’s view, did have a place for “individual free choice.” But one exercised that choice in the context of one’s existing relationships and unchosen obligations, while also influenced by virtue, custom, and one’s surrounding community.

Deneen identifies the “most basic and distinctive” aspect of liberalism as its commitment to “the idea of voluntarism—the unfettered and autonomous choice of individuals.” In this view, politics arises when human beings consent to sacrifice some of their natural autonomy in order to set up a political system that will guard them from others’ infringement of their liberty. This leads to a conception of all human relationships as subject to consent given “on the basis of their service to rational self-interest.” Liberalism focuses solely on the individual’s free choice, neglecting to account for “the impact of one’s choices upon the community, one’s obligations to the created order, and ultimately to God.” Individuals become “rational utility maximizers,” accepting or rejecting all relationships, obligations, and political orders based upon narrow self-interest.

Yet liberalism does more than this, in Deneen’s opinion: It also fosters a “war against nature” in which humans seek to master their environment for the sake of economic and technological progress; it seeks to release the desire for food and sex from “the artificial constraints of culture”; it supports the creation of a centralized progressive state to liberate the individual from all unchosen bonds; it changes our perception of time so that we become “presentists,” without regard for past or future; it corrodes the place of the liberal arts in the university by redefining liberty in such a way that these preparations no longer seem necessary to its achievement. These liberal innovations, for Deneen, underlie many of the dysfunctions of the West today. For example, he relates the “brain drain” in small towns and rural areas, and the resulting economic balkanization, to the efforts by liberal thinkers to replace the “old aristocracy” of “inherited privilege” with a new meritocratic aristocracy.

His diagnosis complete, Deneen turns to the treatment. Yet though his diagnosis is intellectual, tracing concrete problems to their roots in abstract ideas, his proposed cure runs in the opposite direction: Deneen argues that we should not replace liberalism with another political philosophy. Instead, “we should focus on developing practices that foster new forms of culture, household economics, and polis life.” He goes so far as to say that “the impulse to devise a new and better political theory in the wake of liberalism’s simultaneous triumph and demise is a temptation that must be resisted.” Instead, he suggests, what we need is “not a better theory, but better practices” in local communities. Deneen has few concrete examples of what such practices might be, though he does say it is important “to do and make things for oneself,” and to obtain “the skills of building, fixing, cooking, planting, preserving, and composting.”

These recommendations, as far as they go, are sound. But there is a tension in Deneen’s book; his account of American history casts doubt on the claim that practice, absent theory, will save us. In his view, Americans historically lived better than their theory. Though the Founders conceived America as a liberal country—the Constitution is a liberal document and The Federalist Papers contain liberal reasoning—Deneen argues that ordinary citizens did not always act like liberals. “Americans, for much of their history, were not philosophically interested in Burke but were Burkeans in practice,” he writes at one point. Furthermore, Deneen notes,


Writing of the township democracies he visited during his journey to America in the early 1830s, Alexis de Tocqueville expressed amazement over the intense commitment Americans exhibited toward their shared civic lives…Tocqueville observed practices of democratic citizenship that had developed antecedent to America’s liberal founding. Its roots and origins, he argued, lie in the earlier Puritan roots of the American settlement…


In other words, earlier America had liberal theory and nonliberal practice—and theory beat practice hands down. Liberalism dissolved traditional practices and remade society in its own image; it did not stay a merely official philosophy but came to permeate the lives of ordinary citizens. Practice did not “filter up” and soften theory; theory “filtered down” and corrupted practice.

In light of theory’s victory, why does Deneen expect practice to win in round two? Why not seek a new theory, if political philosophy is so powerful a weapon? Here Deneen seems to equivocate a bit. He writes that “the search for a comprehensive theory is what gave rise to liberalism and successor ideologies in the first place,” and therefore we should avoid such a search this time. However, he also says of the intentional, local communities he recommends that “it is likely from the lessons learned within these communities that a viable postliberal political theory will arise, one that begins with fundamentally different anthropological assumptions….”

These arguments raise some questions. For example, wouldn’t we call the Christian and classical political traditions themselves attempts to “search for a comprehensive theory”? Liberalism hardly stands alone in seeking a robust theory of political life. Nor have such searches ended in the modern day or dissolved into the liberal project. Today, the social and political principles of Catholic thought—known as Catholic Social Teaching (CST)—offer an alternative theory to liberalism. Furthermore, it is a living theory; Catholic scholars are constantly engaged in developing, extending, and deepening CST. In addition, when Deneen does concede the desirability of some better theory, he regards it rather passively: A viable theory “will arise.” Yet how does a theory arise if nobody formulates it—if nobody, in Deneen’s words, searches for it?

It seems that Deneen does not oppose the efforts of philosophers to formulate a better theory per se, but rather the attempt to produce one first and above all. He argues that:


Calls for restoration of culture and the liberal arts, restraints upon individualism and statism, and limits upon liberalism’s technology will no doubt prompt suspicious questions. Demands will be made for comprehensive assurances that inequalities and injustices arising from racial, sexual, and ethnic prejudice be preemptively forestalled and that local autocracies or theocracies be legally prevented. Such demands have always contributed to the extension of liberal hegemony…


Deneen gives what appears to be a tactical reason for focusing on practice: Liberal opponents would use the effort to formulate a better theory as a pretext to shut down any attempts to humanize our society. So while the time for theoretical formulation may eventually come, for now we should keep our ideas on the back burner.

Perhaps, for Deneen, people simply are not yet ready for a better theory—whether to formulate or to receive it. Such a view harmonizes with a classical insight: Truth and virtue go together. The ability to discern and accept a better theory than liberalism might depend on a prior habituation in virtue that the average citizen of the West simply hasn’t had. We could argue that Western citizens, even those critical of liberalism, must prepare for postliberalism by first developing virtue through the practices Deneen describes. Perhaps his idea that theory will emerge from intentional communities has some weight, as those communities will inculcate healthy habits of behavior that will lead to wise habits of mind.

We might find this claim reasonable, and yet also find it wanting. The title of Deneen’s book is Why Liberalism Failed. For Deneen, liberalism is collapsing under the weight of its own success. He sees two possible ways this could resolve itself. First, liberalism could become more intensely anti-democratic, “imposing the liberal order by fiat.” But he notes that this solution would create instabilities, and suggests that another resolution is possible—“the end of liberalism and its replacement by another regime.” Deneen thinks that either of these could happen, but that neither “is to be wished for in the form it is likely to take.”

In other words, if liberalism is replaced, that replacement will probably be even worse than liberalism itself. In light of that prediction, why shouldn’t critics of liberalism at least try to offer something better? Deneen may believe that any such attempt will necessarily be so disastrous that all we can or should do is endure the next stage of our political, social, and cultural life, and prepare as best we can for better days. No doubt a postliberal pursuit of a better regime would take many wrong turns and be compromised by the realities of our situation, not to mention our unavoidable human weaknesses. But is disaster so inevitable that we must preemptively cede the future to whatever comes next, no matter how undesirable?

If postliberals have any chance at ushering in a better regime than the current one, then it seems worth pursuing. Such a pursuit is not incompatible with the localist strategy Deneen advocates. Recently, the Bruderhof community in Walden, NY (the Bruderhof are a global Anabaptist movement that holds all property in common and seeks to live the life of the early Christians described in the Book of Acts) held an event on their property called “Beyond Liberalism: Community, Culture, and Economy.” Deneen himself was a speaker, as were Ross Douthat and Bill Kauffman, with Rod Dreher moderating a panel and contributing to the discussion. The Bruderhof are deeply localist and communal in their life; in Walden they run their own schools and have their own factory. And yet they also committed themselves to hosting this conversation with national figures on topics that touched upon politics and the very future of our country.

The weekend did not produce the blueprint for a successor regime, but it showed that one need not choose between theory and practice. A central task for postliberalism going forward is finding creative ways of combining the two.


Modern Catholic Social Teaching is conventionally dated to Pope Leo XIII’s encyclical Rerum Novarum, published in 1891, and that tradition of reflection has continued, most recently with Pope Francis’s encyclical Laudato Si’ (2015).

I recently attended an event at which Deneen, when pushed on this question, spoke favorably of a political program that would include a restrained foreign policy, recognition of the importance of religious faith, and an economy that distributed productive property more widely. Rod Dreher summarizes Deneen’s remarks at the event here.

This is the event I mentioned in the previous footnote.



The post The One Theory to Rule Them All appeared first on The American Interest.

Published on May 10, 2018 09:09
