Peter L. Berger's Blog

September 24, 2019

It’s Ukraine That Loses in Whistleblower-Gate

President Trump on Sunday essentially admitted to inviting Ukrainian officials to interfere in the 2020 U.S. election. Trump acknowledged that in a July 25 phone call with Ukraine’s new president, Volodymyr Zelensky, he urged him to launch an investigation into Democratic presidential candidate and former Vice President Joe Biden. Democrats argue that this is an impeachable offense; Trump supporters claim that it is Biden who is guilty of inappropriate behavior. Meanwhile, Ukraine and its new president are caught unenviably in the middle.

The American public may have started paying attention only recently amid reports that a whistleblower complained about the content of phone conversations Trump had with a foreign leader, possibly more than one head of state. But Ukraine and its new president have been victims for several months of a relentless pressure campaign from the President’s personal attorney, Rudy Giuliani.

I warned on these pages two weeks ago that Trump was holding Ukraine hostage to his own political interests. The situation has only gotten worse.

Giuliani has sought to turn the Russia collusion narrative on its head, arguing that it was Ukrainian collusion with the Clinton campaign, not Russian collusion with the Trump campaign, that should be investigated both in Kyiv and Washington. Giuliani cites a former Ukrainian parliamentary deputy’s exposure of payments made to former Trump campaign manager Paul Manafort as proof of such interference. That deputy, Serhiy Leshchenko, refuted Giuliani’s charges in an op-ed Sunday in the Washington Post.

Giuliani also has accused Biden of inappropriately interfering in a Ukrainian investigation into a controversial energy company with which Biden’s son, Hunter, was connected. Biden played a key role during the Obama Administration in supporting Ukraine after the Euro-Maidan revolution in 2014. The Obama Administration was right to press for the dismissal of then-Ukrainian prosecutor general Viktor Shokin, as I, from my think tank perch at the time, and other supporters of Ukraine were urging in 2016, too. Shokin was an obstacle to fighting corruption.

Giuliani’s allegation, furthermore, has a major flaw: Shokin’s successor, Yuri Lutsenko, acknowledged having no evidence to support any wrongdoing by Biden or his son.

And yet no matter how unfounded Giuliani’s charges might be, Biden the candidate will emerge politically damaged from this. Hunter Biden indeed showed terrible judgment in associating himself with Burisma, the controversial energy company, in a country where his father was actively engaged—a view shared by a number of Obama Administration officials. That is now coming back to haunt his father, even though neither did anything illegal. Politics, alas, is a brutal sport.

At the same time, Giuliani seems unconcerned with the damage he is doing to Ukraine and its relationship with the United States. His smear campaign has badly damaged the image of Ukraine as he tars it as a helplessly corrupt country. In reality, Ukraine has a huge corruption problem, but it is anything but helpless. Millions of Ukrainians gave Zelensky a huge mandate in presidential and parliamentary elections earlier this year. And Zelensky won by promising to launch major reforms in battling corruption. Instead of focusing on this worthy goal, Ukraine’s new leadership is dealing with an unwelcome and unnecessary crisis in U.S.-Ukraine relations.

Making matters worse, Zelensky badly needs military support in fending off ongoing Russian aggression, with more than 13,000 people killed as a result of Putin’s invasion and illegal annexation of Crimea and close to 2 million Ukrainians displaced in the fighting. And yet Trump held up the latest tranche of military assistance, in what may have been an attempt to leverage Ukraine to launch an investigation. Under Congressional and public pressure (thanks especially to a Washington Post editorial), the military aid finally was released.

Ukraine cannot afford to lose—or even appear to be losing—the backing of the United States. Zelensky also needs international help in tackling the problem of corruption. Controversial oligarch Ihor Kolomoisky, who owned the station on which former comedian Zelensky’s hit show appeared, returned to Ukraine after the election and recently met with the new leader, asserting his own importance in decision-making for Ukraine. Zelensky, by all appearances a capable and agile politician, needs positive reinforcement from the West to push back against Kolomoisky’s efforts to reestablish himself as a power behind the throne.

After promising an Oval Office invitation to Zelensky and being one of the first to congratulate him on his electoral victory, Trump apparently has held that meeting hostage to Ukrainian acquiescence to his and Giuliani’s demands. Their meeting this week in New York on the margins of the UN General Assembly is a serious downgrade from a visit to the White House.

Recent stories in the Washington Post and the New York Times likely alarmed those in Kyiv. First, the Post:


A former senior administration official who repeatedly discussed the issue with Trump said that the president thought “what we were doing in Ukraine was pointless and just aggravating the Russians.”

“The president’s position basically is, we should recognize the fact that the Russians should be our friends, and who cares about the Ukrainians?” said the official.

Then this in the Times:


Privately, Mr. Trump has had harsh words about Ukraine…“They’re terrible people,” he said of Ukrainian politicians, according to people familiar with the meeting. “They’re all corrupt and they tried to take me down.”

One person who must be smiling at all this is Russian President Vladimir Putin. Unlike the Russian collusion story, where Putin and his acolytes played an active role in interfering in the 2016 election, Putin does not appear to have played a direct role in this new scandal—at least so far. With declining support at home, Putin could not ask for a better gift than to see U.S.-Ukraine relations deteriorate and Zelensky on the ropes.

Congress has a responsibility to get to the bottom of this scandal immediately, no matter where it leads regarding President Trump. It also must reassure Ukraine that the United States stands steadfastly with it at this critical juncture, so that Putin does not infer from Giuliani’s and Trump’s words and deeds that it is open season on his neighbor.


The post It’s Ukraine That Loses in Whistleblower-Gate appeared first on The American Interest.


The Personality of American Power

Editor’s Note: This essay is the fourth in a series on American Ideals and Interests. The first essay, Tod Lindberg’s “Moral Responsibility and the National Interest,” can be found here. The second, David J. Kramer’s “Human Rights Problems a Commission Won’t Solve,” can be found here. The third, Adam Garfinkle’s “Is Pompeo’s Rights Commission More or Less Than Meets the Eye?,” can be found here.

Jeffrey Goldberg, editor-in-chief of The Atlantic magazine and amanuensis in 2016 of the Obama Doctrine—“Don’t do stupid shit!”—recently applied his skills of strategic divination to our current commander-in-chief. He boiled the Trump Doctrine down into a similarly pithy and profane formula: “We’re America, Bitch!”

A more nuanced explication—or inference—of the president’s strategy comes from the Hoover Institution’s Victor Davis Hanson, a classicist and distinguished historian, who has made The Case for Trump at book length. Hanson allowed that “the verdict by mid-2018 was still out” and that “Trump’s first few years were . . . marked by a number of setbacks,” but that the president had scored a win with China, “given that, for the first time in memory, the United States talked credibly about reexamining the entire asymmetrical trade relationship between Washington and Beijing.”

This “realist” reading of Trump might equally be applied to his predecessor; from the political Left and Right, the two arrived at a similar America First or, in the argot of political science, “offshore balancing” posture, a prudent tending to the balance of international power. Both administrations saw themselves as redressing the excesses of post-Cold-War hubris, expressed most egregiously in the 2003 invasion of Iraq. Ever the hipster intellectual, President Obama’s contribution to a White House show with the cast of the Hamilton musical was a reading of George Washington’s “Farewell,” the 1796 address most remembered for its warning against “entangling” foreign alliances.

Yet, except for the late 1920s and early 1930s, Americans have almost never—and never for very long—thought it wise to turn too much away from world events. The Founders itched for the day, which they believed to be just around the corner, when they could muscle their way to the top of the geopolitical pole. The real money quote from Washington’s “Farewell” is: “[T]he period is not far off when we may defy material injury from external annoyance . . . when we may choose peace or war, as our interest, guided by justice, shall counsel.”

“Guided by justice”—this has been the governing principle of American strategy-making, or strategic culture, not simply since 1776 but since the first English colonists splashed ashore on Roanoke Island in 1585. Ideology—one derived from the Reformation struggles of European Protestants, combined with a global understanding of power, an expansive imperial impulse, and frontier fears of proximate threats—has propelled the English-speaking peoples of North America through centuries of often horrible conflict. Modesty in international affairs does not make great powers, let alone “sole superpowers.” Nor do abundant natural resources or oceans’ remove from Eurasian continental affairs or other material considerations alone suffice to explain why Americans have behaved—that is, wielded power and especially military force—as they have done. It is very difficult for the United States to be a “satisfied” power, as political-science realists would like it to be. It is not in our stars, but rather in ourselves, in our habits of mind and experience, to be perpetually unsatisfied with national interest alone, but be guided by our sense of justice.

It may be that the policies of the last two presidents represent an epochal shift in the American way of strategy for which the past, immediate or distant, is not prologue. And yet the weight of the American past is a heavy thing, not easily laid aside. While there have been periodic moments when the ideological enthusiasm for liberty—be it individual, national, international—has dimmed or deferred to the need for security and the desire for prosperity, it would be a momentous change indeed if this proved permanent.

The history of Anglo-American strategy-making is marked by halting patterns of reform—adaptation to critical geopolitical and military circumstances—leading to restoration, to the discovery of new ways and means to advance traditional ends. It seems more likely that the current era marks another cycle of adaptation to be worked out, possibly over decades, but such reforms are most likely to lead to a strategic restoration rather than a realist revolution.

In July 1947, the magazine Foreign Affairs published “The Sources of Soviet Conduct.” Writing as “X,” State Department official George Kennan intended the article, which reprised points he had made in several official memoranda, including the so-called “Long Telegram” the previous year, to be an explanation and guide to understanding Soviet strategy and behavior. He aimed to describe the “political personality of Soviet power,” an effort he called a “task of psychological analysis” to discern a “pattern of thought” and the “nature of the mental world of the Soviet leaders.” If Soviet “conduct is to be understood”—and, as a matter of American strategy, “effectively countered”—it required not only a grasp of the principles of Soviet ideology but the effects of “the powerful hands of Russian history and tradition.” Kennan thus argued that Josef Stalin and other Soviet leaders saw international politics and the struggle for power through a unique set of lenses, lenses that might filter and distort even nature’s purest colors and shapes. It mattered less what wavelengths objects reflected than what wavelengths appeared to Russian eyes.

Ironically, Kennan might be said to have had more empathy for the sources of Soviet and Russian conduct than he did for that of the United States. As he lost the struggle over Cold War policy within the Truman Administration to a more deft group of intellectuals led by Paul Nitze, Kennan began to see America as a kind of strategic brontosaurus, “slow to wrath” but once provoked liable to cause much collateral damage beyond just subduing the threat.

Kennan’s contemporary Hans Morgenthau thought that the ideological impulse in American strategy needed to be not just bridled but destroyed. It was a “nefarious trend of thought.” He lamented the fact that the American political establishment had a “bias against a realistic approach” to power.

Modern realist scholars often follow in Kennan’s and Morgenthau’s footsteps. “Why,” wonders Harvard political scientist Stephen Walt, “is a distinguished and well-known approach to foreign policy confined to the margins of public discourse, especially in the pages of our leading newspapers, when its recent track record is arguably superior to the main alternatives?” Why, indeed? Distinguished and well-known, realism and other materialist schools of thought would appear to be familiar yet uncongenial to the American mind. And worse, they don’t appear to either explain or predict actual American behavior. 

The source-code for American thinking about strategy was written centuries ago in Elizabethan England. Or, to be more precise, Elizabethan “Britain,” a place of myth about the heroic past and a proposition for a glorious future. The political elites of the Elizabethan age were profoundly aware that England was riven by domestic dissent and disorder, driven from its last toehold on the European continent, with an impoverished and arthritic government, a minuscule and antiquated military, and unsure about which of the two continental great powers, Spain or France, was the larger threat; England feared it was “a bone thrown between two dogs,” as historian James Anthony Froude put it. The comfortable dynastic framework of late medieval and early modern European politics was being torn apart by the passions of Protestant Reformation and Catholic Counter-Reformation. England was also decades behind Spain and Portugal in translating New World exploration into Old World wealth, military might, and political influence. England’s very independence rested upon its ability to play an expansive game of ideological and confessional global power. To grant Philip II of Spain, who engraved his coins with “The World Is Not Enough,” his desired sphere of influence would be suicidal.

To be sure, Elizabeth chafed at being cast as the leader of the Protestant cause throughout Europe. Protestantism by its very nature tended to dissent; beyond the break with Rome, the doctrinal differences between Lutherans and Calvinists were already making the international “Protestant interest” a herd of cats, and that herd would grow more feral, fractious, and fissiparous as the reformed faith put down roots across northern Europe and throughout the British Isles. Protestantism also carried with it a whiff of republicanism, or at least anti-authoritarianism, particularly on the part of the Dutch; indeed, for more than a century the English and Dutch would have a strategic love-hate relationship. The Anglican via media reflected the queen’s own religious views (and she had well-reasoned and well-informed ideas) and was a political compromise. That made it a sometimes-wobbly platform for strategy, both domestically and internationally. But the power of a guiding and shared sense of justice—God’s “providence”—made a common cause possible.

During Elizabeth’s five decades on the throne, a rough set of strategic priorities took root in the quest to realize her just claims. The first concerns were about the durability and legitimacy of the regime at home. Thus the primary principle of English strategy was to secure the queen’s ability to govern domestically. Even in the late sixteenth century, this was a question of asserting London’s writ throughout England and Wales. It was also a matter of asserting the primacy of the Protestant faith. 

The second set of strategic priorities for Elizabeth and her Privy Council was Scotland and Ireland. Unless these “postern gates” were closed to French, Spanish, and Popish influence, and friendly, Protestant local regimes put in place, England faced an existential threat.

Elizabeth, like her predecessors and successors, preferred to fight her great-power battles not at home but away. The saga of the Gran Armada of 1588 highlights the role of rising English naval power during the period, but it is better to see this third element in Elizabethan security architecture as encompassing not just the Channel, the “Narrow Seas,” but the eastern Atlantic from the North Sea through the Bay of Biscay and the related coastal parts of Europe from Holland to northern Spain and Portugal. Indeed, although Elizabeth did her best to avoid and to limit English land-force engagements in continental Europe, the need for commitments of men and money proved constant. As Brendan Simms has convincingly argued in his magisterial Three Victories and a Defeat: The Rise and Fall of the First British Empire, a clearer picture of England’s strategic and operational reasoning emerges when the region is taken as a whole, a “moat-and-counterscarp” system intended to add some strategic depth to an exposed England. The situation in Europe and Philip II’s hegemonic ambitions made offshore balancing too risky. The “wooden walls” of English ships could not secure every Elizabethan interest, nor preserve the Protestant faith. It was in this larger western European theater of operations, both maritime and continental, where England’s status as a great power would be measured.  

These domestic and western European priorities, however, were nested within a truly globe-spanning appreciation of power. Thus, in addition to lusting after the mountains of Aztec and Inca treasure that financed Philip, Elizabethan expansionism included, on a small but still important scale, the first English attempts at a permanent lodgment in this New World. Englishmen also understood that the Spanish had a head start not only in exploiting the riches of the New World but in Catholicizing it. Bringing the reformed faith to the indigenous peoples of the Americas may have been more moral justification than motivation, but it was a constant theme of the Elizabethan strategic conversation. As Richard Hakluyt, both the most distinguished collector of writings on exploration and a sometime intelligence agent, put the case to the queen in the “executive summary” of his Discourse of Western Planting of 1584:


The Spaniards govern in the Indies with all pride and tyranny; and like as when people of contrary nature at the sea enter into gallies, where men are tied as slaves, all yell and cry with one voice, Liberta, liberta, as desirous of liberty and freedom, so no doubt whensoever the Queen of England, a prince of such clemency, shall seat upon that firm of America, and shall be reported throughout all that tract to use the natural people there with all humanity, curtesy, and freedom, they will yield themselves to her government, and revolt clean from the Spaniard….


By Elizabeth’s death, the queen’s strategic ambitions, despite her caution and conservatism, had created a moment of imperial overstretch for which her immediate Stuart successors would pay a heavy price. Yet she had set goals that could not be easily renounced. Englishmen extolled Elizabeth as “Gloriana” not because they remembered her reign as peaceful—it was not at all peaceful—but because they remembered their aspiration to greatness and the securing of the “liberties,” both at home and abroad, which they held dear. Her subjects might grumble about failure or the cost in blood or taxes, or divide themselves into faction, but they could not accept a lowering of sights. 

The first Stuarts, James I and Charles I, lacked both Elizabeth’s strategic sense and her political sensibility. Unlike their Habsburg and then Bourbon competitors, early British monarchs could not play the game of thrones without cajoling their Parliaments to finance them. And the price included Parliament debating the arcana imperii that the Stuarts regarded as their absolute domain, the rights bequeathed to them alone by God. The Stuarts’ unwillingness to lead the Protestant alliance during the end-of-days struggle of the Thirty Years’ War—and their manifest military incompetence—provoked a series of civil wars across their three kingdoms and cost Charles not just his crown but his head. In the view of the victorious Parliamentary leaders, it was Charles who was the revolutionary; their military, fiscal, and governmental reforms were in service to restoring an essentially Elizabethan approach to strategy.  

Charles’s sons, Charles II and James II, never forgot their father’s fate, but neither did they learn from it. James, in particular, was too impressed by his time in France and proximity to Louis XIV. Attempting to model his British—and expanding North American—empire on the Sun King’s formula, James also wished to reach a strategic modus vivendi with France, renouncing continental interests in return for colonial expansion. In the end, his subjects invited the Dutch stadholder, the Prince of Orange, to invade England, and then made him a British William III. 

This “Glorious Revolution” was also a restoration of the Elizabethan imperial tradition. It likewise firmly planted the North American colonies as part of the imperial equation; what was “The Nine Years’ War” or the “War of the Grand Alliance” in Europe was “King William’s War” in the New World. And it reflected a changed great-power reality: Bourbon France, not Habsburg Spain, was the new hegemonic danger. This justified a host of revolutionary imperial reforms. At home, this meant a new regime, bound more firmly by Parliament and to be secured by a second Protestant succession, this time by the German Hanoverian line. It also meant a revolution in state-building and, especially, state finance; the Bank of England allowed William to borrow his way to great military power. The king, the bank, and the army were all Dutch imports.

These changes enabled a return to the Elizabethan form of strategy; the appeasement of France and neglect of the European balance of power—offshore balancing, Stuart style—was on the outs. William presented both his invasion of England and the otherwise dreary and indecisive contest with France as a defense of the Protestant Cause; the Peace of Westphalia’s attempt to “de-confessionalize” international politics cut rather less mustard with Britons, especially those Britons on the wild and howling imperial frontier in the New World, where French Jesuit priests inspired and enabled the frightening and seemingly barbaric Indian way of war. Their ideological fervor was of the Cromwellian kind. The Reverend Philip Vincent rationalized the Massachusetts Puritans’ burning to death of hundreds of Pequot women and children—but precious few warriors: “Severe justice must now and then take place.”

“King William’s War” was followed by “Queen Anne’s War”—the War of the Spanish Succession in Europe—then “King George’s War”—the War of the Austrian Succession—and finally the smashing victory of the French and Indian War—the Seven Years’ War. The period from 1688 to 1763, and the Treaty of Paris that recognized the global and first British Empire upon which “the sun never set,” also marked the increasingly global nature of the conflict, as well as the increasing importance of the American theater.

The paramount victory of 1763, however, revealed a profound difference of imperial opinion in the two poles of the British Atlantic world. George III, who had inherited both the government and global strategy of William Pitt from his grandfather, saw these conquests as a punctuation, an “end state” to be sustained, preserved and paid off. British colonists in North America, like Pitt, saw the victory more as an opportunity to exploit. As that arch-imperialist Benjamin Franklin put it in 1760 after the capture of Quebec and Montreal:


No one can rejoice more sincerely than I do, on the reduction of Canada; and this merely not as I am a colonist, but as I am a Briton. I have long been of the opinion, that the foundations of future grandeur and stability of the British Empire lie in America; and though like other foundations, they are low and little seen, they are nevertheless broad and strong enough to support the greatest political structure human wisdom has ever erected . . . All the country from the St. Lawrence to the Mississippi will in another century be filled with British people. Britain itself will become vastly more populous, by the immense increase of its commerce; the Atlantic sea will be covered with your trading ships; and your naval power, thence continually increasing, will extend your influence around the whole globe, and awe the world.


Franklin had a remarkably accurate vision of the American future but a blurred understanding of the English present of the 1760s. When the new king declared that he “gloried in the name of Briton,” that really meant that he was a kind of 18th-century “Little Englander,” with little strategic regard either for his Hanoverian inheritance or the Americans’ ambitions. Thus the path from the realization of the original British empire in 1763 to its initial crack-up in 1776 was a long road traveled rapidly, and the first push toward separation came not from fiscal motives but from strategic differences: It was the royal Proclamation of late 1763, which forbade colonial expansion west of the Appalachians, that drove the initial split. In American eyes, the French and Indian War was fought to secure the settlement of the Ohio and Mississippi valleys—to subdue French and Indian claims and power, not sustain them.

But if the pace of imperial crack-up was quick, the path to imperial reform and restoration, when Americans finally felt that their empire for liberty could stand on its own, was traveled painfully slowly. To declare independence was one thing, to achieve and maintain it quite another. The work of the American “founding,” of creating and organizing a union of states powerful enough to survive in a hostile geopolitical environment while preserving their individual liberties, was the work of several generations and much trial and error, including a major redesign of the instruments of government, particularly the armed forces. In this regard it was uncannily similar to the Williamite “founding” almost exactly a century before.

The American founders did not imagine that, after two centuries of almost constant conflict, their revolution alone was sufficient to secure their liberties. It certainly would not free them from the inevitable entanglements of European power politics. Nor was their wartime confederation strong enough to stand by itself, let alone realize their imperial imaginings. Like the Elizabethans, the earliest Americans were vividly aware of their own political and military weaknesses; they inherited the age-old “bone-between-dogs” dread. In a letter to George Washington, Alexander Hamilton imagined the new American republic as “Hercules in a cradle,” but at the beginning it was the cradle that counted most; American power was potential, great but unrealized. The new republic could not preserve its virtue in perfection; it must become a “republican empire” and employ traditional means of statecraft and military power. While Thomas Jefferson, James Madison and others might hope to “conquer without war,” such an idyll was already proving impractical.

Hercules did not escape the cradle until the end of the Napoleonic wars—“The War of 1812” in America. The “Monroe Doctrine” was something of a Herculean boast, but it was not simply a question of spheres of influence but also of regime type. Republics and monarchies made strange bedfellows, even, as in the case of the British, when there were deep and lasting attachments. “The political system of the allied powers [of Europe] is essentially different in this respect from that of America. This difference proceeds from that which exists in their respective Governments,” Monroe argued. The liberty-loving character of the American regime gave it special license in Monroe’s view. He saw no contradiction between principle and power in securing the expanding Empire of Liberty—five states were admitted during Monroe’s presidency.

However, the “good feelings” of the 1820s required Americans to avert their eyes from the “peculiar institution” of plantation slavery, long a matter of sectional discord and, more importantly, incompatible with justice. As America expanded westward, its future was now fatefully entwined with the future of slavery, which would not simply wither and die. For three decades, Americans fought a series of “Slavery Wars”: the Mexican-American conflict from 1845 to 1848, the Civil War from 1861 to 1865, and the subsequent Southern insurgency during Reconstruction, which continued until 1877.  

These wars also resulted in a third trial and translation of the Anglo-American, imperial proposition not only in North America but globally. The Confederacy could not be induced to rejoin the Union under the pre-war status quo, affirming slavery as it existed in 1861 but preventing further expansion. Thus, by late 1863, it had become apparent in the North as well as the South that this was a “regime change” war, one that targeted the social and economic structure of plantation slavery, not merely the armies defending it, the main Southern citadels, or lines of communication.

This was a revelation that occurred first to Union commanders in the western theaters, particularly Ulysses S. Grant and his lieutenants William Tecumseh Sherman and Philip Sheridan. Grant’s successful siege of Vicksburg, completed just a day after the Gettysburg victory, was an indication of the direction of the war to follow, not for the fall of the last Confederate bastion on the Mississippi or Grant’s tactical and operational audacity in crossing the river, but because when, on their approach march, his soldiers beheld the brutal reality of slave life, the cause for which they fought became all too tangible. The spirit of righteous vengeance, not unlike that which motivated British Protestants in the European wars of the Reformation era, remained powerful through the remaining two years of the war and afterward. As Sherman’s troops sang, addressing the liberated black men, women, and children who now followed in the army’s wake: 


Hurrah! Hurrah! We bring the Jubilee!

Hurrah! Hurrah! The Flag that makes you free!

So we sang the chorus from Atlanta to the sea,

While we were marching through Georgia!


Like their Elizabethan and Cromwellian forebears, the “Roundheads” of Grant’s armies found their inspiration in the Old Testament. Sherman’s troops’ song explicitly evoked the Jubilee of Leviticus: “On the Day of Atonement you shall sound the trumpet throughout all your land. You shall make the fiftieth year holy, and proclaim liberty throughout the land to all its inhabitants. It shall be a jubilee to you; and each of you shall return to his own property, and each of you shall return to his family.” Such stern inspiration carried over to post-war reconstruction. In 1870 and 1871, Congress passed three “Enforcement Acts” to deal with the Ku Klux Klan and the related violent insurgent groups in the South, which remained divided into military districts even as the seceded states were readmitted to the Union.

But the war had been about not only the present but the future. As Grant moved more aggressively to make America free, he likewise moved to make it whole through accelerated westward expansion, offering federal aid to “homesteaders” and marking, in 1869, the completion of the first “Transcontinental” railway, a project that had also begun in 1863. This also set the stage for the final set of conflicts with the indigenous peoples of North America, which began in Grant’s second term. The president preferred a “peace policy,” but was also a realist in his reckoning of the sources of conflict. As he told Congress:


The building of railroads and the access thereby given to all the agricultural and mineral regions of the country is rapidly bringing civilized settlements into contact with all the tribes of Indians. No matter what ought to be the relations between such settlements and the aborigines, the fact is they do not get on together, and one or the other has to give way in the end.


The Plains Indian wars provided a coda to the original American imperial project. From the Atlantic to the Pacific, America was “whole and free,” even if its inhabitants did not yet enjoy full or equal rights.

At the conclusion of these wars in the 1880s, America entered its “Gilded Age” of banking tycoons, barons of industry, and excess, confident that it had come through its internal struggle to stand in the front rank of global powers. It no longer felt threatened in its crib. The American Empire of Liberty now enjoyed an expanding understanding of its strategic interests, buoyed by a righteous sense of justice that had been tested, and tempered to greater strength, by its terrible trials in war.

No one embodied the restored era of good feeling and imperial possibility more than Theodore Roosevelt. The Civil War was a formative experience in his young life, though he regretted that his father, whom he revered and regarded as the “best man I ever knew,” had paid another man to take his place in the draft. In many ways, and for many others of his generation, the rest of Teddy’s “strenuous life” was an effort to participate in a glorious and martial cause to advance American power and moral and political principles, for which the Civil War provided the model.

Roosevelt wrote eloquently about his worldview. In his four-volume The Winning of the West, he was clear in drawing out the moral component of America’s westward expansion.


All other questions save those of the preservation of the Union itself and of the emancipation of the blacks have been of subordinate importance when compared with the great question of how rapidly and how completely they were to subjugate that part of the continent lying between the eastern mountains and the Pacific.


As in the American West, so in the world. “We stand on the threshold of a new century,” he enthused to the Republican convention that in 1900 nominated him as its vice-presidential candidate,


Big with the fate of mighty nations. It rests with us now to decide whether in the opening years of that century we shall march forward to fresh triumphs or whether at the outset we shall cripple ourselves for the contest . . . We do not stand in craven mood asking to be spared the task, cringing as we look on the contest. No! We challenge the proud privilege of doing the work that Providence allots us.


Assuming the Oval Office upon the death of William McKinley, Roosevelt seethed with an almost Puritan zeal, one that might have seemed all but adolescent to a Washington or a Lincoln but would have resonated with an Essex or a member of the Rump Parliament.

His strategic enthusiasm never waned, not even during World War I. The American imperial spirit was muted by the slaughter of the trenches, but only sank into outright isolationism with the onset of the Great Depression. Indeed, what is remarkable in retrospect is that, in leading the nation to war in the 1940s, Franklin Roosevelt—Teddy’s fifth cousin—framed the effort as a return to the traditional themes that defined the Empire of Liberty. In his “State of the Union” address one month after the Japanese attacks on Pearl Harbor, the 32nd president told the Congress that


Prior to 1914 the United States often had been disturbed by events in other continents. We had even engaged in two wars with European nations and in a number of undeclared wars in the West Indies, in the Mediterranean and in the Pacific for the maintenance of American rights and for the principles of peaceful commerce . . . What I seek to convey is the historic truth that the United States as a nation has at all times maintained opposition, clear, definite opposition, to any attempt to lock us in behind an ancient Chinese wall while the procession of civilization went past.


In this way Roosevelt initiated a fourth major period of reform and restoration of the original Anglo-imperial project. But as always, the strategic understanding was shaped by a moral commitment to liberty.  As in 1914, “the American people began to visualize what the downfall of democratic nations might mean to our own democracy.”


No realistic American can expect from a dictator’s peace international generosity, or return of true independence, or world disarmament, or freedom of expression, or freedom of religion—or even good business . . . [W]e are committed to the proposition that principles of morality and considerations for our own security will never permit us to acquiesce in a peace dictated by aggressors and sponsored by appeasers. We know that enduring peace cannot be bought at the cost of other people’s freedom.


Here was the traditional “Protestant Interest” molded to mid-20th century form, deprived of its confessional and racial meanings, but powerfully ideological, meant to appeal to not only the American political nation, but the British and other allied publics. As Lincoln had done in the Gettysburg Address, so Roosevelt looked to the distant past, and the reforms and restorations that had come before, to frame the task before Americans in 1941. “Since the beginning of our American history,” he asserted, “we have been engaged in change—in a perpetual peaceful revolution—a revolution which goes on steadily, quietly adjusting itself to changing conditions.” Roosevelt knew that he lived in a violent time and that military confrontation was the only path toward peace; “there can be no end save victory.” But the goal was a “world order,” a global “good society,” defined by “four freedoms.”

That war also was fought in a very Elizabethan and Williamite way, to secure “counterscarps” across both the Atlantic and Pacific, and deep into the Mediterranean. In addition to oceanic power projection and the deployment of vast American armies, the United States subsidized many allies, particularly Great Britain and Stalin’s Soviet Union, which paid a horrible price in blood. The War Department’s annual report for 1938 concluded that “in the military sense the Americas are no longer continents” and that “the simple unadulterated fact that the range and destructive potentialities of weapons of warfare, primarily those whose realm is the skies,” had “shortened the elements of [military] distance and time.” The risks of “offshore balancing” were too great and its methods—raids, strikes and naval “descents”—too ineffective. It was certainly no strategy for a global power.  

Yet despite the great victories of 1945, the Cold War created a new version of Elizabethan fears: The “Free World” still lacked strategic depth. Given the history I have thus far recounted, it should come as little surprise that it fell not to realists like Kennan or Morgenthau to shape American strategy for the Cold War, but to Kennan’s bureaucratic nemesis, Paul Nitze, to again redefine and restore the American imperial enterprise for the new geopolitical situation. 

Nitze’s memorandum for the National Security Council of April 7, 1950, “NSC 68,” expressed the essentials of U.S. strategy for the decades-long competition with the Russians. NSC 68 observed that the defeat of Germany and Japan and the decline of the British and French colonial empires had “altered the distribution of power.” Moreover, the Soviet Union was more like Counter-Reformation Spain than “previous aspirants to hegemony.” Russia was “animated by a new fanatic faith, antithetical to our own, and seeks to impose its absolute authority over the rest of the world.” Not surprisingly, Nitze described the country’s purpose in a manner that would have resonated with Reformation Protestants: “The issues that face us are momentous, involving the fulfillment or destruction not only of this Republic but of civilization itself.”

NSC 68 began not with its analysis of the Soviet system or international affairs as a whole but with the “Fundamental Purpose of the United States,” citing the Preamble to the Constitution and the Declaration of Independence, from which emerged three strategic “realities”: “Our determination to maintain the essential elements of individual freedom . . . our determination to create conditions under which our free and democratic system can live and prosper; and our determination to fight if necessary to defend our way of life.” The Cold War was both a global geopolitical contest—for the Soviet Union’s “efforts are now directed toward the domination of the Eurasian landmass”—and one “in the realm of ideas and values,” that is, ideological. 

“Soviet domination of the potential power of Eurasia,” continued the memorandum, “whether achieved by armed aggression or by political and subversive means, would be strategically and politically unacceptable to the United States.” The weakness induced by the demobilization of the World War II armed forces in the United States and its allies, to say nothing of the demilitarization of Germany and Japan, exposed vulnerabilities in multiple theaters. What was required was nothing less than “a rapid and sustained build-up of the political, economic, and military strength of the free world” and “an affirmative program intended to wrest the initiative from the Soviet Union,” focused on “the gradual retraction of undue Russian power and influence from the present perimeter areas” in Europe and Asia—the reduction and “rollback” of Soviet influence and the expansion of American imperial sway. In the end, the goal was regime change in Moscow: “Our policy and actions must be such as to foster a fundamental change in the nature of the Soviet system.”

In times of trial and uncertainty, Americans and their British ancestors have found their way forward by renewing their commitment to a “Good Old Cause,” as Cromwellians called the Elizabethan heritage that they strove to recapture. This commitment sprang from within, from how leaders and the political nation thought of themselves, from the ideas and habits of thought that gave meaning to national life and purpose to power. Through centuries of changes in geopolitical circumstances and frequent conflicts—large and small, quick campaigns and “endless” efforts—the personality of this power has remained remarkably consistent. Material “interest” alone does not suffice without a guiding sense of justice.

It’s true enough that through eight years of the Obama presidency, realism in deed (if not in speech) received a second hearing. And insofar as Trump’s National Security Strategy provides a blueprint for his administration’s strategic outlook, it is notably unadorned with the kind of idealistic language that Nitze channeled in his day. 

But those who have fought the tides of American strategic tradition have repeatedly failed. “He kept us out of war!” has not been a slogan that has long resonated with Americans. “First in war, first in peace” rings more true. Confronted with the crises of world politics and, in particular, the prospects of a hostile hegemon in Eurasia, the Empire of Liberty has roused itself again and again to reform and restore, to defy external annoyance and be guided by justice. It will happen again. Bet on it.


The post The Personality of American Power appeared first on The American Interest.

Published on September 24, 2019 11:38

September 23, 2019

An Eye-For-An-Eye Response

From the commentary and analysis inspired by Iran’s September 14 attack on Saudi Arabia’s oil producing facilities—and there seems little doubt that the Islamic Republic bears ultimate responsibility for it—three misconceptions have emerged. To make an informed judgment on the appropriate American response to Iranian aggression requires correcting them.

The first misconception is that the attack demonstrates the failure of the Trump policy toward Iran, specifically the 2018 decision to withdraw from the Joint Comprehensive Plan of Action (JCPOA) that the Obama Administration negotiated in 2015 with the Muslim clerics who rule in Tehran. To the contrary, the attack shows that the policy is succeeding. Its objective is to put pressure on the mullahs, making it more difficult for them to carry out their policies of repression at home and aggression abroad. The fact that the Iranian regime has lashed out as it did, running the risk of severe American reprisals, is evidence that it is, indeed, feeling serious pressure. With economic sanctions reimposed, Iran is largely unable to sell oil, its main source of income, complicating its efforts to preserve itself in power while seeking to dominate the Middle East.

Nor did the United States forfeit major benefits by withdrawing. Obama officials suggested that the agreement would empower Iran’s “moderates” (assuming they actually exist) at the expense of its “hardliners,” which would lead to a change in the country’s regional policies. To the contrary, since 2015 Iran has continued, unabated, its campaign to expand its regional power, to the detriment of America’s friends and allies.

President Trump correctly diagnosed the JCPOA as disadvantageous to the United States. It had three major shortcomings. First, it permitted Iran to continue to enrich uranium. Because enrichment is the crucial step in making a nuclear explosive, the central pillar of American non-proliferation policy for four decades had been denying aggressive regimes such as the one in Tehran the capacity to carry it out. The Obama Administration abandoned that principle. Second, the provisions for inspections to ensure that Iran was keeping its commitments under the JCPOA were weak. Third, the major prohibitions written into the agreement had expiration dates, after which Iran would be free to acquire as many nuclear weapons as it desired, to go along with a fleet of long-range missiles to deliver them. The Islamic Republic has a program to build such weapons, and the JCPOA permitted it to continue.

While setting aside the agreement was justified, the way the Trump Administration went about doing so had two shortcomings. First, the President and his senior officials did far too little to try to reach a common position on this issue with other countries, especially the Europeans. This opened the way for Iran’s strategy of turning Europe against the United States. Second, the Trump Administration seems not to have planned in any systematic way for countering the inevitable Iranian response to the reimposition of sanctions. Unfortunately, consulting and coordinating with other countries, even—perhaps mainly—friendly ones, and making plans for various contingencies, are not hallmarks of this presidency.

If the Trump Administration had adhered to the Iran policy of its predecessor, in all probability the Iranian regime would have steadily expanded its sway in the Middle East and ultimately equipped itself with a formidable nuclear arsenal. In that case the United States would have confronted a deeply unappealing choice: either resist the Iranian drive for hegemony from a far weaker strategic position than it now has, or acquiesce to Iranian domination of the Middle East.

That leads to the second misconception the events of September 14 have triggered: Checking Iran is not worth the American time, effort, and resources that it would take because the United States does not need Middle Eastern oil. It is true that the oil Americans consume comes either from domestic sources or from the Western hemisphere. Middle Eastern petroleum remains, however, indispensable for America’s friends and allies in Europe and Asia. If the United States decides not to guarantee the flow of oil from the Persian Gulf, it will in effect abdicate its role of seven decades as the protector of European and Asian free-market democracies.

The withdrawal of the United States from the Middle East would create a geopolitical vacuum there that the Islamic Republic of Iran would do everything in its power to fill. If it did, the region and the world would become far more dangerous places. The United States itself would likely not escape the adverse economic, political, and military consequences. It is worth a great deal to America to block Iran from achieving its goal of regional dominance.

What, then, should the United States do to accomplish that goal in light of the Iranian attack on the world’s supply of oil? That question leads to the third misconception.

The September 14 attacks have evoked the assertion that America must at all costs avoid becoming embroiled in a war with Iran. Behind this insistence lies the fear of yet another protracted, costly, inconclusive conflict like the recent ones in which the United States has become involved. The ghosts of Afghanistan and Iraq haunt the debate about American policy toward the Islamic Republic.

One problem with this position is that if the United States is unwilling to use force against Iran under any circumstances—and the Obama Administration gave the impression that this was its policy—then the mullahs, who have no scruples about killing others or even having Iranians die in large numbers in pursuit of their goals, will ultimately get what they want. Another, related problem with the insistence that the United States must avoid war with Iran is that such a war is already underway, as the Islamic Republic presses ahead with its campaign, employing all measures including the use of force, to dominate the Middle East.

Fortunately, one American ally is fighting back, and successfully so. Israel, the destruction of which is a major and long-standing aim of the rulers in Tehran, has, through the use of airpower, thwarted the Iranian attempt to build and deploy accurate missiles in Syria that it could use, in conjunction with the comparable forces it has installed in Lebanon through its proxy, the terrorist organization Hezbollah, to overwhelm Israeli air defense systems. Israel’s policy demonstrates that Iranian aggression can be checked, and at acceptable cost, without putting American troops on the ground and exposing them to attacks, as in Afghanistan and Iraq. What, then, should the United States do in response to the recent act of aggression?

The Trump Administration has tightened the economic sanctions already in place and sent a small detachment of troops to Saudi Arabia but seems disinclined to do more. It is not in the American interest for the conflict to escalate sharply, but failing to make any military response risks encouraging the mullahs to mount further, larger attacks, which could lead to a full-scale Middle Eastern war. One possible course of action is to do to Iran what Iran did to Saudi Arabia by conducting a limited aerial attack on Iranian oil facilities. Such an attack would signal to the mullahs, and the world, that the United States will match Iranian attacks but not go beyond them. It would send the message that America will respond to provocations but will not be the party to start a wider war. Such a response would, in effect, support the status quo that the withdrawal from the JCPOA has created. That is the appropriate policy because the status quo, with economic sanctions weakening the Islamic Republic, serves American interests. An all-out Iranian effort to get nuclear weapons would change things, but at present the optimal course for the United States, and for all those threatened, directly and indirectly, by Iranian ambitions, is to preserve that status quo.


The post An Eye-For-An-Eye Response appeared first on The American Interest.

Published on September 23, 2019 12:57

September 20, 2019

Brexit Beyond Britain

Editor’s Note: This is the third in a series assessing the consequences of Brexit. The first, by Robert Singh, can be found here, and the second, an interview with Andrew Roberts, can be found here.


Back in 1941, in criticizing the Eton- and Oxford-educated elite, George Orwell lamented that “one of the dominant facts of British life . . . has been the decay of ability in the ruling class.” Today, poor leadership on all sides has led the country into a cul-de-sac, unable to stay in the EU and unable to leave it without crashing out to the disadvantage of all.


Britain’s ambiguous relationship with the rest of Europe has deep roots, tied to a broad spectrum of cultural, ideological, geographical and political issues in the United Kingdom. At this moment, however, something else has been revealed: Brexit has shown the weakness of democracy in Britain, or at least the diminished faith a growing number of Britons have in representative democracy. Alarmingly, this story pertains to the rest of Europe, too.


Under challenge from Nigel Farage’s UK Independence Party (UKIP), and later from his Brexit Party, the Conservative Party soon became obsessed with the question of Britain’s membership in the European Union. Theresa May tried to unite the party by pivoting to its Brexiteer Right while nodding to the moderates, but she failed.


The government subsequently decided to interpret the Brexit vote as an expression of dissatisfaction on a number of different matters—immigration, for example—all under the rubric of “taking back control.” Prime Minister Boris Johnson proclaimed the victory of “democracy” in all this—by which he meant referendum democracy. Representative democracy, by contrast, seems to be taking a beating. Nigel Farage’s belief that “Sovereignty does not lie with Parliament, sovereignty lies with the people” is shared by other leaders such as Hungary’s Prime Minister Viktor Orban, who claim that the ruler represents the “People,” and untidy things like political parties and parliament just get in the way. In this view, the ruler decides what the People need to know: Only under pressure from the media and Remainers did the government release its own estimate of possible worst-case scenarios of a No-Deal Brexit. Similarly, Brexit negotiations with the EU have been striking for their lack of transparency on Britain’s part.


Britain has a history of bad break-ups. From the 1921 partition of Ireland to the 1947 departure from India, Britain has not infrequently embarked on remarkably unsound exit strategies. Indian essayist Pankaj Mishra argues that “partition—a ruinous exit strategy of the British empire—has now come home. In a grotesque irony, borders imposed in 1921 on Ireland, England’s first colony, have proved to be the biggest stumbling block for the English Brexiteers chasing ‘imperial virility.’” Other hastily arranged departures of bloody consequence include Greece at the start of its civil war and Palestine in 1948. In the case at hand—the smash-and-grab Brexit due October 31—the damage will likely include, in due course, the loss of Scotland and turmoil in Northern Ireland. In the meantime, do not exclude social unrest in England and Wales once they are hit by the economic consequences of a No-Deal Brexit.


Similar upheavals are occurring in other countries, though not all with the same dramatic consequences. Not yet.


Across Europe, political party systems are undergoing radical transformations. None of the parties that ran Italy between the Second World War and the end of the Cold War exists any longer. Spain’s stable two-party system now has five national parties, one of them openly far-Right, and is currently embroiled in a Constitutional crisis. The French Presidential contest of 2017 wiped out the old mainstream parties and saw the consolidation of the radical Right populist party of Marine Le Pen and a new movement, La Republique En Marche, led by Emmanuel Macron. Neither was new to the system: the first was a mutation of an older challenger to French politics from the fascism-inspired Right, the second an outgrowth of the French centrist elite. Together they managed to supersede the old party system and ideological formations, which are now in a state of disarray on both Left and Right. Even long-stable Germany is slowly evolving. The apparent decline of the SPD and CDU has opened space, on the Left for Die Linke and a surging Green Party, and on the Right for the anti-immigrant populist AfD.


Throughout the postwar period, West European stability rested on reliable democratic party systems that contained extremist threats. Even in the tumultuous 1970s, when hard-edged political polarization shook countries like Germany and Italy—with political radicalism gaining ground and terrorism shocking the body politic—the democratic systems held firm. At the time parties, with their mass base and broad representation, were the linchpin between society and the state. Now, however, political parties have declined in importance. The late Irish political scientist Peter Mair put it succinctly a half dozen years ago: “the age of party democracy has passed. Although the parties remain, they have become so disconnected from the wider society, and pursue a form of competition that is so lacking in meaning, that they no longer seem capable of sustaining democracy in its present form.” Erosion of voter trust is unmistakable. And the void is being occupied—in some places quite easily—by populists and extremists.


How should responsible democratic establishment parties of the center-right and center-left respond? Across Europe a number of mainstream conservative parties have started to copy the rhetoric and tactics of the populist Right, pushing for a stronger central state with weaker rule of law, less pluralism, and an emasculated judiciary. Today’s populists are shrewd power consolidators. Once in government, they try to hijack the state apparatus by replacing professional civil servants with party loyalists, ideological soulmates, and culture war experts. They use the legal system to impede and restrict civil society. They employ cronies to buy media and manufacture consent. These things have happened systematically in Hungary; there are signs Poland’s government is in some areas following suit.


The supporters of representative democracy must be self-critical. We did not fully appreciate the impact of globalization and European integration on democracy itself. The populist accusation that Brussels has taken powers away from the nation is not unfounded: the European Union does indeed manage more and more policies of relevance to the daily lives of Europeans. But the decisions are not made unilaterally by a faraway bureaucracy: National governments have remained in the driving seat. Hence the tragic paradox and irony in Brexit: The EU denounced by militant Brexiteers—a European super-state killing off the nations of Europe—never came to be. Nor is it in any way in the cards now.


And we all ought to consider where we’ve landed in the meantime. After Brexit, we will have a weaker and more vulnerable Britain, and an EU poorer from its absence. To be sure, the UK has been straddling the periphery of the EU for a long time—an important and influential country without a foot in key institutional arrangements. Europe will survive Brexit. But whether it can survive the illiberal forces at its core is less certain, especially with the growth of “Remain Euroskeptics” who are willing to undermine the EU from within. European advocates have found comfort in the fact that the British vote to leave made public opinion on the continent bounce back in favor of EU membership. That’s promising. However, if the democratic recession at the heart of Europe is not reversed, more will be lost than the dream of ever closer union. Supporters of integration must start winning voters back—not just to their parties, but to the vision of representative democracy that has sustained the continent for so long.


The post Brexit Beyond Britain appeared first on The American Interest.

Published on September 20, 2019 07:54

September 19, 2019

Who Deserves Asylum?

The Death and Life of Aida Hernandez

Aaron Bobrow-Strain

Farrar, Straus and Giroux, 2019, 432 pp., $28


A liberal democracy treasures the right to asylum. Many Americans like the idea that anybody in the world can show up at the U.S. border and ask for refuge. But who actually deserves this kind of protection, and what to do when large groups of people ask for it en masse, have never been easy questions. President Trump’s open bigotry isn’t making it any easier—nor are the new, more generous grounds for asylum being proposed by human rights advocates.

Gender violence is one such criterion for asylum—should the U.S. give refuge to women fleeing murderous husbands and other forms of gender discrimination? Both asylum advocates and skeptics can test their assumptions by reading The Death and Life of Aida Hernandez.  The author, Aaron Bobrow-Strain, is a cultural geographer and activist who is careful to get the backstory. By delving into the life of a particular asylum seeker, he provides a wealth of detail far exceeding what is available to the immigration judges who decide these cases.

The pseudonymous Aida Hernandez begins life in Agua Prieta, Mexico, the twin city of Douglas, Arizona. When she is eight, in 1996, her mother Luz suddenly yanks her and four siblings across the border to join Saul, a U.S. citizen who turns out to have fathered two of the children. They are all able to settle in Douglas on the strength of a short-term border-crossing card, which gives Mexicans the right to visit the United States for 72 hours.

The stepfather proves to be more violent than Aida’s own father, Raul. Now her mother is a mere second wife to Saul’s first wife and family. After three years of being shoved around and humiliated, Luz and her children return to the Mexican side of the border. But Aida has a wild binational adolescence. Mentored by older girls, she is stealing cars and getting high by the age of 13. She is in rehab for three weeks by the age of 14, and pregnant by the age of 16.

The father of her child is the enchanting David, breakdance king of Douglas High School and also a good student, who reluctantly gives up college plans for married life in a mobile home. Soon he is shouting, “I should never have gotten mixed up with a fucking mexicana illegal,” and threatening to call the Border Patrol on his undocumented wife.

When the marriage breaks up, Aida and her U.S. citizen-son continue to live on the U.S. side of the border, with the help of her mother and other supportive kin. Unfortunately, she finds herself being stalked by her new boyfriend’s previous girlfriend, Irma. One bright morning, Aida rams Irma’s car with her own car and punches Irma in the face. The Douglas cops arrive, Irma denounces Aida as an illegal, and the Border Patrol deports her to Mexico.

But not for long—soon Aida is back in Douglas with her son, living without legal status as she always has, until alcohol and border enforcement snare her again, producing a second and more legally consequential deportation. Desperate for income, Aida ignores her family’s warnings and goes to work as a barmaid on the Agua Prieta side of the border. One night, after finishing her shift, she’s picked up by a rapist who, when she punches him away, stabs her repeatedly in the belly. Her death is averted only by emergency evacuation to a U.S. hospital and Arizona Medicaid.

When the hospital releases Aida, on a humanitarian parole allowing her to stay in the U.S. for 30 days, she understandably fears returning to Mexico. And so she resumes life as an undocumented mother—but now subject to the panic attacks, convulsions, and rages of post-traumatic stress disorder (PTSD). A Douglas social worker tries to figure out how Aida could apply for asylum, only to despair when her latest makeover (black clothing, tattoos, and piercings) makes her look like a delinquent.

In 2012 Aida is caught shoplifting a $6 Lego set for her son. As she is being deported for the third time, she states her fear of returning to Mexico. And so, instead, she is sent to the immigrant lockup in Eloy, Arizona, to await a hearing in the immigration system’s overcrowded courts.

In jail, Aida meets an unauthorized border-crosser from Ecuador. The pseudonymous Ema Ponce is a soccer athlete, university-trained engineer and lesbian who hopes to join relatives in New York City. Ema now falls in love with the unsuspecting Aida. After they are both finally free—Ema on bail pending her hearing and Aida as a legal permanent resident—Ema persuades Aida to move to New York City and they get married at City Hall.

Neither New York nor the new relationship turns out very well. The newlyweds and Aida’s son live in an eight-by-twelve-foot room in an apartment packed with other immigrants. Ema finds a job as night clerk at a remittance agency but falls deeper and deeper in debt. As for Aida, the only work she can find is cleaning hotel rooms for a subcontractor who pays a mere $150 to $300 per week.

Aida’s panics and rages resume. She assaults Ema and the police haul her away; only a sympathetic prosecutor saves her from a domestic violence rap that could end her legal residency and deport her. On a subsequent occasion, it is Aida with the bruises and Ema who is hauled away, to what fate Bobrow-Strain does not say.

How Border Enforcement Makes Aida’s Life Worse

That is the death and life of Aida Hernandez, which Bobrow-Strain refuses to shine up at the end. Contrary to the book’s title, Aida never actually dies. For her chronicler, the takeaway is that U.S. border enforcement has made it easier to commit gender violence. When women resist being beaten or raped, not only do their abusers threaten to call the Border Patrol, but U.S. law also holds immigrants to much higher standards than U.S. citizens. Offenses that merely bounce citizens into the safety nets of due process—driving while intoxicated, possessing drugs, hitting a spouse—can turn into speedy deportations for non-citizens.

Bobrow-Strain’s larger case against U.S. border enforcement is that, contrary to so many pundits, it is not “broken.” Instead, it is working all too well for the following interest groups:


polarizing politicians, nativist social movements, private prison companies, ordinary people in search of decent jobs [which they find in border enforcement], local governments struggling to increase revenue [by welcoming border enforcement as a growth industry], employers seeking exploitable undocumented workers, massive federal law enforcement bureaucracies, and countless private security contractors.


With President Trump’s immigrant-baiting in mind, Bobrow-Strain concludes that border enforcement is part of “a larger American story of race, economics, and policing” and so current policies are “racially motivated nativism.”

But he notes many wrinkles in the actual history. For example, in the 1940s unionization and civil rights activism ended the disgrace of paying Latinos less than Anglos in Douglas’s main industry, a copper smelter. Even the infamous pre-1965 admission quotas did not include immigrants from the Western Hemisphere. Thus it was the abolition of those quotas that led to the first numerical caps on legal migration from Latin American countries. Now border enforcement is staffed mainly by Mexican-Americans, whose feelings about their Mexican co-ethnics are very divided and not necessarily racial in origin.

Mixed-Status Families and Their Travails

Aida is not a typical applicant for asylum. She has gotten into more trouble than most. Nor do most asylum applicants marry someone of their own gender. Minus such particulars, several million Aidas have become trapped in what migration scholars call liminal legality or proto-citizenship. Many come from mixed-status families, which occur when parents jump the border or overstay a visa, bring along their children, then produce more children who are U.S. citizens by birth. Such families pile up in border towns because rigorously staffed enforcement checkpoints, farther north, prevent them from exiting safely.

The Rio Grande Valley of Texas hosts one of the largest concentrations of mixed-status families. When anthropologist Heide Castañeda sampled 100 mixed-status households in the Valley, many proved to have arrived with the same border-crossing cards which brought Aida and her mother. Judging from Castañeda’s interviews (Borders and Belonging, Stanford University Press, 2019), the parents had expected that U.S. schools would be their children’s ticket to prosperity, and that another amnesty—like the 1986 Immigration Reform and Control Act—would eventually legalize them.

Amnesty has yet to materialize because political campaigns to rescue them from their predicament arouse powerful feelings, not just of sympathy and solidarity, but of anger. Many Americans view them as gate-crashers. Mixed-status families, Castañeda’s analysis reveals, cannot assume solidarity even from their own co-ethnics. In the case of the mainly Mexican mixed-status families along the southern border, many have Mexican-American neighbors and relatives working for U.S. border enforcement. The complexities of kin networks and Mexican-American culture foster many conflicting identities including red-white-and-blue patriotism. This produces no end of paradoxes, such as the U.S.-citizen son of undocumented parents who aspires to work for the Border Patrol.

Judging from Castañeda’s research, mixed-status families live in a competitive, unpredictable atmosphere in which envy or bad luck reliably leads to disaster. Their fortunes are determined, not just by the vagaries of border enforcement, but also by the terrible wages at the bottom of the U.S. labor market and the arrival of additional undocumented relatives, who expect to be taken in and who tax family resources to the breaking point. What scholars call “identity loan” within these families is commonplace, and that can lead to identity theft and extortion. “Marriage for papers” produces another raft of conflicts. And so, trapped between the Mexican border and immigration checkpoints a short distance north, the lives of mixed-status families are shaped by desperation, paranoia, and concealment. Now they are suffering even more thanks to Trump rollbacks of Obama policies.

Why Have Borders at All?

If the U.S. attracts so many immigrants and if the Mexican border causes so many problems, why have a border at all? Judging from Aida’s story, one argument for this particular border is that it offers protection from bad people on the other side. This is the cruel paradox of Aida’s binational life: The same U.S. institutions that exclude her also offer safety. Thus, while working as a barmaid in Mexico nearly gets her killed, working as a barmaid in the United States seems a lot safer.

Another argument for the Mexican border is that it cuts down on the oversupply of labor. Flooded labor markets are glaringly obvious in the economies of Mexico and Central America, and they are also apparent in those parts of the U.S. economy where large numbers of Mexican and Central American migrants compete for jobs.

Wouldn’t the ill effects be alleviated by giving everyone legal status? Not if this attracts even more migrants. In the case of Ema, something about the U.S. job market, even in the low-enforcement immigrant mecca of New York City, fails to meet her financial needs. Her lawyer’s failure to request a work authorization is part of the problem, but her partner Aida has the legal right to work, as well as near-native English, and her wages are also shockingly low.

Bobrow-Strain concedes that a world organized into nation-states requires borders, and that borders require decisions about whom to include and whom to exclude. However, nowhere else does his book convey the idea that U.S. border enforcement is legitimate. The stateside institutions that educated Aida, that saved her life, and that eventually gave her legal residency get positive reviews, but not any attempt by U.S. authorities to distinguish between who has the right to receive these benefits and who does not.

How Aida Received Legal Residency (Not Through Asylum)

Gender violence is a fact of life for untold numbers of women, and there is quite a bit of it in Bobrow-Strain’s cast of characters. Most boyfriends and husbands in Aida’s social network seem to be in the habit, when they get angry, of hitting their women. Even when Aida finds a sympathetic partner in Ema, they too hit each other. They all seem to need considerable amounts of policing, judging, social work, and therapy. Do some of them deserve asylum in the United States?

Asylum law protects persons who are afraid to return to their country of origin because they have a well-founded fear of persecution on the basis of race, religion, nationality, membership in a particular social group, or political opinion. Only a minority of asylum applicants come very near this definition, so immigrant-rights groups are campaigning to extend asylum to applicants who say they are fleeing criminal gangs or domestic violence.

The near-fatal attack on Aida, and the danger of returning to a country that failed to find and punish her attacker, might sound like a strong case for asylum. But criminal violence usually does not fall under the legal definition of persecution, and Aida failed to file for asylum within a year of the assault.

Saving her from a third deportation are her U.S.-citizen child—to whom she is clearly devoted—and past violence by her U.S.-citizen husband. Her lawyer persuades a government prosecutor and an immigration judge that she deserves legal residency under the Violence Against Women Act. VAWA protects undocumented women who have been attacked by U.S. citizens or legal residents.

Most women asking for asylum on grounds of gender say they have been harmed in their own country. Ema is an example. Bobrow-Strain never details her case but, judging from her story, she is probably asking for protection from being harassed as a lesbian. She used to play on a lesbian soccer team, which went to court to win acceptance and was sometimes mobbed by hostile fans.

Other women asking for gender-based asylum want protection from their husbands. In 2014 the Board of Immigration Appeals ruled in favor of a Guatemalan applicant named Aminta Cifuentes. “Married women in Guatemala who are unable to leave their relationship,” the BIA decided, meet the legal definition of “membership in a particular social group,” therefore Aminta qualified for asylum. This meant that Guatemalan married women became a protected class who could apply for asylum—if they could prove that their husband had used violence to prevent them from leaving the marriage.

Four years later, as part of the Trump offensive against the growing number of asylum claims, U.S. Attorney General Jeff Sessions barred domestic violence as grounds for asylum. How many asylum cases actually have been filed on these grounds is an elusive datum. But I do know that, in the Guatemalan town where I interview migrant households, a number of women have paid smugglers to take them north so that they can apply for asylum from domestic abuse. If the allegations occur far from U.S. jurisdiction, U.S. courts have very little capacity to verify them.

Bobrow-Strain wants U.S. authorities to interpret asylum law more leniently, to give applicants like Aida and Ema a better chance. If the Democrats take back the presidency and the Congress in the 2020 election, he may get his wish. What will happen if the Democrats make it easier to ask for asylum?

We already have a preliminary answer in the growing number of asylum applicants from Central America. If the Democrats make application easier, the number could grow even faster. According to advocates, the Central Americans are fleeing ever-worsening conditions at home. According to skeptics, they are being recruited by human smugglers, who are taking advantage of humanitarian reforms in U.S. border enforcement to craft a cheaper and less risky path into the U.S. job market.

Judging from media reports, the new procedure is to be accompanied by a child under the age of 18, surrender to a border agent, and tell the agent you are afraid to return home. The reason for the son or daughter is that it’s much harder, logistically and legally, for the U.S. government to lock up parent-child combinations than solo border-crossers.

One hindrance is that, if U.S. officials suspect that you have borrowed or rented the child, you can be separated, locked up for longer, and deported. Another hindrance is that, per calculations by the New York Times, the Trump Administration is currently forcing 58,000 asylum seekers to wait in Mexico, where at least a few have been killed by criminals. But U.S. immigration courts have become so packed that new cases are being scheduled out as far as 2023. In the meantime, if all goes well, asylum applicants have provisional legal status, can find work even if they lack legal permission to do so, and may even be able to start a mixed-status family.

The Political and Economic Limits of Asylum Advocacy

Both Bobrow-Strain and I go back far enough in Latin American studies to remember when progressive politics meant helping Latin America become independent from U.S. power. Those days are gone—now progressive politics revolves around what anthropologist James Ferguson calls “declarations of dependence.”

In Aida’s case, her future hinges on whatever sympathies can be wrung from U.S. border agents, prison guards, immigration attorneys, and judges. Her subordination is all the more striking because her father Raul exemplifies the bygone era of collective resistance. Before Aida was born, he spent his youth in Marxist guerrilla organizations seeking to overthrow Mexico’s one-party dictatorship. Bobrow-Strain gives Raul considerable attention, to show how unmovable Mexican power structures used to be. But he never gets around to telling us about the Mexican women’s movement, Mexican women’s shelters, or—with the exception of Raul’s heroic youth—opposition politics.

What if, instead of being seduced by American culture and snared by U.S. border enforcement, Aida had followed her father into opposition politics? Could the ease of border-crossing, which has drained away millions of Mexico’s most energetic citizens, be one of the reasons that Mexico has been slow to meet the demands of its citizens? In the case of Ema, wouldn’t she and Ecuador be better off if she had stayed at home, built on her experiences as a feminist, and kept fighting for equal rights? These are just rhetorical questions, but we do know what happened instead. Both Aida and Ema became supplicants in the U.S. legal system, both became trapped at the bottom of the U.S. labor market, and both got arrested for domestic violence.

From their sad example I worry that, the more lenient U.S. asylum policy becomes, the more soon-to-be-exploited workers and future domestic abusers it will wave through. Will the U.S. labor market give them a better life than they would have faced back home? In many cases, the answer is no. Aida’s story poses an additional question: Does anyone who reaches the U.S. border with a life-threatening problem—social or medical—have a human right to U.S. services and lenient terms of stay? Do they also have the right to produce a U.S.-citizen child and raise their child in the United States?

There are limits to what voters are willing to pay for, and such limits are becoming obvious, not just in the U.S. and Europe, but in Mexico. According to a July 2019 poll, Mexican public opinion has shifted dramatically against migrants. Why? Hosting large numbers of asylum seekers has social and fiscal costs, which are heightened by the Trump Administration’s success in bottling up some applicants on the Mexican side of the border.

Aida Hernandez’s life turns out the way it does because of the mutual dependence of sister cities and binational investment zones and the easy border-crossings these arrangements require. Her life has been shaped by a staggering array of U.S. policies, some of them lenient and others harsh. Ema Ponce’s situation is very different. She began her life in Ecuador’s middle class, only to be deceived by media images of American culture and how wonderful it seemed in comparison with her own. I conclude that U.S. society owes a lot more to Aida than it does to Ema. Unfortunately, distinguishing between these two cases, as well as the cases of so many others who aspire to a better life in the U.S., will require an ever-larger bureaucracy of border cops, jailers, lawyers, and judges. According to critics, we already have an immigration police state, and they may be right. The alternative, tragically, is an ever-larger pool of exploited labor at the bottom of U.S. society.


The post Who Deserves Asylum? appeared first on The American Interest.

Published on September 19, 2019 13:53

The Arctic Is American

In February 2008, Russian President Vladimir Putin staged a ceremony at the Kremlin to honor scientist Artur Chilingarov as a “Hero of the Russian Federation.” Months before, Chilingarov, also a politician, piloted a small submersible to the seabed, 14,000 feet below the surface of the Arctic Ocean, to collect samples and data. Before surfacing, he planted a titanium version of the Russian flag. After the mission was completed on August 7, 2007, a jubilant Chilingarov boasted over the world’s wire services: “The Arctic is Russian.”

Canada’s officials knew nothing of the expedition, and Americans dispatched a research icebreaker out of Seattle days later. A spokesman sneered, “I’m not sure whether they’ve put a metal flag, a rubber flag, or a bed sheet on the ocean floor. It certainly to us doesn’t represent any kind of substantive claim.”

The frozen Arctic Ocean, half the size of the United States, typically receives scant attention, but this summer President Donald Trump put it on the media map by suggesting that the United States purchase Greenland from Denmark. The proposal made no sense and was roundly ridiculed. But while Trump was thinking about real estate, his Administration was concentrating on hegemony: This summer, American officials firmly planted their flag across the entire region.

In May, Secretary of State Mike Pompeo set forth his Northern Doctrine in a speech to an assembly of the Arctic Council, attended by countries with Arctic borders—Canada, Denmark (including Greenland), Finland, Iceland, Norway, Russia, Sweden, and the United States. In blunt terms, he put Russia and China on notice for militarizing the region and chastised Canada, describing its claim of sovereignty over the Northwest Passage as “illegitimate.”

Underlying this newfound interest is the fact that scientists predict that in 25 years the ocean will be ice-free in summer months. This will open up resource development and navigation along three routes linking Asia and Europe: The Northeast Passage, or the Northern Sea Route, which transits mostly Russian territorial and internal waters and offshore Norway through the Barents Sea; the Northwest Passage, which transits the Canadian Arctic Archipelago and coast of Alaska; and the Transpolar Route across the North Pole, beyond the territorial waters of Arctic states.

The Transpolar route won’t be navigable anytime soon, but Russia’s route is viable because it is already ice-free much of the summer and hugs a somewhat populated coastline. Commercial traffic from China is already transiting the route, and billions of dollars are being invested into navigational, search and rescue, and icebreaking capabilities. The route shaves 20 days off the Asia-Europe journey for cargo ships by bypassing the Suez or Panama Canals.

This has piqued the interest of both the Pentagon and the State Department. For the United States, the non-military concern is that the Russians will create a transpolar logistical monopoly to deliver liquefied natural gas, goods, and commodities to Asia and Europe. This would allow Moscow to exclude or gouge competitors. The military concern is that Russia is boosting its military presence along the sea route, while China lurks nearby.

Surprisingly, Pompeo took a swipe at Canada’s claim of control of the Northwest Passage on the basis of “a long-contested feud” with the United States. But there is no “long-contested feud,” from the Canadian viewpoint. Since the 1950s, the two allies have agreed to disagree for political reasons as to whether the route runs through internal Canadian waters or solely through international waters. The two have maintained an amicable working arrangement with respect to access, as well as shared security responsibilities as members of NORAD (North American Aerospace Defense Command).

Pompeo’s bluntness surprised many, and a Canadian government spokesman pushed back politely: “Canada and the U.S. have differing views regarding the status of the Northwest Passage under international law,” said Guillaume Berube, a spokesman for the Department of Foreign Affairs.


The situation is well managed, including through the 1988 Arctic Co-operation Agreement, according to which the U.S. government seeks Canada’s consent for its icebreakers to navigate the waterways. Canada remains committed to exercising the full extent of its rights and sovereignty over its territory and its Arctic waters, including the various waterways commonly referred to as the Northwest Passage. Those waterways are part of the internal waters of Canada.

But the underlying agenda was to discourage Canada or China or anyone else from joining forces to develop the Northwest Passage. Pompeo accused Beijing of “planning to build infrastructure from Canada, to the Northwest Territories, to Siberia.” To Canadians, this was also news. There has never been such a plan announced, and to suggest otherwise is puzzling. Bilateral cooperation between Canada and China has been troubled for years and ground to a halt in December 2018, when Ottawa arrested a Huawei executive on a U.S. extradition warrant. China has retaliated by arresting and holding two Canadian businessmen hostage in a Chinese jail, cancelling billions of dollars in food imports, and refusing to return Prime Ministerial phone calls.

What is true, however, is that China has worked feverishly to get a piece of the Arctic action wherever it can. It has built icebreakers and special cargo ships for the Russian route, has a stake in Russia’s Yamal liquefied natural gas plant, and has invested in four mines in Greenland and one in Canada. But further incursions have been stymied. In 2017, Denmark nixed a Chinese mining company’s bid to buy an abandoned naval base in Greenland, and in spring 2018 Canada rejected a $1.14-billion bid by a Chinese company to buy Canada’s largest construction and infrastructure company, presumably Beijing’s intended platform for Arctic development.

Altogether, China has invested about $90 billion in the Arctic, working with the Russians, building Arctic-worthy icebreakers, building a polar research institute in 2009 in Europe, and financing scientific expeditions to the Arctic. In 2014, it became an observer on the Arctic Council and began describing itself as a “near-Arctic state.” But Pompeo dismissed this out of hand: “There are only Arctic states and non-Arctic states. No third category exists—and claiming otherwise entitles China to exactly nothing.”

Trump’s Greenland gambit made headlines but was merely the musing of a real-estate developer in the White House. Pompeo’s pronouncements, however, revised the world order concerning the world’s largest unexploited region. Leveraging its military might, Washington has put Moscow on notice, and frozen China in its northerly tracks—and Canada, too. It was a message heard ’round the world: The Arctic is not Russian; it is American.


The post The Arctic Is American appeared first on The American Interest.

Published on September 19, 2019 11:56

September 18, 2019

The Death of the Neutral Public Sphere

What do you do when the metaphors, stories, and premises that hold together a society are rendered meaningless? When things that you imbibe with your earliest thoughts as beyond question crumple? That’s the stage we are at with the once seemingly permanent principles meant to guarantee the culture of common deliberation and debate on which democracy depends, and to stave off manipulative propaganda. Foundational notions—for example, that in “a marketplace of ideas” the best quality information eventually wins out; that truth can hold power to account; that “accuracy, objectivity and balance” are things journalists should strive for; that media pluralism leads to more productive debate—have all been rendered near-meaningless by new breeds of manipulation and by a radically changed informational playing field. Problems were already evident in the revolutionary year of 2016. But as we approach the 2020 election in the United States, and one even sooner in the UK, virtually nothing has been done to fix things. As a consequence the credibility of democracy is under threat as our ability to reach decisions and trust each other enough to constructively disagree is whittled away.

In my new book This Is Not Propaganda: Adventures in the War Against Reality, I try to diagnose the difficulties—and what is to be done.

The metaphor of a “marketplace of ideas,” where some sort of rational choice theory means the eventual selection of the best quality information, looks naive in an environment where junk news driven by bots and trolls and other forms of non-transparent amplification floods the web, spreading faster than any byte of truth. Nowadays one doesn’t use censorship in the old way to constrict speech; instead, political campaigns douse us with so much disinformation you can’t tell the real from the unreal any more. In 2019 the “marketplace of ideas” looks as corrupt as the “free market” did in 2008, with junk news playing the malign role of junk stocks.

And manipulation has changed in another important way too, one that calls into question the fundamental premises of freedom of expression. In the pro-democracy battles of the 20th century, self-expression was seen as a way to stand up for your rights. The powerful would try to stifle speech to assert control. Now social media allows you to express yourself all you want. But all that self-expression is then handed over to data brokers, and from them to political spin doctors, who use it to find new and non-transparent ways to influence you all the more effectively. As I’ll return to a little later, I don’t think freedom of expression should be jettisoned or censorship imposed, but I do think we need to consider what freedom of expression means in this new game.

Meanwhile the seemingly solid premise that media pluralism leads to better debate has been undermined by the extreme polarization and partisanship that began with cable news and talk radio, and has been mercilessly catalyzed by the fragmentation of social media. Instead of deliberation we are seeing partisanship and polarization to an extent where there is no sense of shared reality anymore to debate over. It is telling that today illiberal politicians, even authoritarian ones, don’t seek total ideological control, but instead play on sharpening polarization, on dividing societies both at home and abroad.

The notion that was designed to heal such fractures—namely, that we could have a common, impartial, “balanced” space where we could have an objective debate about competing ideas—has been undermined by a philosophy that, in the words of Putin’s most famous propagandist Dmitry Kiselev, “objectivity is a myth imposed upon us.” Public service broadcasters such as the BBC have often been criticized for not being objective and impartial enough, but now it is the very notion of objectivity that is under attack, and that has opened up the floodgates for politicians such as Trump, Putin, and Boris Johnson to throw factuality out the window altogether. If there is no objective reality, if all facts are simply interpretations, then why should a politician bother with fealty to the truth? This in turn disarms the great journalistic credo that we could hold power accountable with facts. Putin, Trump, and Johnson simply don’t care if they are caught lying, as they weren’t trying to make factual arguments in the first place.

So what is to be done?

There is a role for regulation—but in their panic to respond to this crisis policymakers are on the verge of committing crass errors that will only make matters worse. The political push in Europe, Asia, and elsewhere has been to police and take down “fake news” and disinformation. Though often well-meaning, this is a censorious logic that rolls back the victories that democracies secured in their battle against dictatorships. Instead we need more information—in order to grapple with the deluge of manipulative online campaigns we need to be able to understand how the information environment is being shaped, who is pushing what content at us and how, what bits of our own data are being used to target us, how algorithms order information. The lack of transparency is itself a form of censorship, as it means that a citizen simply can’t engage with the information forces around them as an equal. We don’t understand how we have come to make decisions as societies—if we can’t see who has been targeting which messages at whom in an election, for example, we don’t understand the reasons behind a certain vote. In the UK the Brexit campaign used so many different, targeted social media messages to win the 2016 referendum no one can now tell what was the main reason for the vote, what on earth the “people’s will” actually was. At least with television and newspapers we could make an informed decision about who was trying to influence us, how and why. Now we are utterly in the dark. And this murk is leading to a breakdown in the trust necessary for us to live together despite all our differences of opinion. We already see the Trump campaign preparing the ground for the 2020 election with claims that democracy is rigged because Google algorithms are designed to be biased against “conservatives.” And the problem is, without the necessary algorithmic transparency, who’s to say that this isn’t so? 

The black box of the tech companies has to be broken open and public oversight enforced. And this, in turn, is the sort of regulation that autocratic regimes loathe—the Putins of this world want to keep the internet dark for their troll farms and algorithmic manipulations to run at will. A regulation founded on transparency is still steeped in the principles of freedom of expression and the right to receive information, but updates them for a new world.

Such wise regulation, however, won’t be a cure-all—it will just even out the playing field so that those of us who want to save deliberative democracy can start to compete with the forces that seek to sow mistrust and extreme polarization. But we should at least have the 20-20 vision to promote a new type of media and communication whose job is to smooth polarization, to build bridges and dialogue. Sadly, media itself has failed in this task—either because it is still stuck in a broadcast model, as with most public service media, and hasn’t learnt to really work the internet, or, as in the case of most other media, because it has opted to play into the polarization. This is understandable, as the ad-tech through which much media is funded rewards polarization and partisanship—that’s what drives likes and shares and makes things go viral.

A new approach to social media would need to be able to ignore such immediate financial demands. It would need to work with another set of metrics: Does a piece of content improve trust, and does it generate a constructive conversation? Indeed, how can one move beyond mere content production into a more hybrid approach to foster sustained online and offline engagement? There are small, interesting experiments in this field, but they need to be replicated at scale. A new approach will need a new iteration of civil society whose dedicated mission this is. It will need to utilize the audience analysis and data mining that the manipulators use, but to do so in a transparent way, and with the opposite aims. We are in a race with the propagandists as to who can understand and engage audiences best—but at the moment we are not even on the tracks.

But what exactly do I mean by “deliberation” and “engagement”—if factuality has been jettisoned, then how are people to build a conversation? Though it’s tempting to blame tech for everything, the cultural malaise that has led to our “post-truth” moment goes much deeper. For those of us who follow Russia, politicians who stopped caring whether they were caught lying were already popping up in the early 1990s. This was a time when all faith had been lost in Russia, as Communism and botched democratic capitalism led to disaster. Factual political discourse is necessary only as long as there is a rational, practical future that you are trying to prove, with evidence, you are building. Now, I’ve argued before that the sense that there is no future has reached the West. What unites Trump, Putin, Johnson, and the rest is not ideology, but that they have no coherent ideas about the future and all peddle warped nostalgias. Fact-checking won’t change this. We need to generate a political discourse that focuses on an achievable future, one where evidence and facts become necessary again. That will mean turning away from the reality-show-style debates we are seeing on TV as America gears up for the next elections, whose logic will only help reality-show politicians. As Ezra Klein pointed out to me, the television debates are designed to reward petty confrontation: if you attack someone in your comment, they are given time to respond, and if they then mention you, you get time to respond as well. Instead we need to force candidates to engage with each other to solve actual policy problems, to lock them into a conversation where evidence becomes necessary, to hold them to account on their promises over time.

All of the above are practical steps to take. Together they constitute the first parts of democratizing information in the new environment. We urgently need to update and reimagine the metaphors and formulas when the old ones have withered. Sadly the opposite is happening. And as we tumble into the next elections one has the sense of being in one of those awful dreams where one knows exactly what the adversary will do, what the consequences will be, but looks on, as if in slow motion, unable to stop the inevitable.


The post The Death of the Neutral Public Sphere appeared first on The American Interest.

Published on September 18, 2019 13:09

September 17, 2019

Here’s the Deal: 12 Rules to Restore Our Social Contract

They tell a story about the late Bob Strauss, a Texas lawyer with a folksy drawl who wielded Texas-sized influence in American national politics for more than 50 years at the end of the 20th century. One day Strauss was standing in a conference room at his Washington law firm, Akin Gump Strauss Hauer & Feld, watching a complicated multi-party transaction inch upstream towards its closing against masses of objections by fractious lawyers. Somebody asked Strauss, “Which party do you represent?” Strauss was ready. “I,” he said, “represent the deal.”

We can think of the American Deal as a set of political ideas that have persisted in this country over the past couple of centuries and, most of the time, have kept our political arrangements from falling apart. (There have been lapses. We had a Civil War, after all; and there were certainly other times when it looked like we were coming close.) What with the perpetual frictions among the country’s contending political opinions and interests, the Deal always needs representing.

Right now, the need is acute. It was triggered by the 2016 presidential election.

For liberals, the significance of the event was bracingly simple. In their view, the inauguration of President Donald Trump constituted an attack on the country. It followed that the only proportional response was what Trump’s critics call resistance and Trump’s supporters label presidential harassment. Among Democrats, this verdict hasn’t much changed. Their debate mainly involves tactics, about which their positions range from far Left to not-quite-so-far Left.

In contrast, American conservatives’ ambivalence about the President runs deep—and in contradictory directions. Christopher DeMuth of the Hudson Institute has provided a succinct, on-the-money summary of the crosscurrents:


Some conservatives were America Firsters to begin with, others have become converts, and others began and remain Never Trumpers who loathe the man and his policies. Some love his judicial appointments but are aghast at his protectionism. Some admire his nerve, media bashing, and political incorrectness but wish these were a bit more modulated. Some regard his nationalism as an overdue reassertion of American sovereignty and foreign-policy realism, while others see a destabilizing retreat from global leadership.

This intellectual turmoil seems to cry out for expression; thus, since the Trump inauguration we’ve seen a virtual tsunami of writing about the meaning of his presidency. Understandably, recent conservative writing about President Trump focuses not on the painful topics of his character and governance style but on particular administration policies and—even more—the American political fault lines into which Trump has driven his rhetorical and electoral wedges.

Along the shore of one of those fault lines lies national conservatism. Its current varieties share a sense that liberalism—or neoliberalism, if the speaker aims to tear away the doctrine’s idealistic veil and expose the rigid free-market scaffolding underneath—glorifies a barren individualism that has impoverished large swaths of America’s population economically, socially, or spiritually. It follows from this diagnosis that some type of collective non-market power—derived from civil society or government—needs to be applied to check neoliberalism’s underlying assumptions and the laws and institutions that embody them.

Beyond those general features, though, there’s no brief or easy way to summarize the varieties of current national conservatism. Perhaps the best-known articulation of the idea comes from political theorist Yoram Hazony, who contrasts nationalism—the theory that nations should be “able to chart their own independent course, cultivating their own traditions and pursuing their own interests without interference”—with “imperialism, which seeks to bring peace and prosperity to the world by uniting mankind, as much as possible, under a single political regime.” What “cannot be done without obfuscation,” Hazony states, “is to avoid choosing between the two positions.”

Other versions, like David Goodhart’s “somewheres” and “anywheres,” cited by DeMuth, strongly echo Robert Merton’s classic distinction between “locals” and “cosmopolitans,” from his landmark Social Theory and Social Structure. Though Goodhart’s analysis isn’t exactly neutral—“the people from Anywhere,” he judges, have “too often failed to distinguish their own sectional interests from the general interest”—even Merton was hard on cosmopolitans. “[T]he cosmopolitan influential,” he opined, “has a following because he knows, the local influential because he understands.”

Other flavors of national conservatism are less polite. “We made a political choice,” J.D. Vance, author of Hillbilly Elegy, put it acerbically at a recent conference, “that freedom to consume pornography was more important than public goods like marriage, freedom, and happiness.”

Then there’s the inevitable pushback. In this magazine, Gabriel Schoenfeld has noted some of the outrages perpetrated in the name of nationalism and confessed that he finds “astonishing” not just “the contention that liberalism promotes ‘vicious hatred’ while nationalism tends to be benign” but the fact that this argument has gained “currency in some quarters of the Right.” Adding more strands to the tangle, the 2020 presidential campaign has given exposure to a liberal populist nationalism—Elizabeth Warren comes to mind—that mirrors the arguments of its conservative counterparts. And, in a sign that the discussion has legs, it has spawned a sub-population of taxonomic review articles. Aaron Sibarium’s “guide for the perplexed” in The American Interest offers these categories: “rhetorical nationalists,” whose “reasons may have changed, but” whose “views . . . [have] not”; “statist conservatives,” who insist on the need “for government to promote the good at some cost to individual freedom”; and “national conservatives,” who focus on an American people united by a common culture—something like the “mystic chords of memory” that President Abraham Lincoln offered in his first inaugural address. (Lincoln’s speech actually waxed mystical only in its concluding sentence, after pages of considerably more lawyerly argument. But you take your mystic chords where you can get them.)

This is a bare sampling of what’s going on. The Trump presidency is so deliciously awful, and so generous in the opportunities it offers to trade in consequential ideas, that it sometimes seems like a political intellectual’s relief act. You truly can’t tell the players without a program.

Still, even without the Trump embroidery, the Deal has always been confusing and contradictory. It is no accident that the quintessential American defense of internal contradiction—“I am large, I contain multitudes”—comes from Walt Whitman, the quintessential American poet. Little more than five years after Whitman first wrote that line, the then-American deal shattered from the weight of the country’s contradictory multitudes. If we now venerate Lincoln, it is in no small part because he, like a sublimely elevated Bob Strauss, tried to represent the deal.

But some of us who are advanced in age are impatient for the country to recover a sense of its center before the debates are over. So, here is a preliminary attempt at a shortcut. William Galston recently offered “Twelve Theses on Nationalism” in these pages. Following Galston, here are twelve theses about the terms of the present American Deal. To a certain extent they contradict one another. The Deal is large. It contains multitudes.

The Deal, Part One

The Deal has two parts, with six rules each. The first part reminds people who want to change the system why they shouldn’t expand their horizons too broadly or hold their fellow Americans in contempt. These first six provisions of the Deal tend to take care of themselves. The chief danger is that they’ll blow up in the faces of those who don’t give them enough respect.

Rule One: The Deal is federalism. Please, for now, shelve the complaints about federalism having degenerated into a mere ghost of its former self, a corpse on life support, done in by the bunch of German professors who spawned the administrative state. For better or worse, we’ve got plenty of federalism left. (I ride the New York City subways. Don’t get me started.)

The Federalists had the advantage of not having to write on a clean slate. The country they surveyed had pre-existing natural advantages like space, resources, and a relative, though by no means complete, absence of conflicts with geographic neighbors.

Still, the arrangements that the Federalists devised, with state powers divided among jurisdictions, branches, and levels of government specifically designed to block, impede, and generally torment one another, lie at the core of the Deal. This is a federalist republic, not a majoritarian one. In particular, it gives, as it was designed to give, an advantage to the type of diversity that arises from geography. Voters in New York and California will perennially get a raw deal. Government paralysis is in general a feature, not a bug.

In other words, though you can tinker at the edges, you can’t alter the system more profoundly without creating an essentially different arrangement with radically different and substantially unknown dangers and inconveniences. It would be a whole other deal.

We might as well live with what we’ve got.

Rule Two: The Deal is Tocqueville’s America. After a nine-month tour of the new American democracy in 1831, ostensibly undertaken to study the country’s prison system, Alexis de Tocqueville produced Democracy in America in 1835. It explained that a country in which substantial equality is a birthright is profoundly different from a country that has achieved equality only by violently destroying the regime that came before it.

Tocqueville found many American characteristics that follow from this distinction. Among them, Tocqueville noted the tendency of our egalitarian individualism to provide both powerful incentives for cooperation, on the one hand, and, on the other, pressures towards conformity and threats to liberty. Both these characteristics, though they often pull in antagonistic directions, are foundational parts of the Deal. (It is large, it contains multitudes.)

Tocqueville also thought that lawyers were the closest America came to an aristocratic class. That is not part of the Deal. Letter to follow.

Rule Three: The Deal is that most Americans are, when push comes to shove, locals. Yes, yes, you think we’re getting homogenized. We’ve been getting homogenized at least since the building of the first north-south railroads. (Neil Harris of the University of Chicago noted that the railroads may have hastened the coming of the Civil War by bringing Northerners and Southerners face-to-face with the fact that they didn’t like one another very much. In much the same way, the modern equivalents have probably heightened the animus that MAGA supporters feel towards the country’s elites). Still, most Americans’ primary attachments are to their families, friends, occupations, affinity groups, and local communities. Look in the obituary section of any U.S. city newspaper if you have any doubts.

This is the pattern of attachments that has kept the population of a vast country, even in the age of social media, from becoming an undifferentiated mass ripe for tyranny.

True, some citizens have different sets of attachments—to universal principles, for instance, or affinity groups that span the globe. The same people often have more developed skills and resources than the locals, as well as superior arguments to justify their positions. These “anywheres,” as Goodhart calls them, can override local preferences—until they find out they can’t. That’s the Deal.

Rule Four: The Deal is that most Americans are religious, more or less. Today, lots of people are more inclined to call it “spiritual”; certainly large numbers of citizens have drifted away from organized religious denominations. As a result, we’re surprised when we get seemingly anomalous news, like the story of female religious orders that are growing once more because millennials are interested in becoming nuns.

Moreover, numbers aren’t the sole measure of religion’s influence; there’s nothing like religion to remind us of the salience of intensity. Sometimes the story is that religious influence has prompted a state legislature to ban abortion after eight weeks of pregnancy; sometimes the news is about Muslim, Jewish, and Christian clergy joining together to guard a sanctuary after a hate crime.

Almost nothing is embedded more deeply than religion in the American fabric. Other elements of the Bill of Rights may have equal respect, and at least one item—the Second Amendment—periodically explodes in importance, as it’s exploding now. But none of them matches religion, unruly and unpredictable, as an ineradicable part of the Deal.

Rule Five: The Deal is that Americans generally don’t express a desire to take other people’s property outright. The country has shown that it’s fully capable of regulating private property stringently—almost, some would say, to extinction. But the word “socialist” has long been anathema because, surprisingly, most people understand the term in its proper sense, meaning collective ownership of the means of production.

We will see whether the balance shifts, as figures like Bernie Sanders seek to reclaim “socialism” for a new era. But, as of now, this particular sort of wholesale appropriation is not a part of the Deal.

Rule Six: The Deal doesn’t generally include a hatred of the rich. As Tocqueville would have predicted, Americans have shown a distinct reluctance to storm the castles, or the equivalent Malibu beach houses, with scythes and pitchforks. There are perennial predictions, often in the context of political campaigns, that this reluctance is nearing its end. So far, the predictions haven’t come to pass.

The Deal, Part Two

Then, there is the second part of the Deal, the one reminding us that we’re Americans, not Hungarians or Poles. None of the elements of the second part of the Deal has unqualified or even natural support. Every one of them periodically disappears under one populist wave or another. To date, these elements have managed to re-emerge—but there are no guarantees.

Rule Seven: The Deal is liberalism. The old political saw—that John Locke is king of America—is right. The country has no genuinely ancient traditions or an honest-to-God feudal past, let alone communities of people as tied as medieval serfs to their plots of land.

This fact has placed distinct limits on the capacity to mount genuinely reactionary movements in America. The closest we came was the Confederacy’s defense of its 250-year tradition of slavery. This was the original campaign to make America great again, an effort that took four almost inconceivably bloody years to extirpate.

In contrast, when Abraham Lincoln invoked the “mystic chords of memory,” he did so in support of the liberal tradition in America. That’s our kind of mystic.

Rule Eight: The Deal is republican restraint on the display of wealth. The degree of restraint varies from place to place and year to year; but in comparison with counterparts in the rest of the world, the very rich in America tend to distinguish themselves through understatement, real or faux. Some of us remember Jacqueline Kennedy saying, when a reporter asked her about rumors that she spent $30,000 a year on clothes, “I couldn’t spend that much even if I wore sable underwear.” True or not, that remark is an iconic sign of respect to the mystic chords of American memory.

Like other elements of Part Two, this one has a perennially uncertain fate. American television used to romanticize, and to a substantial extent still does, what was once called the common man. The classic example of the genre was Roseanne, until tweets by Roseanne herself revealed that she was perhaps too much of a common man. Her show was replaced by The Conners, featuring a similarly lumpen theme. The new show has done well enough to be renewed for a second season.

True, times have changed. In this age of streaming, TV shows about ordinary folk vie for popularity with exotica like Game of Thrones and The Walking Dead, or high-minded fare like Succession. Indeed, there are moments, like one’s first look at the President’s Trump Tower apartment or the sight of a Kardashian, or even actually seeing Trump and a Kardashian together in the same room, when it’s hard to believe that there’s even a shred of republican restraint left. Still, provisionally, the norm persists.

Rule Nine: The Deal is a set of limits on inequality. This one has been severely tested in recent years. There are substantial arguments for the proposition that income and wealth inequality, as opposed to economic growth, should not be the touchstones of economic policy. But the rhetorical power of these arguments is hard to sustain once the numbers documenting the degree of inequality get big enough. Yes, it’s a matter of perception and partisanship; but these days the numbers do seem to be getting big enough. If that’s the case, the Deal counsels that it’s prudent to act. It’s certainly done so in the recent past, leaving Social Security, Medicare, and Medicaid in its wake.

But—see Rule Five, above—the Deal has put limits on redistributionist policies, limits that would and do seem outlandish to denizens of European welfare states. This is one reason why we’ve twisted ourselves into a national pretzel trying to deal with health insurance. That’s the legacy of the Deal.

Rule Ten: The Deal is immigration. The country’s direction on this issue hasn’t been consistent; but it was set at the beginning of the republic, at a time when most Americans still thought of themselves as aggrieved citizens of Britain. In 1776 Thomas Paine, in Common Sense, argued that in fact America was not British but something new: the “asylum for the persecuted lovers of civil and religious liberty from every part of Europe.” Congress’s first law on the subject, the Naturalization Act of 1790, made U.S. citizenship available to any “free white person of good character” who had lived in the United States for two years.

Those terms don’t look especially liberal from the vantage point of the 21st century, but they were a down payment on Paine’s determination that we should be a country of immigrants.

Of course, that wasn’t exactly a definitive verdict. We’ve had periodic immigration crises since at least 1819, when Congress passed the Steerage Act to try to bring some order to the flood of immigrants that overwhelmed the major ports of entry after the War of 1812. As Peter Schuck points out in The New York Times, we are well overdue for a revision to the Deal, one that is concrete and reasoned enough to reduce the oppressive salience of the issue in U.S. politics. It’s going to be a heavy lift, but that’s what the Deal requires.

Rule Eleven: The Deal is world leadership. This is a recent accretion to the Deal. Historians may detect a precursor of the idea as far back as John Winthrop’s “City Upon a Hill” speech, but the prospect became concrete only with the disproportionate hard power that the United States accumulated and has largely maintained since the Second World War. Shorn of arguments about America’s moral superiority, the justification for elevating “world leadership” to an element of the Deal is that we’re watching in real time, as they say, what starts to happen in the world when the country abandons its decision to lead.

The dimensions of the leadership that’s required are open to debate, but not the need for the leadership. It has become part of the Deal.

Rule Twelve: The Deal is tragedy. There is no avoiding it: The “slavery in this country” that John Adams saw “hanging over it like a black cloud for half a century” has rained on us for 300 years. Nor is it the only such tragedy. The list includes—without limitation, as the lawyers say—slavery’s aftermath; the U.S. government’s slaughter and grinding down of American Indians, of which Tocqueville gave one of the most indelible accounts; internment, exclusions, and murders. This is not a litany of national sins or a serial expansion of the social justice agenda that should weigh on us. It is a reminder that, just as Part One of the Deal will exact a price from those who don’t respect it, Part Two of the Deal is capable of exacting its price as well. You can pretend that such issues don’t deserve inclusion on the list of public imperatives, but the Deal will ensure that sooner or later you’ll have to come to grips with them.

The Deal, in other words, has given us work to do. But let no one say on that account that it doesn’t exist or isn’t worth defending.


The post Here’s the Deal: 12 Rules to Restore Our Social Contract appeared first on The American Interest.

Published on September 17, 2019 13:08

The Bard and His Democracy

I was confronted rather forcefully one afternoon by a loud message on a boarded-up storefront: “STRANGER!” it called, “if you, passing, meet me, and desire to speak to me, why should you not speak to me? And why should I not speak to you?” The sign covered an otherwise unremarkable window in Washington, DC, but the words were well placed: Walt Whitman’s poetry belongs in a city.

Some weeks before, I encountered Whitman in a different setting: an exhibit honoring the bicentennial of his birth at New York’s Morgan Library. It’s an appropriate spot—Whitman loved his “Mannahatta,” as he titled one of his poems. And it’s a good time to celebrate Whitman, whose untrammeled delight in his country might tell us something about national identity.

The exhibit’s full name is “Walt Whitman: Bard of Democracy,” based on an 1859 note in which Whitman proclaimed himself as such. It’s a relatively obscure line scribbled in a personal journal, and one of many in which Whitman boldly addressed readers while celebrating himself (“Comrades! I am the bard of democracy”). But it fits: Whitman had long thought of himself this way, going so far as to anonymously publish favorable reviews of his own first volume of poetry, at least one of which declared the arrival of “an American Poet at last!”

That review, which is prominently displayed at the Morgan, is something of a cross between a poem and an advertisement, and toasts “that style which must characterize ‘the nation of teeming nations.’” In it, Whitman described himself as virtually every American type: “a northerner—a planter . . . . a Yankee bound his own way . . . . a boatman over Ontario, Erie, Michigan, or Champlain—a Hoozier, or Badger, or Buckeye, a Louisianian or Georgian—a poke-easy from sandhills and pines.” The poet who is the sum of all these types is “a learner with the simplest, a teacher of the thoughtfulest . . . . a farmer, mechanic or artist—a gentleman, sailor, lover, quaker, prisoner, fancyman, rowdy, lawyer, physician or priest.” In short, “through the poet’s soul runs the perpetual spirit of union and equality.”

While it’s easy to smile at Whitman’s anonymous praise of his own work, he seemed serious about creating a distinctly American literature. In his preface to the original 1855 edition of Leaves of Grass (his first book, which he drastically revised multiple times throughout his life), Whitman wrote that “The English language befriends the grand American expression—it is brawny enough, and limber and full enough . . . . It is the chosen tongue to express growth, faith, self-esteem, freedom, justice, equality, friendliness, amplitude, prudence, decision, and courage.”

If writing reviews of one’s own work is unorthodox, the history of Leaves of Grass is similarly eccentric. It was self-published anonymously by Whitman around the 4th of July, 1855, and originally included 12 poems—most of which were wildly long and half of which were unnamed. The rest all shared the same title: “Leaves of Grass.” The slim, strange volume contained errors and inconsistencies throughout, and was later transformed by Whitman into almost entirely different books in his numerous “revised editions.” Nevertheless, Whitman proudly sent a copy from the first printing to Ralph Waldo Emerson, who responded by calling it “the most extraordinary piece of wit and wisdom that America has yet contributed.”

In some ways, Whitman engaged Emerson directly in Leaves of Grass. The exhibit features an 1841 edition of Emerson’s Essays lying open to his piece on “Self-Reliance,” in which Emerson posed a question: “Suppose you should contradict yourself; what then?” In Leaves of Grass, Whitman famously asked and answered a similar question: “Do I contradict myself? Very well then, I contradict myself. (I am large, I contain multitudes.)”

In another essay, “The Poet,” Emerson asked his readers (as the museum label puts it) “why so many ordinary aspects of American life still remained ‘unsung.’” Whitman later wrote to a friend, seemingly in response, “I was simmering, simmering, simmering. Emerson brought me to a boil.” Among the famous poems from early editions of Leaves of Grass is one entitled “I Hear America Singing.”

Whitman responded to Emerson’s letter with an open letter of his own, proclaiming that “Swiftly, on limitless foundations, the United States too are founding a literature.” It’s unfortunate that this response was not mentioned in the exhibit, as it provides insight into Whitman’s true aim. In it, he challenged “America, grandest of lands,” to cultivate artists worthy of her:


You are young, have the perfectest of dialects, a free press, a free government, the world forwarding its best to be with you. As justice has been strictly done to you, from this hour do strict justice to yourself. Strangle the singers who will not sing you loud and strong. Open the doors of The West. Call for great new masters to comprehend new arts, new perfections, new wants. Submit to the most robust bard till he remedy your barrenness. Then you will not need to adopt the heirs of others; you will have true heirs, begotten of yourself, blooded with your own blood.

Almost at the start of the exhibit, some of these heirs are on display. A 1931 edition of Langston Hughes’s The Weary Blues lies open to his poem “I, Too,” which recalls Whitman in its first line (“I, too, sing America”) and stakes a claim for African-Americans’ place in America’s story. The museum label notes that Hughes had jettisoned all the books he’d brought on a trip to Africa save for one: Leaves of Grass.

The Spanish poet Federico Garcia Lorca was also captivated by Whitman; during a 1929 visit to New York, he grew to love the “furious rhythm” of the city and Whitman’s poetry. The Morgan features a first edition of Lorca’s 1931 The Poet in New York, which includes his poem “Ode to Walt Whitman.” Lorca seemed to be making the point that, while Whitman wished to contain all of America in his poetry, he could not help but belong to the island he’d grown up watching from across a river. He proclaimed himself to be “of every hue and caste . . . . every rank and religion,” yet he was unable to entirely escape particular ties.

Whitman was born in Suffolk County in 1819, moved to Brooklyn at the age of four, and finally made his way to Manhattan as a teenager. He later claimed that the poems in Leaves of Grass “arose out of my life in Brooklyn and New York from 1838 to 1853, absorbing a million people . . . . with an intimacy, an eagerness, and an abandon, probably never equaled.” In a letter from 1868, he reflected that he might be “the particular man who enjoys the shows of all these things in New York more than any other mortal—as if it was all got up just for me to observe and study.”

His love for New York seemed bound up with his love for epic poetry: in his 1882 memoir Specimen Days, Whitman described his daily visits to Coney Island, where he would “race up and down the hard sand, and declaim Homer or Shakespeare to the surf or sea-gulls.” (The Morgan displays Whitman’s personal copies of Homer’s Odyssey and James Macpherson’s Fingal, an epic poem based on ancient Gaelic folklore.)

Whitman shared this love of classic verse with Abraham Lincoln, whom he came to celebrate in much of his writing. Though the two never met, Whitman penned an essay several years before Lincoln’s presidency in which he called for a bearded “Redeemer President of These States.” The Morgan suggests Whitman might have had a hunch about Lincoln’s career path, which may or may not be true. What is certain is that Whitman wished to replace the “debauched old disunionist politicians” with a figure appropriate for “a proud, young, fresh, heroic nation.” In the essay, a handwritten version of which is on display, Whitman also linked his fierce support for abolition with his yearning for great American art, asking whether the nation’s slave-owners “expect to bar off forever all preachers, poets, philosophers—all that makes the brain of These States, free literature, free thought, the good old cause of liberty.”

Whitman’s experience serving wounded soldiers during the Civil War seemed to deepen his sense that Americans needed to be unified through a peculiar kind of art, which he believed could stem from a peculiar kind of leadership. In Lincoln, Whitman found his ideal American hero—an autodidact like himself, who connected with Whitman’s beloved “common people.” He remarked at one point that “after my dear, dear mother, I guess Lincoln gets almost nearer me than anybody else.”

He was devastated by Lincoln’s assassination; as the Morgan shows, Whitman scribbled “Lincoln’s death—black, black,” in a personal notebook shortly after hearing the news. He found a bright spot of sorts, though: In a lecture following Lincoln’s death, Whitman exclaimed, “Why, if the old Greeks had had this man, what trilogies of plays—what epics—would have been made out of him!” In his view, Lincoln was greater than Washington in part because his death ranked higher in “the imaginative and artistic senses—the literary and dramatic ones.” In these elements that were indispensable to the formation of national identity, Lincoln surpassed “Cæsar in the Roman senate-house, or Napoleon passing away in the wild night-storm at St. Helena . . . . Paleologus, falling, desperately fighting, piled over dozens deep with Grecian corpses . . . . calm old Socrates, drinking the hemlock.”

The Morgan doesn’t shy away from suggesting that Whitman looked at Lincoln’s death as a “meal ticket,” and he did publish and speak a great deal about Lincoln throughout his life, including through an annual address. But his personal writings indicate he genuinely grieved over the loss. And while the museum devotes perhaps too much time to assessing Whitman’s admirers and speculating about his personal life and homosexuality—Allen Ginsberg and Oscar Wilde feature prominently in addition to Hughes and Lorca—it falls short in its assessment of why Whitman admired Lincoln and Emerson so deeply. All three men saw something transcendent in America that the Morgan doesn’t address. The exhibit is worth attending for Whitman’s writings alone, but it does not adequately explore why the “bard of democracy” saw himself that way.

Of course, the answer to that question is elusive. In The Atlantic, Mark Edmundson recently argued that Whitman was attempting to convey “what being a democratic man or woman felt like at its best, day to day, moment to moment.” In celebrating himself, Whitman sought to remind the nation that it was worthy of being celebrated, for reasons that, as Whitman himself put it, were “indirect.” American greatness could be found not only in the country’s founding documents, but in the daily lived experiences of its vast variety of “brothers and sisters”; or as Edmundson writes, Whitman’s “famous catalogs of people.”

In National Review, Sarah Ruden took the opposite tack. Insisting that she has “done [her] dutiful reading of Whitman” and “can remember practically nothing,” Ruden denounces Whitman as a racist and proto-fascist. She unfairly minimizes his membership in the Free Soil Party (calling it “narrow and self-interested”), neglects to mention his anti-slavery writings, and dismisses both his wartime service to the Union and his affection for Lincoln. She calls Whitman less “the father of the American spirit than the father of empty American celebrity: the Kardashians, mommy bloggers . . . . the whole mutually trampling stampede of ineffable individual specialness.”

Whitman might not have entirely balked at this last criticism, though it still misses the mark. He did seek to elevate what the elites of his time viewed as common, and to give a young, fractured nation a sense of self-respect, however removed it might have been from the imprimatur of European tastes. In Leaves of Grass, he called the United States “the greatest poem,” and described the style of the country’s poets as “transcendent and new. It is to be indirect, and not direct or descriptive or epic. . . . Here the theme is creative, and has vista.”

He also sought to give America a hero. In his annual speech commemorating Lincoln’s death, Whitman insisted that the most important outcome of the president’s “heroic-eminent life” would come “centuries hence,” in “its indirect filtering into the nation and the race.” He believed it would give “a cement to the whole people, subtle, more underlying, than anything in written constitution, or courts or armies—namely, the cement of a death identified thoroughly with that people….a Nationality.”

Whitman’s hopes for such a “nationality” may have been misplaced, and he may have failed in his overall project. Americans might too easily shrug off heroism or retreat to a protective cynicism, to paraphrase Winston Churchill. They may not feel the attachment to Lincoln—the entirely ordinary and entirely great man of Whitman’s writings—that Whitman had hoped for. But there is some reason for optimism. Kim Kardashian has an outsized influence on our culture, but she is not the only influence. Lincoln’s status as a self-made man and noble martyr still looms large in our historiography, and distinctly American forms of pop culture—whether the Westerns of yesteryear or the lavish superhero movies of today—offer something Whitman may have valued, in their stories of local and national pride and martyr-worthy clashes.

And while Whitman’s writings could be vulgar and his vision was admittedly “indirect,” there seems to be no reason to believe, as the revisionists would have it, that he was insincere in what he wanted to provide the American people. Whitman sought to convey a sense of the grandeur of their nation, the majesty of everyday life, and a vision of Americans as different from, but with the capacity to achieve equal footing with, the masters of the past. At its best, Whitman’s own work achieved that vision. His attempt to “sing America”—to draw from what the country was rather than preach about what she ought to be—is still worth celebrating two centuries later.


The post The Bard and His Democracy appeared first on The American Interest.

Published on September 17, 2019 09:41

September 16, 2019

Bartleby, the Safe-Spacer

The year 2019 is witnessing a double feting of Herman Melville. The author was born in New York City on August 1, 1819, and we mostly know of him now because he was plucked from the ranks of obscurity in 1919, when the literary commodity that was Herman Melville was rediscovered in all of its posthumous glory. Melville’s centennial began a journey that the author himself didn’t take in his own lifetime.

Melville knew a first flush of success with the publication of his first two works, Typee: A Peep at Polynesian Life (1846) and Omoo: A Narrative of Adventures in the South Seas (1847). The books arose from Melville not really knowing what to do with his life, someone suggesting that he take to the sea, and young Herman doing just that, then fictionalizing a lot of what he saw. These were travelogue novels, lanky on plot, with deep-focus observations spruced up with hyperbole, such that Melville became in hot demand as “the man who lived with cannibals.” They sold well, but Melville—as with all his writing admired by either the public or the critics—felt sickened, a perpetrator of a sham, an artist who had sold out. (His friend Nathaniel Hawthorne would watch this struggle within Melville over the years, certain that the younger author was incapable of achieving a state of contentment in his life or work.)

The most underrated Melville novel of all, Redburn: His First Voyage, followed in 1849, the tale of a merchant seaman written in a litany of styles for a litany of audiences, kind of like a pelagic version of The Pickwick Papers. For those who think Melville writes in an antiquated style, here is proof otherwise, with prose that could have rolled out of bed yesterday morning. Alas, the public had walked away from Melville by this time, Herman himself thought he was being a whore, and so he turned to the composition of Moby-Dick, informing Hawthorne after its completion that he had written a very evil book. Moby-Dick baffled some critics, horrified others, and caused more than a few to play armchair proto-therapist, opining on Melville’s mental health. They’d have ample opportunity to indulge that speculation further upon the publication of Pierre: or, The Ambiguities in 1852.

Pierre was widely panned then and is not widely read today, but Melville considered it his signature work, and, remarkably, the one he assured his publisher would fill both their bank accounts and elevate his name. You could make the case that this is the first true airing of American Modernism, a stateside Ulysses long before Mr. Joyce had gotten to work on his Dublin fixation. The subtitle almost functions as a joke; “ambiguities” is one big white whale of an understatement for a book that plumbs the most metaphysical of souls. Critics pounced. One sententiously expressed hope that Melville’s friends were looking after him, and, as an act of compassion, keeping him far from ink and pen, as if he were an addict seeking to locate the dope needle. Sad!

Well, sad in a sense. Triumphant in our own times, Melville was pissed off in his own—there’s really no other way to put it—and decided to write a short story in response to the reaction from Pierre. It is, in my view, both the best work he ever composed, and one I would personally inject into the bloodstream of every living American at the present moment. Alas, most are too lazy and cocooned to actually read it—which is part of the problem Melville was taking on.

That work was called “Bartleby, the Scrivener: A Story of Wall Street,” and was published at the very end of 1853. It’s known today mostly because so many people can quote its famous refrain of “I would prefer not to,” uttered, time and again, by the title character. I know of no other short story, of this age or any other, that is so centrally relevant to the breakdown of our 21st-century minds, our culture, our reason, and our ability to cogitate—with our simultaneous insistence on being coddled, shielded, and lied to, just so we do not have to live life and experience depth of feelings—as “Bartleby.”

The plot is simple. The narrator is a Manhattan lawyer deep into his career, on the backside of the slope. Two scriveners—people who copy out documents—are in his employ, named Nippers and Turkey. These are obviously nicknames—just as there is an office boy, named Ginger Nut, because of his penchant for ginger snap cookies—and they are just as obviously the result of their owners possessing marked degrees of energy. They’re animated, prone to hijinks, but mostly good workers who would hardly suffer from having a more measured personality join them as a role model. The narrator interviews Bartleby for the gig. He’s professional, even phlegmatic—a good guy to have in a crisis, the narrator thinks. He hires him.

It’s important to note that Bartleby begins his career as a paragon of earnest industriousness. He is active. He zigs, zags, zooms. He’s akin to the kid who finishes the assignment first, then asks for extra credit opportunities.

This version of Bartleby, before his metamorphosis—for matters are about to get quite Kafkaesque, pre-Kafka—is the metaphorical child. As children, we are all wide-eyed seekers; we wish to uncover truths, not turn our backs on them. We will deal with how they make us feel after. As we shed innocence, we seek to inure ourselves, and this we do now more than ever in the age of the safe space, the performance trophy, and helicopter parenting. Bartleby “grows up,” if you will, which means he no longer leaps into the bed of flowers, but skirts around the edges, assuming that snakes and scorpions must be writhing within, waiting for him—and that would be a reality with which he could not cope.

He does fewer tasks, and spends his day staring out the window—not at a park, or a bustling thoroughfare, but a brick wall. The man is binging a brick wall. It’s his Netflix. The quality of programming, we might say, doesn’t matter. He stares at the wall the way we stare at screens, to help us plaster up our eyes, detach us from the world around us, while telling ourselves that we are doing something with some vague purpose. For Bartleby, it’s about the act of going through the motions of an act; in his case, a desire for the outside world that is not a real desire at all. His boss, our narrator, asks him to do his tasks, and then we are hit with, “I would prefer not to.”

Let’s look at the construction of the line. It’s eminently passive. It’s not a flat refusal, it’s not rude, it puts its emphasis on feelings. It’s safe space talk. Further passiveness comes in the form of that “would.” He could say, “I prefer not to,” but the “would” creates another buffer, separating subject from stated intention by stating the intention more indirectly, but no less clearly. This is the very definition of passive aggressiveness, at least in its verbal form. In its silent form, we might now call it “ghosting.” And make no mistake, ghosting is what Bartleby is doing: ghosting his employer while physically present at work, and ghosting himself out of his own life. This is not living. It’s death-in-life, mere existence, protected by a safe space that’s inviolable because the narrator won’t puncture it. He’s not sure how to proceed. He’s at once offended, creeped out, angry, concerned—and curious. He’s never seen anything like this, he’s not sure if this is the new way of the world, and that terrifies him more than anything.

Just as it’s important for Melville to show us Bartleby’s initial childlike qualities, it’s crucial that he reveal the narrator to us as a sympathetic man. He’s trying to get this. It’s against his nature, because he lives. He strives. “Nothing so aggravates an earnest person as a passive resistance,” our narrator says. True. What is also true is that the person who practices passive resistance understands this, which allows them to deploy it as weaponry. When they have removed the possibility of censure from the equation—as Bartleby does, and as people do here in the age of the PC-infested safe space—they can mount forms of attack at will. There is no system of checks and balances. It’s a free-for-all of passivity, and aggressive passivity at that.

Melville was also a deeply funny writer, who crackled with pointed, purposeful wit. The narrator quotes one request to Bartleby to do his job being greeted with a quailing, “At present I would prefer not to be a little reasonable,” as if he’s being generous with his time and efforts, flexible, charitable. He is, in short, virtue signaling. He makes himself the victim, and now he is inverting the boss-employee dynamic so that he is the new boss, who is, contra The Who, much different from the old boss.

The narrator terms this reply “mildly cadaverous,” which is funny, and also true; Bartleby is not alive. He exists, but he’s not an absolute, as a human ought to be. He’s the human version of a qualifying remark. The narrator’s problem is that if this is a new form of existence in the world, how can he fit in? He is, remember, older. This is the younger sect, he worries, coming up behind him. Bartleby, meanwhile, is effacing himself right out of his own life. The office is becoming his charnel house—not because he’s such a dedicated worker, but rather because he is not present, quite literally, anywhere else in his world. He starts living at the office.

There are stories of Kafka writing The Metamorphosis and howling with laughter, with his friends thinking he’d lost the plot. It was riotously funny to him, in that way where it feels that the universe has piled up so much against you that you just have to chuckle. When he wrote this part of “Bartleby,” I can picture Melville doubled-over, perhaps with Hawthorne shaking his head and reaching for some calming brandy.

Bartleby starts sleeping at work. Still, the narrator cannot bring himself to fire his scrivener who does no copying. He hopes Bartleby will be roused in time, but together they sail past a tipping point, and he knows that this is how it will remain. We talk now about “living your best life,” a silly notion that usually means, “do as you please, because the world will make allowances for you.” In Melville’s time, there was something called prudence. In fact, this was the era of “new prudence,” which was an over-extension of humane treatments to the point that they became patronizing, and people conflated being patronized with respect. In short, if someone treated you like you were helpless, that meant they cared, not that you were helpless. The same logic applies today.

A little-discussed problem in the story is that the narrator is allowing himself to become helpless as well, by dint of Bartleby’s company and by over-extending himself in trying to help this person who refuses to live actively. Bartleby is clearly depressed, as many of us are in these times. He’s broken, not committed to repair. And he’s encouraged to stay that way, in part, because no one really knows what the hell to do with him.

He gets literally left behind, after having metaphorically left his own life behind. The narrator, unable to evict Bartleby from the office, where he sits on the stairs all day—symbolically going nowhere, reveling in immobility—decides to move his business elsewhere, leaving the new tenants to inherit his former employee. He’s already enabled Bartleby to the tune of asking him to be his roommate, to play Ernie to his Bert, which Bartleby rebuffs. Returning to the old office space after having started work somewhere else on Wall Street, he discovers that Bartleby is still there, and remains so until the new owners have him imprisoned in the Tombs. Again, we are right on the nose, symbolically. The self-prison is a tomb, prison is the Tombs.

Once more, the narrator visits Bartleby, who refuses to take food in his cell. He bribes a guard to try to make sure he eats, but Bartleby—you guessed it—prefers not to, and he dies, formally executing an order that his passive, non-participatory life—if it was indeed a life—had already seen to, informally; which, really, counts for more than enough with these things. The narrator’s concluding words are not as famous as Bartleby’s refrain of deference, but goodness how they tell: “Ah Bartleby! Ah humanity!”

For the entirety of this story, we have ostensibly focused on one person. But now we realize, with the cry of that last word—humanity—that this was not about the individual, but about a directional flow of the human world, which was something Melville feared in his own time—and did not overrun the world until ours. An awful lot of people are Bartleby right now, decamped on stairwells, going neither up nor down, carping about how they’re being judged unfairly, their vital selves compromised. We are an incarnate mass of “I would prefer not to”s, though we pile on excuses, a level of artifice that Bartleby himself cut from the equation. He was at least honest with himself, honest with the narrator, honest with us, the readers. Perhaps that is a lesson we can take from his story, if we are to reverse the prevailing flow of our lives, so much like his, and head back up the stairs once more with alacrity and purpose in our stride.


The post Bartleby, the Safe-Spacer appeared first on The American Interest.

Published on September 16, 2019 08:36
