Peter L. Berger's Blog, page 71

October 9, 2018

Under the Information Rubble

We are surrounded by a maelstrom of signs, images, information. One makes sense of them all by connecting them to larger stories in our heads, so that they fit into a hierarchy, an idea of history. But what happens when those larger stories through which we make sense of the signs collapse? Are we just left in a chaos of meaningless messages? What sort of politics flourishes in this world?

Take for example the image of statues of dictators being pulled down. This was one of the iconic images of the collapse of the USSR, all those Stalins hoisted in mid-air before being dumped in the rubbish tip of history. When that same image was repeated during the invasion of Iraq, with crowds cheering as statues of Saddam Hussein were pulled down, it seemed to signify that a similar historical process was playing out. When Iraq became a disaster, the meaning of the image was undermined too.

Or take entry into NATO. That used to symbolize joining the American postwar order. But NATO’s newest member, Montenegro, has found itself pilloried by the U.S. President for the dangers its supposed hot-headedness could potentially pose to U.S. troops, who would automatically be called on to defend the provocations of an unreliable and demanding ally. This is even confusing for America’s traditional enemies, let alone its allies: After decades of defining themselves through opposition to the American world order, what does one attack when America stands opposed to itself?

Many reasons for the collapse of belief in ideals of progress and history have been put forward—growing inequality, the financial crisis, the Iraq war. You can find any number of these analyses in any one of the many smart books about the demise of liberal democracy. The remarkable thing, however, is that this loss of meaning and direction is also visible in countries that have avoided or sidestepped the problems listed above.

Donald Trump is both the cause and the product of this phenomenon. That Americans could elect someone with so little regard for making sense, whose many contradictory messages never add up to any very stable meaning, was possible because enough voters felt they weren’t invested in any larger narrative any more. Indeed his very incoherence could have been part of the pleasure: your “historical meaning” has let us down, let’s have nonsense instead!

But Trump is an extreme version of the conversation around him. When Russia’s covert digital influence campaign in the United States is equated with a new Pearl Harbor, or in turn when criticism of Trump is labelled “McCarthyism,” these historical references have become completely shorn of context. One imagines the wreckage from a plane crash in a desert, with commentators wandering around beating jet engines with spanners to make a spectacular sound—a sound that nevertheless has almost no relation to the thing they are beating.

New media exacerbates the process. For better or worse, the connections between signs and stories were sealed in the vessels of old media. The nature of social media, however, destroys such stable relationships. Now terrorism sits next to kittens, sexual abuse next to fart jokes; facts become indivisible from lies, and anything can be associated with anything. Social media flattens past and present, so things appear shorn of the perspective you need to have a sense of development. Memes, social media's favorite genre, in which internet users deface images or add new text to transform their meaning, bring out how endlessly unstable signs have become, ceaselessly transformable to marry up with a new meaning in moments.

That our weird era has become associated with sock-puppet social media campaigns is also apposite. When one hears so many stories of fake accounts that seemed to be supporting freedoms and civil rights, but which in fact turned out to be fronts for illiberal governments like Russia or, more recently, Iran, one starts doing a double-take at everything one encounters online. Is that civil rights poster over there actually being run out of St. Petersburg? Does it mean what it says?

The breaking up of the old links to grand narratives makes possible the formation of hitherto impossible coalitions. Being a Republican used to mean being anti-Kremlin, for example. Now it could quite as easily mean the opposite. Perhaps this is the way politics will now play out: Instead of a competition between big, coherent ideas about historical progress, we will see sporadic fusions from the debris.

This needn’t be all bad.

One of the less helpful associations bequeathed to us by the Cold War, for example, is how some leftist movements continue to support the Kremlin, in some hangover from the days when being pro-Moscow meant being opposed to fascist Germany or the nastier sides of U.S. foreign policy. Today, there is a bubbling of new ideas on the Left, about everything from public ownership to decentralization, which deserve a hearing. What makes less sense is why new leftist movements would support a Kremlin where LGBT rights are quashed, where you can be jailed for liking an article on Facebook criticizing Russia's imperial military adventures, and where institutionalized kleptocracy is laundered through offshore tax havens in the Caribbean.

Indeed, digital rights and clamping down on offshore money laundering are just the sort of issues which bring together their own unexpected coalitions. The battle against money laundering tax havens, for example, involves Cold Warriors looking for the Kremlin’s vulnerabilities; social equality and “tax justice” activists; and some more idealistic pro-market thinkers horrified at the profusion of crony capitalism across the West.

But as the old connections fall away, and new ones are forged, it’s also hard not to worry.

The linkages between signs and narratives also held important taboos in place. The Holocaust held a privileged place in this system, the little knot which tied together certain images and language with a notion of the unacceptable. Anything that somehow reminded one of concentration camps, or of the sort of dehumanizing propaganda which enabled them, was deemed beyond the pale.

This appears to be shifting.

In Hungary, Viktor Orban’s government peddles propaganda images that resonate with 1930s Nazi motifs: “Don’t Let Soros Have the Last Laugh” said one recent poster, with the face of the Jewish financier and liberal NGO backer, who the government claims is working to destroy Hungary, plastered all over the country.

Israeli MPs asked their Prime Minister, Benjamin Netanyahu, to cancel a visit to Hungary in protest. He didn't: Orban and Netanyahu see each other as allies in another new configuration. Official Israeli government statements were left playing Twister: They disapproved of the Hungarian propaganda's style, but agreed that Soros was a bad element.

Underneath one can hear the tender connections between the images that evoke the Holocaust and their significance starting to strain. Netanyahu would no doubt argue Israel is defending itself from the possibility of another Holocaust. But what I’m talking about here is not something specific to Israel (or Hungary). The agreement that certain images and signs which evoked the Holocaust were taboo tried to define a universal notion of evil. If that symbolic order is undermined, does evil become more possible?

That’s the thing about the waning of the old order of relations between signs and narratives, images and values. Some are there for a reason.


The post Under the Information Rubble appeared first on The American Interest.

Published on October 09, 2018 09:19

October 8, 2018

Crashed and Burning

Ten years ago I watched the collapse of Lehman Brothers unfold, refreshing BBC News online in the offices of the Georgian National Security Council. "Look at the Russian stocks!" laughed one analyst as he handed me a fresh printout of the collapsing MICEX index in Moscow. It was as I was staring at it that I realized I no longer understood what was going on. The person I now know I needed to speak to then was Professor Adam Tooze, whose landmark new book Crashed has finally managed to place the banking crisis in its wider political and geopolitical context. Tooze identifies that summer as not just another financial crash. "It was the moment a generation of globalization under the sign of Western power and money had reached its limit."

My 2008 felt a world away from the footage of shell-shocked Lehmanites. It was an intense blur of Russian tanks, bombed-out buildings, and trembling refugees. The mood in the Tbilisi Marriott, where Western diplomats, spooks, and journalists congregated, was one of shock. Russian tanks? Two hours away? An interstate invasion? The EU officials I met in this atmosphere, as if drawn from Olivia Manning's 1930s sketches of pearl-clutching countesses and stammering diplomats in The Balkan Trilogy, were the most chastened of all. The Russian invasion was quite literally inconceivable to them, even as it was actually happening.

But why? Tooze’s book makes it abundantly clear that pre-2008, the West had not only financial, but deeply-held geopolitical illusions as well. “The high priests of Europe’s twenty-first-century cult of innocence reside, of course, in Brussels,” he writes. EU leaders like Romano Prodi, Commission chief in the early 2000s, talked as if “the EU was realizing Kant’s dream of perpetual peace.” “But, in fact,” says Tooze, “the geopolitical configuration of the post–Cold War world was more uncertain and ramshackle than that.”

Just as Alan Greenspan, the Chairman of the Federal Reserve who stepped down in 2006, had convinced himself that “we are fortunate… thanks to globalization, [that] policy decisions in the U.S. have largely been replaced by global market forces,” the European leadership was convinced that the boom underway, from Lisbon to Vladivostok, was only strengthening a European unipolar order. Centuries of imperial rivalry with Russia and Turkey were over: the only spheres of influence that remained were NATO and the EU. “The state of denial was common,” writes Tooze. Just as they had misread the warnings from Northern Rock, they missed Eastern Europe’s geopolitical deterioration.

Not only were the great questions of capitalism then seen as settled, but so too were those of European power politics. Not only were bank runs seen as a thing of the past, but the war in Chechnya, the frozen conflicts in Moldova, Georgia, and Armenia—indeed all of Russia’s military adventurism beyond its borders—was seen as a throwback to the 19th century. Moscow’s mad dash to get its troops to Pristina airport at the close of the Kosovo War was seen as their last sorry squeal: not a hint of the “little green men” to come. If the inevitable march of progress hadn’t yet put an end to this kind of behavior, it was sure to do so soon enough.

In Europe, this was the intellectual moment of Mark Leonard’s Why Europe Will Run the 21st Century, which asked us to imagine “a new European century.” It was the period of Tony Judt’s Postwar, which hailed “Europe’s emergence in the dawn of the 21st century as a paragon of international virtues: a community of values… held by Europeans and non-Europeans alike as an exemplar for all to emulate.” In the United States it was the era of the “capitalist peace theories” of Thomas Friedman: that no two countries with a McDonald’s, or enmeshed in a complex supply chain like Dell’s, would ever go to war. Writers like Edward Lucas, who warned of a new Cold War, or Emmanuel Todd, who welcomed a Russian resurgence, were outliers to say the least: in fact they were routinely dismissed as excitable.

What this translated to in Europe was a view of geopolitics as ideological as the one Alan Greenspan took of financial markets. This was a time when Robert Cooper, the British EU diplomat who held the grandly-named post of Director-General for External and Politico-Military Affairs, would confidently write how “we have to forget the security rules of yesterday.”

Reading Tooze, one can’t help but conclude it was the flip side of the same coin. The prevailing view in Europe was that conflict was simply being outmoded; Euro-Atlantic integration and shared prosperity automatically strengthened a Western unipolar order. Putin’s Russia might also be booming, but it was a throwback, an aberration, not to be taken too seriously. It was sure to get its comeuppance, to learn its place, almost automatically, due to the inherent laws of how things work.

Tooze wants us to grasp the nettle European elites ignored: “that global growth did not axiomatically strengthen the unipolar order.” The boom in Western finance and investment in Central and Eastern Europe strengthened NATO-aspiring states, whilst the boom in commodity prices simultaneously revived Russia. Running up to 2008, few Western analysts had appreciated what was really happening. “Ironically the common boom,” writes Tooze, “would prove far more explosive” than the “exhaustion and disorder” that Central and Eastern Europe had shared in the 1990s. It was “as if two pressure fronts of global capitalism were rushing towards each other across Eurasia.”

I will never forget watching the Kremlin’s victory concert in Tskhinvali, in occupied South Ossetia, from the back of a Russian military truck. We were surrounded by North Ossetian and Russian troops, wide-eyed and drunk, many of them scarcely twenty years old, the same age as me at the time, as the classical music crescendoed. But what I was watching (not that I would ever have admitted it at the time, being a little scared) was not only about 2008.

What began in 1989 with the fall of the Berlin Wall was so tumultuous and so fast that we forget how European leaders first imagined very different new orders for Europe. Margaret Thatcher initially opposed German reunification; Mikhail Gorbachev wanted a new Union of Sovereign States, seeking assurances that NATO would not expand eastwards; Francois Mitterrand called for a two-speed “European Confederation” embracing Moscow to supersede the old blocs; and Boris Yeltsin at first hoped Russia would eventually become a NATO member.

What this generation was searching for were answers to Europe’s four geopolitical dilemmas. The first was the German question: How would Europe manage a unified Germany that turned out too big to be just another member state but too small to be the absolute hegemon? The second flowed from the first: If the answer to the problem of Germany was the European Union and the Euro, then how federal would the Union have to become to make it all work? Further east the questions were just as serious. The third unanswered question was: What place would the newly independent states in Eastern Europe hold in this order? Would they be members of the new Union, would they join NATO, or neither? And finally the Russian question: What place would Moscow have in this catallaxy? Would it eventually integrate fully into these structures? Or would it be frozen out? And if so, where would it end up?

Mitterrand’s geopolitics were pushed aside, Tooze writes. “Neither Helmut Kohl nor George Bush wanted anything to do with that. It would set the terms for Europe’s reunification.” The path thus chosen is now familiar: the European Union and the Euro were the answer to the first two questions, and eastward NATO expansion with no plans to integrate Russia were the answer to the last two.

These answers would not suffice. The French rejection of the EU Constitution in 2005 was a warning shot, while the financial crisis that followed in 2008 completely exposed the unaddressed issues posed by the German and the Federal questions. And as fate would have it, the issues surrounding Eastern Europe and the Russian questions would also come to a head that same year, showing up just how naive policymakers had been in the preceding years.

“Truly comprehensive global growth breeds multipolarity,” writes Tooze, “which in the absence of an overarching diplomatic and geopolitical settlement is a recipe for conflict.” Yet this was not how the European capitals understood the mid-2000s. Both NATO and EU expansion eastwards were seen as new episodes of the European Peace Project. How could they not be? With high rates of growth and rapid convergence, the European Commission was a place where policymakers rubbished talk of Russian revanchism as simply paranoid. Vladimir Putin was to be ignored.

What was strange about the period was the disconnect between financial and security analysts. Goldman Sachs first coined the term “BRICs” for the rising economic clout of Brazil, Russia, India and China in 2001. And as soon as Mikhail Khodorkovsky was arrested in 2003, Western investors had figured out not only that Russia was back in business but that it was not playing by Western rules.

When should the Kremlin’s return have become unavoidably clear to the security types? Tooze thinks things were clear enough at the Munich Security Conference of February 2007, where Vladimir Putin first openly challenged Europe’s unipolar order. Viciously attacking the United States for “having overstepped its national borders in every way,” the Russian President warned that the boom’s new multipolar economic order would necessarily lead to a new geopolitics. “There is no reason to doubt,” said Putin, “the economic potential of the new centres of global growth will inevitably be converted into political influence.”

And yet Putin’s Munich speech was not seen as a challenge. It was interpreted as something more like a tantrum.

With theatrical symmetry, both Western illusions—the geopolitical and the financial—entered into meltdown over the summer of 2008. The sudden swing, from the American assertion at the NATO Bucharest summit in April, where President Bush pushed NATO membership action plans for Georgia and Ukraine, to the mood of fear and vulnerability that gripped American policymakers by the August opening ceremony of the Beijing Olympics, was something to behold. The lines between geopolitics and finance started to blur. Just as convictions that a banking collapse was impossible were shattered on Wall Street, Russian tanks outside Tbilisi shattered the conviction that European geopolitics had ended. With financial markets in turmoil, U.S. Treasury Secretary Hank Paulson said that he had heard from his Chinese contacts that Moscow had encouraged Beijing to also dump Fannie Mae and Freddie Mac bonds.

What had happened was on one level analogous to the subprime crisis: a risk had been catastrophically misjudged. Despite Vladimir Putin having warned Western leaders in Bucharest that Ukraine “would cease to exist as a state” if it joined NATO, the risk Russia might pose was obstinately denied. Rather than being seen as a sophisticated “virtual democracy” with ample hard-power resources, the consolidating Kremlin regime was dismissed as comical in many quarters, and its geopolitical ambitions were taken seriously almost nowhere. This mood led the West to underestimate the risks ambitious states like Georgia were taking in so boldly challenging Moscow. Watching French President Nicolas Sarkozy arrive in Tbilisi after brokering a ceasefire agreement in Moscow was to witness the geopolitical equivalent of the firefighting Ben Bernanke and Hank Paulson were engaged in. With diplomatic “maximum force,” the French President and his Western allies had stopped a situation clearly in freefall.

But no sooner had Sarkozy left than the atmosphere in Tbilisi palpably changed. People on the ground realized that nothing had been resolved. “Power, like water, will find its level,” I remember one European envoy saying. Just because Putin had not managed to outright overthrow the Georgian government did not mean that Europe had reasserted anything approaching its altogether imaginary status quo ante. The balance in both finance and geopolitics was anything but restored. Tooze is explicit on this. “When added to the incomplete project of the Eurozone and the missing political frame for the North Atlantic financial system,” he argues, “the unresolved geopolitics of Eastern Europe’s ‘Eastern Question’ completed a trifecta of unanswered questions that hung over Western power.”

They would indeed return.

While it was indubitably fault lines in the American financial system that had led to the crisis, Crashed also argues it was the speed and intensity of U.S. policymakers’ response that ensured 2008 “did not result in a spectacular transatlantic crisis” in which the economies of Britain and the Eurozone imploded completely. To save transatlantic finance the Fed licensed a core group of allied central bankers to issue dollar credits on demand: “Through this they pumped trillions of dollars of liquidity into the European banking system.”

What is so jarring in Crashed is the contrast between American and German officials. “If we are looking for one crucial difference it is surely this,” Tooze writes. “From the morning of September 11, 2001, America was a superpower at war.” Whereas Bernanke, Geithner and Paulson knew they were imperial actors defending a financial system that ultimately undergirds American power, in chapter after chapter, we see Merkel and Schauble refusing to move.

This is a key point of Tooze’s, and one worth ruminating on. The divergence in response to 2008 can be traced to how the decisions of 1991 were arrived at on either side of the Atlantic. Whereas the “liberal world order” was built to empower America, the Eurozone was built to constrain Germany. The French intention all along had been to federalize the newly united Germany just enough to prevent it from becoming an imposing hegemon. “Without a common currency,” Mitterrand warned Thatcher in 1989, “we are all already subordinate to the Germans’ will.”

Thus, beginning in 2008, as America acted to preserve its global prerogative, we see Germans refusing to act, also to preserve their own power. At a crucial moment, America wants to remain hegemon while Germany refuses to be federalised—refuses to be the lender of last resort for its profligate European partners. “There will be no transfer union,” Angela Merkel declared.

Though the European Union remained, symbolically and more, a political peace project, thereafter the Eurozone itself proved to be a conflict generator. The consequence of Germany’s decision not to act was the exact mirror image of the Federal Reserve’s aggressive moves—with near mirror image results. Europe’s German and Federal questions, fundamentally unanswered, were still causing trouble.

Instead of acting like the Fed in stabilizing allied economies, the ECB, squeezed by Berlin, dawdled. It showed little interest in getting involved in Central Europe. “One might have expected the ECB to extend similar support to the East European neighbors of the Eurozone,” writes Tooze. But the Euro-swap lines never came. “Where the Fed had given the ECB the lifeline of dollar swap-lines,” writes Tooze, “the ECB had no intention of extending equivalent privileges to Poland or Romania.” Frankfurt’s decision “shocked the Fed,” which had expected Euro-swap lines to be extended to Poland and Hungary. But with their economies hit by a sudden stop in foreign credit supplies, all the ECB was willing to do was provide Poland and Hungary with short-term funding in exchange for first-class euro-denominated securities. “When the problem,” writes Tooze, “was a shortage of euro funding, this was of no great help.”

As the 2008 crash worsened, Jean-Claude Trichet of the ECB was telling journalists that a “common European solution was inappropriate because the eurozone wasn’t a fiscal union.” Germany shot down Hungarian and Austrian initiatives for a common support fund. “Not our problem,” announced Peer Steinbrück, the German finance minister.

Hungary was forced to go to the IMF, opening a new chapter of history as the first EU member state to do so since 1976. Brussels then pushed the IMF for harsher austerity measures to send a signal to the rest of Central Europe. Two years later, in 2010, Hungary’s socialist government paid the price with the election of Viktor Orban.

As the crisis strengthened Russian authoritarianism, the Baltic states, fearing their large imperialist neighbor’s reawakened appetites and the consequences of any break with the West, aggressively pursued austerity to make it into the Euro. Hungary, however, was incentivized by the crisis to strike out on its own; a newly empowered Orban, having no qualms about alienating the rest of Europe, turned to a belligerent, nationalist path, open and friendly to Moscow.

It would take years of crisis for Germany to cave in to some federalisation of its resources in the creation of the European Stability Mechanism and the Banking Union. (Eurobonds still remain a bridge too far.)

Hungary was one of the first European governments to which the approach now synonymous with the Euro crisis was applied: demanding austerity in exchange for financial lifelines. This was the opposite of Geithner’s “Maximum Force” approach—call it the politics of Minimum Assistance. It would define EU power brokers’ approach to Latvia, Romania, Greece, Cyprus, Portugal, Ireland, Spain, even Italy.

But the insistence of Steinbrück and Trichet that Hungary was “not our problem” reveals something more pernicious than mere financial orthodoxy. They could treat it all as a technocratic exercise because they could not even imagine Orbanism as a possibility. Living in the self-assured teleological universe that saw Europe as “the paragon of virtues,” they found a hostile, defiant, illiberal Hungary inconceivable. The only future they could imagine was peopled by leaders like Mario Monti, the Brussels-backed technocratic Italian ex-prime minister. Not Matteo Salvini.

But in 2008, as I finally left Tbilisi, all this lay ahead. The banking crisis was so severe even Moscow had abruptly moved on from the conflict. “It might be tempting to conclude,” writes Tooze, “that by taming Russia, the effect of the crisis was to calm international relations. And in the short run this was surely true.” But like the Euro crisis it was never resolved: and the unipolar order that once seemed inevitable kept deteriorating. “The comprehensive economic, political and diplomatic clash between the West and Russia that had been foreshadowed in the proxy war” of 2008, writes Tooze, “was unleashed in Ukraine.”

With hindsight this deterioration was sudden. Late at night in March 2014, I walked through the remains of Maidan, as militias patrolled, drunks sobbed, and Kiev prepared for war. On a giant screen the face of the trembling prime minister shimmered: they were no longer pretending. In Crimea, the annexation was certain.

Within days the European Union confronted Russia with sanctions. But this time the bloc, wounded by its inability to answer its fundamental questions, would face the Kremlin as a severely weakened power. With Moscow working to crack the European Union—electoral interference, hacking, disinformation, corruption, hybrid war—something had become clear. The continent’s geopolitics are now closer to the fears of Mitterrand and Gorbachev than the hopes of Bush and Kohl.

But something, for me, remains obscure: what has letting go of our illusions about Europe really taught us? Does power throw up contradictions that are ultimately insurmountable? No matter how clever the design? Is this the tragedy of great power politics? I don’t know.

“Putin’s line had always been that geoeconomics were geopolitics,” writes Tooze. We are left thinking that this brilliant historian agrees.



Published on October 08, 2018 14:59

Columbus Day Has Never Been About Christopher Columbus

The dozens of American cities, counties, and institutions that are named after Columbus (and his poetic counterpart, Columbia) represent the privileged role that Christopher Columbus has played in American civic life. Columbus was America’s first frontiersman, a hero who had left the comforts of Europe to search for a fresh start in a new world. Early Americans cheered him as an enlightened champion of science who criticized obscurantist European ideas.


Washington Irving popularized this interpretation of Columbus in his A History of the Life and Voyages of Christopher Columbus, published in 1828. In Irving’s hands, Columbus became a man of science who liberated himself from the shackles of medieval and Catholic Europe to shape a progressive and Protestant America. Much of Irving’s biography of Columbus is pure fiction, but his book defined Columbus for nineteenth-century Americans.


Irving’s most enduring myth was the false assertion that King Ferdinand and Queen Isabella believed that the earth was flat. The geographers and astronomers that the royal couple consulted knew the earth was spherical, and they correctly estimated that Japan was 12,000 miles from Spain, not 3,000 miles, as Columbus calculated. Samuel Eliot Morison’s 1942 biography of Columbus called Irving’s story “misleading and mischievous.” Morison wrote, “The sphericity of the globe was not in question. The issue was the width of the ocean; and therein the opposition was right.” Fortunately for Columbus, the Bahamas lie where he thought he would find Japan.


Beginning in the late nineteenth century, Italian Americans adopted Columbus as an immigrant hero whose fame could boost their status in their new country. In 1882, the Knights of Columbus was founded to promote immigrant and Catholic interests. In 1892, at the time of Columbus’s quadricentennial, American Catholics proposed that Columbus be canonized as a saint.


As Columbus became an Italian Catholic hero, American nativists and conservative Protestants found in the Vikings a racial and religious alternative to Columbus. Nativists began to incorporate Viking representations in art and architecture, and they erected statues of Viking heroes in town squares. In 1891, Marie A. Brown wrote The Icelandic Discoverers of America to attack Columbus’s ascendancy and to protect Americans from “the foulest tyrant the world has ever had, the Roman Catholic Power.” In 1893, The New York Times quoted the Baptist preacher R. S. MacArthur calling Columbus “cruel, and guilty of many crimes.”


By Columbus’s quincentennial in 1992, American politics had realigned so completely that Columbus had become a hero to nativists and conservative Christians and a villain to progressives. For multiculturalists, Columbus was a European imperialist whose journeys led to epidemics and genocide. In Denver, where Columbus Day was first observed, activists poured blood on the statue of Columbus. In New York City, council members demanded that the City remove statues of Columbus from public spaces.


Today, Columbus is either a saint who represents all that is noble in America or an avaricious tyrant who incited genocide. Our polarized opinions would have felt familiar to Columbus. He returned from his first voyage a national hero. He returned from his third voyage disgraced and in chains, his governorship of Hispaniola usurped by the ruthless Francisco de Bobadilla.


Columbus’s reputation was so damaged by Bobadilla’s reports of Columbus’s mismanagement of Hispaniola that Ferdinand and Isabella stripped Columbus of his claims on the islands that he discovered. Then, the wily navigator Amerigo Vespucci convinced mapmakers that he, rather than Columbus, had discovered the New World, and they gave Amerigo’s name to the continent. In Europe, the centenary of Columbus’s first voyage to America passed without celebration.


Columbus spent his final years fruitlessly attempting to reclaim his property, titles, and reputation. Columbus’s chronicler, Bartolomé de las Casas, wrote, “The man who had, by his own efforts, discovered another world greater than the one we know before and far more blessed, departed this life . . . dispossessed and stripped of the position and honors he had earned by his tireless and heroic efforts.”


Columbus’s discovery unleashed insatiable passions for gold and for empire building. For better and for worse, this event transformed the medieval world into the modern world. Columbus and his legacy are full of contradictions. He will continue to elude us.


The post Columbus Day Has Never Been About Christopher Columbus appeared first on The American Interest.

Published on October 08, 2018 06:58

October 5, 2018

The Courage of the Powerless versus the Craven Behemoth

The most dangerous person in the eyes of any totalitarian regime is the individual who is not afraid and thus will not be cowed. He or she may be in prison, tortured or physically broken, has probably been labeled a criminal or a terrorist in an effort to discredit them, yet remains unbowed, unafraid, even confident. Not necessarily confident in the prospect of liberation or triumph, but confident in the knowledge that the seemingly all-powerful regime cannot make the person capitulate and repeat the lies that every totalitarian state insists upon.

For every dictatorship develops a public narrative that justifies why a small circle of people should hold political power, and thereby enrich themselves at society’s expense. These narratives, called variously doctrines or political programs or ideologies or philosophies, are in the end just big lies. That is why it is also necessary to control the media, the schools, the arts, the internet, public discourse itself, so that common sense questions and honest answers may not disturb the stability of the regime. In every society, however, there are truth-tellers, declining to live the lies.

Vaclav Havel was such a one. He told his truth in the plays he wrote, and later, after being imprisoned for his artistic expression, in more directly political statements. Ten years after Soviet invaders extinguished the Prague Spring, Alexander Dubcek’s attempt to develop “socialism with a human face,” and Czechoslovakia settled into a long dark night, Havel wrote an essay entitled “The Power of the Powerless.” In it, he explained that oppressed people always contain “within themselves the power to remedy their own powerlessness” by “living in truth” in their daily lives. Havel and his friends subsequently wrote Charter 77, a declaration by citizens of Czechoslovakia that described and supported the human rights commitments their government had nominally made in the Czechoslovak Constitution of 1960 and in signing the 1975 Helsinki Final Act. For their troubles, they had to be suppressed, lest others learn what the state had promised – freedom to travel, to speak, to worship, and so on. So they went to prison and served hard labor.

Many years later, Havel wrote in the Washington Post,


I come from a country where, as late as mid-1989, while all around us totalitarian icebergs were cracking and thawing, the stupid, repressive regime remained strong. Together with other people of a similar mindset, I was in prison. Yet by the end of that same year, I was elected the president of a free Czechoslovakia.

Seemingly unshakeable totalitarian monoliths are in fact sometimes as cohesive as proverbial houses of cards, and fall just as quickly.

This is always the fear of the dictator—the moment when the people no longer pretend to believe the lies, and are no longer afraid. Then the dictatorship may crumble, as so many have done.

The increasingly repressive government of Russia, for instance, is afraid of Oleg Sentsov, a writer and filmmaker. Even while on a hunger strike in a remote prison cell in northernmost Russia, he is serenely embarked on a daring crusade for the freedom of scores of fellow Ukrainians unjustly imprisoned in Russian jails for peaceful opposition to the Kremlin’s brutal military attacks on its neighbor. Putin’s government is terrified that he might die, and also that he might live and be a heroic example to many others, in Ukraine and Russia. Khadija Ismailova is another: an intrepid Azerbaijani journalist and exposer of official lies and corruption at the highest levels in Baku. She is out of jail now, after years behind bars for being a professional journalist, but remains marooned in a country that won’t let her work and won’t let her leave. And, in what must be seen as a harbinger of resurgent military dictatorship in Myanmar, two Reuters reporters, Wa Lone and Kyaw Soe Oo, have just been sentenced to seven years in prison for accurately reporting on appalling atrocities committed by the Burmese military against the Rohingya minority. Official efforts to control what is said and written in the country have gone to the absurd step of banning use of the word “Rohingya” in print or on the air – and both the BBC and Radio Free Asia are no longer able to broadcast in the country because they continued to call the Rohingya by the name they call themselves.

All of them knew they risked imprisonment—or worse—for speaking and writing about crimes committed by their governments against the people of their own countries, and the lies told to deflect attention and evade accountability for those crimes. Yet they were and are unafraid.

Consider now the prospects for truth-telling in the world’s most populous country, the People’s Republic of China, which has been evolving toward an ever more tightly managed totalitarianism since the ascension of Xi Jinping as paramount leader in 2012. The campaign to censor free speech and to suppress independent civic life takes many forms. One is to hound people to their deaths with repeated imprisonments, and then to go after family members in order to keep them from telling the story.

This was the story of Liu Xiaobo, who rose to fame in the 1980s with his literary critiques and eventually became a visiting scholar at several overseas universities. He returned to China to support the 1989 Tiananmen Square protests and was imprisoned for the first time from 1989 to 1991 for his involvement in the human rights movement. During his fourth prison term—in 2009 he was sentenced to eleven years for “inciting subversion of state power,” which is double-speak for being an author of Charter 08, the human rights manifesto that proclaims “we should end the practice of viewing words as crimes”—Liu was awarded the 2010 Nobel Peace Prize for “his long and non-violent struggle for fundamental human rights in China.” He was released from prison only on his deathbed, in June 2017. His spouse, Liu Xia, though never charged with any crime, was under house arrest from 2010 until more than a year after her husband died. She was finally allowed to leave for Germany in July 2018. She is under incredible pressure to refrain from speaking out, even from afar, because her brother, Liu Hui, is blocked from leaving the country.

Gui Minhai, a Chinese-born Swedish writer and co-founder of Causeway Bay Books, was mysteriously detained last year. His videotaped confessions were described by the Washington Post as “messy and incoherent, blending possible fact with what seems like outright fiction.” The end result worked, however: China has effectively censored an important writer and independent distributor of books.

Another brave person languishing in the Chinese Gulag is Ilham Tohti, a modest if outspoken professor of economics who devoted much of his life to fostering better relations between the dominant Han Chinese people and the minority Uighur people, mainly Muslims who live mostly in the western region of Xinjiang. In 2006, he founded a Chinese-language website called Uighurbiz to publicize cases where Uighurs’ rights were violated. The idea behind the site was ultimately to promote mutual understanding between Han and Uighur peoples. For years, in speeches, lectures, and writings, he repeatedly emphasized his opposition to separatism, extremism, and terrorism. He focused on the rule of law and human rights, and advocated for full inclusion of China’s diverse ethnic minorities in China’s booming economic development.

Chinese authorities have clearly decided that rather than find a way toward the mutual respect and inclusion advocated by Tohti, they would prefer simply to eliminate the language and religion that define the Uighur culture shared by more than eight million people. This year, at an accelerating pace, hundreds of thousands of Uighurs (and other ethnic minorities) are being swept into newly constructed indoctrination camps for what is called “transformation through education.” Children are being separated from parents and placed in “orphanages” constructed only for them, where they can have their families’ religion and language erased from their memories. Human Rights Watch recently released a comprehensive report concluding that the human rights violations attending this campaign are of a “scope and scale not seen in China since the 1966-1976 Cultural Revolution.” This policy of mass incarceration has attracted the attention of the Trump Administration, which has threatened sanctions on the Chinese officials most responsible for this campaign.

Ilham Tohti could be part of the solution to the rising tensions and polarization. Yet he was convicted four years ago on a false charge of promoting “separatism” and sentenced to life in prison. His family has lost contact with him. Seven students who worked with Tohti on his website have also been sentenced to years in prison. In 2015, Tohti’s niece was given a ten-year sentence for having photographs of Tohti on her smart phone and for contributing to articles about him on Radio Free Asia. His daughter, Jewher Ilham, lives in exile in the United States.

The effort by the government of China to suppress free speech and honest discussion of complex social problems is widespread, well-known, and getting worse. You could Google this. Or you ought to be able to.

Yet Google officials won’t admit that they know what their search engine presents when words like “Uighur” or “Ilham Tohti” or “Muslim detention camps” are typed into their search bar. Last Wednesday, September 26, at a Senate Commerce Committee hearing, Senator Ted Cruz (R-TX) posed a question to the senior Google official testifying: “In your opinion, does China engage in censoring its citizens?” Keith Enright, Google’s “chief privacy officer,” couldn’t say. “I am not sure I have an informed opinion on that question.”

Why would a senior executive of the company that claims to be the repository of all knowledge in the world not know the answer to such a simple, uncomplicated question? Any high school student—in China or in the United States—surely knows the correct answer.

“Dragonfly” is the real answer. “Dragonfly” is the code name of the company’s project to enable Google to return to China—to climb back over the Great Firewall, as it were—and play ball in helping the regime control access to any document or news story that might contain key words like “Nobel Prize” or “human rights.” Or names like “Liu Xiaobo” or “Ilham Tohti” or, very probably, “Oleg Sentsov” or “Khadija Ismailova” or “Wa Lone and Kyaw Soe Oo” or “Vaclav Havel.”

Google operated in China from 2006 through 2010, but departed the country when the Chinese government was found to be hacking the Gmail accounts of dissidents and systematically surveilling users. At the time, Google co-founder Sergey Brin said the company could not tolerate “those forces of totalitarianism.”

Since then, Google has clearly gone through a period of intellectual evolution, and has now arrived at a place where it has apparently reconciled itself with those forces. Anyone (outside China) can find online a report published earlier this year by my colleagues at PEN America titled Forbidden Feeds: Government Controls on Social Media in China. The report lays bare the destructive impact of the Chinese government’s vision of “cyber sovereignty” on netizens who dare to dissent. The report also documents 80 cases of Chinese citizens warned, threatened, detained, interrogated, fined, and even imprisoned for online posts over the past six years. The wide-ranging content of these posts, which touch on everything from Tiananmen Square to issues such as land rights and local corruption, demonstrates the ruthless enforcement of information control and the heightened risks facing those who dare test ever-evolving methods and powers of censorship.

Mr. Enright and his colleagues at Google could do a quick online search for it—outside China, that is.


The post The Courage of the Powerless versus the Craven Behemoth appeared first on The American Interest.

Published on October 05, 2018 11:13

October 4, 2018

Ending the Not-So-Futile War in Yemen

The Western media narrative about the ongoing war in Yemen frames it as a “futile” or “disastrous” conflict. After all, Yemen has been prone to tribal and sectarian quarrels for decades, leading to insurgencies and full-blown civil wars. But the historic complexity of Yemen’s politics, and its fraught relationship with Saudi Arabia, should not blind us to Iran’s efforts to establish a bridgehead in the Arabian Peninsula by empowering Yemen’s Houthi militia.

The conflict in Yemen is unlikely to end any time soon unless interested outside powers, especially the United States, play an active role in containing Iran and forcing the Houthis to the negotiating table.

Americans tend to view events in Yemen solely through the prism of that country’s humanitarian crisis and Saudi Arabia’s role in it. But even if the ambition and miscalculations of Saudi Crown Prince Mohammed bin Salman (MbS) have aggravated the situation, there is more at stake in Yemen than the humanitarian critics of the war acknowledge.

Those stakes begin with the openly acknowledged designs of Iranian strategists, who have voiced their expectation of adding Sana’a to the list of Arab capitals—Beirut, Damascus, and Baghdad—which are in “Iran’s hands and belong to the Iranian Islamic Revolution.” For that reason, Ayatollah Khamenei’s regime supports the Houthis—followers of Hussein al-Houthi, a cleric of the Zaidi sect who fought central authority and was killed in 2004—in Yemen’s civil war.

The Houthi movement comprises well-armed and well-trained fighters, and since 2003 has adopted the slogan: “God is great, death to the U.S., death to Israel, curse the Jews, and victory for Islam.” The group officially calls itself Ansar Allah, or supporters of God. Its insistence on seeking power through military means indicates that it is not confident of support, even among Yemen’s Shi‘a population.

Hezbollah, the Lebanese militia, serves as the role model for the Houthis. They have accumulated sophisticated weaponry from Iran and have boldly threatened Saudi Arabia with missile attacks, much as Hezbollah attempted to show strength and harness support by attacking Israeli civilians.

But the threat posed by the Houthis has been eclipsed by denunciations of the Saudi air campaign against them, which is blamed for avoidable civilian casualties. U.S. Secretary of State Mike Pompeo’s recent certification to Congress that the Saudis and other Arab allies were making greater efforts to protect civilians has not dented that criticism.

Just as Hezbollah’s Iran-backed propaganda machine covers up its violence and terrorism through the outcry about civilian casualties resulting from Israel’s counterterrorist operations, the Houthis and their international supporters have managed to avoid discussion of Yemen’s complex politics and the Houthis’ excesses by focused criticism of the Saudi-led coalition’s conduct of the air war. Legitimate concerns about civilian casualties from the Saudi aerial bombardment have led many commentators to ignore both Iran’s role and the unlikely prospect of stability in Yemen if the Houthis—a minority within a sectarian minority—somehow manage to establish their rule over the entire country.

The humanitarian criticism of the Arab coalition’s tactics has also led most observers to ignore that the fight against the Houthis is backed by a UN resolution supporting the restoration of the legitimate government’s control over Yemen. In 2015, UN Security Council Resolution 2216, adopted with 14 affirmative votes to none against, with one abstention from the Russian Federation, demanded that the “Houthis withdraw from all areas seized during the latest conflict, relinquish arms seized from military and security institutions, cease all actions falling exclusively within the authority of the legitimate Government of Yemen, and fully implement previous Council resolutions.”

Since then, the war has been stalemated even though parts of southern Yemen have been wrested from Houthi control. The United Arab Emirates’ forces have borne the brunt of the ground fighting. Their offensive to dislodge the rebel militia from the port of Hodeida, launched in June, was meant to change the course of the war.

But that offensive was halted in July to enable peace talks. The Houthi representatives, however, refused to join UN-sponsored negotiations in Geneva in September, resulting in renewed fighting. Dialogue is surely preferable to war, but serious negotiations require partners who are actually committed to a peaceful solution. And as the UN special envoy for Yemen, Martin Griffiths, is discovering, the Houthis do not always show up for meetings. They seem aware of the fact that civil wars are often protracted affairs and that most such conflicts in modern times have ended in the decisive victory of one side.

The Houthi strategy is to fight to victory while undermining the Arab coalition’s resolve through adverse international opinion. They condemn the conduct of the Arab governments that confront them while deploying Iranian-supplied missiles, weaponized drones, and landmines in the pursuit of victory on the battlefield. Such an outcome would not be in the U.S. interest and would only add to the misery of Yemen instead of mitigating it.

With 9/11 a distant memory for most Americans, many forget that Yemen was also a major staging ground for al-Qaeda in the Arabian Peninsula (AQAP). Just this past August, al-Qaeda’s master bomb-maker, Ibrahim al-Asiri, was killed by an American airstrike. Al-Asiri had plotted several attacks against targets around the world, including the 2009 “underwear bomber” plot to take down a U.S. civilian airliner. Some 2,000 al-Qaeda operatives have been killed in Yemen over the past 17 years.

It would be a pity if Yemen’s internal divisions, which enabled al-Qaeda to set up shop there in the first place, pave the way for its regrouping. Nor would the Middle East become safer if a perennially unstable Yemen controlled by the Houthis became a base for threats to its neighbors with Iran’s support.

Consolidation of control of any part of Yemen by the Houthis would be as destabilizing as the rise of Hezbollah has been in Lebanon. Just as Hezbollah ended up supporting the Assad regime in Syria after surviving Lebanon’s civil war, the Houthis could become the major subversive force in the Gulf region.

For its part, Iran has much to gain by fueling the war in Yemen. Even if the Houthis do not win, Iran secures regional advantage at relatively little cost; its partner in the Gulf, Qatar, defrays some of the expenses. The war keeps Iran’s regional Arab critics preoccupied and offers the prospect of building a new proxy regime in the Arabian Peninsula.

But from the U.S. perspective, losing Yemen to Iran permanently would only enlarge the threat Tehran poses to U.S. interests. If one side must win Yemen’s civil war, it would be in America’s interest that it is the legitimate government backed by U.S. allies rather than the Houthis backed by Iran.

Alternatively, the U.S. government could put its weight behind UN peace efforts, provided talks do not serve as cover for the Houthis to keep Yemen’s civil war going for years. Either way, the United States would shorten the war by playing a well-defined role instead of keeping away from the events in Yemen.

Ideally, targeted American support would create circumstances that force the Houthis to reconsider their partnership with Iran and enter into serious negotiations. The worst-case scenario for the Houthis—and the best-case scenario for the United States—would be a decisive end to the ongoing bloodshed that does not leave Iran’s proxies in control of Yemen.


The post Ending the Not-So-Futile War in Yemen appeared first on The American Interest.

Published on October 04, 2018 07:56

October 3, 2018

Being Morally Serious About the Supreme Court

If Americans constituted a morally serious nation—instead of one that seems to take perverse pleasure in tawdry spectacles of political sententiousness, hypocrisy, and bad faith—the debate we would be having over Supreme Court nominee Brett Kavanaugh would turn, at least in part, on the following question: What sorts of youthful transgressions are forgivable, and which are disqualifying, for which jobs?

On the one hand, there are youthful acts that any reasonable person would consider permanently disqualifying for later political office. Few would argue that a one-time, cold-blooded murderer should be admitted to the Federal bench or Congress. On the other hand, there are teenage acts that most would agree we can shrug off as mere callowness—egging your neighbor’s house, say, or placing a whoopee cushion on a teacher’s chair.

But in between these two limits—high crimes on the one hand, youthful peccadillos on the other—lies a wide range of morally dubious acts about which reasonable people may feel differently. A few distinctions may help us better assess where the young Brett Kavanaugh’s alleged acts fall along that spectrum.

Though rarely publicly acknowledged, the goalposts on political morality move over time. Divorce, for example, was once considered disqualifying for a President. It certainly contributed to slowing Ronald Reagan’s runs at the White House in 1968 and even in 1976. By 1980, however, with divorce rates over the previous two decades having skyrocketed to the point where almost no family was left untouched, Reagan’s divorce from Jane Wyman 30 years earlier was no longer deemed disqualifying in the eyes of either Republican primary voters or the general electorate.

Or consider the question of substance use and abuse. For Bill Clinton, the admission that he had smoked pot in college—even if, as he infamously claimed, he “never inhaled”—was one of several youthful acts that nearly derailed his run in 1992. Likewise, George W. Bush’s youthful drinking, along with rumors of cocaine use, was an issue for him as he pursued the presidency in 2000. By 2008, however, when Barack Obama saw a picture of himself smoking a giant reefer go viral on the internet, and even admitted in his book that he had used cocaine as a youth, it did little to dent his appeal.

Growing tolerance of acts formerly regarded as sins is relatively easy to deal with because it generally results in a more rather than less inclusive view of who is eligible for higher office. It’s much trickier when the opposite occurs: a norm evolving in a way that ends up condemning behaviors once considered relatively acceptable. This diminishes options for people who once behaved in ways now considered out of bounds, even though at the time the acts were considered, if not quite no big deal, then at least relatively minor ones.

This applies in particular to issues of gender and race politics, around which a whole set of new taboos has emerged over the past couple of generations. Prior to the civil rights movement, for example, white people referring to black people using the N-word would have been considered completely normal in many circles. Not that this was good—even among many unrepentant segregationists such language would have been considered vulgar and hence rude. But it would have been seen as more unbecoming than politically disqualifying.

Today, by contrast, being caught saying such a thing, even many years ago, is almost certainly career-ending. In 2006, for example, Senator George Allen (R-VA) used the term “macaca” to refer to S.R. Sidarth, a brown-skinned man who was filming his event. It turned out that this term was some obscure Francophone (or possibly Lusophone) racial slur derived from the word “macaque,” a species of monkey. The outcry was immediate; even members of his own party refused to support him. Allen lost a closely contested election to Democrat Jim Webb.

Of course, Allen’s slur took place not in the distant past, but was something happening live in the then-present moment of 2006. What really matters in the debate over Brett Kavanaugh is a narrower but perhaps more vexing question: What are we to make of transgressions committed in the relatively distant past, for which the norms have indubitably moved? (I note that many #MeToo leaders react with suspicion to any effort to historicize their activities. Activists tend to believe they are not so much changing norms as institutionalizing timeless ethical principles that were simply sidelined in the past by malignant and reactionary forces. Alas, that our own age has no monopoly on moral truths is a truth of a different sort that norm entrepreneurs tend to categorically reject.)

One challenge in the case of Kavanaugh is that he as an individual simultaneously occupies the present, in his campaign for a seat on the Supreme Court, and the past, some 35 years ago, when by all accounts—including his own—he engaged in behavior not necessarily of the best sort. He has conceded that he sometimes drank too much, but denied that he had ever drunk until he blacked out, and insisted that he had never, drunk or sober, made unwanted sexual advances on anyone. The burden of these distinctions was in part to parse venial from mortal sins—occasionally drinking too much beer being not great but also not that big a deal, whereas blacking out and sexually molesting someone is certainly “over the line.” Little direct effort was made to argue that the line itself has moved over the past 40 years, since that was perceived—almost certainly correctly—as unlikely to be a political winner.

But the question of temporal distance is one that others have raised explicitly. The preacher Franklin Graham put the argument this way: “It’s just a shame that a person like Judge Kavanaugh who has a stellar record—that somebody can bring up something he did when he was a teenager close to 40 years ago.” Similar arguments were made by those who wished to defend Republican Alabama Senate candidate Roy Moore from charges that he had made sexual advances on underage women some 40 years ago. Part of the reason for letting actions from the past go is that we all recognize, in both ourselves and others, that people evolve morally over time. Is the person who committed a wrong act at age 17 or 20 even recognizably the same moral entity as the person sitting before us today?

In asking this question, we follow part of the logic that drives statutes of limitations in criminal cases. The primary reason for statutes of limitations, contrary to some common views, is not that at some point it becomes impossible to adjudicate fairly given the passage of time. We do not afford such statutes for crimes like murder or rape. The real reasons for statutes of limitations are twofold.

First, statutes of limitations implicitly account for the fact that social perceptions of the moral nature of the crimes in question themselves change over time. And second, statutes of limitations recognize that people’s moral characters change over time, too, and so punishing people in the present for crimes committed many years ago, when they may well have been very different people, seems fundamentally unfair. This is why statutes of limitations are suspended only for the most monstrous sorts of crimes: the assumption is that people who commit such crimes, even if they have morally evolved to a certain degree, are unlikely to have evolved so much that none of the malign impulses that motivated the original crimes remain. How much is a cold-blooded murderer likely to change? There is a moral theory of human nature at work here, if only implicitly.

As I perceive the moral case at hand with respect to Brett Kavanaugh, I make two fundamental moral observations—and as moral issues these stand separately from other relevant concerns, such as the fact that the Court has become gradually more politicized thanks to the incapacities of the Legislative Branch, and the fact that the Court has become far more important in American politics than the Framers ever imagined it would or should be.

The first moral observation is that almost all of us are inclined to be more forgiving of past misdeeds, particularly ones that occurred in the relatively distant past, but only if the perpetrator seems genuinely contrite over what he (or she) did. And a pre-condition of contrition is an admission of wrong-doing. One reason George Bush was forgiven his youthful drinking, or Obama his youthful cocaine use, is that both had admitted that this was a mistake and had clearly moved beyond such transgressive behavior. A lot of people drank sloppily in college; so long as this was the limit of their sins and they have successfully moved past it, most of us would be inclined to forgive.

The first challenge in trying to forgive Kavanaugh, politically speaking, for any of his possible wrong-doing is that he has not actually admitted that he did anything wrong. Rather than appearing contrite, he is claiming he has nothing for which to apologize. If one believes he has committed misdeeds, therefore, this lack of contrition leads to the suspicion that he has not changed. (His insistence that he “still likes beer” doesn’t help with this impression.) Of course, this could be because he is innocent; but it could also be because he is still guided by the moral compass that allowed him to commit the acts of which he stands accused by Christine Blasey Ford. In any event, in his testimony this past Thursday, Kavanaugh clearly bypassed the opportunity to seek forgiveness.

The second fundamental observation is that the job Kavanaugh seeks is not just any job. A lifetime appointment to the Supreme Court of the United States is a job of immense prestige and power, and one he would be likely to hold for 30 years or more. This is one of the hardest jobs in the world to get. Hundreds of people alive today either are or one day will be Senators or Governors; thousands of people are or one day will be CEOs of Fortune 500 companies; but at most a few dozen people alive today will ever be Supreme Court Justices. Is Brett Kavanaugh really worthy of this singular honor and power? Even the shadow of a doubt on this score should be enough to cause reasonable people to look elsewhere.

As Julius Caesar observed, in divorcing his wife even after the adultery case against her possible lover was dismissed, “Caesar’s wife must be above suspicion.” So it is with Supreme Court Justices: They must be above suspicion, at numerous levels. Politically, they must seem reasonable and neutral. Intellectually, they must be clear and open-minded. Morally, they must be above reproach. Indeed, the initial marketing of Kavanaugh as a wonderful family man and carpool driver was based on exactly this view: that the moral character of the man is a crucial part of the job application. Nor is such a view of the job requirements simply a question of sentimentality about what constitutes a “judicious disposition.” The legitimacy of the Supreme Court depends, crucially, on the perceived probity of the Justices who sit on it. Why should people respect controversial Supreme Court decisions if they are handed down by people of low moral character, of dubious intellect, or of manifestly partisan motive?

The requirements for this particular job—short of the presidency itself, the most prestigious and powerful of jobs available in the Federal government—are and should be far higher than simply beating a charge on the basis of criminal standards (“beyond a reasonable doubt”) or even the standards used in civil cases (“clear and convincing evidence” or “a preponderance of the evidence”). Indeed, Americans expect much higher standards for a Supreme Court Justice than they would for virtually any other job. Youthful lapses that might well be forgivable when applying to be a sales rep, a software engineer, a football player, or an architect are rightly more heavily scrutinized when it comes to a position as august as a lifetime appointment to the Supreme Court.

For this particular job, we must scrutinize a person’s entire life with a degree of thoroughness reserved for virtually no other position. Insofar as we find question marks in that past, the burden lies on the candidate to clear them up, either by definitively disproving the doubters (a tall order) or by showing that he or she has undergone a moral evolution away from the character that permitted such misdeeds in the past.

Kavanaugh has not met this mark. If the polls are to be believed, some 150 million Americans doubt Kavanaugh told the truth under oath. Regardless of the truth or falsity of the charges raised against him, these numbers should themselves be disqualifying. Placing anyone with that kind of a credibility challenge on the Supreme Court will be immensely damaging to the institution’s integrity. Republican partisans can gnash their teeth all they want about Kavanaugh being the victim of a character assassination, but that changes nothing of essence. For the sake of the institutional integrity of the court, appointing another person, one truly above suspicion, would seem to be paramount. (Unless, that is, delegitimating the Court is actually an implicit goal of the President. I leave this speculative possibility for another column.)

This explains my dismay at Kavanaugh’s testimonial strategy. I went into this past Thursday’s hearing truly trying to keep an open mind. I expected Professor Blasey Ford to perform as she did—in a manner that induced empathy if not, of course, providing definitive proof of what may have occurred in that room in Maryland 36 summers ago. But what I had hoped Kavanaugh might have done would be to deliver opening remarks along these lines:


Sexual assault is one of the worst crimes human beings can commit. It damages its victims and their families incalculably, sometimes for generations to come. I stand in solidarity with the #MeToo movement and its valiant effort to banish such behavior. What Ms. Blasey Ford says happened to her is horrifying, and whoever did this to her has no place on the federal bench; more likely, that person belongs in jail. However, I truly believe this is a case of mistaken identity; not only do I have no recollection of such an event, but I find it inconceivable that I could have committed such an odious act.

Such a speech wouldn’t have gone as far as admitting wrong-doing and asking for forgiveness, but it would at least have suggested that, whatever Kavanaugh’s moral positions in the early 1980s, and whatever the truth about who assaulted Blasey Ford, he at least now has reached a morally defensible position.

But this was not the speech we got. Instead we were greeted by a man barely able to contain his emotions, claiming partisan victimhood, and all but explicitly vowing revenge. This show may have appealed to the Audience of One, but it was simply an unacceptable moral posture for anyone seeking a Supreme Court appointment, regardless of the underlying truth of the charges leveled against him. What Kavanaugh’s speech indicated—what it in fact performed—was a traducing of the moral values we expect a Supreme Court justice to embody: solemnity, equanimity, maturity, forbearance, and yes, sobriety (in the moral sense). Even if he was a man wronged, Kavanaugh’s conduct was, to use a moral concept often deployed in the military, “unbecoming” of a Supreme Court Justice. So forget about what may have happened 36 years ago: No one who behaves the way Kavanaugh did on that Thursday belongs on the Supreme Court.

This underscores the final, deepest issue: Kavanaugh’s apparent inability to recognize that the institutional integrity of the Supreme Court is bigger than justice for him as an individual. At the end of the day, like his fellow Republican partisans, Kavanaugh seems unable to see that the assassination of his character, whether deserved or not, has disqualified him from the job; and that failure of recognition is itself disqualifying. This may seem like a Catch-22 for Kavanaugh himself, and it is. But the Court is bigger than the man, and everyone involved, if they care about the Court, should recognize this.


The post Being Morally Serious About the Supreme Court appeared first on The American Interest.

Published on October 03, 2018 14:50

Brexit and Break-up: Rushing Toward the Endgame

Salzburg, Austria, the charming baroque city best known as the birthplace of Wolfgang Amadeus Mozart, was the setting in late September for one of the ugliest diplomatic encounters in the history of the European Union. At a session of the European Council, British Prime Minister Theresa May was humiliated by the leaders of the Union’s other 27 member states and two of its five Presidents. The topic of the meeting was Brexit, and May’s 104-page plan, adopted in July at her country residence Chequers, for defining relations between the European Union and the United Kingdom after the latter’s withdrawal from the Union early next year.

In the view of British officials, May was ambushed and betrayed. European Council President Donald Tusk had met privately with May on the eve of the meeting and had given no indication of the brusque rejection of her plan that would follow. In the plenary session, French President Emmanuel Macron was the most aggressive attacker, calling those members of May’s Conservative Party who had backed Brexit “liars” and ridiculing those who had suggested that Britain could “easily live without Europe.” After the meeting Tusk issued a statement characterizing the British stance as “uncompromising” and rejecting the Chequers Plan. Proving that Donald Trump is not the only world leader to conduct diplomacy by social media, Tusk posted to his Instagram account a picture of himself offering May a pastry, underneath which was the caption, “A piece of cake, perhaps? Sorry, no cherries.” The latter was a reference to alleged British “cherry-picking” of benefits of the EU single market and was widely perceived in Britain as a calculated insult.

As May returned to London, newspaper headlines blared “humiliation,” the pound fell, and politicians from all sides piled on. Some murmured of a second referendum to reverse the original Brexit decision of June 2016, with Macron and Labour Party leader Jeremy Corbyn especially active in stirring this pot. May recovered her dignity and bolstered her political fortunes somewhat with a forceful statement demanding that EU authorities treat the United Kingdom with respect, engage seriously, and work toward a compromise. All eyes are now on preparations for another session of the European Council scheduled for October 18, when, according to the timetable proposed by Tusk, an agreement could be reached and then finalized at another extraordinary summit in November. Time clearly is running out, however, and the odds are rising of a messy British exit from the Union on March 29, 2019—without the conclusion of a withdrawal agreement providing for an orderly disentanglement of the two sides’ affairs, and without an agreed framework for future UK-EU relations.

The United States, as would be wise for any outsider observing a bitter divorce, has been reserved regarding the Brexit negotiations. Prior to the 2016 referendum, President Barack Obama warned that the United Kingdom would go to the “back of the queue” when it came to negotiating a free trade agreement with the United States, and called for British voters to choose Remain—advice they clearly ignored. Trump did the opposite, welcoming the Brexit vote, appearing at a campaign rally in Mississippi with the leader of the Brexit movement and talking up the prospects of an early free trade agreement with the UK. The U.S. government has since muted its support for such an agreement. Following May’s meeting with Trump in New York the week after the Salzburg debacle, Downing Street issued a statement claiming that the two leaders “agreed that Brexit provides a wonderful opportunity to strike a big and ambitious UK-U.S. Free Trade Agreement.” The White House was more cautious, saying only that the two had “discussed a variety of global challenges.”

The United States has an enormous economic and political stake in the Brexit outcome. A chaotic British exit would dampen economic growth in the United Kingdom and the European Union, with blowback on U.S. exports and corporate profits. NATO solidarity would be undermined, undercutting U.S. efforts to encourage Europe to do more for its own defense. The Good Friday Agreement, which the Clinton Administration helped to broker and which finally brought peace to Northern Ireland, would be put at risk were Ulster and the Republic of Ireland to find themselves on opposite sides of a “hard border.”

Break-up of the United Kingdom itself remains a possibility. An agreement on the terms that the EU leadership is demanding risks dividing the country across the Irish Sea. On the other hand, the permanently estranged relations between the United Kingdom and the European Union that might follow a no-agreement Brexit risk rekindling movements for independence in Scotland, for special autonomy arrangements for Wales, and even, as Mayor Sadiq Khan has demanded, for cosmopolitan, pro-Remain London. Break-up effectively would mean the end of Britain as a major power, the winding down of its (Scotland-based) nuclear deterrent, and the loss by the United States of its most important ally since 1941.

The long-term political and psychological effects of a failed Brexit are also difficult to predict. Timothy Garton Ash has warned about the emergence of a “rancid, angry Britain: a society riven by domestic division and economic difficulties, let down by its ruling classes, fetid with humiliation and resentment.” Some measure of the bitterness that already is creeping into UK-EU relations was on display at the recent annual Conservative Party conference in Birmingham, where Foreign Minister Jeremy Hunt compared the European Union to the Soviet Union, provoking outraged reactions from politicians on the continent. Lasting British estrangement from Europe inevitably would affect Transatlantic relations and feed back into the American political scene. As Walter Russell Mead has observed, even members of Trump’s base who are hostile to U.S. involvement in the affairs of Europe have instinctive feelings of solidarity toward the United Kingdom. Perceptions of a vindictive Brussels punishing Britain for asserting its independence would poison American attitudes toward the European Union, opening the way to a full-scale trade war and a weakening of the U.S. defense commitment to the continent.

So what is likely to happen and what can be done?

In her March 2018 Mansion House speech in which she laid out her vision for a future UK-EU relationship, May rejected as models both Norway and Canada, meaning Norway’s relationship to the European Union as a member of the European Economic Area, and the Canada-EU relationship established by the 2016 Comprehensive Economic and Trade Agreement. Norway has full access to the EU single market, in exchange for which it must accept EU law, accept the de facto jurisdiction of the European Court of Justice, accept the free movement of people, and contribute substantially to the EU budget. May rejected such an arrangement as incompatible with British sovereignty and the spirit of the 2016 referendum. Canada takes on no such obligations, but it has far less access to the single market. May rejected this model as well, which she saw as moving too far from the unlimited access to the EU market that Britain currently enjoys as a member state. Instead, she called for a new type of arrangement, “the broadest and deepest possible partnership—covering more sectors and cooperating more fully than any Free Trade Agreement anywhere in the world today.”

For EU leaders, May’s attempt to combine the access Norway enjoys with the more limited political and legal obligations associated with the Canada agreement constituted cherry-picking. In contrast, May’s critics in the Conservative Party saw her as veering much too far toward the Norwegian model in an effort to make her proposals palatable to Brussels. Brexit Minister David Davis and Foreign Minister Boris Johnson both resigned two days after the announcement of the Chequers Plan.

EU complaints about cherry-picking have a ring of truth, but they should still be taken with a grain of salt. Turkey has a customs union with the European Union, but no free movement of people. Norway has free movement of people, but no customs union. In its free trade agreement with South Korea, the European Union has mutual recognition of standards for cars, whereas in its agreement with Canada it does not. The reality is that the Union can accept a wide variety of trading arrangements when doing so serves its interests.

Since the founding of the Common Market in 1958, the European Union and its predecessor organization have relentlessly cherry-picked the global trading system, setting up a Common Agricultural Policy that almost certainly was incompatible with the GATT and concluding preferential trade agreements with countries near and far that played a major role in undermining the Most Favored Nation (MFN) principle on which the postwar trading order was built. The result of these policies has been what Columbia University economist Jagdish Bhagwati called the “spaghetti bowl” phenomenon, as numerous overlapping free trade agreements cause trade-diverting effects. These policies cast a long shadow over the Brexit negotiations. As a RAND study has shown, fallback to global WTO rules is the most disadvantageous outcome from among no fewer than eight alternatives. In the curious world of 21st-century trade arrangements, MFN has become least favored treatment. The European Union knows how to bend rules—its own and others’—when it wants to.

All this aside, Tusk probably is correct in claiming that Chequers is unworkable. Elements of the plan are extremely complex, as, for example, the proposed Facilitated Customs Arrangement whereby the European Union and the United Kingdom would collect customs revenues for each other—depending on the final destination of imported goods—and settle accounts later, thereby obviating the need for a customs union or complicated rules of origin. Tusk and his colleagues no doubt also hope to wring further UK concessions in the endgame of the negotiations. The problem, however, with any further movement toward a Norwegian solution is that it lacks support in May’s Conservative Party, raising the prospect that any deal she concludes would be torpedoed by Conservative defections and by the opposition of the Labour Party, which as yet has no clear policy on Brexit but which sees political chaos and the fall of May’s government as politically advantageous. That in turn could open the way to elections and the left-wing Corbyn becoming Prime Minister—a frightening prospect that would set off just the kind of panic in the business community that the “Remainers” had initially predicted for Brexit.

The Chequers Plan preserves British access to the EU market, but it makes the United Kingdom permanently subject to EU rules. Britain would participate in the various committees and regulatory bodies that set such rules, but as a non-voting member. It would retain the right to conclude trade agreements with third countries but, as critics point out, its ability to do so would be hampered by its being tethered to the body of EU rules, including on agriculture. Quite apart from the problem of near-term opposition in Parliament, it is hard to see how a Chequers arrangement would be sustainable over the long term. If its provisions on “common rule books” and the like turn out to be a mere fig leaf for EU control over the UK economy, it would likely break down. It is one thing for Brussels to administer Norway as a non-voting vassal state, quite another to do so with a country of the size, economic dynamism, and national traditions of the United Kingdom. If, on the other hand, May’s proposals provided for real British input into EU decision-making, they would introduce yet another level of complexity into the EU system, the last thing the Union needs as it seeks to reform and revitalize its own structures.

The solution that May’s critics offer is a sharp break from Norway and a turn to what Johnson calls Super Canada, a free trade agreement that would unmoor the United Kingdom from the European Union but preserve market access through mutual recognition agreements and other more sovereignty-respecting means. Time to negotiate any such agreement is getting short, however, and EU leaders might claim such an approach would be cherry-picking on an even grander scale. Short-term loss of market access would be a problem for a British economy already experiencing slow growth, rising trade deficits, and the growing inequalities of income that gave rise to the Brexit result in the first place. Not least, the European Union retains the Irish border issue as a stick to beat the British government if it chooses to do so. Any move in Johnson’s direction would require the United Kingdom to show that a harder Brexit did not mean a hard border in Ireland. Against these formidable odds, most observers predict that May will stick with her plan and remain in power long enough to reach some kind of compromise with the Union. Whether that compromise will be approved by Parliament and how sustainable it would be over the long run are unclear.

What might the United States do? Washington is not in a great position to exercise influence. Brexit is to an extent embedded in America’s own domestic political strife. The American academic community has long been enamored of the European Union and its self-proclaimed role as a leader of global multilateralism, a wielder of “soft power” that advances bureaucratic solutions to all the world’s problems. The establishment media generally has followed this line, seeing Brexit as a blow to the vaunted “liberal international order” said to be under assault by Trump. Defense of the European Union thus has become part of the anti-Trump resistance, as a reversal of Brexit and a chastened return to Europe by the United Kingdom would be a defeat for the populist forces that Trump has embraced. For their part, Trump and many of his supporters continue to see the European Union as the cutting edge of the globalism they deplore and welcome the reassertion of national sovereignty reflected in opposition to the Union. There is no easily recognizable honest broker in all of this.

At the very least, Americans in and out of government should make it clear that they are watching, and that they expect the least destructive outcome, one that will require greater coherence on the part of the British and more generosity on the part of Brussels. Americans in both the policy and the business communities for the most part would regret a further fragmentation of the Union. But neither would they like to see a great Atlantic democracy driven into the ground by a vindictive Brussels bureaucracy that remains theologically committed to “more Europe” and astonishingly tone-deaf to the populist rumblings that gave rise to Brexit and that are in evidence in most EU countries.

The United States has held informal talks with the UK government about how to broaden and deepen bilateral ties post-Brexit. These could be stepped up and given higher visibility, provided they were not portrayed as directed against the European Union. The conclusion by the United States and Canada on September 30 of a compromise deal that preserves the essence of the North American Free Trade Agreement is a positive step. It will enable the United States to coordinate with Canada on a positive Brexit outcome and possibly establish the basis, over the longer-term, for EU-UK-North American trade arrangements that would alleviate many of the challenges posed by Brexit. The agreement now needs to be ratified by Congress.

Where Brussels shows itself to be petty and vindictive toward London, Washington should do what it can to compensate. If the United Kingdom is excluded from the European Union’s Galileo satellite system, for example, there may be opportunities for it to work more with the United States on GPS and other projects. If the United Kingdom is excluded from EU-sponsored academic and research programs, new opportunities might be opened for ties with American institutions. No doubt there are other compensating steps that could be taken.

For the most part, however, the United States is likely to be on the sidelines. It will have to watch and wait, counsel moderation, hope for the best, and stand ready to help pick up the pieces should the UK-EU negotiations fail.


The post Brexit and Break-up: Rushing Toward the Endgame appeared first on The American Interest.

Published on October 03, 2018 09:06

Immigration, Polarization, and Identity Theft Fraud

Immigration has long been a major third rail of American politics, taking second place historically only to race—“third rail” defined as an issue that is so charged and untouchable that any politician who gets near it will suffer political death by metaphorical electrocution. It is still that, and it’s getting worse. Just as Donald Trump was a symptom of deep political and cultural derangement before he became President—and only at that point constituted a new, contributing cause of that derangement—so the inability to devise a rational and consensual immigration policy has been first a symptom of political dysfunction and subsequently an additional cause of that dysfunction. In our day that dysfunction is defined more or less as the intersection between the wide disagreement over immigration reform and a galloping ideological polarization whose causes are several.

The immigration issue has now become the poster child from hell, the condensation symbol most likely to spark incivility, ideological stupidity, and judgments that one’s adversaries are not merely wrong but evil. The tone of discourse over immigration reminds any historically tutored person of the absolutist, abstract, and angry moralistic language of the 1850s, when slavery took pride of place as the neuralgic issue of the day. That is not a propitious comparison as the third rail becomes hotter than ever.

To get a sense of how hot, note that the last time any Administration made a serious effort to broker a compromise over immigration was back in January 2007, when the Bush Administration put forth what in retrospect was a workable and sensible plan. The plan basically (I’m simplifying here for the sake of brevity) provided a glidepath to citizenship for non-felonious illegal aliens who wanted it, limited guest worker status for those who did not, and better border security to create disincentives to further illegal entry. On the one hand, this was a politically brave initiative, marking the only time during his presidency that George W. Bush pushed against his own domestic political base. At the time, it stood a chance if only the politics could be skillfully managed. On the other hand, as it happened, the politics of managing the initiative was so mishandled that it allowed Bush’s own Republican Party to sabotage the effort from within its legislative delegation.

It was not that much later that I received my first robocall from Newt Gingrich, with a deep, dark message about immigration tied to Mitt Romney’s presidential aspirations. It did not reach Trumpian “carnage”-level rhetoric, but it got close. When the Bush attempt failed, it seems, the nativists were buoyed as the Tea Party settled into Capitol Hill—and they have not stopped since. That, in turn, has triggered leftward motion in Democratic Party views, which is typical of the dialectic of a polarized politics.

One result is that the range of options on immigration reform has shrunk, squeezing out most of what still remained in 2007 of a sensible middle—and, as with many issues, this is despite the fact that the great middle of American opinion, no longer represented by either major party, does not favor any radical view, but rather a sensible centrist compromise. On the one side we have support for “open borders.” That is why, for example, the Federal government spent something like $27 billion in FY 2017 supporting “sanctuary” cities whose main aim was to help illegals escape any detection or penalty for the U.S. laws they had broken. That is also how, in one California town—Huntington Park, in Los Angeles County—illegals ended up serving on a town advisory council, and getting paid for it. What can one say of a legal arrangement in which self-contradiction has become the norm, except that it is a disgraceful and irresponsible reproach to the rule of law itself?

And on the other side, of course, we have the Trump Administration and the President’s talk of “carnage,” which he blames on Hispanics illegal and legal alike, for all one can tell. When he does this his body language reminds one eerily of Mussolini; but hardly anyone seems to notice, or care. His depredations are so grinding on the collective national psyche that when he said the other day that he should have fired James Comey when he won the Republican nomination, it seemed to require too much energy for most to point out the sheer lunacy of the remark. Apparently, Trump’s delusions of adequacy have now acquired a timeless dimension, and far too many Americans now share that fantasy zone with him.

The informal compromise between these two insane extremes on immigration, which both Democratic and Republican administrations have settled on but applied in different ways and with different tonalities, is to deport illegals guilty of serious crimes, but to otherwise look the other way for lack of resources or will to do more. This is not a tolerable solution for the long run. A tolerable compromise, however, would be to liberalize but also rationalize immigration along the lines of economic need, not family reunification or other “soft” factors. In short, let more of the right people in legally so as to gain support for keeping or getting more of the wrong people out, also legally. But how do we get there? Can anything be done to free immigration policy reform from its lockbox, and make a sensible bipartisan compromise possible?

Probably not, but there is at least one underplayed issue whose better articulation could help: the link between illegal immigration and identity theft fraud.

It would be nice to have some reliable data on this link, but they are hard to come by. We do know, sort of, that, according to a 2017 CNBC report, “Some 15.4 million consumers were victims of identity theft or fraud last year, according to a new report from Javelin Strategy & Research. That’s up 16 percent from 2015, and the highest figure recorded since the firm began tracking fraud instances in 2004. . . . In all, thieves stole $16 billion, the report found—nearly $1 billion more than in 2015.”

Those are big numbers: 15.4 million Americans, and $16 billion. And as electronic techniques of theft become easier to master, the numbers appear to be rising more or less in tandem, which is not a good sign. But these numbers hide as much as they reveal; in particular, they do not tell us what percentage of identity theft is caused by illegal immigrants in situ. Robocall frauds originate from many places outside the United States, for example; we don’t know what share of these totals corresponds to which class and type of fraudster.

If the Justice Department, the Federal Trade Commission (which is the designated repository for reporting identity theft fraud, and on which the Javelin study depended), ICE/Treasury, DHS, the IRS, or some other Federal agency keeps data on this, it’s either not obvious or not publicly available (think of the last scene in Raiders of the Lost Ark). Even the aforementioned estimate of how much identity theft fraud costs the economy in a given year is incomplete even if it is accurate; it does not include large indirect costs in otherwise unnecessary specialized insurance premiums paid by individuals and businesses or time lost trying to recover from an identity theft attack. So what other numbers do we have that may be more relevant to the question at hand?

Well, to start, between 2011 and 2016, the IRS documented more than 1.3 million cases of identity theft by people who had been given Individual Taxpayer Identification Numbers (ITINs). An ITIN, created by IRS fiat in 1996, is a “tax processing number only available for certain nonresident and resident aliens, their spouses, and dependents who cannot get a Social Security Number.” The IRS claims that ITINs do not create legal residency status or work authorization on an I-9 form. But thanks to what amounts to an effort to get illegals to pay taxes, the ITIN can be used to claim the Child Tax Credit, to get a driver’s license, to open an interest-bearing checking account, and more besides. An ITIN also provides, somewhat ironically it will seem to many readers, proof of residency in the United States in case an illegal applies for legal status at some point.

Because the IRS wants to tax them, those with no legal status in the United States can fairly easily get an ITIN, just by filling out a W-7 form—essentially end-running the main purpose of a 1986 law that required legal residency in order to get a job. The applicant does not have to appear in person before any government official, and need submit only any one of 13 documents approved for the purpose; the document(s) are returned with the ITIN after 60 days. The IRS does not ask about legal status, and the law concerning the privacy of all IRS functions has been interpreted (by the IRS) to prohibit it from sharing any information about ITIN applicants with immigration officials or law enforcement. By August 2012, the IRS had assigned 21 million ITINs to taxpayers and their dependents­—a number that far exceeds conventional estimates of the number of illegal immigrants in the United States. By now the number is certainly larger, but it is impossible to find in public sources. It is so easy to get an ITIN that it’s not unreasonable to conclude that every illegal alien in the United States who wants an ITIN has one.

Of course, if any one of 13 documents will do, those most easily forged will be the documents of preference for illegal residents to get an ITIN. Why use a forged document? Because many illegal aliens have no documents at all, and some are forethoughtful enough not to use their real name to get one. That enables individuals to hide past indebtedness and criminal convictions in their country of origin, and also in the United States. So some unknown number of illegal immigrants secure multiple ITINs to cover their own tracks and then sell those ITINs to newcomers, who simply adopt the fictitious name on the document invented by the earlier arrival.

Many illegal immigrants are content with an ITIN, but ITINs are limited; they are not much good for credit applications, for example. So, unsurprisingly, a major document-forging industry has arisen in the wake of the 1986 law, not just for the purpose of helping illegals get ITINs, but for a good deal more than that—for stealing the identities of U.S. citizens. Good luck, IRS, taxing all that hard work.

The problem was detected and made publicly known by 2002 at the latest, when the Immigration and Naturalization Service reported that "large-scale counterfeiting has made fraudulent employment eligibility documents (e.g., Social Security cards) widely available." The report also describes a 1998 incident in which "INS seized more than 24,000 counterfeit Social Security cards in Los Angeles after undercover agents purchased 10,000 counterfeit INS permanent resident cards from a counterfeit document ring."

It is not hard to understand why this happens. Once in the United States, illegal immigrants use counterfeit documents to secure jobs that would otherwise be kept out of reach by the employee verification process. That works for many who are content with an ITIN, but also for those illegals who want better documents for the purpose of, for example, appropriating someone else's credit to buy or lease cars and acquire auto insurance. Some use stolen documents to claim Social Security benefits that are not theirs, and some succeed. That requires actual identity theft: the trifecta of name, Social Security number, and birthdate. Addresses and telephone numbers are easy to add to complete a stolen identity package.

In the vast market for forged documents, the new illegal does not have to search hard for them. The purveyors of fakes find him most of the time, usually via friends and inward-rippling family connections. Almost every new illegal has at least one relative or friend in the United States who is an older illegal, and so knows the ropes.

In order of popularity as of about 15 years ago, forged documents included border crossing cards, alien registration cards, non-immigrant visas, U.S. and foreign passports, reentry permits, and immigrant visas. But forged Social Security cards and driver's licenses are the gold standard of forgery. Typically, too, some illegals who came earlier go into the phishing and forging business to exploit more recent arrivals. They steal identity packages—combinations of names, Social Security numbers, addresses, birthdates, credit card numbers, and the like—and build a store of stolen identities to sell to newcomers. Several cases of older illegals posing as bilingual immigration lawyers in order to fleece newcomers have also been uncovered, and there is a strong correlation between states with large immigrant populations (legal and illegal) and incidents of identity theft fraud. Often enough, legal residents with Hispanic names are reportedly targets of choice.

It is very hard to establish any reliable statistical relationship between the acquisition by illegal aliens of ITINs or more "advanced" forged documents and cases of identity theft fraud. That is because the mix includes many cases of identity loans. In other words, a family member already in the United States, perhaps a legal resident, will lend his documents to an illegal alien, who assumes his name. The immigrant's own supervisor often arranges the loan: The immigrant gets to work and the supervisor gets an inexpensive and beholden worker in what is essentially a win-win trade. The result is that the illegal alien can get a job posing as somebody else in the family, but the money earned is credited to the holder of the documents, thus increasing his investment in the Social Security trust fund. Identity loaning is still technically fraud, but it is consensual fraud. Only the government and the integrity of the law get screwed.

There is a particular emphasis on borrowing legal residency documents for minor children, for obvious reasons. Parents care deeply about their children's futures, and the older a fraud, the harder it is to uncover. These immigrants, legal and otherwise, like the millions of immigrants who came before, typically think in terms of family security and wealth, and adopt family strategies for the purpose. Trying to understand how this works on the basis of individual rationality is tempting for many Americans, but it doesn't work most of the time. That's how far most Americans have drifted from the natural communal agency of premodern social structures—so far that their imaginations cannot readily capture it.

One result of all this is that huge numbers of people living in the United States work under a name that is not their given name. Of course this is nothing new. My family's paternal name in the old country was Bawierzansky. It got turned into Garfinkle thanks to my paternal grandfather's half-brother Benjamin, who preceded him from the Polish territories of the Russian Empire to the United States. No one knows where my great uncle Benjamin got this name, but it was common in those days to travel away from Russia on forged, stolen, or borrowed documents—anything that worked to escape impressment in the Czar's army. The documents of someone who was deceased were particularly desirable, for obvious reasons. The result was that forged documents of foreign origin often left a trail of disingenuity once the traveler got to the United States. Was my great uncle Benjamin going to tell an American immigration official the true story of how he obtained his travel documents, a story that would have had to be told in a language he could not yet speak, and the truth of which might well have barred his entry? You get the point, and it remains a contemporary point. Besides, the disingenuity in time gets naturalized: By the time I was born, the name Bawierzansky had long since disappeared from the family record. I was amused to learn some 55 years later, however, that I had cousins by that name living in Australia!

Because of the complication of loaned versus stolen documents, it is impossible to say how many incidents of identity theft fraud there have been. A number like 1.3 million might be high; or, depending on how the Javelin numbers might be parsed down—if they can be parsed down at all—it might be very low. We just don't know.

But it’s far from zero. One estimate by the Social Security Administration found, for example, that 700,000 unauthorized workers in 2010 alone obtained fraudulent birth certificates in order to get a Social Security number. It also found that “1.8 million other immigrants worked and used an SSN that did not match their name in 2010.” That’s just for one year. Go back to the time of the 1986 law that made legal residency a condition of getting a job, do a little math, accounting for repeat cases of the same person pretending to be someone they are not year after year, and the scope of estimation grows wide. All we can say about the link is that it is not minor, and the costs, all told, are not minor either.

And then, of course, there is logic: Who more needs to steal an identity than someone who doesn’t have a legal one in the United States?

Another way to go about plumbing the quandary is to ask not how many people perpetrate identity theft fraud, but how many report being victims of it. As noted above, the FTC is the reporting repository for the purpose. (The FBI doesn’t “do” identity theft fraud. . . .) But if you ask FTC officials how many reports they have received, they either don’t know or won’t tell you. Probably the former. If you make a report but don’t update it at least every 60 days, it expires. So does the FTC even know how many reports have been filed? Probably not. Moreover, many victims of identity theft fraud don’t bother to report it to the FTC, since the FTC does nothing much to help. It gives the victim a “recovery plan,” but it is all common sense and nothing that can’t be easily looked up on the internet.

Hundreds of thousands of police reports, local and state, are also filed yearly due to identity theft fraud, because victims whose credit has been abducted or whose Social Security benefits have been claimed tend to take the crime seriously. They actually expect law enforcement to use their tax dollars to do something about it on their behalf. But anyone who becomes a victim will soon learn that law enforcement does no such thing.

If you ask the local police in any jurisdiction with large numbers of non-native born people, they will tell you that the number of cases coming their way is simply overwhelming. They do not have nearly enough detectives to pursue every report, or even one in a hundred reports. Besides, most of the cases spill over jurisdictional boundaries. If a resident of Montgomery County, Maryland becomes a victim of identity theft fraud, the perpetrator may not, and probably does not, also live in Montgomery County. And if the perpetrator has not succeeded in stealing something monetary from the victim, all the police have to go on is an attempted identity theft fraud by someone they have close to no information about. If the victim goes to the Maryland State police, all he gets is a barely sympathetic yawn. If the victim by chance finds out where the perpetrator is probably located, and calls law enforcement both local and state in that locale to ask for help, pretty much the same thing happens.

This means that there is basically no penalty or disincentive for illegal aliens to steal someone’s identity and then proceed to try to use it, over and over and over again, until perhaps they succeed. In my own case, someone out there, having stolen my identity, first tried to divert an IRS refund. Then he tried to use my credit to buy a car, and succeeded—though it did not harm me. Then he tried to buy or lease another car, and failed, because the car dealerships had the presence of mind to call me to check out the suspect. At one point he used my credit and my name to get auto insurance, and that succeeded. But again, it did not cost me anything. Then probably the same perpetrator applied for my Social Security benefits, and was denied not once but thrice.

I am fortunate in that no real harm has been done, except of course that I have had to spend time, which has now accumulated into many dozens of hours, dealing with the problem and trying to protect myself from further assaults—freezing access to my credit reports, buying special insurance, adding layers of protection for credit cards, and so forth. The guy is still out there, and I have no idea what he will try next. I have a copy of his fake Maryland driver’s license, with his photo, thanks to a sharp auto salesman in Orlando, Florida. (Yes, he looks very Hispanic.) I alerted the Maryland DMV about the fake driver’s license. They said thank you, and beyond that, as far as I know, did absolutely nothing. The most annoying thing about all this is that when I found out about the auto insurance policy, quite by accident upon receiving correspondence from the company, I immediately explained the situation and cancelled the policy. But it took four letters and four more calls to get the bills to stop coming—during which time my account was turned over to a collection agency despite three promises that it would not be.

This sort of Kafka-lite thing is certainly annoying, but it is relatively innocuous. Some people have not been so lucky. Several cases have been reported of individuals being denied credit for a mortgage because of bad debts run up by an imposter. Imagine walking into a loan office at a bank to apply for a mortgage and being confronted by a credit sheet listing a dozen delinquent accounts you’ve never heard of. That is bound to be more than annoying. But it really happens, and illegal aliens are responsible for some unknown but logically not small number of cases.

What is truly galling, as already noted, is that there is no disincentive whatsoever for illegal aliens to pose as citizens or legal residents in order to steal. Law enforcement at all levels does essentially nothing. The FTC and other Federal agencies do even less. The IRS is actually complicit, as we have seen, and to a much lesser extent so is the bureaucratic laxity of the Social Security Administration. And this is why, to victims of identity theft fraud, the fact that the Federal government spent $27 billion last year to help illegal immigrants stay in the country is so frustrating, since some small but not trivial number of those illegal immigrants are the identity theft fraudsters. It is hard for a victim of identity theft fraud to resist asking why some members of Congress seem to care more about them than about me and my fellow citizen victims.

I wish I knew how many victims of identity theft fraud perpetrated by illegal immigrants there are. I wish I had a list and could help organize the group to make representations to government. I can’t, because government itself lacks the data I would need. But around the issue of identity theft fraud there may be a constituency—individual victims, defrauded businesses, and politicians hammered on by constituents—for an effort at reform based on a problem that has received too little attention. It is an issue that appeals to the near-universal sense of basic fairness, not to the relative few possessed by ideological extremism. It is an issue that affects people of virtually every socioeconomic cohort. It also affects law-abiding citizens and legal residents of Hispanic background, who of all people should want to put a stop to this sort of thing. After all, they are preeminent victims, and the whole thing makes their community look bad in the eyes of the Anglo majority.

Are there any politicians out there, at any level, who care about this? Are there any members of the mainstream media willing to spend a few hours finding out? These are questions that would benefit from a few answers. Meanwhile, every victim of identity theft fraud, individuals and businesses alike, should be demanding answers from their local and state representatives. We need to find a new and more centrist base for immigration reform, and focusing on identity theft fraud may be part of the mix that ultimately does the trick. If we don’t find or develop a new centrist base, it’s hard to see how this problem ever gets solved, or the political toxins to which it gives rise ever abate.


For details of the screw-up, see Daniel DiSalvio, “Four Traps,” The American Interest (March-April 2009).

Newt could have just called me directly; he had my number and we knew each other from Hart-Rudman Commission days, when he was one of 14 commissioners and I was the staffer who wrote the three Commission reports. We have not spoken since that robocall.

Matt Hamilton and Ruben Vives, “In a First for California, Immigrants Here Illegally Get Seats in City Government,” Los Angeles Times, August 3, 2015; and reported by Al-Jazeera America, August 30, 2015.

The IRS created ITINs from within its own Executive Branch authority, without any participation from Congress, as part of a delayed reaction to a 1986 law that required legal residency in order to get a job. Only in 2015, 19 years later, did Congress nibble at the edges of the IRS decrees; it has never questioned the essence of the policy. The rationale for that law, and the situation that obtained before it was passed, are interesting and part of the immigration policy story, but that discussion lies beyond the scope of this essay.



The post Immigration, Polarization, and Identity Theft Fraud appeared first on The American Interest.

Published on October 03, 2018 06:00

October 2, 2018

Empower Political Parties to Revive Democratic Accountability

George Washington, the medical consensus now holds, died from aggressive bloodletting. Feeling sick from what may have been strep throat, he called his doctors at 3 a.m. on December 14, 1799. After losing nearly half of his blood in successive “treatments,” Washington died less than 24 hours later.

Bloodletting is no longer accepted medical practice, but we routinely perform comparable quackery on our political system unawares. Voters seeking a remedy for the problem of “out-of-touch elites” damage the body politic when they weaken the political parties that form the basis of healthy political competition and democratic accountability. Citizens are right to remain alert and engaged, but home remedies can cause great and unintended harm. The shrill cry that American democracy is dying rests on a misdiagnosis. If anything, we have introduced too much democracy in the wrong places.

Consider the Democratic Party’s recent decision to downgrade the role of superdelegates in presidential primaries. “I hope that the grassroots who have felt dismissed and who have lost faith in the party . . . understand that they have had warriors on this commission who are completely in line with their values and that we fought and we won a lot to make this party inclusive.” Thus spoke Nomiki Konst, a Bernie Sanders appointee to the Democratic Unity Commission, after it had just agreed on new rules for selecting the party’s presidential candidate. Beginning with the 2020 nominating contest, superdelegates—719 party bigwigs and other members of the Democratic establishment who represent about 15 percent of the total—will effectively be frozen out of the candidate selection process.

Superdelegates, who could tip the balance in divisive cases, had been the Democratic Party’s attempt to fix the party leadership’s loss of control over candidate selection following the McGovern-Fraser reforms adopted after the 1968 convention. In now reverting to a bottom-up system of candidate selection, the Democratic Party has made strategic platform construction and electoral competition even more difficult. This is one instance of a larger pattern in which people call for more democracy to redress their alienation from the political process, yet institute reforms that end up compounding the problem.

Why Disciplined Parties Are Better than Weak Ones

The urge to dismantle strong party leadership often stems from legitimate gripes, but it can result in tragic error because bottom-up decision-making is not the same thing as democracy. Political parties are the core institution of democratic accountability because parties, not the individuals who support or comprise them, can offer competing visions of the public good. Voters lack the time and knowledge to investigate the costs and benefits of every policy, let alone to think about how their own interests must weigh against those of other citizens. Individual politicians may appeal to voters with brilliant ideas, but those ideas can only be implemented by a legislative majority. Lone politicians cannot tell voters in advance of elections what policies they will deliver unless they are members of a party campaigning on a platform on which its long-term reputation rests. Because parties gain or retain an electoral majority by offering widely appealing policies, party discipline—a party’s ability to command the votes of its legislative members—is key to democratic accountability. Only a disciplined party can credibly promise to deliver proposed policies if elected.

Parties are able to help citizens achieve what they could not on their own by considering the long-term consequences of each policy for every other goal or interest. Formulating and campaigning on a party platform is the antithesis of bottom-up democracy, such as primaries that allow voters to pick over the party platform, or referendums that give voters a say on one issue at a time.

Take primaries. More competition generally being preferable to less, a McKinsey consultant might say, primaries would appear to be a made-to-order cure for top-heavy, unresponsive party government. Primaries have, in fact, been peddled the world over as a cure-all, most recently to breathe new life into India’s declining Congress Party. But primaries do no such thing. By creating bottom-up membership selection, primaries undercut the leadership’s ability to punish “cheating” on the party platform. The result is individually strong politicians who are collectively weaker than they otherwise could be. Or, to use a market analogy, giving locally based customers the ability to shape the product for sale undermines the firm’s ability to create the most desirable product for the market as a whole.

Consider another bottom-up measure, the referendum. Voters in California overwhelmingly passed Proposition 13 to set a ceiling on property taxes in 1978, with the unintended consequence that the Proposition undermined their ability to provide quality education for their children. Referendums, by slicing decisions around one issue at a time, undercut the unique ability of legislative parties to consider the relative costs and benefits of relevant issues in a bundle.

Brexit is a more recent instance of piecemeal politics gone wrong. The growing popularity in 2015 of Nigel Farage’s chauvinistic United Kingdom Independence Party (UKIP) made some members of Prime Minister David Cameron’s Tory Party nervous about sticking to their party’s official pro-Europe policy as parliamentary elections approached. Thanks to Britain’s plurality electoral rules, UKIP remained a minuscule parliamentary presence. But, particularly in some tight electoral districts, Tories joined the chest-thumping about protecting British jobs for the British and began humming the right wing’s anti-immigration tune.

Cameron did not have to hold a national referendum on Brexit. A parliamentary majority, and in fact majorities in both major parties, favored Britain’s continued EU membership. By the numbers, there was little doubt that Britain’s economy was better off in the European Union, given its enormous exports of financial services and the lower labor costs from liberal EU immigration rules. But Cameron decided to put Britain’s EU membership to a popular vote in the summer of 2016, in response to widespread pressures to be “more democratic,” and no doubt also because he was confident that the Brexit gambit would fail. After all, in poll after poll majorities of British citizens had acknowledged the overwhelming benefits to the British economy from EU membership; and they had elected representatives who had made those judgments on their behalf in Parliament. Grabbing the immigration piece of the issue, the intense Brexiteers turned out in higher numbers than those who favored remaining in the European Union—robbing the British electorate of a parliamentary discussion and vote that would have elevated the broader, long-term interests of the British people.

These examples illustrate how strong party government is like a marriage, which enables its members to invest and plan for the whole family and for the long term. By contrast, in the colorful language of Argentinian commentator Eduardo Fidanza, weak parties are prone to patronage deals that are more like sex for pay. In a succession of electoral “reforms” across Latin America in the 1990s, voters gave themselves power to choose among individual legislators rather than, as formerly, to cast votes between party platforms. What seemed like a surge of democratic engagement in that period proved disastrous. Party discipline in Latin America collapsed into what Fidanza called “polygamous alliances” among opportunistic politicians, followed by presidential deal-making “orgies” of political corruption.

Fidanza’s orgy imagery calls to mind Lula’s Brazil. In such a weak party system, the President employs financial and regulatory resources to secure votes from free-agent legislators to pass bills. The result is a cornucopia of favors in exchange for legislative votes. Calibrated by the number of bills passed, the regime can look like a great success, as some scholars have argued. Measured instead by misspent resources and lower prospects for long-term prosperity, the voters get a raw deal.

Barriers to Party Discipline

Given the clear public benefits of electoral competition between disciplined parties with identifiable party platforms, why is there not a universal trend towards this model? If anything, entropy appears to work powerfully in the other direction.

One reason is that assigning credit and blame for policies is inherently hard. Reaching for the closest lever, voters seek to disempower leaders who appear to have erred. Error being hard to ascertain, incumbents do well when the economy hums along and lose votes when the economy stumbles.

Another reason to resist strong party discipline is that voters’ interests may diverge widely, particularly in geographically and demographically complex societies like ours. A majority in one electoral district may be at odds with those in other districts, which will disincline voters in that district to submit to party platforms that aim at the overall best interests of the country. One example is the Southern Democrats, who broke with the Democratic Party in the 1960s over civil rights. India is another instructive case. As in the United States, India’s legislature is chosen in single-member districts, a system that tends to produce two parties since only large parties can form the legislative majority needed to shape policy. Despite the enormous advantages that accrue to majority parties with a credible chance at national rule, vast differences across Indian states along ethnic, linguistic, and religious lines have produced party fragmentation. This fragmentation may recede as India’s regional differences narrow, with the result that India develops a system of two-party competition, but that has not happened yet.

Without question, it is easier to be a responsible party of the kind we advocate when electoral districts are internally diverse and when districts are relatively similar to each other in their diversity. David Hume expressed this intuition when he argued that geographically large districts “enlarge and refine” public responsibility, an insight that James Madison echoed in The Federalist (No. 10). Ideally, politicians elected from more or less similar electoral districts would represent nationally representative interests. Single-member districts would push these politicians into one of two parties, both aimed at a widely appealing vision of the public good. Because the districts would resemble each other in major respects, representatives from each need not fear for their local reputation and re-election chances were they to delegate to party leaders the authority to implement policy for the whole party.

Ideal conditions are rare, and the exigencies of state formation, as in America in the 1770s-1780s, often require locking in long-term concessions to powerful local elites. Willing to combine against the British threat but keen to protect existing privileges, the colonies reserved for themselves substantial powers as future states. As a result, the Senate, with two seats from every state irrespective of population, continues to vastly over-represent states with small populations, forcing U.S. political parties to aim at a rurally weighted public interest rather than one representing the average American voter. The Italian Senate, which Prime Minister Renzi tried but failed to reform in 2016, produces similar distortions in that country.

Does Any Country Get Things Right?

As a historical matter, electoral competition between strong parties emerged gradually and by accident, first in Britain as the elimination of rotten boroughs and multi-member districts in the 19th century shifted the center of gravity to large, competitive single-member districts. Electoral campaigning on the basis of policy favors for some and an open beer tap for all gave way to policy competition between two large and disciplined parties.

As we argue in our new book Responsible Parties (Yale University Press, 2018), the policy-based competition that emerged in Victorian England continues to benefit British voters to this day for a set of interrelated reasons. Two-party competition forces parties to formulate and promote broadly appealing policies and creates a simpler set of choices for voters to absorb and from which to choose. The shift to single-member districts from multi-member districts had the effect of eliminating the incentives for politicians to stake out their own claims outside of the party platform, which can confuse voters and undermine the party’s ability to deliver on a promised course of action. In turn, shifting the basis of campaigning from personal loyalty to party platform made campaigns far cheaper, reducing the role of money in politics. Finally, the two-party system ensures that there is always one large party in opposition with powerful incentives to discover and advertise corruption and broken promises of the incumbent party. No one had the foresight to set up this system, but in retrospect it has remarkably appealing properties.

Yet Britain has not been immune from decentralizing reforms that have eroded the accountability of its parties. The parliamentary parties used to select their own leaders, but in recent years the memberships have played a larger role. In 1998, the Tories finally acceded to pressure to democratize the leadership selection process, instituting a system whereby MPs vote among the contenders until only two remain, at which point the entire party membership chooses between them. This means that the party can end up with a leader who is not its preferred candidate, as happened in 2001 when the Thatcherite Iain Duncan Smith won the at-large contest with 60 percent of the vote even though two-thirds of the parliamentary party preferred his centrist rival Kenneth Clarke. It was not a happy marriage. Duncan Smith was forced out in a no-confidence vote two years later, at which point the Tories were so demoralized that Michael Howard, their fourth leader in six years, was elected unopposed.

Labour has had its own troubles with direct election of leaders. In 2015, following rule changes to allow party members rather than the members of the parliamentary party to select their leader, long-time backbencher Jeremy Corbyn, from the radical left wing of his party, was elected Leader. He was so far from the median Labour voter that most Labour MPs could not support his policies. Following streams of resignations from his shadow cabinet, in June 2016 the parliamentary party passed a motion of no confidence in his leadership by 172 to 40. Yet three months later the at-large membership, heavily dominated by activists, re-elected him with 62 percent of the vote. It is hard to imagine how Corbyn could govern in the event that Labour wins an election.

Proposals for Stronger U.S. Political Parties

Can political parties in the United States strengthen internal discipline? The U.S. system is often paired with Britain’s because they share single-member district electoral rules, but it lacks a constitutional feature that favors disciplined parties: parliamentary democracy.

In parliamentary systems, common to Britain and much of Western Europe, voters choose among parties, which in turn choose a Prime Minister on the basis of a parliamentary majority. Parliamentary parties have strong incentives to pull together as a team, standing behind a common electoral platform, because failure to do so can result in successful no-confidence votes that trigger new elections. Valuing their political lives, members of parliamentary parties work hard to solve disagreements internally before opposition parties have a chance to pry them apart in full view of a disappointed public. Parliamentary parties rise or fall together, whereas presidential systems, impervious to this version of sudden death, permit far greater levels of internal party dissension. While public disagreements may sound appealing for the same reasons given for primaries and referendums—that they return power to the people—their effects are the opposite. They undermine democratic accountability by stunting the ability of parties to create, compete over, and implement encompassing sets of public-minded policies. Parliamentary systems therefore have an advantage in nurturing responsible, disciplined parties that weigh competing claims to construct a platform that has the best chance of serving the most voters.

Voters in presidential systems labor under a structural disadvantage from this point of view. Still, it is possible to push political parties in presidential systems toward cohesion and, by extension, greater political accountability. In 1950 a commission of the American Political Science Association chaired by E.E. Schattschneider recommended that American political parties establish strong party councils that would fuse leadership of the executive, legislative, and state and local parties, an idea that was never implemented but would have gone far beyond the superdelegates innovation of the 1980s.

The obstacles to party centralization are political, since the Constitution is silent on parties altogether—and, for that matter, silent on candidate selection. Starting in 1808, when James Monroe challenged James Madison for the Republican nomination for President, members of the congressional party, known as the congressional caucus, made the decision. This system prevailed until 1824, when it became a casualty of the fracturing Republican Party. Four years later both the incumbent John Quincy Adams and his challenger Andrew Jackson attacked the caucus system, so that no caucus was held in 1828. From 1831 onwards, congressional nominating caucuses were replaced by national presidential nominating conventions. The use of primaries grew rapidly from 17 in 1968 to 35 in 1980, and today every state has either a caucus or a primary, so that the voters who participate in them have become the gatekeepers in presidential contests. Parallel reforms among Republicans have had similar results, as was dramatically underscored in 2016, when the party establishment proved powerless to stop Donald Trump's populist stampede to the presidency through a primary system in which superdelegates had already been stripped of their power except in the event that no candidate won a majority before the Convention.

The contrast with the emergence of stronger British parties in the 19th century helps explain why self-strengthening measures have been few and fragile in the United States. American geographic diversity, amplified and exacerbated by gerrymandered districts, produces members of Congress who represent narrow and distinct slices of the American public. This electoral set-up cannot help but produce political parties that are internally divided and therefore disinclined to empower leaders to formulate and implement policies, however favorable to the average American citizen, that might be out of line with their own districts. One reform that would go a long way toward establishing conditions for party strengthening—what political scientists call endogenous institutional change—would be to draw electoral districts that approximate the political diversity of the nation. It is not easy to see how this could be done, but in our view it is far preferable to other ideas on offer, including jungle primaries (which also weaken parties), a move to proportional representation (which would fragment the party system in ways that Sweden and Germany struggle with now), or eliminating the Electoral College, which, by legitimating the President at the expense of the legislature, would further weaken parties.

Constructive change is hard, but political reformers should at least obey Hippocrates's injunction to do no harm. Partisan gerrymandering and majority-minority districts promote the intraparty competition that breeds clientelism and support for sectoral interests. Healthy political competition is between parties, not within them. The best way to achieve that is to gerrymander for competitiveness among the parties, which is much more likely to occur if redistricting is taken out of the hands of state legislatures and given to independent commissions, as states like California have now begun to do. Another possibility is the British practice, in which an independent commission periodically redraws constituencies and Parliament then votes to accept or reject the plan as a whole, with no option to revise it.

The British also have their work cut out for them. Just as blue-state-red-state sorting makes it harder to obey Hume’s dictum to maintain large districts that mirror the country’s diversity as much as possible in the United States, regional variation in Britain undermines it there too. Welsh, Scottish, and Northern Irish voters are decreasingly like one another or like their English counterparts, a phenomenon exacerbated by the fact that London’s prosperity has not been matched elsewhere in the United Kingdom. The British would be best served by reducing the number of their constituencies (which are notably smaller than their counterparts in France, Germany, and the United States), and perhaps including a slice of London in each of them.

Real, Not Bogus, Accountability

One of the great paradoxes of electoral accountability is that less is more. Disciplined, centralized, vertically organized political parties solve a multitude of problems of information, judgment, and competitive strategy that voters as individuals or groups of activists dabble in at their peril. Nibbling at their power in hopes of increasing representativeness instead undermines it.

The McGovern-Fraser reforms, which opened primaries and caucuses to wider participation, solved one problem by creating another. Voters in primaries and caucuses tend to be activists who are well to the left of Democratic Party voters, creating the risk that they will pick someone who will not do well in the general election. This quickly became evident in 1972, when Senator McGovern won the nomination under the new rules, defeating the establishment candidates Ed Muskie and Hubert Humphrey, only for Richard Nixon to beat him in a rout, winning every state except Massachusetts and the District of Columbia. In 1980 Jimmy Carter was defeated for re-election by Ronald Reagan following a brutal primary challenge from Senator Edward Kennedy that many establishment Democrats blamed for fatally weakening Carter. The subsequent soul searching led to a commission, chaired by North Carolina Governor Jim Hunt, that recommended the introduction of superdelegates to counteract grassroots influence. But after 2016 the grassroots successfully fought back.

Activists see superdelegates as an anti-democratic throwback to the smoke-filled rooms that had prevailed until 1968. The process had indeed been opaque, with backroom deals among state and congressional party officials determining the choice of candidates. But the turn to bottom-up nomination processes has further weakened already weak congressional parties, reducing their capacity to govern in ways that alienate the voters who demand decentralizing reforms. The supposed cure fails to address the underlying problem: Political parties cannot develop and run on coherent platforms that their members can get behind when they run, and implement if they win.

Before 1968 primaries were seen as ways for potential candidates to prove their viability to party leaders. Now they are seen as the font of a candidate's legitimacy, a legitimacy that superdelegates are seen to compromise. What is missing from this picture is any attention to the relations between the congressional parties and their presidential candidates. They are supposed to be on the same team, fighting for the same agenda, but the grassroots selection of presidential candidates drives a wedge between them because they have to worry about different electorates. Presidents must craft messages that can win among primary voters nationwide, while members of Congress must win in their individual constituencies, which might have a very different political makeup. Superdelegates were always a band-aid on this problem: although members of the congressional party select most of them, they have never amounted to more than a fifth of the total number of Democratic delegates, and less than half of that for Republicans. In 2016 they were in any case powerless to stop Trump from being selected as the Republican candidate by the less than five percent of the U.S. electorate that had voted in primaries and state caucuses.

Getting rid of primaries is likely impossible, but a rule that declared Convention delegates unbound by primary results in which turnout fell below, say, 75 percent of the party’s vote share in the previous general election would blunt the power of activists at party extremes. Still better would be reforms that allowed the party’s sitting representatives and senators to select the candidate if the 75 percent threshold was not met. Rule changes of this kind might not be such a difficult sell politically, because proposing them would highlight the exceedingly low turnout in most primary races.

The vices of presidential primaries are replicated at the congressional level, where activists turn out in disproportionate numbers and pull parties toward extremes, fueling gridlock in Congress and concomitant disaffection among the electorate. One might contemplate comparable reforms there, empowering House and Senate incumbent leaderships to ignore the primary result and select the candidate if the 75 percent turnout threshold was not met. A more robust option still would be to place the presumptive choice of candidates in the party’s congressional hands, to be overturned only when the turnout threshold was reached.

The great virtue of the early-19th century congressional caucus was that it gave everyone the incentive to be on the same team. Members of Congress had to select the candidate who could best articulate a platform that they could run on, and candidates for President had to win and retain the confidence of the congressional party, at least if they hoped to be successful in office and re-nominated. This came as close to approximating a parliamentary system as was possible within the strictures of the American constitutional separation of powers. It is a system that could be restored without a constitutional amendment, and one that would strengthen the capacity of parties to deliver for their voters, attenuating the voter alienation that grows out of gridlock and that decentralizing reforms only worsen.


1. Dr. Benjamin Rush, Washington’s friend and a signer of the Declaration of Independence, was an avid bloodletter. Doctors began abandoning the practice only after William Cobbett showed, using Philadelphia’s mortality statistics, that deaths had increased after Rush began his aggressive bloodletting campaign. See Richard Frank, “Bloodletting and The Death of George Washington: Relevance to Cancer Patients Today,” Yale University Press Blog, February 28, 2015.

2. Daniel Marans, “DNC Unity Commission agrees on slate of historic reforms,” Huffington Post, December 9, 2017.

3. Disgruntled grassroots activists exploded with anger when the Party muscled through Hubert Humphrey at the 1968 National Convention. Humphrey had not contested any of the 13 states in which antiwar candidates had won 80 percent of the vote, raising questions about the legitimacy of his selection. Senator George McGovern and Representative Donald Fraser headed a 28-member commission that opened candidate selection processes to the membership at large. As in 2016, grassroots unhappiness—which in 1968 produced violent confrontations with Chicago police—produced promises of reform. The McGovern-Fraser Commission designed the modern system which greatly increased the significance of primaries and caucuses, making it inconceivable that someone could become the candidate—as Humphrey did—without participating in these contests.

4. William Riker pointed out that single-member districts create economies of scale—a large advantage to big parties—only to the extent that the benefits of being in a national majority outweighed the costs of policy distance from local constituents. See Riker, “The Two-Party System and Duverger’s Law: An Essay on the History of Political Science,” The American Political Science Review, Vol. 76, No. 4 (December 1982), pp. 753-766.

5. “Modernization theory” holds that economic development creates common interests that in time displace older social divisions. One predicted result is democratization: Richer citizens demand the luxury of self-government, and/or can more easily coordinate with each other in demanding it. In some versions (for example, V.O. Key), political competition shifts from identity politics to economic policies.

6. See, for example, Robert J. Morgan, “Madison’s Theory of Representation in the Tenth Federalist,” The Journal of Politics, Vol. 36, No. 4 (November 1974), pp. 852-885.

7. Gary Cox presents a rich analytical history of this process in The Efficient Secret: The Cabinet and the Development of Political Parties in Victorian England (Cambridge University Press, 1987). For a contemporary account, see also Walter Bagehot, The English Constitution, ed. Paul Smith (Cambridge University Press, 2001/1867).

8. For evidence that public policy improved as a result, see Alessandro Lizzeri and Nicola Persico, “Why Did the Elites Extend the Suffrage? Democracy and the Scope of Government, with an Application to Britain’s Age of Reform,” Quarterly Journal of Economics, Vol. 119, No. 2 (May 2004), pp. 707-775.

9. Jeremy Waldron emphasizes the importance of a strong and “loyal” opposition that is strongly motivated to support the system that gives it a turn. See Waldron, “Political Political Theory: An Inaugural Lecture,” Journal of Political Philosophy, Vol. 21, No 1 (March 2013), p. 19.

10. “Toward a More Responsible Two-Party System: A Report of the Committee on Political Parties, American Political Science Association,” American Political Science Review, Vol. 44, No. 3 (September 1950). In addition to Schattschneider of Wesleyan, other members included Thomas Barclay, Stanford University; Clarence Berdahl, University of Illinois; Hugh Bone, University of Washington; Franklin Burdette, University of Maryland; Paul David, Brookings Institution; Merle Fainsod, Harvard University; Bertram Gross, Council of Economic Advisers; E. Allen Helms, Ohio State University; E. M. Kirkpatrick, Department of State; John Lederle, University of Michigan; Fritz Morstein Marx, American University; Louise Overacker, Wellesley College; Howard Penniman, Department of State; Kirk Porter, State University of Iowa; and J. B. Shannon, University of Kentucky. The drafting committee, chaired by Schattschneider, comprised Berdahl, Gross, Overacker, and Morstein Marx.

11. In our view, organizations such as Vote Smart have got many of these things backwards.

12. For discussion of how this might be done in the United States within the stricture of the Voting Rights Act see Ian Shapiro, Politics Against Domination (Harvard University Press, 2016), pp. 87-88.



The post Empower Political Parties to Revive Democratic Accountability appeared first on The American Interest.

Published on October 02, 2018 10:00

Can Nord Stream 2 Really Replace Groningen?

Russia claims that gas flows from its proposed pipeline project Nord Stream 2 can step into the breach to replace falling EU domestic gas production. This EU supply vulnerability is underlined by the recent decision of the Dutch government in April 2018 to begin the process of closing its huge Groningen gas field. On closer examination, however, this Russian claim looks like another disinformation operation seeking to use the EU’s supply vulnerability as a cover for a pipeline project which has objectives far removed from that of supplying Europe with additional gas.

The development of the Groningen gas field in the Netherlands in the late 1950s marked the beginning of a golden age of natural gas production in Western Europe. The Groningen field was followed by offshore fields in Dutch, Danish, British, and Norwegian waters, making northwestern Europe a major global energy player in its own right. In 2013 the Netherlands was still producing over 80bcm annually (54bcm directly from the Groningen field). Almost half of that production was exported to neighbors in Germany, France, and Belgium, providing the Netherlands with substantial additional tax revenue and export earnings.

However, in April 2018 the Dutch government sounded the death knell of the Groningen field. Production, which had already been capped in 2015 at 21.6bcm, would be cut to 12bcm by 2022, and the field will be closed entirely in 2030. This decision was forced on the government by overwhelming evidence tying increased seismic activity to gas extraction at the field. Although there had been small tremors since the early 1990s, the decision to close the field stems from a 3.6-magnitude earthquake in Huizinge in August 2012, which led The Hague to impose the first of several production caps. Despite these caps, there has been no letting up in the pace of the tremors in the region: as recently as January 2018, another quake of 3.4 magnitude was felt across the Groningen field. The Groningen gas producer NAM and the government are on the hook to pay damages to approximately 90,000 households affected by the tremors.

It is not surprising, given these circumstances and the hostile public reaction to any further production, that The Hague has taken such radical action to close one of Europe's principal gas fields. This blow to EU domestic gas production comes on top of the decline in gas production in much of the North Sea. British North Sea gas production, for example, has fallen almost continuously from 110bcm in 2000 to just under 40bcm in 2017. Overall EU production has fallen from its 2004 peak of 341bcm to 257bcm in 2016, and production projections suggest that by 2030 EU producers may be producing only 146bcm. Only Norway is still able to add production, but it will not on its own be able to replace all the lost Dutch, British, and Danish output.

Gazprom argues that it is about to come to the rescue with Nord Stream 2, a project that would bring natural gas in two 27.5bcm undersea pipelines from the Russian Baltic coast to Greifswald (in Angela Merkel's parliamentary district) on the German Baltic coast. With EU gas production in rapid decline, Gazprom claims, its project can provide the key additional gas supply that the EU needs.

However, on closer examination, the “Nord Stream 2 to the rescue” story does not quite add up. When the previous Gazprom-led pipeline project, Nord Stream 1, which largely follows the same route as Nord Stream 2, was proposed in 2006, the same argument was deployed: great promises of additional supplies of gas to the EU. In fact, once Nord Stream 1 came into operation, Russian gas flows through the Ukrainian transit “Brotherhood” pipeline network fell, and that gas was instead dispatched through Nord Stream 1. In other words, Nord Stream 1 was emphatically not a pipeline that delivered additional supplies of natural gas; it was merely a project aimed at diverting gas flows from one pipeline to another in order to achieve Russian strategic objectives, principally undermining Ukraine and the states of Central and Eastern Europe.

There are compelling reasons for believing that Nord Stream 2 is also just another diversionary pipeline built in the same spirit. In June 2015, just before the announcement of the Nord Stream 2 project, the Deputy CEO of Gazprom, Alexander Medvedev, in trenchant and colorful form said that “under no circumstances,” even “if the sun replaced the moon” would Gazprom enter into another gas transit contract with Naftogaz (the Ukrainian state-owned energy company) when the current contract expires in 2019. The negative reaction of the EU to that statement, plus the practical reality that it was unlikely that the Nord Stream 2 pipeline would be in place by 2019, meant that Gazprom backtracked on its absolutist position of no new contract with Ukraine.

However, the underlying intent revealed by Medvedev's remarks remains: to minimize or shut down the Ukrainian Brotherhood transit route as soon as possible and to switch the gas that would have transited the Ukrainian network to Nord Stream 2 instead.

Evidence that the primary target is Ukraine is twofold.

First, in order to minimize any extension of the Ukrainian gas transit contract, the two Nord Stream 2 pipelines, unlike the two Nord Stream 1 pipelines, are being built simultaneously rather than consecutively. In almost all other respects Nord Stream 2 follows the practices and procedures of Nord Stream 1. This strongly indicates that the aim is to complete the project as quickly as possible so that the Ukrainian transit route can be eliminated, or its gas flows at least significantly reduced.

Second, Gazprom's Turk Stream pipeline project is geared similarly. Like Nord Stream 1 and 2, it consists of two pipelines, each with a carrying capacity of 16bcm. One, already constructed, will provide gas to Turkey. The other aims to supply South-Eastern Europe, replacing the gas that currently transits Ukraine and flows down the Balkan pipeline network toward Greece.

Beyond the geopolitics surrounding Ukraine, it is clear that Gazprom does not plan to provide any additional gas supplies to Europe via Nord Stream 2. For evidence, look no further than Nord Stream 2's connecting pipeline, EUGAL, another project being built out by Gazprom and its European partners. Nord Stream 2 itself terminates on the German Baltic shore. But EUGAL, which will have sufficient carrying capacity to take all of Nord Stream 2's annual gas flows, points eastward, toward the Polish and Czech borders, not toward Western Europe. It connects via the EU's west-to-east gas interconnectors to the pipeline networks of the CEE states. The clear aim of the EUGAL pipeline is to flood those west-to-east interconnectors with Gazprom-controlled gas, making it difficult for any non-Gazprom-controlled gas to enter the CEE market.

What Gazprom is clearly aiming at is to split the EU's single market for gas in two. Western Europe would remain a liberalized, open gas market able to access multiple sources of supply, while Central and Eastern Europe would once again become fully subject to Gazprom's market dominance and Russian influence. The Ukrainian Brotherhood pipeline network, and the so-called reverse gas flows or swaps it makes possible, is a major roadblock to this strategy. Gas liquidity stemming from flows through the Brotherhood network has already caused Gazprom to lose most of its market influence in Ukraine: once gas has transited Brotherhood to customers in Slovakia, Hungary, and Poland, legal title passes to those customers, and CEE states have been reselling that gas back to Ukraine. The danger for Russian strategy in deploying Nord Stream 2 gas in Central and Eastern Europe is that gas still flowing through the Brotherhood pipeline would compete with the gas flowing through Nord Stream, since Gazprom would have no legal title to it and no control over to whom it is sold or at what price. Russia's pipelines are therefore best thought of as a play to restore one-on-one relations between Gazprom and its customers, and thus to restore the coercive power of energy price-setting across a region over which the Russians feel they must hold sway.

Popular opinion, and at first glance even policymaker opinion, will assume that if there is a new pipeline, there must be additional gas in the offing, an erroneous conclusion that only helps Russia perpetuate its disinformation campaign. The first step is recognizing the mistake. The second is addressing the real issue: the emerging supply gap for the Netherlands and the rest of the EU. The real solution is not more politically motivated pipelines but a credible replacement energy mix of renewables and alternative natural gas sources. This is now much easier to achieve, for two reasons.

First, as the International Energy Agency pointed out in its November 2017 renewables study, the cost of both wind and solar is falling dramatically. The IEA estimates that 1,000GW of renewables will be added to global power generation capacity between 2017 and 2022, equal to half the total global capacity of the coal power sector, which took 80 years to put in place. Europe in particular has considerable scope for deploying far more renewables to improve its supply security.

Second, a growing number of sources of non-Russian natural gas are open to Western Europe, including traditional suppliers like Norway and Algeria. Algeria in particular has underused supply pipelines to the EU and huge natural gas resources that could be brought online. Another major source of supply is liquefied natural gas (LNG). Both the United States and Australia are ramping up LNG production; by 2024 the United States will have as much LNG export capacity as Gazprom currently exports to the EU. These additional gas resources can be deployed in tandem with a much larger rollout of renewable capacity, dealing with the “intermittency problem” while also driving down power costs.

Nord Stream 2 is an object lesson for Europeans: when Russia proposes to build yet another pipeline, it does not follow that Europe is going to get any more gas.


The post Can Nord Stream 2 Really Replace Groningen? appeared first on The American Interest.

Published on October 02, 2018 08:09
