Peter L. Berger’s Blog
September 5, 2019
The Flight 93 Temptation
Hyperbole has flowed fast and free during the past three years. If some of it can be attributed to the pearl-clutching instincts of those who championed the inevitable trajectory of liberalism, it is also the case that politics across Western democracies has become dramatically less predictable, and far less constrained by the norms that most of us had long taken for granted.
Recently, U.S. President Donald Trump suggested that the next G7 summit be held at his own resort in Miami, the Trump Doral, going against the spirit if not the letter of the Constitution’s emoluments clause. Over the same period, he also attacked the independence of the Federal Reserve by urging it to lower interest rates and comparing the Fed chairman, Jerome Powell, to China’s dictator Xi Jinping. It has also been reported that the President promised pardons to officials who break laws in order to expedite the construction of his border wall.
The endless news cycle of outrage is exhausting, and contributes to what some see as a “normalization” of behavior that would have been all but unthinkable only a few years ago. Setting aside Trump’s mercurial character, these developments arguably reflect a deeper erosion of the division of labor envisaged by the framers of the Constitution—an erosion that precedes the President. Over the years, Congress has abdicated many of its traditional responsibilities, allowing the Executive Branch to de facto legislate or to conduct trade policy. The role of the judiciary has also become much more political, dramatically raising the stakes for judicial appointments, most prominently on the Supreme Court.
Brexit Britain finds itself in uncharted territory, too. Over the past three years, the country has seen the largest government defeat in British political history, the longest parliamentary session in 200 years, a major piece of government legislation defeated three times without the government collapsing, cross-party crisis talks in Number 10, and Parliament seizing control of the legislative agenda. In late August, the Queen was asked to grant approval to the Prime Minister to prorogue Parliament for the longest period since the Second World War, and in response, former Prime Minister John Major joined legal action against a government led by his own party.
As Britain’s famously uncodified constitution comes under strain, the country’s institutions work overtime to keep things afloat, and to limit the permanency of these developments to a smudge rather than an imprint. Journalists and their Twitter feeds swing wildly between astonishment and despair, the endless hysteria numbing the impact of yet another unbelievable transgression. Yet every old convention broken, every norm superseded, leaves an indelible trail of precedent that could haunt British politics for generations to come.
Paradoxically, while our political systems seem to be on life support, traditional indicators of prosperity and wellbeing do not paint a picture of a world in crisis. Notwithstanding the current slowdown, both Europe and the United States have seen decent rates of economic growth and employment. The waves of immigration across the Mexican border and the Mediterranean Sea, which caused such alarm amongst anxious populations, have plateaued. The wave of terror attacks that plagued major cities in Europe over recent years appears to have receded. The menace posed by China, Russia, and Iran is real, but, for the most part, the world remains at peace.
Yet if we have learned anything from 2016, it is that the insecurities trembling beneath the surface are not mechanical responses to any single political event or economic crisis. Rather, they have been building over decades in the face of profound social, demographic, and cultural change, which has fractured the basis for social trust. The emergence of effective populist campaigners on all sides has given voice and legitimacy to existing frustrations, while depicting the battle for the country’s future as an existential choice.
Although Britain as a whole remains deeply fragmented about the best path forward on Brexit, a survey of Conservative Party members ahead of Boris Johnson’s triumphant leadership election found that they were willing to gamble almost anything—including potential economic ruin, the dissolution of the Union, or the Northern Ireland peace process—in exchange for achieving Brexit by October 31. For these previously small- and large-C conservatives, Brexit has come to symbolize so much more than the sum of its parts, encapsulating a worldview and an identity that must be defended at all costs.
Focus groups in England make clear that the capacity to forge common ground across political divides is rapidly being eroded. While in the past participants would moderate their behavior in the service of a kind of community, they now spout newspaper slogans about “traitors” and “saboteurs.” Leaders depict their opponents as enemies, and so do ordinary citizens, who see themselves as competitors and their elected representatives as agents of betrayal. In May this year, the Metropolitan Police declared that threats against public officials had reached “unprecedented levels,” with 342 crimes committed in 2018 alone.
In the United States, Pew Research has demonstrated that the partisan gap between Democrats and Republicans has never been greater. Not only do they disagree on policy solutions, they are also deeply suspicious of one another’s character, and fearful of the opposing party’s plans for the nation’s future. If Michael Anton’s “The Flight 93 Election” provided a blueprint for Republicans who saw politics as an existential struggle in which “winning” becomes an end in itself, the same mentality is spreading to the progressive left.
Infuriated by Republican Party gerrymandering, the denial of President Obama’s putative right to a Supreme Court appointment, and the fact that Hillary Clinton lost the 2016 election while winning the popular vote, an increasing number of Democrats appear to be giving up on the system altogether—a dynamic that is bound to worsen should President Trump be re-elected. Denouncing America’s political institutions as irreparably flawed, they are calling for the abolition of the Electoral College and for packing the Supreme Court to ensure that conservative jurists can never find themselves in a majority.
In the United Kingdom, in turn, parliamentary opponents of a “no deal” Brexit conspire with the supposedly neutral Speaker to seize control of parliamentary business and subvert the will of the Executive Branch. Undoubtedly, leaving the European Union without a deal would be an extraordinary act of self-harm, with profound consequences for wellbeing, prosperity, and social cohesion. Extraordinary times, the argument goes, necessitate extraordinary acts. Yet in the name of democracy and liberalism, some seem too ready to rush to the pilot’s cockpit without considering the distinctly illiberal outcomes that could be achieved should the same tools be deployed under a different parliamentary majority.
The two most venerable English-speaking democracies appear to be following in the footsteps of countries they once sought to inspire. As the experience of Argentina, Hungary, or even Italy makes clear, once unhinged politics becomes the new norm, escaping the normalized chaos and nihilism that ensues is difficult. As of now, both the British and American political systems are being stress-tested—not only by the most prominent demagogues, but also by the centrists who seek to follow the lead of their adversaries in undermining the democratic conventions they feel are working against them.
The radicalization of former moderates conceals a naive, private hope that somehow, not too far away, there will be a system-level correction back to the comfortable climes of center-ground politics. But the train has left the station, and the only question is in which direction it will go.
In one sense, the desire to renegotiate the norms, standards, and structures underpinning liberal democracies is understandable. Political institutions have to adapt and evolve to the demographic, economic, and social realities of modern times. The conversations that many “left behind” voters on both the Left and the Right have started by means of their unconventional choices are important and should be taken seriously. The new generation of politicians is right to call attention to injustices and outrages at the heart of our political systems, such as the legacy of slavery in the United States or undemocratic efforts at voter suppression. It is one thing, however, to call for reform and quite another to champion the dissolution of the codes of conduct and institutions that have granted us prosperity, security, and freedom over so many years.
President Trump has sought to consolidate his power through the pernicious erosion of trust in the watchdogs of democracy, and to ensure that even if he isn’t re-elected, the next President will lead a nation that could be virtually ungovernable. Meanwhile, Boris Johnson’s advisers are war-gaming a fast and brutal election to be fought on a “Parliament versus the People” theme. As he stood on the steps of Downing Street last Monday, railing at the dissent within his party while angry crowds at the gates chanted “Stop the coup!,” it seemed clear that we are only at the beginning of a much larger fight for Britain’s future.
The erosion of trust in democratic institutions is pernicious—not only because it makes it harder to credibly push back against genuine violations of constitutional norms, but also because it risks turning our politics and our societies into fundamentally combative spaces, in which compromises are neither possible nor desirable. While there is undoubtedly much at stake, moderates must avoid the temptation to follow the populist playbook and allow those with illiberal intentions to reshape the very nature of our democracies. Rather, they must find a meaningful way to re-imagine these institutions that once served us so well, and reinstate the foundations of our peace and success.
September 4, 2019
The Dead End of “More Democracy”
The following is Part Two of a two-part exploration of contemporary populism and its various historical antecedents.
Even as Theodore Lowi was formulating his critique, interest-group liberalism was under assault, first and foremost by the civil rights movement and then the anti-war movement. Other movements were soon to follow. Yet also emergent at that time was another, more subtle aspect of politics and policymaking that sheds additional light on the political disaffection of so many Americans. Despite Lowi’s fundamentally valid criticisms, the system today is genuinely more open and accessible than ever before, and Americans are better educated, informed, and equipped than their predecessors to participate in politics and government. So again, why are government and politics so off-putting, alienating, and seemingly closed to such large numbers of citizens?
From Networks to Checkbooks
One clue to addressing this question may be found in a familiar but inadequately examined term: “networks.” To be sure, networks have been featured in recent social science research on information flows and the maintenance of informal social ties. Networks also figure prominently in discussions about the maintenance of civil society. We have all heard of “the old boys’ network,” which has been supplemented, if hardly replaced, by what some refer to as “the old girls’ network.” “Issue networks” loom large in Hugh Heclo’s study of the impact of experts on the policy process in Washington in the late 1970s. But what exactly do we mean by networks?
In the broadest terms, networks consist of open, ad hoc, non-hierarchical relationships in which individuals may opt to participate or not, remain or not, and rejoin or not, depending on their interests, capacities, time constraints, resources, and other factors. Networks typically emerge informally—though hardly randomly—to address the specific needs or objectives of those involved. And they tend to break up or disperse when no longer relevant or effective. This feature importantly distinguishes networks from formal organizations, which are sturdily structured and generally more long-lived. As the term “networking” implies, networks may have open, ill-defined objectives requiring little or no commitment of those involved; or they may be more focused and demanding of participants.
Most observers would agree that today, politics and policymaking in Washington and throughout the country are more reliant on networks than ever. Indeed, this characteristic helps account for the openness and permeability of our civic and political life, emphasized above. But as I have also suggested, the informal, ad hoc nature of networks presents new challenges and dilemmas. Most notably, because it is not always clear who is in charge or responsible, accountability for outcomes midwifed by networks can be elusive. Yet even when responsibility is evident, accountability can still be problematic, because networks are prone to disperse or break up.
Another challenge posed by political and policy networks is that while they tend to be open to all comers, they nevertheless depend on participants who are not just “interested” but have knowledge, skills, or expertise to offer. Because of their inherently ad hoc, informal nature, the workings of these networks are not typically widely known or necessarily accessible, even to those who might be intensely interested. As often as not, new participants do not discover these networks on their own; rather they are sought out and invited in by individuals who are already involved.
Networks are transforming politics and policymaking in Western Europe as well as the United States—and again, not necessarily for the better. In a nutshell, they are weakening established hierarchies and bureaucracies on both sides of the Atlantic. Even in Sweden, networks have contributed to a blurring of lines of responsibility and authority that some observers fear threatens to diminish “the strong state.” Not surprisingly, well-heeled special interests have few problems negotiating uncharted but wide-open policy networks. The same can be said of well-educated professionals and other segments of the broad upper-middle class. As Richard Reeves of the Brookings Institution argues, such “dream hoarders” constitute the critical but largely overlooked source of inequality in contemporary America. One reason is that their education and skills are well suited to navigating the far-flung, informal, knowledge-intensive networks that are the lifeblood of politics and policy in Washington, throughout the country, and around the globe.
Highlighting both the impressive energy and creativity as well as the problematic impact of such networks on post-1960s politics is what Robert Putnam calls the “checkbook organization.” This formation allows busy, highly mobile, and typically harried professionals and other affluent individuals to contribute money to favored political causes that make few, if any, demands on their time. To Putnam, who has spent much of his career expressing concern over America’s declining “social capital,” checkbook organizations are a welcome, if partial, remedy. But they also present new challenges.
One example is the enormous discretion and influence wielded by the policy entrepreneurs who are the prime movers in such efforts. Typically motivated not by narrow self-interest but by broader political, ideological, or ethical-moral objectives, such entrepreneurs invest substantial time and energy in securing the seed-funding for facilities, support staff, and policy expertise. So, they are understandably content to keep their “members” informed but not in a position to exercise much voice. Checkbook organizations tend to be staff-dominated and minimize or even avoid face-to-face contact among members, or between members and staff or leaders. Voice is therefore not easily exercised, but exit is. The expected way for a member to express disapproval or discontent is simply to stop writing checks.
Unfortunately, this fosters a counter-dynamic that has contributed to the contentiousness of contemporary American politics. To make their widely dispersed and highly atomized members feel engaged and persuaded of the effectiveness of their contributions, leaders and staff of checkbook organizations invariably dramatize their efforts through whatever media their members might be paying attention to. As a result, such policy entrepreneurs are hardly averse to controversy. Quite the opposite. As the economist Burton Weisbrod once put it, they seek to maximize not profits but publicity.
Finally, both their affinity for controversy and their atomized, attenuated membership ties help explain why checkbook organizations focus on policy disputes arising from market failures, externalities, and free-rider problems. Indeed, they are particularly well suited to advocating for “public goods” such as clean air and water. No surprise, then, that the template for such organizations emerged from the environmental movement of the late 1960s and early 1970s, followed by similar efforts focused on consumer protection (Ralph Nader’s Public Citizen) and campaign finance reform (Common Cause). Soon others were launched to address the needs or interests of overlooked, underserved, or otherwise marginalized populations—whether Mexican Americans, the disabled, or victims of domestic violence. The organizational model that all such efforts shared—and still share—is reliance not only on energetic, creative policy entrepreneurs, but also on third-party funders willing and able to front the substantial resources necessary to bring together and sustain the staff and infrastructure capable of eventually attracting modest financial contributions from highly dispersed and otherwise distracted potential members.
Such endeavors came to be known as “public interest organizations.” The term is apt insofar as they do in fact address problems or issues that, if left unattended, would negatively impact some broader aggregate of interests or a communal interest more generally. Yet “the public interest” is notoriously difficult to define. In the mid-20th century, the journalist Walter Lippmann offered this: “The public interest may be presumed to be what men would choose if they saw clearly, thought rationally, acted disinterestedly and benevolently.” But postwar American political scientists explicitly and forcefully rejected this view of the matter as conceptual drivel. To behavioralists and pluralists alike, the public interest was simply the outcome of the ongoing and inevitable interplay among competing private interests. Such intellectual timidity undoubtedly prevailed in part because the United States was afforded a de facto national purpose during decades of crisis: economic depression; then world war; and finally, Cold War with a nuclear-armed adversary whose communist leaders threatened to “bury” us. In any event, the social and political upheavals of the 1960s soon rendered such academic scruples irrelevant as embattled political elites cast about and embraced the emergent notion of . . . public interest organizations!
Yet the pendulum swung rather far, and has stayed there. The accumulated effect of the political turmoil of the last half-century has been to render the very notion of self-interest—the basis on which the Framers founded this regime—dubious and suspect. As Tufts political scientist Jeffrey Berry, the foremost student of public interest organizations, once lamented: “The very basis of public interest advocacy is a distrust of the motives and intentions of private interest groups.” Berry has also noted that the organizational survival of public interest groups dictates that they “propagate a popular image of government as a set of institutions beset by ineptitude, corruption, and ineffectiveness.” As he has furthermore pointed out: “A byproduct of their efforts to make governmental institutions more responsive to the ‘people’ has been to undermine the credibility of those same institutions.” Consequently, “public interest groups have placed themselves above politics.”
Given the issues toward which public interest entrepreneurs are drawn, they have typically relied heavily on expert opinion from natural as well as social scientists. The resulting claims to scientific objectivity have contributed mightily to public interest advocates’ self-perception as transcending politics. Indeed, Berry once noted that they “are as self-righteous as they are skillful.” In this respect, of course, they reflect the deep-seated religious and moral fervor that suffuses American political culture. Indeed, this aspect of public interest politics was reinforced by the religious zeal that sustained the civil rights movement, out of which public interest politics emerged in the late 1960s.
So today, we are at a point in our political life where fewer and fewer claims get made on the basis of self-interest. Even egregiously self-regarding corporate tax cuts are justified—with a straight face and against abundant evidence to the contrary—as promoting economic growth that will benefit all Americans. And no politician can ever justify any career move except out of a profound sense of duty to the republic. Or if resignation from public office becomes necessary, it is invariably motivated by a deep sense of responsibility to family.
But the real test is how our brightest and most politically engaged youth feel constrained to cloak their personal ambitions in tired, unconvincing rhetoric about “public service.” No mention of opportunities for world travel, stimulating work, or proximity to power ever seems permissible. With the possible exception of those headed for careers in the military, even “leadership” aspirations are rarely voiced and somehow regarded as suspect or illegitimate.
Contemporary culture has apparently reached the point at which self-interest in politics and public life has changed positions with sexual drives and desires. Whereas the latter were once expected to be subdued and sublimated for the good of society, today an individual’s overall health and well-being are deemed to require that sexual impulses be gratified, even indulged. Conversely, it is now self-interest that must be overcome, or simply repressed, for the sake of the wider public good. In any event, our guiding ideal and earnest aspiration appear to be—at least according to a recent book review in the Sunday New York Times—“to redirect the nation away from destructive partisanship toward a disinterested pursuit of a common good.” But as Jeffrey Berry might argue, such thinking is precisely how we got to where we are today!
Media Hegemony
The final topic to be addressed here has already been mentioned: the media—particularly the elite media, print, electronic, and online. Yet the importance of the American media transcends their strategic position in the current regime. Indeed, the media’s positive contributions to postwar American politics have understandably afforded them enormous influence and prestige. Today, however, these assets have been squandered, and these outlets have long since succumbed to myopic self-importance and arrogance.
Nevertheless, the road to this unfortunate place was a glorious one. First was the national media’s exposure of the indignities and terror experienced by black Americans in the Jim Crow South. Then came their critical role in exposing the half-truths, outright lies, and horrors of the Vietnam War. Finally, Watergate afforded the media the opportunity to play the decisive role in exposing and ultimately ending a corrupt, criminal administration. And these are only the highlights of the media’s accomplishments over the last half of the 20th century.
In addition to such triumphs, the media have figured prominently in other aspects—both positive and negative—of postwar American politics. I have already highlighted how dependent checkbook organizations and public interest politics generally are on all variety of media. Another instance concerns the critical role that was effectively thrust upon the media by the post-1968 reforms of the parties’ presidential nominating procedures. As Nelson Polsby observed over 30 years ago, those reforms effectively replaced the old party bosses with the media. Certainly, in the 2016 Republican primary campaign and now in the 2020 Democratic primary contest, the critical role of the television networks in choosing when and which of the many candidates to feature is glaring. All the more so with regard to their prerogatives over the staging and format of the debates. Again, the media now effectively wield the power once held by party insiders, yet with even less accountability.
The reduction of party identity to something akin, at best, to brand loyalty is another development that has enhanced the political influence and power of the media. After all, advertising and sales are their bread and butter.
Less noted developments are perhaps even more significant. Peter Mair has pointed out that political leaders in Western democracies are less and less likely to be recruited and brought up through the ranks of the parties. Instead, party leaders are increasingly likely to be chosen for their ability to appeal to the media and a wider mass audience. In the United Kingdom, one result has been the disappearance of rough-hewn trade unionists and small businessmen from Parliament. In the United States we have become well accustomed to the blow-dried congressmen who have replaced the much less camera-friendly pols who did their business behind closed doors, beyond the reach of microphones and TV cameras—not to mention congresswomen and female Senators invariably turned out in tailored suits and impeccable makeup.
Hence British journalist Peter Oborne’s denunciation of “the merger of the political and media classes” into “a triumphant metropolitan elite” that “has completely lost its links with a wider civil society.” One result is what he refers to as “a structural dishonesty about a great deal of political reporting.” Britain’s new Prime Minister, former journalist Boris Johnson, is only the most recent and glaring such example. In Washington the same incestuousness is evident among the press corps and the political figures they cover—though decidedly less so under the present administration.
The problem is exemplified in a recent Wall Street Journal op-ed by New York Times publisher A.G. Sulzberger. Sulzberger was responding to a June 15, 2019 Twitter posting by President Trump that accused the Times of “a virtual act of treason” on account of a story the Gray Lady had run about U.S. cyber incursions into Russia’s electrical grid. The details of this particular episode are irrelevant to the point at hand—which is not to fault Sulzberger’s warning that Trump’s accusation “crosses a dangerous line in the president’s campaign against a free and independent press.” Rather the point is to highlight Sulzberger’s summary assertion that “Over 167 years, through 33 presidential administrations, the Times has sought to serve America and its citizens by seeking the truth and helping people understand the world” (my emphasis).
Now, “helping people understand the world” certainly strikes me as a worthy goal for a daily newspaper. But “seeking the truth” is, at best, sophomoric twaddle, and at worst, evidence of the profound arrogance and self-importance of today’s elite media. Whatever happened to “All the News That’s Fit to Print,” described by the BBC as “the most famous seven words in American journalism”? These words still appear every day in a box on the upper left corner of the Times’s front page, where they have appeared since 1897. And they still articulate what a daily newspaper functioning under commercial, political, and time pressures can reasonably set as its objective. As for “seeking the truth,” the Times should leave that to others better suited to the task.
Making Sense of Trump
Given the political and cultural terrain I’ve been mapping, one does not have to approve of Trump’s behavior, rhetoric, or policies to acknowledge the rationality of his modus operandi. From his perspective it makes perfect sense to criticize and confront the decisions of various unelected government officials, whether Federal judges or the Federal Reserve chair. So, too, when he refuses to fill vacant high-level positions in various agencies, bureaus, and departments. After all, these are what Mudde and Kaltwasser refer to as the “unelected bodies and technocratic institutions” that elites on both sides of the Atlantic have increasingly relied on “to depoliticize contested political issues.”
In light of such developments, it is striking that higher education has not been more prominently targeted by Trump. To be sure, his Department of Education has sought to bolster for-profit education, weaken accreditation procedures, limit or cut student loans, and reduce research funding. Yet Trump’s rage and rhetoric have generally avoided denouncing colleges and universities, even though these are the spawning grounds of the elites who inhabit these same “unelected bodies and technocratic institutions.”
On the other hand, much of the vitriol that Trump directs toward immigrants, African-Americans, Muslims, and minorities generally is also aimed, albeit obliquely, at their champions in the academy, particularly in the humanities, law, and social sciences. After all, these are the faculties most dominated by liberal and progressive values, whose graduates shape policies and occupy key positions in the regime’s social welfare, educational, philanthropic, and cultural institutions. More to the point, universities are the bastions of the enlightened upper-middle class that is “the class enemy of the lower-middle class,” as sociologist Herbert Gans once put it to me. Indeed, much of the vitriol expressed toward minorities by Trump and his supporters derives from their hostility toward such segments of the upper-middle class, which habitually, almost reflexively champion the disadvantaged and oppressed. This is hardly admirable or desirable, but is nevertheless not irrational.
Still, the target Trump has pursued the most avidly is clearly the elite news media. And given its preeminent role in today’s regime, this tack is similarly not irrational. Nor is Trump’s “fake news” mantra without some foundation. Even before the FCC under Reagan did away with the Fairness Doctrine (by which broadcast media were mandated to present controversial issues in a fair and balanced manner), Roger Ailes saw the opportunity—indeed, what he regarded as the necessity—to create an alternative to the mainstream media. Clearly, as a market calculation, Ailes’s judgment has been vindicated.
But the more revealing angle is how, even as Trump has conducted his jihad against the media, he has had to distance and disentangle himself from it. After all, the prominence and visibility that made possible his running for president were the result of a successful reality television program. It is surely not an exaggeration to say that Trump is a creature of the media, and in turn a master at deploying it. Moreover, the media have come to rely on him to do so. As CBS CEO Leslie Moonves smugly commented about the then-ongoing presidential campaign to an audience of media investors and executives in February 2016: “I’ve never seen anything like this, and this is going to be a very good year for us. Sorry. It’s a terrible thing to say. But, bring it on, Donald. Keep going.”
Two and a half years after sharing that cynical insight, Moonves was forced to resign his post at CBS in response to sexual assault and harassment charges. This raises another aspect of Trump’s media profile and persona that could be a liability, but thus far has not been. His personal morality and behavior have always reflected what most of the media celebrate and exploit. Then, too, the success of Fox News has revealed to anyone willing to pay attention that there is a market for politically incorrect fare that does not practice or even preach traditional values and morality, as long as those values are not openly disrespected or mocked. When Bill O’Reilly did just that—flagrantly and repeatedly—his dismissal confirmed that at some point a price might have to be paid.
These examples highlight the thin line between news and entertainment media. Yet if Trump has freely and vehemently attacked the former, the latter have simply not been in his crosshairs—that is, as long as they have not criticized him politically. For while certain segments of the American public—conservative Catholics and evangelical Protestants, for example—are categorically offended by the cultural fare on offer from the entertainment industry, others either revel in it or are ambivalent, simultaneously attracted and repelled.
Also not on Trump’s radar screen, either for praise or criticism, are the super-rich. To be sure, he has long enjoyed the company of the very, very wealthy, regardless of how and where they acquired their money. Yet he seldom lavishes praise on such individuals, unless perhaps they render him some personal service—like staging a big fund-raiser. Nor does Trump much criticize them—even, for example, the CEOs of large corporations that have for decades now been making a mockery of the law by hiring undocumented immigrants.
Such maneuvering by Trump sheds light on a major wellspring of his appeal. No personality cult is ever likely to develop around him. No one is going to liken his entrepreneurial genius to that of Steve Jobs, or praise his enlightened philanthropic activities. Rather, Trump’s appeal lies in the fact that he made his money the old-fashioned way—in real estate! Millions of Americans find it easier to understand the sources of Trump’s wealth than to decipher what Wall Street geeks do when glued to their computer screens, or how the fancy footwork of hedge fund managers earns them billions. Neither do they associate him with the enlightened smugness of youthful Silicon Valley nouveaux riches, or the moral obtuseness of pharmaceutical executives slithering around and through government regulations in order to peddle deadly opioids to millions of their countrymen.
Real estate, by contrast, is a realm that ordinary Americans can at least fathom, a market with which most have some familiarity, from which they have typically profited, but which they also understand to be rife with posturing, sharp-dealing, and even cheating. Indeed, this captures the folkloric understanding of what many believe is necessary to “make it” in capitalist America. The fact that “people get screwed” doesn’t necessarily delegitimize the system: After all, “that’s just the way it is.” The result is a relatively stable equilibrium that comes under challenge only when the losers become too pitiable, numerous, or visible. In the interim, individuals like Trump are for many Americans the object of begrudging admiration—nothing more, nothing less.
In this regard, the most useful and revealing foil for Trump is Senator Elizabeth Warren, who manages to reek not only of enlightened, meritocratic privilege but also of crass opportunism and dishonesty. I refer of course to her history of misrepresenting herself as a Native American deserving of affirmative action consideration. But there’s more.
As I have already suggested, Americans drawn to Trump are likely to be tolerant—even a bit respectful—of a little cheating. After all, one does what one has to do to succeed! But Warren’s Achilles’ heel is her self-righteous condemnation of the system that allowed her to rise up from humble origins and prosper. Particularly problematic is her lucrative legal work for corporations whose practices or products she has sharply criticized. Whereas Trump comes across as a cross between a taxi driver and a union business agent doing what it takes to survive in his native New York, Elizabeth Warren presents to many as a whining, hypocritical ingrate.
In a political culture where, as I have noted, America’s elites have denigrated self-interest as suspect and unworthy, Donald Trump has made a virtue out of the crass, crude pursuit of his own narrow self-interest. Yet he has also managed to speak out on behalf of the self-interest of the nation, which cosmopolitan critics have also denigrated and rejected. Indeed, however wrong-headed or offensive Trump’s initiatives on immigration and trade, they do push back against a persistently assertive and ill-considered cosmopolitanism.
Again, the avatar of this cosmopolitanism was the public interest movement that emerged from the turmoil of the 1960s to challenge the facile pluralism that dominated American politics in the postwar era. The Public Interest may have been an inspired title for a publication launched in 1965 to create a forum where academic specialists could engage with Cold War intellectuals reared for ideological combat, and debate the specifics of policies undertaken by an increasingly activist government. Yet, as the basis of a “new politics” that presumed to pick up where the civil rights movement left off, and to right the many wrongs of late capitalism in America, the public interest movement was ultimately counter-productive. This is why John Gardner, founder of Common Cause, the flagship of the public interest movement, articulated his own reservations about terms like “public interest organization” and expressed his preference for “citizens lobby.” And why political scientist Andrew McFarland, who has chronicled Gardner’s concerns, does him one better and suggests the term “civic balance organization.”
Ralph Nader harbored similar doubts about the emergent reform movements of the late 1960s, and consequently called his sprawling organization Public Citizen. Yet regardless of the name, most such efforts fail to transcend the biases and limitations of the highly educated, well-informed individuals who are typically drawn to them and tend to look upon “politics” with disdain and suspicion, while at the same time regarding their own efforts and activities as motivated by obligation and duty.
Throughout our history, the role of “citizen” has held little meaning for vast numbers of ordinary Americans, whose life histories and customary circumstances lead them to think in much less abstract categories. Perhaps the obligations and duties that attach to citizenship come to mind when saluting the flag at some public ceremony, or when watching a movie about World War II. Yet for most Americans, the demands of political or even civic activities (as opposed, perhaps, to voting) tend to compete unsuccessfully with those of earning a living, maintaining a household, and keeping one’s family together.
In the past, the inevitable gaps between interest-based political activity and disinterested civic-minded pursuits were bridged by organizations like unions, political machines, even churches. These fostered roles and defined interests in concrete contexts that made sense to millions of men and women—whether as workers, Democrats, Republicans, Italians, Poles, Jews, Catholics, and so forth. Today, though difficult to discern through the fog of cultural warfare, identity politics provides another alternative to the abstractions of citizen politics—one that, however controversial or problematic, is eagerly and opportunistically embraced by politicians and political entrepreneurs.
More Will Be Less
In the ongoing, ever intensifying debate over the nation’s direction, substantive populism will continue to be a critical focus of attention and controversy. After all, “a politics of issues” is what enlightened opinion in America has long called for. But now that we have it, what are we going to do with it? Issues such as persistent, perhaps increasing income inequality are obviously of utmost concern. So, too, are health care and, of course, the immigration and refugee crises. Then there is the panoply of frightfully divisive cultural issues and controversies to which there are no easy answers, or even responses.
In this environment, procedural populism will likely be regarded, not entirely incorrectly, as less than compelling and narrowly technical—the kind of stuff political wonks and obsessive activists get caught up in. That will certainly be the reaction of most Americans. Meanwhile, well-heeled, highly motivated business and commercial interests will doubtless have more immediately relevant, substantive items that they will be able to place and keep on the nation’s agenda.
Yet procedural populism will hardly diminish in importance. Sooner or later, one or more of these reforms will capture the public’s attention, and might very well take off. After all, we live in a democracy, don’t we? Why shouldn’t we abolish an anachronistic and elitist institution like the Electoral College? Or the filibuster? Why shouldn’t we make it easier for citizens to vote? And while we’re at it, why shouldn’t non-citizens who have lived here, paid taxes, and raised their children to be Americans have a say in how their community is governed?
One answer to such questions is: “No, we don’t live in a ‘democracy,’ we live in a representative democracy, whose institutions were designed to limit abuses of power but also to foster enlightened leadership and statesmanship.” Yet in the current environment, this response is, unfortunately, not very persuasive.
More compelling, in my view, would be to argue that the democracy on offer is chimerical. Under the conditions elucidated here, “more democracy,” i.e., more procedural reforms and gestures toward “increased participation,” will do nothing to clarify and prioritize the substantive matters confronting us. In the past, this vital function was performed, however imperfectly, by organizations and institutions such as interest groups and political parties. Today “interest group” is a term of opprobrium, and our political parties have been reduced to bureaucratic behemoths unable and unwilling to help citizens order and aggregate their interests into coherent sets of choices that can be evaluated against those of other organized political actors.
In sum, there are today no viable organizations or institutions by which the needs, interests, tastes, and desires of different segments of the political community may be weighed in the balance, one against the other. Absent any such institutions, the populist call for “more democracy” will inevitably result in more noise, confusion, posturing, denunciation, and recrimination—all of which will be facilitated by, and work to the advantage of, the media, whose raison d’être, dangerously, now appears to be profiting from the chaos.
September 3, 2019
Good Idea, Bad Reasons, Worse Context
On Monday, August 12, the Trump Administration changed U.S. immigration policy by Executive Branch fiat in the form of a new 837-page rule. As is by now well known—because of the hue and cry it set off—the Administration ordered new stringencies against immigrants who would or could reasonably be expected to become a drain on state resources. According to the new “Inadmissibility on Public Charge Grounds” rule, U.S. policy will now apply a wealth test for green-card status, favoring immigrants who fit into the needs of the U.S. economy—those more likely to engage in activities that create new jobs than to compete for existing ones. The rule, which is in the main a new, narrower definition of an old rule, will go into effect on October 15.
The reaction to the announcement was vigorous and generally critical, and the criticism did not come from just the so-called progressive Left; it came also from what could be fairly called the center. Thus Mindy Finn for Stand Up Republic, from August 15: “[T]he engraving on the Statue of Liberty does not specify from where a person should come, or how much money they should have, just that they will be welcome in the new world. . . . Though Trump and his nativist cronies are trying to redefine our ideals through their xenophobic statements and bigoted policies, Americans do not separate themselves from each other, we unite.” Some criticism came, as well, from the Right, of which more below. The basic dig against the new policy was that it would discriminate against poorer nations and people of color, and this, presumably, is not only morally wrong, but shows the Trump Administration’s own true colors, which are debased and disgusting. Some also suspect that the rule will dramatically reduce the volume of legal immigration, over and above even the degree of economic need for lower-skill labor.
It is easy to account for such expectations given the context into which the announcement fell. It is as Ms. Finn described it: xenophobic and bigoted. The President had just recently told the so-called Squad, made up of four female congresswomen of color, to go back where they came from—despite the fact that all but one of them were native-born American citizens. Then there was the unfortunate remark on August 13 by Ken Cuccinelli, the acting Director of the U.S. Citizenship and Immigration Services. Cuccinelli opined during an NPR interview that Emma Lazarus’s famous poem on the base of the Statue of Liberty referred only to Europeans, and flippantly suggested that the words should be changed to read, “give me your tired and your poor who can stand on their own two feet, and who will not become a public charge.”
The chatter over Cuccinelli’s remark bore elements of the bizarre and the ignorant. As to the former, it is obvious that Cuccinelli’s quip about changing the poem was just that—a quip—delivered during a hostile interview. He should not have said it, but some chose disingenuously to take his suggestion literally because it served their purposes to do so. Apparently, virtue signaling has yet to acquire borders, making it often indistinguishable from an old-fashioned cheap shot.
[Illustration by R. Jay Magill, Jr., 2018]
More important, most otherwise educated Americans remain innocently ignorant of the poem’s history. When it was written it was understood to be about those seeking refuge and asylum, not those seeking greater economic opportunity, as Ms. Finn mistakenly asserts. More specifically, “The New Colossus” was written in 1883, not long after the notorious May Laws were promulgated by Czar Alexander III, sending large numbers of persecuted Jews toward American shores. Lazarus was a Sephardi Jew who was active in Jewish causes, and so clearly had her oppressed brethren foremost in mind. The statue arrived in New York Harbor in June 1885 and was dedicated in its current location the following year. Only in 1903 was the plaque with the poem added to the base of the statue, by which time its meaning had already begun to conflate refugees with economic aspirants. Lazarus died in 1887 at the age of 38, so she could have nothing to say about the repurposing of her poem. And while Lazarus said nothing about liberty being only for Europeans, it is a matter of plain fact that, at the time, that is what everyone here had in mind.
To get back to the present moment, earlier, too, in mid-July, the Administration had unilaterally changed American asylum policy, making it more difficult to claim asylum status to get into the country. Specifically, the new rule states that asylum seekers who pass through another country to get to the U.S. border will not be eligible to apply for asylum in the United States unless they have first requested it from pass-through countries and been denied.
Then there were the much-ballyhooed ICE raids to deport illegal immigrants. These ended up being rather underwhelming in practice unless you happened to be caught without papers in a remote corner of Mississippi on August 7. But the acerbic, bitter anti-immigrant language coming from the White House was loud and clear. And if all that were not enough, news sources reported on August 26 that Cuccinelli’s USCIS was denying all medical deferment requests, excepting only some military families and DACA “dreamers,” and the latter only because of ongoing litigation. (The subsequent outcry led to a partial reversal of the policy, announced on September 2.)
Taken together, the optic was of an Administration applying a decidedly bigoted lens to how it saw the entire immigration policy portfolio. That optic, most likely, was entirely intended for the political purpose it was deemed to serve. It is difficult to come up with any other general conclusion, given the Administration’s overall behavior and body language.
This is unfortunate, because the August 12 announcement, were it to be judged solely on its policy merits, is a positive one. Over the past few decades many proposals to adjust U.S. immigration policy so that it is more functional in economic terms, and less tuned to humanitarian and family reunification criteria, have been put forth. These proposals have come not just from Republicans, but also from Democrats and independents. They have mainly made good sense.
The general argument has been that given changes in the globalizing American economy—changes that have made it more difficult for much of the middle class and for most poorer Americans to make ends meet—the United States can no longer afford an immigration policy, formed in 1965 under very different circumstances, so focused on broadly humanitarian criteria. Much research has shown that excessive low-skill immigration depresses wages among less well-educated Americans, and has fallen especially hard on African-Americans. Other research has emphasized the job-creating energies provided by new immigrants arriving in the United States with enough capital to start their own businesses. Those who have made this argument over the years have also frequently pointed out that no other country in the world bases its immigration policy the way the United States has since 1965, and, failing comprehensive reform, still does.
None of that prevented the aforementioned hue and cry. Many Americans seem actually to think that U.S. public policy ought to put the interests of those who are not citizens of the country above those who are. This proves that, as Tomo says in the exquisitely awful Indonesian movie Firegate, “It’s easy to be an idealist, if you have money.”
It is a mystery to me why anyone thinks this a normal premise, or a morally justifiable one, upon which to make public policy in a democracy. It is of course a matter of great sadness that there are so many poor people in Latin America and sub-Saharan Africa, for example. But how does that translate into it being the responsibility of the U.S. government to solve this problem, and to do so in part on the back of its existing citizenry without fairly soliciting their view of the proposition? If you believe that it really is the responsibility of the United States government to solve this problem, even if the chosen method adds burdens to poorer Americans, you should read (or re-read, as the case may be) Charles Dickens’s Bleak House, and focus in on the unforgettable character of Mrs. Jellyby. Because that’s you.
The problem with the August 12 announcement was not only the context of concerted bigotry in which it fell, but also the manner in which it was promulgated. Immigration policy is of such vast importance to public policy that it must be the province of the Legislative Branch. It is not a proper domain for Executive fiat. We know this from the experience of the Obama Administration, which, faced with an obstructionist Republican majority in Congress, resorted to a great many Executive Orders. The problem with Executive Orders is that they are much easier to overturn by successive administrations than a law passed by Congress. The result is a policy that whipsaws back and forth and that is generically unstable and hence resistant to consistent implementation.
Now we have more of the same, despite the fact that a congressional consensus for this particular change was possible had the White House been willing to negotiate. It is true, as the White House claims, that it sought first to work through Congress, but it should have been more flexible and patient. Instead, apparently, Stephen Miller lost patience and persuaded the President to act precipitously. Now that the President has acted by Executive Order, engaging Congress may no longer be feasible, at least for the duration of his tenure.
So whose fault is all this?
We need to be careful about parsing the discrete pieces of the immigration policy issue. The mess at the southern border concerning illegal immigration is real enough, and its cause falls mainly on a series of piecemeal legislative and administrative acts that, taken together, have actually produced the crisis the Administration describes, less the “carnage” hyperbole, of course. It has persisted for so long because an unnatural coalition supports ignoring the problem: liberal Democrats who favor virtually unrestricted immigration because they think it will end up being politically beneficial to them, and corporate Republicans, justified by a passel of libertarians like those at the Cato Institute, eager to exploit undocumented labor for purposes that need no explanation. This is why the aforementioned corporate Republicans are in the main not fond of the President’s August 12 decision—herewith the opposition from the Right.
Neither part of the coalition appeared to care much about all the problems this has caused for ordinary Americans, not to exclude the burgeoning problem of identity fraud, at least until the broader immigration policy blowback helped mightily to put Donald Trump in the White House. Much the same can be said, by the way, for Tony Blair’s 1997 decision to open the United Kingdom wide to “third world”-origin immigration; he thought it would help Labour Party electoral fortunes, but instead it brought Brexit and made Enoch Powell a prophet.
But beyond the mess at the southern border, which is arguably more the fault of Democrats than Republicans, the reform of immigration policy proper remains on the national to-do list entirely because of Republican stonewalling against comprehensive immigration reform. One would likely not know it from the reporting provided by the depleted mainstream media, but back in 2013 a bipartisan bill that bore all the characteristics of an intelligent, well-balanced, and carefully detailed reform came before Congress. A more detailed and refined version of a proposal first put forward by President George W. Bush in January 2004 and introduced as a bill in June 2006, the 2013 bipartisan “Gang of Eight” bill passed the Senate by a comfortable margin: 68-32. That bill would have solved the illegal immigration part of the policy portfolio along with the larger issue. But it was never put to a vote in the House because of the Hastert rule, which held that unless a majority of House Republicans supported a bill, it would not come up for a vote. Had it been put to a vote, it almost certainly would have passed.
It is hard to think of a better example of dysfunctional partisanship destroying an opportunity for a major advance in public policy. But the Republicans wanted to keep the issue of immigration on the front burner so that they could profit from it politically, and of course they have, at the expense of the national interest as a whole. Alas, harvesting anxiety and division works; it merely requires that you have no scruples. As far as qualifications go, that’s almost too easy for most Republicans lately.
In other words, to circle back to where we began, it is the fault of the Republican Party that a Republican President had to resort to an Executive Order to do something that by rights ought to have been done by Congress; and it is something Congress might well have done in due course had it not been for the aura of bigotry surrounding the Administration’s overall orientation to the problem.
That orientation is deliberate; under current circumstances it is more useful politically for the President to have changed the policy by fiat than to have had Congress do it properly, and in a bipartisan fashion. After all, no one is better at sowing division and harvesting anxiety than he is.
Politics is a funny thing. Sometimes the wrong thing happens for the right reasons, and sometimes the right thing happens for the wrong reasons. Sometimes things do or don’t get done because reason itself ends up having little to do with the political process. And sometimes not knowing the backstory makes it impossible for observers to figure out what the hell is going on at all. Stay tuned to the TAI dial for more backstories as necessary; this overwhelmingly sad saga isn’t nearly over.
For Emma
Emma, perhaps it’s for the best you never knew
What the fickle tides of time did to you;
Your sonnet so fair in Jacob’s behest
To a small share of balm and a moment’s rest
On American shores, in the New World’s womb
Far from Czarist hatreds that did loom.
Your beauty lives on, all the same;
for at Liberty’s feet, is written your name.
Its beacon, rest assured, o’er the sea still glows
Though your first purpose, alas….
Few will ever know.
The post Good Idea, Bad Reasons, Worse Context appeared first on The American Interest.
Skills-Based Immigration: Good Idea, Bad Reasons, Worse Context
On Monday, August 12, the Trump Administration changed U.S. immigration policy by Executive Branch fiat in the form of a new 837-page rule. As is by now well known—because of the hue and cry it set off—the Administration ordered new stringencies against immigrants who would or could reasonably be expected to become a drain on state resources. According to the new “Inadmissibility on Public Charge Grounds” rule, U.S. policy will now apply a wealth test for green-card status, favoring immigrants who fit into the needs of the U.S. economy—those more likely to engage in activities that create new jobs than to compete for existing ones. The rule, which is in the main a new, narrower definition of an old rule, will go into effect on October 15.
The reaction to the announcement was vigorous and generally critical, and the criticism did not come from just the so-called progressive Left; it came also from what could fairly be called the center. Thus Mindy Finn of Stand Up Republic, writing on August 15: “[T]he engraving on the Statue of Liberty does not specify from where a person should come, or how much money they should have, just that they will be welcome in the new world. . . . Though Trump and his nativist cronies are trying to redefine our ideals through their xenophobic statements and bigoted policies, Americans do not separate themselves from each other, we unite.” Some criticism came, as well, from the Right, of which more below. The basic dig against the new policy was that it would discriminate against poorer nations and people of color, and this, presumably, is not only morally wrong, but shows the Trump Administration’s own true colors, which are debased and disgusting. Some also suspect that the rule will dramatically reduce the volume of legal immigration, over and above even the degree of economic need for lower-skill labor.
It is easy to account for such expectations given the context into which the announcement fell. It is as Ms. Finn described it: xenophobic and bigoted. The President had just recently told the so-called Squad, made up of four female congresswomen of color, to go back where they came from—despite the fact that all but one of them were native-born American citizens. Then there was the unfortunate remark on August 13 by Ken Cuccinelli, the acting Director of the U.S. Citizenship and Immigration Services. Cuccinelli opined during an NPR interview that Emma Lazarus’s famous poem on the base of the Statue of Liberty referred only to Europeans, and flippantly suggested that the words should be changed to read, “give me your tired and your poor who can stand on their own two feet, and who will not become a public charge.”
The chatter over Cuccinelli’s remark bore elements of the bizarre and the ignorant. As to the former, it is obvious that Cuccinelli’s quip about changing the poem was just that—a quip—delivered during a hostile interview. He should not have said it, but some chose disingenuously to take his suggestion literally because it served their purposes to do so. Apparently, virtue signaling has yet to acquire borders, making it often indistinguishable from an old-fashioned cheap shot.
[Illustration: R. Jay Magill, Jr., 2018]
More important, most otherwise educated Americans remain innocently ignorant of the poem’s history. When it was written it was understood to be about those seeking refuge and asylum, not those seeking greater economic opportunity, as Ms. Finn mistakenly asserts. More specifically, “The New Colossus” was written in 1883, about two years after the notorious May Laws were promulgated by Czar Alexander III, sending large numbers of persecuted Jews toward American shores. Lazarus was a Sephardi Jew who was active in Jewish causes, and so clearly had her oppressed brethren foremost in mind. The statue arrived in New York Harbor in June 1885 and was dedicated in its current location in October 1886. Only in 1903 was the plaque bearing the poem added to the base of the statue, by which time its meaning had already begun to conflate refugees with economic aspirants. Lazarus died in 1887 at the age of 38, so she had no say in the repurposing of her poem. And while Lazarus said nothing about liberty being only for Europeans, it is a matter of plain fact that, at the time, that is what everyone here had in mind.
To return to the present moment: earlier, in mid-July, the Administration had unilaterally changed American asylum policy, making it more difficult to claim asylum status to get into the country. Specifically, the new rule states that asylum seekers who pass through another country to get to the U.S. border will not be eligible to apply for asylum in the United States unless they have first requested it from pass-through countries and been denied.
Then there were the much-ballyhooed ICE raids to deport illegal immigrants. These ended up being rather underwhelming in practice unless you happened to be caught without papers in a remote corner of Mississippi on August 7. But the acerbic, bitter anti-immigrant language coming from the White House was loud and clear. And if all that were not enough, not long after the August 12 announcement, news sources reported on August 26 that Cuccinelli’s USCIS was denying all medical deferment requests, excepting only some military families and DACA “dreamers,” and the latter only because of ongoing litigation. (The subsequent outcry led to a partial reversal of the policy, announced on September 2.)
Taken together, the optics were of an Administration viewing the entire immigration policy portfolio through a decidedly bigoted lens. Those optics, most likely, were entirely intended for the political purpose they were deemed to serve. It is difficult to come up with any other general conclusion, given the Administration’s overall behavior and body language.
This is unfortunate, because the August 12 announcement, were it to be judged solely on its policy merits, is a positive one. Over the past few decades many proposals to adjust U.S. immigration policy so that it is more functional in economic terms, and less tuned to humanitarian and family reunification criteria, have been put forth. These proposals have come not just from Republicans, but also from Democrats and independents. They have mainly made good sense.
The general argument has been that given changes in the globalizing American economy—changes that have made it harder for much of the middle class and for most poorer Americans to make ends meet—the United States can no longer afford an immigration policy, formed in 1964-65 under very different circumstances, so focused on broadly humanitarian criteria. Much research has shown that excessive low-skill immigration depresses wages among less well-educated Americans, a burden that has fallen especially hard on African-Americans. Other research has emphasized the job-creating energies provided by new immigrants arriving in the United States with enough capital to start their own businesses. Those who have made this argument over the years have also frequently pointed out that no other country in the world bases its immigration policy the way the United States has since 1965, and, failing comprehensive reform until now, still does.
None of that prevented the aforementioned hue and cry. Many Americans seem actually to think that U.S. public policy ought to put the interests of those who are not citizens of the country above those who are. This proves that, as Tomo says in the exquisitely awful Indonesian movie Firegate, “It’s easy to be an idealist, if you have money.”
It is a mystery to me why anyone thinks this a normal premise, or a morally justifiable one, upon which to make public policy in a democracy. It is of course a matter of great sadness that there are so many poor people in Latin America and sub-Saharan Africa, for example. But how does that translate into it being the responsibility of the U.S. government to solve this problem, and to do so in part on the back of its existing citizenry without fairly soliciting their view of the proposition? If you believe that it really is the responsibility of the United States government to solve this problem, even if the chosen method adds burdens to poorer Americans, you should read (or re-read, as the case may be) Charles Dickens’s Bleak House, and focus on the unforgettable character of Mrs. Jellyby. Because that’s you.
The problem with the August 12 announcement was not only the context of concerted bigotry in which it fell, but also the manner in which it was promulgated. Immigration policy is of such vast importance to public policy that it must be the province of the Legislative Branch. It is not a proper domain for Executive fiat. We know this from the experience of the Obama Administration, which, faced with an obstructionist Republican majority in Congress, resorted to a great many Executive Orders. The problem with Executive Orders is that they are much easier for successive administrations to overturn than laws passed by Congress. The result is a policy that whipsaws back and forth, one that is generically unstable and hence resistant to consistent implementation.
Now we have more of the same, despite the fact that a congressional consensus for this particular change was possible had the White House been willing to negotiate. It is true, as the White House claims, that it sought first to work through Congress, but it should have been more flexible and patient. Instead, apparently, Stephen Miller lost patience and persuaded the President to act precipitously. Now that the President has acted by Executive Order, engaging Congress may no longer be feasible, at least for the duration of his tenure.
So whose fault is all this?
We need to be careful about parsing the discrete pieces of the immigration policy issue. The mess at the southern border concerning illegal immigration is real enough, and its cause falls mainly on a series of piecemeal legislative and administrative acts that taken together have actually produced the crisis the Administration describes, less the “carnage” hyperbole of course. It has persisted for so long because an unnatural coalition supports ignoring the problem: liberal Democrats who favor virtually unrestricted immigration because they think it will end up being politically beneficial to them, and corporate Republicans, justified by a passel of libertarians like those at the Cato Institute, eager to exploit undocumented labor for purposes that need no explanation. This is why the aforementioned corporate Republicans are in the main not fond of the President’s August 12 decision—herewith the opposition from the Right.
Neither part of the coalition appeared to care much about all the problems this has caused for ordinary Americans, not to exclude the burgeoning problem of identity theft, at least until the broader immigration policy blowback helped mightily to put Donald Trump in the White House. Much the same can be said, by the way, for Tony Blair’s 1997 decision to open the United Kingdom wide to “third world”-origin immigration; he thought it would help Labour Party electoral fortunes, but instead it brought Brexit and made Enoch Powell a prophet.
But beyond the mess at the southern border, which is arguably more the fault of Democrats than Republicans, the reform of immigration policy proper remains on the national to-do list entirely because of Republican stonewalling against comprehensive immigration reform. One would likely not know it from the reporting provided by the depleted mainstream media, but back in 2013 a bipartisan bill that bore all the characteristics of an intelligent, well-balanced, and carefully detailed reform came before Congress. A more detailed and refined version of a proposal first advanced by President George W. Bush in January 2004 and introduced as a bill in June 2006, the 2013 bipartisan “Gang of Eight” bill passed the Senate by a comfortable margin: 68-32. That bill would have solved the illegal immigration part of the policy portfolio along with the larger issue. But it was never put to a vote in the House because of the Hastert rule, which held that unless a majority of House Republicans supported a bill, it would not come up for a vote. Had it been put to a vote, it almost certainly would have passed.
It is hard to think of a better example of dysfunctional partisanship destroying an opportunity for a major advance in public policy. But the Republicans wanted to keep the issue of immigration on the front burner so that they could profit from it politically, and of course they have, at the expense of the national interest as a whole. Alas, harvesting anxiety and division works; it merely requires that you have no scruples. As far as qualifications go, that’s almost too easy for most Republicans lately.
In other words, to circle back to where we began, it is the fault of the Republican Party that a Republican President had to resort to an Executive Order to do something that by rights ought to have been done by Congress; and it is something Congress might well have done in due course had it not been for the aura of bigotry surrounding the Administration’s overall orientation to the problem.
That orientation is deliberate; under current circumstances it is more useful politically for the President to have changed the policy by fiat than to have had Congress do it properly, and in a bipartisan fashion. After all, no one is better at sowing division and harvesting anxiety than he is.
Politics is a funny thing. Sometimes the wrong thing happens for the right reasons, and sometimes the right thing happens for the wrong reasons. Sometimes things do or don’t get done because reason itself ends up having little to do with the political process. And sometimes not knowing the backstory makes it impossible for observers to figure out what the hell is going on at all. Stay tuned to the TAI dial for more backstories as necessary; this overwhelmingly sad saga isn’t nearly over.
For Emma
Emma, perhaps it’s for the best you never knew
What the fickle tides of time did to you;
Your sonnet so fair in Jacob’s behest
To a small share of balm and a moment’s rest
On American shores, in the New World’s womb
Far from Czarist hatreds that did loom.
Your beauty lives on, all the same;
for at Liberty’s feet, is written your name.
Its beacon, rest assured, o’er the sea still glows
Though your first purpose, alas….
Few will ever know.
The post Skills-Based Immigration: Good Idea, Bad Reasons, Worse Context appeared first on The American Interest.
Will the Real Populists Please Stand Up—or Perhaps Sit Down and Chill
Donald Trump is riding a wave of conservative populist anger that he did not create but is masterfully manipulating. Historically, populist movements have come chiefly from the left and focused primarily on economic grievances. But as recent events attest, populism also has conservative variants, which may reflect not only economic grievances but social and cultural anxieties as well.
Since the emergence of the Tea Party and then the rise of Trump, populism has been broadly de-legitimated on the left and among those still referring to themselves as liberals. Yet as the now almost forgotten Occupy Wall Street movement suggests, populism remains potent on the left, though it now goes by different labels—“liberal populism” is one; even “democratic socialism” gets invoked. But the most frequent is “progressivism,” which is surprising in light of turn-of-the-century Progressives’ hostility to populism.
Out of this morass of casually invoked labels there remains a persistent strain of what I refer to as “procedural populism,” which argues for abolishing the Electoral College, ending the filibuster in Congress, eliminating barriers to voting generally, and taking proactive measures to get individuals registered on the voter rolls. Such proposals can be traced back to notions of “participatory democracy” advanced by the New Left in the 1960s. In this sense, populist impulses have once again become part of a broadly defined Left agenda.
Such participatory reforms have been widely implemented since the 1960s, and they have remade our political institutions—for the worse. Indeed, the continuing reforms of our political parties have made the ascendance of a total amateur and outsider like Trump possible. Despite that outcome, procedural populists push for more and more direct democracy.
The result will be ever weaker parties dominated by elites that refuse to identify as such; increasingly technologically sophisticated and professionalized campaign machinery that will require ever greater infusions of cash; and even greater removal of politics from the daily concerns of ordinary voters. The prime beneficiary of these developments will be the media, which is already drunk with its power and influence. Meanwhile, the only antidote on offer is a politics of selfless, civic-minded engagement that is based on unrealistic notions of disinterested political actors motivated by grandiose notions of an ill-defined “public interest.”
The outcome will be more sullen anger and alienation among the mass of ordinary Americans whose only champion appears to be Donald Trump, our Fifth Avenue populist.
The following is Part One of a two-part exploration of contemporary populism and its various historical antecedents.
The populist wave roiling politics in America and other Western societies should be of concern to all those committed to liberal democracy. Yet some conservatives have accommodated themselves to this angry current and earnestly regard themselves as defending “the people,” however belatedly, against the blatant and entrenched arrogance of globalist elites. Other conservatives are simply unwilling to challenge the apparently unstoppable tsunami that Donald Trump has succeeded in not merely surfing but stoking. Still others are opportunistically trading in the venom and vituperation that now pervade our public life.
Despite such accommodation, it is hard to exaggerate the improbability of this vain, vulgar, irreligious, rapacious, and ill-informed individual emerging as the tribune of millions of decent Americans, who feel economically threatened as well as socially and culturally marginalized and disrespected by their “betters.” Throughout his long and tawdry career Trump has proven to be not merely a sharp dealer and a cheat, but a narcissistic liar and miscreant. And given the intensity and rawness of the emotions he trades on, it is not inconceivable that Trump could eventually be devoured by his own supporters.
But if conservatives are guilty of opportunism, progressives are well-nigh blinded by their rage at Trump and all those who support or even tolerate him. To be sure, concerns and fears about his willingness to traffic in offensive sexual, religious, ethnic, and racial tropes—not to mention his affinity for autocrats—are not without foundation. But progressives’ fury at Trump and his right-wing populist supporters has grown so intense, it has become easy to overlook that progressives and their liberal allies have often tolerated and even embraced angry left-wing populism.
The short-lived Occupy Wall Street movement—“We Are the 99%”—is a recent, if now frequently overlooked, example. Much less recent is the affinity that contemporary progressives and their allies further to the Left have expressed with the agrarian populists who revolted against Eastern banking and industrial interests in the closing decades of the 19th century. During the political and intellectual upheavals of the 1960s, youthful historians on the Left began challenging their consensus-oriented elders who dismissed these populists as backward-looking, small-time agricultural entrepreneurs obstructing the development of a dynamic capitalist economy—and as anti-urban, anti-modern bigots and anti-Semites. As Princeton historian Eric Goldman depicted them in the early 1950s: “Populists thought of themselves as engaged in a work of restoration, a restoration of the good old days, when, as they liked to believe, there was open competition and plenty of opportunity for everyone.” Not coincidentally, the postwar New Left’s accommodation to 19th-century populism reflected its contemporaneous political sympathies, especially with the civil rights and antiwar movements but also the emergent black power, feminist, and environmental movements. Yet in short order, liberal as well as leftist Democrats were also presenting themselves to disgruntled “middle Americans” as populists.
Today, the sustained visibility and strength of the populist Right, not to mention Trump’s increasingly outrageous pandering to it, have rendered populism of any political stripe suspect—and encouraged contemporary progressives to side-step this complicated history. They have also been too preoccupied with responding to their adversaries to reflect on the origins of their populist sympathies. Neither do progressives today appear to have noted that their namesakes—early 20th-century Progressives—tended to regard populists as reactionaries. Yet this conveniently neglected history has significant bearing on our current situation, particularly when contemporary progressives focus not just on substantive issues but on procedural and structural reforms intended to open up institutions and make them more democratic—that is, more responsive to popular opinion.
An example of such “procedural populism” is the recent successful efforts of progressives in the Democratic Party to weaken the role of “superdelegates,” typically party insiders and elected officials serving as ex officio delegates, at the upcoming presidential nominating convention. Another is the numerous calls to reform or simply abolish the Electoral College. Both bear the imprint of notions of “empowerment” and what the New Left called “participatory democracy.” In this same vein are recurrent efforts not merely to eliminate unfair or discriminatory barriers to the ballot box, but to significantly reduce the inconveniences and “costs” associated with voting by means of measures such as early voting, expanded use of absentee ballots, same-day as well as automatic voter registration (when obtaining or renewing a driver’s license, for example), and even pre-registration for 16- and 17-year-olds.
Such contemporary proposals reflect that little noted but significant shift in the Left’s approach to populism which occurred during the tumultuous 1960s. As Michael Kazin, Georgetown historian and editor of Dissent, has noted, “the New Left’s distrust of representative institutions separated this kind of populism from its predecessors.” For while late 19th-century populists sought primarily to reform institutions they regarded as basically sound, their 20th-century successors had much more fundamental goals of opening up those institutions to wider participation and scrutiny. Similarly, progressives today believe that such process-oriented reforms will provide a more secure foundation for American democracy. They also assume, it is not unfair to suggest, that such measures will facilitate the mobilization of disadvantaged constituencies whom they regard as allies and supporters. Yet of course such procedural and institutional reforms also expand opportunities for the mobilization of their adversaries, including many conservative populists!
Donald Trump’s presidency is Exhibit A for this last proposition. In critical respects he has beaten progressives at their own populist game. I refer not to his substantive policies, but to his mastery of the political tools that progressives have fashioned over the last half-century or more. Most notable among these would be what Theodore Lowi has characterized as “the personal presidency”: a fundamentally plebiscitary office, cut loose from any supports or constraints provided by strong, institutionalized political parties, whose occupant is consequently dependent on volatile mass opinion, which he must alternately manipulate and be manipulated by.
From this vantage point, progressives bear more responsibility for the current populist ferment than they acknowledge, or even understand. Again, I am not talking about their substantive policy views on race and gender, trade, or even immigration, although these have been advanced with a stubborn self-righteousness that has provoked the ire of large numbers of their fellow citizens. What I am talking about is how in recent decades progressives and their allies have come to advocate and implement critical procedural and institutional reforms that, while arousing little attention and controversy, have inadvertently facilitated the right-wing populism that now looms so ominously. And now, more such procedural populism looms on the horizon.
Parsing Populism
Considerable confusion, even obfuscation, envelops the term “populism.” Drawing on the work of Cas Mudde and Cristobal Rovira Kaltwasser, I do not consider populism a full-blown, coherent ideology, but rather “a set of ideas that, in the real world, appears in combination with quite different, and sometimes contradictory, ideologies.” How could it be otherwise? Populism reflects disaffection and alienation expressed by “ordinary people” when they arrive at the realization, however incorrectly or inchoately, that the elites in charge of “the big picture” have not only screwed up but also screwed them!
Populism has variants on the Left as well as on the Right, but in either mode it is fundamentally illiberal. Fixing it more precisely in the contemporary context, Mudde and Kaltwasser conclude: “In a world that is dominated by democracy and liberalism, populism has essentially become an illiberal democratic response to undemocratic liberalism.” Populists assume an undifferentiated, monistic popular or general will that elites are ignoring or subverting. Counterpoising the pure people against a corrupt elite, populists inevitably introduce a moralistic element into politics. Yet as Princeton political scientist Jan-Werner Mueller argues forcefully, one can disagree strenuously with populist complaints, as he does, without dismissing them, as elites frequently do, with “psychologizing diagnoses” or references to “authoritarian personalities.” Thus, while populism of any variety is worrisome and potentially dangerous, it should not be regarded as inherently irrational.
More central to my concerns in this essay is the degree to which contemporary populism is not merely anti-elitist but also anti-institutional. Historically, populism in its 19th-century guise was generally not anti-institutional. Indeed, the People’s Party was itself an institution, albeit short-lived, that grew out of a network of agricultural cooperatives that were the model for a system of “federal sub-treasuries” proposed by the Populists to provide credit to cash-starved farmers. That scheme never materialized, and, like the People’s Party, soon disappeared from view. Aspects of it reappeared, albeit under starkly different auspices, when the Federal Reserve System was created in 1913.
Yet during the 1960s, as the New Left was reinterpreting populism in a more favorable light, the mantra became “participatory democracy.” This led to our own American version of an ongoing cultural revolution that has, as noted by political scientist Hugh Heclo over 20 years ago, “institutionalized the distrust of institutions and their normative authority, whether in the public or private sector.” In this essay I focus on how this anti-institutional populism has been directed not only against various agencies and institutions, but also against political parties in particular. And while instances of such anti-institutional sentiment are evident on the populist Right (against the Federal Reserve, for example, or perhaps universities), that sentiment is much more prevalent on the Left, especially with regard to political parties. Indeed, contemporary populism and progressivism are now converging on an agenda to remake our political institutions.
There is, however, one significant source of anti-institutional sentiment on the populist Right. It involves the not inaccurate perception that elites have relied on certain institutions, in particular the courts and the media, to defend and advance the interests of various protected minorities in America, including blacks, women, gays, immigrants, and Muslims. As William Galston argues cogently in Anti-Pluralism: The Populist Threat to Liberal Democracy, “populist movements . . . are not necessarily antidemocratic. But populism is always anti-pluralist.” Similarly, Mudde and Kaltwasser emphasize: “Populism holds that nothing should constrain ‘the will of the (pure) people’ and fundamentally rejects the notions of pluralism, and therefore, minority rights as well as the ‘institutional guarantees’ that should protect them.”
Yet however cogent, this contention that populism is simply anti-pluralist misses a key dimension of the present situation. It is possible, from a populist perspective, to see elite championing of pluralism and minority rights in a different light. Quite aside from whether they regard minorities as legitimate components of “the people,” populists have reason to find fault with elites for advancing the interests of minorities while ignoring the fact that those interests invariably include the narrow, self-regarding interests of minority individuals. In other words, populists might well object that the interests of some individuals are being elevated in the name of a pluralistic conception of the public interest, while those of others—“the people”—are being dismissed. Given this perceived hypocrisy, it should not be surprising that the focus of much populist anger on the Right is on the courts and the media.
While my emphasis here is on the cultural dimensions of populist outrage on the Right, I do not deny that economic factors have also been at work. Indeed, the emergence of the Tea Party beginning in 2009 is generally regarded as driven primarily by economic grievances and concerns. Nevertheless, economic populism is much more in evidence on the Left. Again, Occupy Wall Street is the prime example. In any event, populist ferment and energy on the Right are more in ascendance—and of much greater concern to elites—than on the Left.
Put differently, Occupy Wall Street typifies substantive populist grievances. My concern here is to refocus attention on the neglected topic of procedural populism, which remains strong on the Left. Indeed, it pervades the ill-defined but critical territory shared by populism and progressivism. But again, this procedural populism has gone largely unexamined and unacknowledged. It will be a prime concern in what follows.
The Cult of Participation
The best guide to the American Left’s complicated relationship with populism is historian Christopher Lasch. Arguably the most insightful and influential member of the generation of leftist scholars who began their careers in the late 1950s and early 1960s, Lasch was an avid student of Marxist and neo-Marxist social theory and criticism. He was also a critic, albeit a sympathetic one, of late 19th-century populists for their naive understanding of economic interests under then-emergent “corporate capitalism.” Unlike Marxists, populists simply assumed interests to be self-evident. They lacked (and still lack) any notion of how ideology may distort reality and obscure from view an actor’s “objective” interests. Whereas Marxists rely on “theory” to understand and explain the crises of a capitalist system understood to be inherently and irredeemably flawed, populists express anger and outrage that things have gone awry and seek to restore the status quo ante.
After the New Left’s mantra of participatory democracy culminated in chaos at the 1968 Democratic Convention in Chicago, things degenerated still further into a terror campaign waged by the Weathermen. Lasch condemned the violence and argued that the New Left’s “cult of participation” was resulting “in an unworkable definition of democracy as the direct involvement of all the people in every political decision, no matter how minute.” Far preferable, in Lasch’s view, was the work of community organizer Saul Alinsky. Lasch was drawn to the organizer’s criticism of the politics of “cultural identity” then emerging among blacks and Native Americans. He also endorsed Alinsky’s ridicule of the New Left for refusing to take “the poor as they are”—for “romanticizing” and “patronizing” them. Citing Alinsky as the notable exception, Lasch concluded: “It is only the left which, both in its politics and in its culture, clings to the illusion that competence is equally distributed among people of good intentions.”
Echoes of participatory democracy were first heard in the halls of Congress in the mid-1960s, when the old-guard Democrats who had dominated the institution since the New Deal found themselves under growing pressure from ascendant liberals to blunt the authority of committee barons, and to open up congressional proceedings to the scrutiny of the increasingly assertive media. After the fiasco at the 1968 convention and Vice President Hubert Humphrey’s subsequent loss to Richard Nixon, liberal Democrats turned their attention to how their party chose its presidential candidates. Deprived of the opportunity to reform America, they reformed themselves—and in so doing, they contributed mightily to the fundamental reshaping of American politics in a more plebiscitary, populist mold.
By and large, the architects of these reforms were Democrats outraged that antiwar candidates who had been tested in various primaries in 1968 were denied the nomination in favor of LBJ’s surrogate, Humphrey, who had not run in a single primary. Yet it is critical to put this episode in context. For as recently pointed out by the Brookings Institution’s Elaine Kamarck, up to and including the 1968 cycle, “the primaries were more like tryouts for professional sports teams, with the scouts being the powerful party leaders who made the ultimate decision on which candidate prevailed as the party’s representative.” For many years the overwhelming majority of delegates to the national convention had been chosen at state conventions controlled by party regulars and insiders. Consequently, when the Democrats convened in Chicago late in August 1968, most of the delegates had been hand-picked by state party leaders—in many cases well before the beginning of that eventful, tumultuous calendar year. Some had been elected in primaries in which their commitment to a specific presidential candidate was either ambiguous or non-existent. Only a small minority had been chosen in primaries in which their candidate pledge was explicit and transparent.
So in the aftermath of the debacle in Chicago, the Democratic National Committee established the Commission on Party Structure and Delegate Selection, known more colloquially as the McGovern-Fraser Commission after its successive chairmen, South Dakota Senator George McGovern and Minnesota Congressman Donald Fraser. According to The Congressional Quarterly, one of “the radical changes wrought by the McGovern-Fraser Commission” was the insistence that “rank-and-file Democrats . . . . have a full and meaningful opportunity to participate in the delegate selection process.”
As a result, almost none of the delegates to the 1972 Democratic National Convention were selected by party insiders, leaders, or elected officials. Indeed, these traditional power brokers had been relegated to the margins of or excluded completely from the process. As Byron Shafer, the leading student of party reform, concludes: “By 1972, a solid majority of delegates to the Democratic National Convention was selected in presidential primaries, while an even more crushing majority was selected through arrangements that explicitly linked delegate selection to candidate support.” Moreover, scores of women, minorities, and others not previously in evidence were highly visible delegates on the floor of the 1972 convention.
Subsequent national conventions (Republican and Democratic alike, both parties having been transformed by revised state election laws) increasingly reflected the direct will of primary voters. Convention outcomes have become highly predictable, with delegates effectively reduced to passive emissaries who, in Senator Daniel Patrick Moynihan’s pithy formulation, “merely serve as scenery for the television cameras.” This has led some to ask whether the time and expense of staging the conventions is worth it. The more salient point, however, is made by Kamarck: “The new nominating system is solely in the hands of voters. . . But until 2016, it had never produced a nominee who was a total outsider with no government experience, demagogue-like qualities, and a disdain for the Constitution and the separation of powers. This is the danger of the new system and the legacy of 1968.” (emphasis added) More precisely, this is the legacy of participatory democracy, whose contemporary manifestation is the procedural populism so virulent among today’s Democrats.
Yet note how former Vice President Joe Biden characterizes the bizarre, vaudevillian format of the recent Democratic presidential debates televised at the end of July: “Look, it’s not anybody’s fault the way it’s worked. There’s 20 candidates and that’s a good thing.” Surely his view here is mistaken—or, more likely, disingenuous. The various news networks are primarily responsible for the staging of these events, and have been widely criticized for fostering a circus-like environment. As for the plethora of candidates, that is not “a good thing.” Moreover, it is the direct result of reforms implemented by liberal Democrats and now brought to light by analysts like Kamarck and Shafer.
Back in 1972, the problem surfaced quickly when the new participatory reforms led to an outcome different from the previous convention, but equally unsatisfactory: the candidacy and then resounding defeat of the Democratic presidential nominee, George McGovern. The connection could not have been more direct: McGovern had overseen the party’s reforms, best understood their intricacies, and was therefore ideally situated to take advantage of them. And while one seldom hears mention of it these days, at the time of his nomination McGovern was favorably dubbed a “prairie populist.” When he died in 2012, The New Yorker, The Nation, and like-minded publications resurrected that epithet to describe him.
Today, in the wake of the Tea Party and the rise of Trump, the Left’s response to populism is decidedly more complicated and convoluted. On the one hand, one cannot avoid the populist economic messaging of presidential candidates Bernie Sanders and Elizabeth Warren. Even so, these days on the left “populist” is not always invoked as a compliment. Indeed, in such quarters populism has taken on decidedly negative connotations. The New York Times was presumably attempting to cope with this dilemma when it recently referred to Sanders and Warren as “populist liberals.” In any event, as I have been suggesting, when it comes to procedural and structural issues, populism is alive and well on the American Left.
Another challenge is that, substantively, populism has two different dimensions: economic and cultural. And it is with the latter that left-liberals in the recent past and progressives today have had the most trouble. In the late 1960s and early 1970s, politicians such as George Wallace and Spiro Agnew were aggressively campaigning as cultural populists in response to the civil rights, anti-war, and campus youth movements. In the case of McGovern, even his fellow Democrat and Senate colleague (and for a brief time his vice-presidential running mate) Thomas Eagleton called him the candidate of “acid, amnesty, and abortion.” McGovern’s only alternative was his economic populist agenda.
Making a case similar to McGovern’s was the 1972 book by Jack Newfield and Jeff Greenfield, The Populist Manifesto: The Making of a New Majority. The following year Fred Harris—former chair of the Democratic National Committee, recent presidential aspirant, and newly retired U.S. Senator from Oklahoma—published The New Populism, another attempt to articulate an economic agenda that would counter increasingly successful Republican appeals to Middle America. During the Reagan years, journalists Robert Kuttner and Jim Hightower as well as campaign consultant Stanley Greenberg were among those arguing for a blue-collar populism focused on bread-and-butter issues that would steer between Middle America’s animus against both corporate elites and the “undeserving poor.” In a forlorn attempt at humor reminding fellow Democrats how they should position themselves in the 1980s culture war, Hightower (who was also Texas Commissioner of Agriculture) wrote that they needed to be “down at the Seven-Eleven picking up a Budweiser and a Slim Jim . . . . (not with the) yuppies enjoying a midday repast of cold melon mélange and asparagus and goat cheese and a delightfully fruity and frisky California white wine.”
But Hightower was whistling past the ballot box. As Michael Kazin has observed in The Populist Persuasion:
The Democrats’ turn to populism . . . remained a strategy hatched by candidates and their consultants . . . . It did respond to mass emotions but was not connected in any organic way to the ‘workingmen and -women’ whose sentiments candidates ritually invoked. This was a populism that saw no need for organized movements from below to support and extend its achievements. Like the copywriters for Hewlett-Packard and Banana Republic, Democratic campaigners were trying to pitch populism to a certain segment of the national market. But, in politics as in any sales effort, the consumers could always select a competing product or simply decline to buy any goods at all.
Today, it remains to be seen whether economic populism—however labeled and packaged—will work any better for Democrats.
Plaintiffs Rather Than Precinct Captains
If the economic populism cultivated by Democratic elites has had limited impact on substantive policy outcomes, their procedural populism has had a much greater—and, as I have indicated, largely negative—impact. The party reforms of the late 1960s and early 1970s may have led to wider participation in the presidential nomination process, but at the price of distorting how ordinary Americans conceive of politics. For instance, as already noted, those reforms have reduced those attending national conventions to “mere delegates” expected to parrot the views of those who sent them and not exercise any independent judgment.
A similarly problematic dynamic was highlighted by the noted African-American political scientist Charles V. Hamilton in 1974, when he expressed concern that activist lawyers’ reliance on the courts to advance the interests of African Americans was turning ordinary black citizens into “plaintiffs rather than precinct captains.” Not coincidentally, these litigation efforts were undertaken just as party reforms were becoming sufficiently complex and controversial that litigation was increasingly needed in that domain as well. Likewise, campaign finance reforms that followed the Watergate scandal served to augment the role of lawyers at national party headquarters, whose functioning grew more and more bureaucratic.
Of course, the prominence of lawyers in American politics was hardly a recent development. But it was during the New Deal that the Roosevelt Administration sought simultaneously to recast the federal judiciary and to develop an administrative state that would rely on technical experts such as economists, but especially lawyers. Over the decades, that project has culminated in a Congress that enacts increasingly vague, complex statutes whose details and implications are then fleshed out administratively by executive and regulatory agencies only nominally answerable to elected officials. Broadly speaking, what happened to delegates at party conventions has also befallen members of Congress: They, too, have become increasingly passive actors before political forces not readily held to account.
Similar processes have reshaped and actually undermined the prerogatives of political parties across the West. Parties have come to be understood less as private, voluntary associations and more as appendages directly implicated in the functioning of the state, fiscally as well as administratively. These trends are especially visible in Western Europe, where election campaigns, party functionaries, and their affiliated think tanks and foundations are significantly, if not fully, subvented by the state.
Here in the United States, such developments have been more limited, but nevertheless evident. Campaign finance reform has resulted in closer regulation of the parties by state and Federal governments. At the national level, qualifying presidential candidates are eligible for public subsidies in primaries as well as the general election. From 1976 until 2012, the presidential nominating conventions of the parties were either partially (minor parties) or fully (major parties) funded by the Federal government. And finally, various public financing options are currently available for designated electoral offices in 14 states. The details here are obviously of critical importance. But the broader point has been forcefully advanced by political scientist Peter Mair in Ruling the Void: “From having been largely ‘private’ and voluntary associations that had developed in the society and drew their legitimacy from that source, parties have therefore increasingly become subject to a regulatory framework whose effect is to accord them quasi-official status as part of the state.”
The bureaucratization and professionalization of parties also connects to the growing dominance of experts in politics and public life. The problematic role of experts in government and policymaking, and the perception of that role by ordinary citizens, have not gone unobserved, though their overall impact has doubtless been underestimated. During the Carter Administration, Hugh Heclo argued that the government’s increasing reliance on experts was fostering not “merely an information gap between policy experts and the bulk of the population,” but also “‘an everything causes cancer’ syndrome among ordinary citizens,” the result being that “the non-specialist becomes inclined to concede everything and believe nothing that he hears.”
Since Heclo wrote that in 1978, the role and visibility of experts—especially from the social but also the natural sciences—have grown. And their assertiveness, indeed bravado, has grown commensurately. For instance, in the heyday of the post-Cold War economic boom presided over by the Clinton Administration, Princeton economist Alan Blinder served on the Council of Economic Advisers and then as Vice Chair of the Board of Governors of the Federal Reserve. Back at Princeton in 1997, he published an article in Foreign Affairs titled “Is Government Too Political?”, in which he argued, directly but diplomatically, that “we want to take more policy decisions out of the realm of politics and put them in the realm of technocracy,” more in the hands of “nonelected professionals.”
About 15 years later, one of Blinder’s junior colleagues in the profession that understands itself as the queen of the social sciences, MIT economist Jonathan Gruber, personified a major problem with Blinder’s perspective. A key architect of Obama’s health care reform, the Affordable Care Act (ACA), Gruber was caught on video at a policy forum trumpeting that the ACA’s controversial “mandate” was in fact a tax and that “the lack of transparency” around this and other aspects of the legislation were premised on “the stupidity of the American voter.” Even making allowance for the pedagogical value of an attention-getting line, it is hard not to see the contrast between this remark by Gruber and Blinder’s carefully framed proposal as a measure of the burgeoning arrogance of America’s mandarins. Even more telling than Gruber’s tone and substance was the license with which he expressed these views in numerous public fora. Such showboating before presumably like-minded audiences spotlights the cloistered universe of our policy elites. Consequently, no one should be surprised that politicians like Barack Obama and Hillary Clinton, who surround themselves with such talent, feel at liberty to express either condescension toward fellow citizens who “cling to guns or religion,” or outright contempt toward those they consider “a basket of deplorables.”
Broader and deeper bureaucratization, professionalization, and dependence on experts trained in the natural and social sciences are now routinely cited as critical factors in citizen disaffection with government. But equally important, these developments have also impacted politics—political parties in particular, and civil society institutions in general. Indeed, there have been significant sociological effects on how citizens and voters relate to politics.
As mentioned above, Peter Mair argues that parties have attained “quasi-official status as part of the state.” His further insight is that as party organizations in Western democracies have moved “from a position in which they were primarily defined as social actors . . . to one where they might now be reasonably defined as state actors,” they “are now less well rooted within the wider society” and are “now more strongly oriented towards government and the state.”
The transformative impact of pollsters, marketers, media advisors, and campaign consultants on contemporary electoral politics is now legend. Most recently, digital media have been transforming the terrain all over again, creating new opportunities for tech-savvy specialists. One obvious outcome is further diminution of the role of parties, as individual candidates have come to run their own show. Yet candidates are hardly free agents. On the contrary, they have become critically dependent on these coteries of consultants, and that dependence does not abate once the candidates get elected.
Less noted has been the impact of these campaign experts and technicians on how politicians relate to voters and citizens—and how voters and citizens in turn respond, or don’t. Marshall Ganz is a former union and community organizer who now teaches at Harvard’s Kennedy School. He points out how electoral campaigns have shifted from “gathering” together as many supporters and voters as possible to “hunting” the much narrower segments of the electorate that can be most reliably and economically activated by means of targeted mailings and media messages.
In By Invitation Only, political scientist Steven Schier offers a similar perspective by differentiating between voter “mobilization” and “activation.” The former relies on strong partisan appeals to stimulate maximum voter turnout. It characterized the era of classic party mobilization in late 19th-century America. By contrast, “activation” is what contemporary candidates and interest groups do to induce specifically targeted segments of the public to participate in elections, demonstrations, or lobbying. As Schier suggests, activation of specific segments of the populace is predicated on indifference to the rest, who are effectively demobilized: “Mobilization has given way to activation, a system by which minority interests manipulate the complex electoral and governmental system in the misleading garb of participatory democracy.”
As Mair rightly observes, the sociological implications of these cumulative developments are profound. His point of comparison is “ ‘the golden age’ . . . [when] the mass parties in western Europe strove to establish more or less closed political communities, sustained by reasonably homogeneous electoral constituencies, strong and often hierarchical organizational structures and a coherent sense of partisan identity.” As he elaborates, “Voters, at least in the majority of cases, were believed to ‘belong’ to their parties, and rather than reflecting the outcome of a reasoned choice between the competing alternatives, the act of voting was seen instead as an expression of identity and commitment.” Summing up, Mair quotes two colleagues: “‘Choosing’ a party is nearly as misleading as speaking of a worshipper on Sunday ‘choosing’ to go to an Anglican, rather than a Presbyterian or Baptist church.”
Here in the United States, party affiliation and identity were never that all-enveloping. American parties have typically never had formal, paid memberships, though they did have strong roots in ethnic and religious institutions and communities. In any event, Western European parties are now suffering from drastically declining numbers of paid memberships. Back in America, church attendance and religious affiliation have come to resemble consumer choices among competing brands. Meanwhile, both domains, political and religious, are ruled by bureaucratic hierarchies staffed by functionaries who are increasingly perceived to be out of touch with “consumers,” but who apparently have no alternative but to soldier on and endeavor as best they can to attract adherents.
Interest-Group Liberalism
The overall consequences of these varied developments in American politics and government are not straightforwardly assessed. Without a doubt, our processes and institutions are more accessible, open, and transparent than they ever have been. There are certainly fewer “smoke-filled rooms”—unless we’re talking about a different kind of smoke. Our politics are more democratic and more participatory, and dramatically less controlled by party regulars and insiders. There are more avenues open to inquiry and investigation.
Moreover, by any reasonable historical standard, there are far fewer barriers to the ballot box for most citizens. This is true in spite of the many issues raised about limited access to registration and voting for specific disadvantaged, marginalized populations. Without challenging the validity of such claims, one must recognize that they are advanced in light of the greatly improved standards that have come to apply to the vast majority of citizens. Similarly, today there are certainly more opportunities and options to vote other than going to the polls on election day—unless of course we count the “good old days,” when even the dead got to vote.
At the same time, however, Schier emphasizes that while the educational levels of Americans have been increasing over recent decades, voter turnout rates have been declining. One explanation might be that while politics and government are more open, procedurally and substantively, to scrutiny than ever before, they are also more embedded in labyrinthine bureaucracies administering typically vague or contradictory statutes and regulations. Things are seldom as transparent as advertised or promised.
A half-century ago, as these developments were just beginning to be analyzed, political scientist Theodore Lowi identified the problem in The End of Liberalism. Contrary to what James Madison depicted in The Federalist, the factional interests generated in the dynamic commercial republic envisioned by the Framers never quite worked out as planned. Instead of continually emerging, competing, dissipating, and perhaps re-emerging, factions got organized, and interest groups eventually became more or less permanent parts of the policymaking machinery. To be sure, it took a long time for this to play out, but by the last third of the 20th century, the new regime that Lowi termed “interest-group liberalism” was in place.
Under this new dispensation, Lowi emphasized, “policy-making power” got parceled out to the most motivated parties, while “the mass of people who are not specifically organized around values salient to the goals” of various initiatives got “cut out.” And responsibility for government’s many endeavors was assumed by experts, whom he defined as “trained and skilled in the mysteries and technologies” of particular programs. For the usually blunt Lowi, this was a polite way of saying that this emergent regime was fundamentally corrupt.
August 30, 2019
Is a Replay of Tiananmen Coming in Hong Kong?
With the arrests earlier today in Hong Kong of the youthful pro-democracy activists Joshua Wong and Agnes Chow, a harsh crackdown by Beijing on the popular movement seems increasingly likely. Since the protests erupted in June, the Hong Kong authorities have not exactly acted with restraint. More than 800 protesters have been arrested, and, The Guardian reported on August 12, peaceful protesters have repeatedly been attacked with tear gas, rubber bullets, and other projectiles and weapons. Unable to contain the relentless and ever-shifting popular mobilization through conventional policing methods, the communist authorities in Beijing have turned to local Hong Kong mafia groups, the so-called “triad” gangs, to terrorize peaceful demonstrators.
Yet nothing has silenced the people of Hong Kong, who, since June 9, have used a fluid and decentralized array of tactics to press their demands: permanent withdrawal of the proposed bill that would enable Beijing to obtain the extradition of anyone in Hong Kong; fulfillment of Beijing’s obligation to allow the people of Hong Kong to democratically elect their own government; and accountability for multiple acts of police brutality.
The current movement lacks the clear leadership and organization of the 2014 Umbrella Movement (in part to avoid leaving leaders as clear targets for persecution). And unfortunately, some radicalized elements of the movement have turned to violent tactics to fight the police—a battle against a massive authoritarian state that they cannot possibly win with rocks, sticks, and firebombs but might with strict and sustained adherence to proven methods of nonviolent resistance. The crisis could have been de-escalated many weeks ago if Hong Kong’s stubborn and politically inept Chief Executive, Carrie Lam—who owes her loyalty to the Communist Party bosses in Beijing and not to her own people—had offered to negotiate with representatives of the pro-democracy movement. Instead, she conceded only to suspending (not permanently withdrawing) the extradition bill, while street demonstrations, blockades, airport shutdowns, and gestures of resistance as delicate as protest symbols on baked goods have made Hong Kong an ongoing civic battleground and a metaphor for today’s global struggle for liberty.
Now, nearly three months of crisis appears headed toward a tragic denouement. Writing from Hong Kong on Wednesday, the New York Times columnist Nicholas Kristof (who shared a Pulitzer Prize for his coverage of the Tiananmen Democracy Movement in 1989) poignantly shared a sense of foreboding that is growing among journalists, policymakers, and intelligence analysts: “There are so many parallels to the Tiananmen student democracy movement that I covered in China 30 years ago—and I wonder if Beijing may ultimately deploy troops, perhaps from the paramilitary People’s Armed Police, to crush these protests as well.”
Earlier this month, China’s leaders amassed thousands of paramilitary personnel from the People’s Armed Police in a sports stadium in Shenzhen (just over the border from Hong Kong). The spectacle appeared to be a warning to the people of Hong Kong, and the world. Now Beijing is upping the ante. Yesterday it sent a fresh detachment of People’s Liberation Army troops into Hong Kong, along with armored personnel carriers, army trucks, patrol boats, and armed helicopters. Even if these deployments were “routine,” as Beijing dubiously claimed, the accompanying threats to “resolutely implement the ‘one country, two systems’ principle” carried a menacing tone. If Beijing had implemented this principle, Hong Kong’s once-vaunted rule of law would not now be under mounting assault, and its people would be fully and freely choosing their own leaders and representatives.
With the 70th anniversary of the Chinese Communist Party’s revolutionary conquest of China rapidly approaching on October 1, the odds are increasing of a violent crackdown (possibly in stages, beginning with the removal of leading voices for peaceful democratic change, such as Joshua Wong and Agnes Chow, or perhaps sooner and more brutally).
There may not be much time left to avert a tragedy. The United States, the UK, and other leading democracies must make clear to China’s leaders (particularly through private channels) that violent repression in Hong Kong will bring severe and long-lasting consequences. At a minimum, we should use the Global Magnitsky Act to impose targeted sanctions (including financial penalties and visa bans) on individuals responsible for the repression, like those a former Treasury Department official has already proposed applying to Chinese officials responsible for the ongoing grave violations against the Uighur minority in Xinjiang. While the trade war is dangerous enough as it is, Beijing’s leaders must know that a violent crackdown in Hong Kong would foreclose any possibility of a return to a more normal relationship with Western democracies. And we must prepare now to wage and win a battle for global public opinion to make the PRC pay a very heavy price in esteem should it use force to suppress peaceful protesters in Hong Kong.
At the same time, in the dwindling time that may be left, we should reach out to diverse elements of the pro-democracy movement in Hong Kong, urging strict adherence to non-violence and a willingness to negotiate and compromise. Being careful to avoid any language that might be seen to justify or excuse a crackdown, foreign friends of the Hong Kong democracy movement should try to deflate dangerous illusions. Kristof concluded his column on Wednesday with this haunting reflection: “In the run-up to the massacres of 1989, idealistic protesters often told me that their cause was invincible. And then I watched tanks roll over righteousness.”
Human Rights Problems a Commission Won’t Solve
Editor’s Note: This essay is the second in a series on American Ideals and Interests. The first essay, Tod Lindberg’s “Moral Responsibility and the National Interest,” can be found here.
Secretary of State Mike Pompeo’s launch of a new Advisory Commission on Unalienable Rights has raised more questions than answers. For starters, what is driving the need for such a commission? What role will it play in policy? How were its members chosen? And how will it address human rights problems created by the Trump Administration itself through its affinity for authoritarian leaders and its actions here at home?
“I made clear that the Trump Administration has embarked on a foreign policy that takes seriously the Founders’ ideas of individual liberty and constitutional government,” Pompeo said at the launch July 8. “Those principles have long played a prominent role in our country’s foreign policy, and rightly so. But as that great admirer of the American experiment Alexis de Tocqueville noted, democracies have a tendency to lose sight of the big picture in the hurly-burly of everyday affairs. Every once in a while, we need to step back and reflect seriously on where we are, where we’ve been, and whether we’re headed in the right direction, and that’s why I’m pleased to announce today the formation of a Commission on Unalienable Rights.”
Invoking former Czech dissident and human rights champion Vaclav Havel, who became President of his country after the Velvet Revolution, Pompeo warned that “words like ‘rights’ can be used for good or evil; ‘they can be rays of light in a realm of darkness . . . [but] they can also be lethal arrows.’”
“We must, therefore, be vigilant that human rights discourse not be corrupted or hijacked or used for dubious or malignant purposes,” Pompeo added, without saying by whom.
He lamented that “more than 70 years after the Universal Declaration of Human Rights, gross violations continue throughout the world, sometimes even in the name of human rights. International institutions designed and built to protect human rights have drifted from their original mission. As human rights claims have proliferated (emphasis added), some claims have come into tension with one another, provoking questions and clashes about which rights are entitled to gain respect. Nation-states and international institutions remain confused about their respective responsibilities concerning human rights.”
Again, Pompeo leaves unanswered who the guilty parties are. Is it those who have been repressed and seek equal treatment under the law? Or is it authoritarian regimes that pretend to observe human rights but in reality commit abuses on a regular basis?
Pompeo’s argument that human rights have “proliferated” is true to some extent, but one certainly hopes that Pompeo is not implying that equal rights for LGBTQIA people, women, people with disabilities, children, and minorities are a bad thing. Might he have had in mind LGBTQIA rights, and those who do not believe, for example, that gays should have the right to marry, when he said that “some claims have come into tension with one another, provoking questions and clashes about which rights are entitled to gain respect”? Most of the members whom Pompeo has chosen to serve on the Commission appear to lean against gay marriage as a right. Is the Commission going to review that?
Pompeo went on to lay out his vision for the Commission:
I hope that the commission will revisit the most basic of questions: What does it mean to say or claim that something is, in fact, a human right? How do we know or how do we determine whether that claim that this or that is a human right, is it true, and therefore, ought it to be honored? How can there be human rights, rights we possess not as privileges we are granted or even earn, but simply by virtue of our humanity belong to us? Is it, in fact, true, as our Declaration of Independence asserts, that as human beings, we—all of us, every member of our human family—are endowed by our creator with certain unalienable rights?
Of course, the line in the Declaration of Independence to which Pompeo refers states: “We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights. . . .” In the Constitution that was ratified 12 years later, Article I, Section 2 counts enslaved persons as “three fifths of all other persons.” This was not changed until ratification of the 14th Amendment in 1868. Similarly, women were denied the right to vote until the 19th Amendment was ratified in 1920.
The evolution (or proliferation, to use Pompeo’s term) of rights in this country has fixed problems dating from our founding. Surely Pompeo would not disparage these rights, cemented decades after our founding, as a “proliferation.”
U.S. human rights policy over the years has tended to emphasize political rights and civil liberties over economic and social rights. Some in Europe, and even some here in this country, argue that it is a human right to have health coverage, to have a job, or to own a home. Many Americans would disagree with such claims, but if Pompeo wants to review and debate these matters, the composition of his commission may not be well suited to do so.
Open Letter Disagrees
On July 22, more than 350 human rights organizations, activists, and former officials (including this author) released a letter criticizing Pompeo’s decision to create the Advisory Commission on Unalienable Rights. Among the criticisms in the letter is that “Almost all of the Commission’s members have focused their professional lives and scholarship on questions of religious freedom.” While religious freedom unquestionably is a fundamental right, the Open Letter goes on to note that a number of members of the Commission are “overwhelmingly clergy or scholars known for extreme positions” opposing LGBTQIA rights.
The chair of the commission, Mary Ann Glendon, Learned Hand Professor of Law at Harvard and former Ambassador to the Vatican in the George W. Bush Administration, sought to allay concerns about the commission in a podcast with Lawfare earlier this month. And yet her public stance opposing gay marriage, for example, has stirred controversy. “What same-sex marriage advocates have tried to present as a civil rights issue is really a bid for special preferences,” Glendon has said, a view shared by a number of other commission members.
Pompeo identified other members of the commission: Russell Berman, Peter Berkowitz, Paolo Carozza, Hamza Yusuf Hanson, Jacqueline Rivers, Meir Soloveichik, Katrina Lantos Swett, Christopher Tollefsen, and David Tse-Chien Pan. Some have solid records on human rights; others have been dismissive publicly of human rights abuses committed by so-called friendly regimes such as Saudi Arabia and the UAE.
In his July 8 announcement, Pompeo acknowledged the role of the 1948 Universal Declaration of Human Rights. “An American commitment to uphold human rights played a major role in transforming the moral landscape of international relations after World War II, something all Americans can rightly be proud of. Under the leadership of Eleanor Roosevelt, the 1948 Universal Declaration of Human Rights ended forever the notion that nations could abuse their citizens without attracting notice or repercussions.”
The Universal Declaration lays out universal freedoms by which all countries should abide. These include freedoms of expression, association, assembly, and belief. Citizens should have the right to choose their own leaders through free and fair elections. Rule-of-law systems with checks and balances and independent institutions ensure protection of these human rights. A free and diverse press acts as an additional check on power, as does a vibrant civil society. At root, all people are created equal and should be free from discrimination regardless of race, ethnicity, religion, sexual orientation, or political views. In a hundred words, I’ve just outlined what the commission likely will spend countless hours pondering.
Finally, the advisory commission will not fix larger problems of the Trump Administration’s own making.
Well into the Trump Administration’s third year in office, for example, there is still no Assistant Secretary for Democracy, Human Rights, and Labor (DRL). Created by Congress in the late 1970s, DRL is responsible for advancing the cause of democracy and human rights around the world. Robert Destro was nominated for the Assistant Secretary position late last year and, after a rough hearing this spring, has still not been confirmed. While some responsibility for this rests with the Senate, it is hard to take the Administration seriously on human rights when it doesn’t fill the top position responsible for this portfolio. (DRL was also excluded from the decision to create the advisory commission.)
Additionally, with three country exceptions (Cuba, Venezuela and Iran—and maybe a fourth, Nicaragua) and one thematic exception (religious freedom), this Administration has done a woeful job in advancing human rights. On Cuba and Venezuela, the Administration has taken a tough stand on human rights abuses. It has taken a similar stance when it comes to Iran as part of its larger hardline approach to the regime there. It used to highlight human rights outrages in North Korea, until President Trump “fell in love” with North Korea’s brutal dictator Kim Jong-un.
Pretty much everywhere else, however, the Administration has been silent about, if not complicit in, human rights abuses around the world. President Trump’s whitewash of the role played by Saudi Crown Prince Mohammed bin Salman in the killing of journalist Jamal Khashoggi and in the massive human rights abuses committed in neighboring Yemen with U.S. military weapons ranks at the top of the list.
Rare comments aside, the Administration has said and done precious little about the cultural and ethnic cleansing of Uighurs in the northwest part of China or the broader human rights crackdown in China, the worst there in decades, under President Xi. It seems worried that criticism on human rights might damage fragile trade negotiations. (To his credit, Pompeo described China’s treatment of the Uighurs as the “stain of the century.”)
Trump has taken a very hands-off approach to the inspirational protests in Hong Kong. “The Hong Kong thing is a very tough situation,” Trump told reporters on August 13. “I hope it works out peacefully. I hope nobody gets hurt. I hope nobody gets killed.” He should be warning authorities in Beijing of a strong response to any bloody crackdown there.
The Administration has been silent while Turkey, under President Recep Tayyip Erdoğan, holds the unenviable distinction of imprisoning more journalists than any other country. Trump embraces brutal strongman leaders like Egypt’s al-Sisi and the Philippines’ Duterte and has dismissed criticism of Russian President Putin’s human rights record by arguing that the United States is “not so innocent” either. And after the biggest protests in years in Moscow, the Administration has said virtually nothing about the brave Russians who risk arrest, beatings, and possibly worse to demand a level political playing field in upcoming Moscow elections and to protest a thoroughly corrupt, increasingly authoritarian leadership.
For the past 13 years—going back to the days of the Bush Administration—Freedom House has documented a decline in political rights and civil liberties worldwide. The Putins, Kims, Castros, Maduros, Erdogans, al-Sisis, and Xis of the world would commit human rights abuses anyway, but Trump’s regular attacks at home against the media and journalists and his coarse demonization of and racist tweets against his political opponents and others give them succor and cover. His Administration’s appalling treatment of people seeking to enter this country through the southern border, including the separation of children from their parents, and the President’s polarizing and divisive rhetoric undermine the U.S. image as a shining city on a hill.
I criticized the Obama Administration’s human rights record and acknowledge the shortcomings of the Bush Administration, but the current Administration’s rhetoric and actions are demoralizing human rights activists around the world and damaging the cause of freedom. No advisory commission is going to fix that.
August 29, 2019
Moral Responsibility and the National Interest
In his 2011 Presidential Study Directive 10, Barack Obama declared, “Preventing mass atrocities and genocide is a core national security interest and a core moral responsibility of the United States.” He sought, in this area of humanitarian concern at least, the unity of moral responsibility and national security interest in a policy of prevention. He briefly elaborated his reasoning as follows: “Our security is affected when masses of civilians are slaughtered, refugees flow across borders, and murderers wreak havoc on regional stability and livelihoods. America’s reputation suffers, and our ability to bring about change is constrained, when we are perceived as idle in the face of mass atrocities and genocide.”
It seems unlikely that Obama’s rhetoric here did much to persuade anyone who was not already convinced about the importance of humanitarianism—that taking action to prevent mass atrocities is a sound priority for U.S. policy, indeed, a “core” priority. To say “our security is affected when masses of civilians are slaughtered” is merely to restate the proposition that prevention is a national security interest, which it may or may not be. That is the question. When “refugees flow across borders,” presumably fleeing violence, this could clearly affect U.S. national interests in some cases—but not necessarily in all cases. Atrocities certainly “wreak havoc” on local stability and livelihoods, but they may or may not have regional effects the United States is obliged as a matter of national interest to care about.
As for the reputational damage that idleness in the face of mass atrocities supposedly causes the United States, that would seem to hold mainly for those who already believe the United States should take action. The moral authority of the United States as an opponent of genocide and mass atrocities would indeed be compromised as a result of a failure to take preventive action when possible. But whether the United States has or should seek such moral authority is another question. If you have concluded that the United States has neither a moral responsibility to act to prevent atrocities nor a national interest in doing so—either in general or in a specific case—then you are likely to be willing to ignore claims about damage to your reputation coming from those who disagree with you. As for idleness constraining the ability of the United States “to bring about change,” how does it do so? One could argue—in fact, many do argue—that refraining from unnecessary humanitarian military intervention keeps America’s powder dry for those occasions when the use of force is necessary according to criteria of national interest.
President Obama, in short, was preaching to the choir—those who share his view about the “moral responsibility” of the United States to take action. That’s not necessarily a bad thing to do, but it offers little to those who would like to understand why the prevention of atrocities by and against others is something the United States must undertake as a matter of national interest. Obama offered no more than an assertion of national interest, leaving us with a humanitarian imperative one could accept for moral reasons or decline for practical ones—reasons of state, i.e., national interest.
Worse, Obama formulated his statement in such a way as to evade perhaps the hardest question arising out of this consideration of American moral responsibility and national interests: What happens when taking action to prevent atrocities actually conflicts with perceived U.S. national interests? This contradiction was nowhere more apparent than in Obama Administration policy toward Syria, where the prevention of atrocities came in second to the national interest Obama perceived in avoiding American involvement in another Mideast war.
Now, one could perform a rescue mission on Obama’s rhetoric by noting that he claimed preventing atrocities was “a” core national security interest and “a” core moral responsibility—not “the.” His statement thus implicitly acknowledges other such “core” interests—which, of course, he left unspecified. Presumably, these core interests and responsibilities may at times conflict with each other, and in such cases, Obama has provided no guidance on how to resolve the conflict. A cold-eyed realist such as John Mearsheimer could say that national security trumps or should trump moral responsibility in all such cases: There is nothing “core” about moral responsibility when the chips are down. If that’s what Obama was really saying, then one could chalk it up to posturing—a president claiming moral credit for seeming to take a position he has no real intention of backing up with action.
But this is a willfully perverse reading of Obama’s statement. He did not say what he said in order to relieve the United States of all responsibility for taking preventive action with regard to mass atrocities. On the contrary, his intention was plainly to elevate the importance of such preventive action within the government. His statement came in the context of the establishment of a new Atrocities Prevention Board, an interagency body that would meet periodically to assess risks in particular countries and develop policies to mitigate them.
Meanwhile, the signal contribution of the Trump Administration to date on the relationship of moral responsibility to national interest has been what one might describe as the cessation of moralizing. President Trump himself stands apart from his modern predecessors in generally eschewing appeals to morality in his public comments, preferring to justify himself by recourse to a kind of callous pragmatism. Yet this tough-guy act entails a bit of posturing of its own. Recall that Trump was visibly moved by children suffering in Syria from a chemical attack by the Bashar Asad regime—to the point of authorizing a punitive military strike. Even a document as interest-focused as his 2017 National Security Strategy gestures at strengthening fragile states and reducing human suffering. And at Trump’s National Security Council, Obama’s Atrocities Prevention Board is being rebranded as the more modestly named Atrocity Warning Task Force, but its function appears to be substantially the same.
While putting “America first” seeks to subordinate moral responsibility to national interest, the moral aspect of policy choices never entirely goes away. In fact, the President seems to take the view that U.S. moral authority has its true origin in putting America first—and being very good at it. Without the strength that comes from serious American cultivation of its security interests, moral authority means nothing. And while many argue that Trump is sui generis, it is hard to miss that the current Commander in Chief has tapped into a kind of popular moral fatigue that was already brewing under Obama.
The tension between moral responsibility and national security interests is real, and it cannot be resolved either by seeking an identity between the two or by chucking out moral considerations in their entirety. The key to making sense of Obama’s sweeping statement is to view policies of prevention not as either a matter of moral responsibility or national security interest, but always as a matter of both. There is no separable national security argument about how to handle cases in which large numbers of lives are at risk without considering the moral implications of doing so or failing to do so, nor is there a moral argument that can govern action apart from national security interests. As a practical matter for policymakers, the question of what to do always takes place at the intersection of moral responsibility and national security interests.
Generally speaking, there are three broad categories of ethical reasoning in deciding what one should do. As applied to the United States or other governments, a “consequentialist” perspective asks whether the likely outcome of what we do is good for our country and our friends and bad for our enemies; a “deontological” or rule-based ethical perspective tells us to do the right thing; and a “virtue-ethics” perspective asks us to take action that reflects our values and will reflect well on us as a country. Moral authority, and with it “moral responsibility” of a “core” nature or otherwise, is mainly a matter of the latter two categories of normative reasoning. Perpetrating atrocities is wrong, and those with the capacity to stop it should do so: That’s a general rule of conduct. Because the Declaration of Independence founded the United States on the principles of the “unalienable Rights” of all human beings to “Life, Liberty and the pursuit of Happiness,” Americans should take action where possible to secure those rights for others when they are being violated on a mass scale; our self-respect requires us to take action rather than turn away: That’s a virtue-ethics perspective.
But something else is already implicit in these imperatives, at least at the extreme where the United States contemplates taking military action to halt a genocide in progress. Even in response to genocidal activity, no doctrine or rule can oblige the United States to take action that would have suicidal consequences. Nor does self-respect properly understood ever demand suicidal policies. The Latin maxim “Fiat justitia et pereat mundus”—“Let justice be done, though the world perish”—may be an appropriate point of view for an advocacy group working to promote such desired ends as human rights and accountability for abusers with as little compromise as possible. But the principle is no basis for government policy. The United States would not invade Russia to halt genocide against the Chechens, for the simple reason that the possible if not likely consequence of doing so, nuclear war, would be worse. So consequentialism is already present in the rule-based and virtue-based normative declarations about supposed American moral responsibilities; in the preceding paragraph, it arises in such phrases as “those with the capacity to stop” atrocities and “where possible.”
The implicit consequentialism in even a “core moral responsibility” also rescues the rules- or virtue-based normative arguments from a frequently voiced criticism, namely, that they cannot be consistently applied in the real world. Of course they can’t! Once again, we are reasoning about the most extreme form of political violence: mass atrocities currently under way and whether to take action to halt them. The fact that a military response in one such hypothetical could lead to nuclear war does not mean that a military response in all such circumstances would be devastating. Therefore, the fact that one must refrain from responding in certain circumstances out of prudential considerations does not mean one must refrain in all circumstances. The moral reasoning that leads to the conclusion that one should take action is not rebutted by the conclusion that taking action in a particular case is too dangerous. One should take action where one can take action with due regard for prudence. The moral position does not become hypocritical just because it can only be realized imperfectly.
Critics often deride “idealism” of the Wilsonian sort (or the Kantian sort) as wildly impractical. So it can be. But if we consider consequences at the opposite end of the spectrum from nuclear war—that is, where the risks of taking action are negligible to the vanishing point—then we still face a moral question about whether or not to act. And in such circumstances, why wouldn’t the United States act?
Here, a useful analogy is the response to a natural disaster abroad: an earthquake or a tsunami. Offering humanitarian assistance through the U.S. military, for example, is not entirely without risk, but in many circumstances the risk is minimal—perhaps little more than the risk accompanying routine training missions. Because we place a value on preserving human lives, we extend the offer. This is “idealism” in action as well. An American president would be hard-pressed to give a speech to the American people explaining why the United States has no reason to care about lives lost abroad to a natural disaster and no reason to extend assistance to survivors to prevent further loss of life.
Most policy questions, of course, arise in the context of risk to the United States that falls between “negligible” on one hand and “nuclear war” on the other. When atrocities are “hot”—ongoing or imminent—the United States must and will always weigh the risks of taking action to stop them. President Obama’s statement in no way obviated such a necessity. Nor will the calculation ever be strictly utilitarian in the sense of the greatest good for the greatest number: Any American president will put a premium on the lives of Americans, including members of the armed services. To make these common-sense observations is to lay the groundwork for a conversation about whether the United States may be prepared to intervene in a particular situation involving atrocities. There is no “doctrine” that can provide the answer to the question of risks in particular circumstances. Policymakers must and will assess risk on a case-by-case basis.
Moreover, there is no reason to believe policymakers will all arrive at the same conclusion in a given situation. There will likely be disagreement over most cases between the extremes, and some of it will be the product not of differing technocratic assessments of risk but of the predispositions policymakers bring to their positions. These predispositions may be ideological in character or may simply reflect different tolerances for risk affecting individuals’ calculations. Observers often say that the decision to take action is a matter of “political will.” That’s true, but it’s not typically a matter of conjuring political will out of a climate of indifference. Rather, it entails acting against the resistance of opposing political will. When it comes to questions of humanitarian intervention, there will almost always be stakeholders in favor of inaction. Their justification is unlikely to be articulated on the basis of indifference to the lives at risk. Rather, it will be based on the contention that the likely cost is too high—a consequentialist argument based on national interest.
In practical cases, the only thing that can overcome such consequentialist calculation is a moral case. We have seen that such a moral imperative is implicit in the modern reading of the founding principles of the United States—the “unalienable Rights” human beings have, starting with life, liberty, and the pursuit of happiness. It is no refutation of these principles to note that governments often fail to respect them. Governments should respect them and should enact laws and enforce policies that protect individuals from abuse. Ultimately, the utility of Obama’s statement was not in establishing the “national security interest” of the United States in preventing atrocities as such, but in reminding policymakers of the moral questions involved in and underlying national interests.
Steps toward the institutionalization of this moral perspective—to create within the U.S. government redoubts where officials begin with the presumption that lives are worth saving—are a welcome addition to policymaking processes that often find it easier to pretend that national security interests are indifferent to moral considerations. Obama’s Atrocities Prevention Board represents such an addition. Another addition was Obama’s requirement for the Intelligence Community to make assessments of the risk of atrocities in other countries. Congress has weighed in with the near-unanimous passage of the Elie Wiesel Genocide and Atrocities Prevention Act, which President Trump signed into law in January. The legislation included support in principle for a dedicated interagency prevention process such as the APB, mandated training for select Foreign Service Officers, and required the executive branch to report to Congress on prevention efforts. All of these initiatives are ways of affirming that the human element matters in considerations of national interest, just as the imperatives of national interest in some cases circumscribe acting out of the best of moral intention.
And it is here, finally, that the most important aspect of the prevention/protection agenda comes into sharper relief. So far, we have mostly been looking at hard cases: where atrocities are ongoing, perhaps on a mass scale, and the only plausible way to halt them is through military intervention, with all the risk it entails, and perhaps also with no certainty of success. We have contrasted such an extreme “nuclear war” scenario with nearly risk-free humanitarian disaster relief, while noting that most “hot” atrocity situations fall somewhere in between.
In all these cases, we have been analyzing situations involving ongoing atrocities. A “prevention” policy in such circumstances entails at best prevention of further loss of life—a worthy moral goal subject to prudential calculations of risk. But a “prevention” agenda is much broader. Its purpose is to prevent atrocities, or conflict more broadly, from breaking out in the first place. This process has two elements: first, identifying countries at risk; second, devising and implementing policies to reduce it.
Once again, consideration of the matter through the perspective of a natural disaster is illustrative—in this case, an epidemic. Ebola is an often-deadly infectious disease, an uncontrolled outbreak of which could consume lives on a large scale. Everyone knows this. We also have a general idea of where the disease might break out. Medical doctors and epidemiologists have studied the problem and devised strategies to contain an outbreak. They have briefed public health officials in countries at risk and have sought to overcome any self-interested resistance to acknowledgment of the problem and the need for planning and training to cope with an outbreak.
Political violence is, of course, more a matter of human volition than an epidemic, but some of the structure of the problem is similar. To the extent political conflict has the potential to turn violent but has yet to do so, it resembles a known deadly pathogen to which human beings will fall prey, potentially in large numbers, should it break out. But human beings are not helpless in dealing with such a problem. They need not wait idly by, leaving the possibility of an outbreak to fate. There are strategies for containment that have proven effective in past cases. There are best practices in terms of the standard of medical care, and there are protocols for caregivers to reduce the possibility that they will contract the disease from the patients they are treating. What is more, all involved are aware of the acute danger of the problem, the need to address it seriously, and the importance of learning lessons from past successes and failures.
All of these elements are present in cases of potential conflict, including conflict of the worst sort—mass atrocities and genocide. A fundamental requirement is expertise—first, the equivalent of epidemiologists, individuals skilled in identifying potential sources and precursors to conflict and political violence and strategies to prevent or contain it; second, the equivalent of doctors, those who implement and further refine policies designed to address the sources of risk and to disrupt the precursors to violence. This requirement entails the commitment of resources in developing and coordinating expertise as well as in implementing prevention policies.
A second major requirement is the willingness to put the expertise to use—the political question we have been assessing throughout. With regard to “prevention” in the sense in which we are now taking the term, however, we are no longer looking at the possibility of putting Americans and others in harm’s way, or at least no more so than the risk faced by U.S. diplomats operating in challenging countries. The “risk” is simply the opportunity cost of resources devoted to the cause of prevention—the sacrifice of whatever else might be paid for with dollars going to prevention efforts.
Moreover, just as public health officials look at the dangers of failing to contain an outbreak of disease—another aspect of risk—so policymakers dealing with prevention must look at the potential costs of the failure of prevention efforts. The worst toll of violent conflict comes in the form of human lives lost, but that is hardly the only cost. The resources required to halt a conflict once it is under way and promote reconstruction and reconciliation in its aftermath are vastly greater than the sums required to assess risk and devise and implement prevention policies—a point well understood with regard to epidemics.
Epidemics don’t respect national borders, and neither do some conflicts and potential conflicts. Many potential conflicts, however, are specific to particular countries, and it is unsurprising that policymakers tend to view them through this prism. The history of international politics is more than the sum of all bilateral interactions between governments, but that sum represents the largest component of the history. Organs of the U.S. government that deal with foreign affairs—from the State Department to the Defense Department to USAID to the Intelligence Community to the National Security Council and beyond—are typically organized regionally and then typically by country, as is the interagency process by which they seek to coordinate their efforts to create government-wide policy. The cumulative effect of this mode of organization is to create substantial reservoirs of country expertise inside the government, and this is a very good thing.
But to deal with a potential epidemic, you need not only experts on countries in which epidemics may break out. You need a means of identifying which countries are at greater or lesser risk of epidemic—which means you need experts on epidemics and their causes, or epidemiologists. And you need experts on what to do if an epidemic does break out, public health experts and responders—expertise that cannot wait to be assembled until a breakout actually occurs. And you need experts in how to treat the victims of the epidemic—doctors. In the case of political violence and conflict, you need expertise in identifying potential steps to address drivers of conflict and to disrupt precursors, a task conceptually similar to the development of a vaccine. And of course, once you have these expert human resources in place, you really do need country experts to provide the necessary local context and to adapt general principles and policies to specific cases.
These are the prevention resources the U.S. government needs—and has taken some strides to develop, however incompletely and perhaps without as much systematic attention as the magnitude of the challenge requires. The question of a “national security interest” in prevention in this sense is really nothing other than the question of why we have a government that deals with foreign capitals, regional and international organizations, and other entities abroad at all. The United States operates about as remotely from Star Trek’s “prime directive” as is imaginable. “Observe but do not interfere” is not the American métier (nor, for that matter, was Captain Kirk much good at observing the prime directive on behalf of the United Federation of Planets; his moral sense kept coming into play).
There are numerous ways of conceptualizing aspects of or the totality of the problem of political violence and its prevention. Some of them, in fact, have developed into rich policy niches offering great practical insight and guidance. We began here with an area I have spent some time on over the past 15 years, namely, trying to improve the ability of the U.S. government to prevent genocide and atrocities, as well as to improve international coordination of such efforts. But that’s just one aspect. To name a few more: conflict prevention, post-conflict stabilization and reconciliation, peace-building and promoting positive peace, pursuing the Millennium Development Goals and promoting sustainable development more generally, promoting resilience, capacity-building in fragile states or failed or failing states, promoting human rights, promoting effective governance, halting gender-based violence and promoting gender equity, countering violent extremism, human protection, the responsibility to protect (R2P), and promoting accountability as deterrence.
Practitioners in these areas may object, but I would submit that none of them is quite as advanced in its mastery of the policy questions on which it focuses as the epidemiologists, doctors and public health officials are on the problem of preventing an outbreak of a deadly infectious disease. This is not a criticism, but an acknowledgment of several major differences: First, humanity has had a clear idea of the problem of epidemics for centuries, whereas many of the concepts under the “prevention” umbrella listed in the paragraph above are of fairly recent origin as policy matters. Second, as noted previously, though human and governmental responses to a potential epidemic are volitional in much the same way prevention policies are, the diseases themselves are not; political violence or conflict is always volitional, and human motivations are complicated. Third, the downside risk of an outbreak of a deadly infectious disease is vivid in the public imagination and therefore obviously something worth the serious attention and resources of governments, international and regional organizations, nongovernmental organizations, and expert individuals. Conflict and political violence, prior to their eruption in actual killing, are murkier possibilities.
The prevention agenda is also hobbled by what we must acknowledge as an epistemological problem: How do you prove something did not occur as a result of a given policy choice? The failure of prevention is easy to discern, from Rwanda to Syria. Success is more elusive.
From my point of view, a serious examination of the evidence demonstrates that NATO’s military intervention in Kosovo prevented ethnic cleansing, mass atrocities, and possibly genocide. But this was a case where the atrocities were already under way and subsequently stopped. Likewise, a dozen years later, NATO’s Security Council-authorized bombing campaign in Libya prevented the forces of ruler Muammar Qaddafi from wiping out the opposition in Benghazi and exacting the reprisals he threatened on its civilian supporters.
But some scholarship has questioned whether Qaddafi really intended to engage in mass atrocities, and thus whether the intervention was necessary on its own terms. And, of course, following the fall of Qaddafi, Libya rapidly sank into violent chaos, at considerable human cost, a descent that neither NATO nor its member countries, including the United States, was prepared to prevent. Interventions on this scale are rarely simple and one-off: Save the civilians in Benghazi. Intervening sets in motion a chain of events whose end is rarely known at the beginning.
These, of course, are the “hot” cases. The broader prevention project we are considering here aims to address causes of and derail precursors to political violence well before it breaks out. Its activities take place farther “upstream” from such potential violence. That just makes the success of prevention harder to demonstrate.
Harder, but not impossible. And the farther upstream one is from actual violence, the more closely prevention policies seem to resemble each other across the broad conceptions of the prevention of political violence enumerated above. Once violence has broken out, perspectives may diverge; debate still persists over whether bombing Nazi death camps in 1944-45 would have saved lives in sufficient number to justify diverting resources from the military objective of winning the war as quickly as possible. But in the context of a list of countries ranked by risk of bad potential outcomes in the future, the policy interventions from differing perspectives are similar. Potential ethnic conflict, for example, has shown itself open in some cases to amelioration by programs fostering dialogue between relevant groups, and this is true whether the stated priority is conflict prevention, human rights, gender, or societal resilience.
The fundamental point with regard to a prevention agenda is that a policy toward country X is incomplete without one. The U.S. government, as a matter of national interest, conducts assessments of political risk worldwide, typically at the country and regional level. Two conclusions follow: First, the risk assessment should draw on expertise across the full spectrum of the ways in which political violence manifests. Second, the purpose of risk assessment isn’t merely to avoid surprises; it’s to try to avert potential bad outcomes that are identifiable now.
That we are so far able to do so only imperfectly, with regard both to our processes for identifying and ranking risk and to our policies for mitigating it, is no reason to pretend we don’t care about this moral aspect of our national security interests. It’s reason to work to get better at the task.
August 28, 2019
Ten Ways to Defuse Political Arrogance
The warm glow of being certain that my political views are correct. The thrill of the perfectly formulated gotcha question. The combined feelings of fury and astonishment that my political adversaries could be so stupid, so evil, so misguided. Welcome to our era. We live in the age of arrogance—an unforgiving, intolerant, anger-stoked age of entrenched confirmation bias across groups and of constantly alleged binary political choices in which your position is entirely wrong and mine is entirely right.
This way of understanding those with whom we disagree has become so prevalent that we may come to view it as normal, or inevitable, though it’s neither. Nor is this tumor benign. Believing that my political opponents are either deluded or trying to cause harm destroys the trust on which civil society depends. It wrecks politics and political discussion. It weakens our intellects. It distorts the mission of higher education. It threatens family life. It ends friendships.
What is to be done? The ultimate antidote for political arrogance is political humility, which is a branch of intellectual humility. And so I rise with soft clear voice to sing its praises.
Intellectual humility appears to be a malleable personality trait. We can define it briefly as the capacity for recognizing that a particular personal belief or position may be fallible. Accordingly, intellectually humble people typically understand their own beliefs as subject to further consideration and typically feel willing and able to learn from the views of others, even those with whom they strongly disagree.
For centuries, intellectual humility was understood as a character virtue to be cultivated. Socrates, a founder of Western philosophy, embodied and taught it in classical Athens. In his famous 1748 “Enquiry Concerning Human Understanding,” the Scottish philosopher David Hume expressed one of his principal conclusions this way: “In general, there is a degree of doubt, and caution, and modesty, which, in all kinds of scrutiny and decision, ought for ever to accompany a just reasoner.” Benjamin Franklin, in his warnings about the self-defeating qualities of “dogmatical expression,” describes intellectual humility as one of society’s most useful virtues.
At the personal level, intellectual humility counterbalances narcissism, self-centeredness, pridefulness, and the need to dominate others. Conversely, intellectual humility seems to correlate positively with empathy, responsiveness to reasons, the ability to acknowledge what one owes (including intellectually) to others, and the moral capacity for equal regard of others. Arguably its ultimate fruit is a more accurate understanding of oneself and one’s capacities. Intellectual humility also appears frequently to correlate positively with successful leadership (due especially to the link between intellectual humility and trustworthiness) and with rightly earned self-confidence.
At the social and political levels, intellectual humility is a primary democratic virtue. Many political philosophers across the centuries have insisted on its importance. Why? Because, as the political philosopher Jean Bethke Elshtain put it, “a responsible politics is one that appreciates the limits to our understanding: We don’t know enough and can, in principle, never know enough to advance epistemological and political certitude.”
Indeed, without those habits and commitments associated with intellectual humility—dialogue based on reason-giving, openness to other views, rational argument in the service of truth, and norms of forbearance, civility, and self-restraint—democracy itself is poisoned and can grind to a halt. For this reason, intellectual humility may be the essential cure for the condemn-your-neighbor political polarization now dominating our society.
Analytically, and especially when considered as a character virtue, we can view intellectual humility as a wisely discerned middle ground (the golden mean) between the two extremes of intellectual arrogance and intellectual servility.
Viewed this way, intellectual humility can, but does not need to, lead to the inability to act with courage and conviction. Abraham Lincoln gave his life for the preservation of the Union. Martin Luther King, Jr., gave his for the beloved community. The Czech playwright and political leader Vaclav Havel showed matchless fortitude in the fight against authoritarianism.
Yet these three are among my heroes in large measure because of their intellectual humility. For their highest principles they risked all, but they were never ideologues. They never bragged, never gloated, never considered the conversation closed. They never suggested in word or deed that doubt is the enemy of truth or that humility undermines conviction.
We can similarly view intellectual humility as the wisest balance between, on the one hand, the belief that truth exists and is objective, and on the other, the knowledge that our access to the truth is subjective and therefore partial. Understanding this balance suggests that the search for the truth we revere is best undertaken in recognition of our limitations and in collaboration with others.
Havel once said that he would rather have a beer with someone searching for the truth than with someone who has found it. An important quality of both scientific inquiry and democratic political discourse is the understanding that all can learn from all and that important conversations don’t end.
Finally, intellectual humility is not a freestanding (purely heritable) or fixed human quality. It’s like baseball. It can be done well or badly, and doing it well requires practice and repetition. Nor does excellence typically arise only from internal effort. Like baseball, intellectual humility is most proficiently played as a team sport.
Accordingly, my capacity for intellectual humility depends importantly on a surrounding culture that prizes it and expects it, particularly of its leaders, that institutionalizes it, and that teaches it, especially to the young.
A number of societal conditions are favorable to cultivating intellectual humility. They include:
knowing conceptually what intellectual humility is and how to recognize it in others;
participating in institutions that value openness and flexibility and that tolerate and often welcome uncertainty;
receiving environmental feedback that permits us to understand accurately what we do and do not know;
being exposed to the benefits of intellectual humility, such as improved decision making, better relationships with others, and enhanced organizational and social progress; and
being exposed to societal leaders who model intellectual humility, are admired by others because of it, and whose success is in part attributed to it.
What can be done to help produce these conditions? Here are ten ideas.
Education
Encourage social scientists to conceptualize and measure intellectual humility. (This is already beginning to happen.)
Teach children in family, community, and religious life that consciously cultivating intellectual humility is a way to become both a smart person (with a high “Civic IQ”) and a good person.
Promote intellectual humility on college campuses as a gateway to knowledge and an antidote to politicized higher education.
Politics and Media
Seek the revival of “regular order” (rules and customs intended to produce deliberation and compromise) in the U.S. Congress.
Seek the revival of senatorial courtesy in the U.S. Senate. (The principle here is that attitudes follow behavior. If I dislike you but must pretend otherwise in my external behavior because custom obliges me to use prescribed language indicating respect, I may eventually come to suspect that you deserve respect. Thus acting as if I respect you can increase my respect.)
Replace posturing and publicity-seeking with actual, give-and-take communication in town hall meetings with members of Congress.
Encourage (mild-mannered) arrogance-shaming of those in the public eye acting with gross intellectual arrogance.
Create and seek to make popular a social media code of ethics.
Community Life
Do everything we can to foster social, class, and racial integration.
Create more opportunities for citizens to talk with (not just at) one another across partisan divides.
Will any of this work? Of course it will. My logic is flawless, and my argument is irresistible. And if you can’t see that, you are a bad person who doesn’t care about others. Of this I’m certain.
America’s Anti-Kleptocracy Breakthrough
Over the past year, an extraordinary burst of anti-kleptocracy legislation—much of it intended to counter Russian influence—has taken aim at the tools and tactics of foreign criminals looking to move money through the United States. These bills, which enjoy bipartisan support, seek to shore up America’s moral high ground against the forces of kleptocracy.
Such measures are long overdue. For years, America has acted as arguably the world’s greatest offshore haven. Anonymous shell companies have blossomed, creating entire industries in states like Nevada and Wyoming, while oligarchs and arms traffickers continue to exploit supposedly “temporary” loopholes—now nearly two decades old—to purchase luxury real estate.
But because of the bills now working their way through both the House and Senate—as well as the tireless efforts of activists in and around Congress, including the Helsinki Commission—there is unprecedented wind in America’s anti-kleptocracy sails. Should the bills mentioned below come to fruition, America’s time as an offshore magnet may well be coming to an end.
For decades, American states have accrued fees from company formation, not bothering to ask which criminals, narco-traffickers, or extremist networks are behind the anonymous LLCs spreading like fungus in Reno or Cheyenne. Everyone from Chavista goons to post-Soviet kleptocrats has gotten in on the action, hiding their millions from investigators both foreign and domestic. Delaware, the most notorious American haven, set the tone not only for other states—with Nevada marketing itself as the “Delaware of the West”—but also for foreign jurisdictions like Nevis and Panama.
But the heady days of America as the capital of anonymous shell company formation appear to be coming to a close. Earlier this summer, a bill aimed at ending the practice—the Corporate Transparency Act—passed out of a House committee. The bill, which would force companies to identify their owners, hasn’t yet passed the House, and may yet be bundled with broader anti-money laundering legislation as it moves through Congress. Meanwhile, a pair of Senate bills complements the Corporate Transparency Act; one, the Improving Laundering Laws and Increasing Comprehensive Information Tracking of Criminal Activity in Shell Holdings (ILLICIT CASH) Act, would not only identify those behind anonymous companies but also create a team of expert financial investigators at the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) to develop new tools for countering grand corruption.
Of course, ending America’s role as the world’s greatest font of anonymity isn’t the be-all and end-all of anti-kleptocracy efforts. Nor is the Corporate Transparency Act the only bill being proposed. Instead, the legislative push comprises a constellation of inventive bills, each aimed at a different facet of the kleptocratic process and each representing a bipartisan effort.
One recently introduced bill would beef up the Foreign Corrupt Practices Act (FCPA), which targets American entities and individuals offering bribes abroad. The FCPA was, as the University of Cambridge’s Jason Sharman wrote, the legislation that sparked the “unusual commitment to the cause of fighting international corruption” on the part of the United States. But it criminalized only the bribe-supplier, not the bribe-demander.
Now, those on the receiving end of graft may have to begin looking over their shoulders as well. In August, a bipartisan group of legislators introduced the Foreign Extortion Prevention Act (FEPA), aimed specifically at criminalizing the entirety of the bribery loop. “Currently, a business being extorted for a bribe can only say ‘I can’t pay you a bribe because it is illegal and I might get arrested,’” Rep. John Curtis (R-UT), one of the bill’s co-sponsors, said in a statement. “This long-overdue bill would enable them to add, ‘…and so will you.’”
To be sure, holding people like the former President of Kazakhstan to account when he demands bribes from American companies is a steep hill to climb. But even if the foreign official extorting a U.S.-based company never sees the interior of an American courtroom, an indictment would lay out further evidence for other potential remedies, such as sanctions. (A similar line of logic is at play in the proposed Rodchenkov Act, which would criminalize participation in state-sponsored doping regimes of the kind for which Russia recently received sanction.) “Even if a kleptocrat cannot be immediately extradited, a U.S. indictment serves as a play-by-play of the crime committed that can be used to support additional measures—such as sanctions—and can force transnational criminals to think twice before traveling abroad to spend their ill-gotten gains,” said Rep. Richard Hudson (R-NC), another of the bill’s co-sponsors.
That indictment would also serve as public notice of the official’s corrupt proclivities—a reality that another bill, the Kleptocrat Exposure Act, would also highlight. If passed, that legislation would allow the State Department to publicly reveal which foreign officials (and which members of their families) had been barred from the United States on account of corruption. As with the other bills at hand, this one is bipartisan, sponsored by Representatives Steve Cohen (D-TN) and Steve Chabot (R-OH). “Global criminals and corrupt autocrats—or kleptocrats—seek to spend their ill-gotten gains in the United States, where they can indulge in luxury, pursue positions of influence, and exploit the rule of law, which protects their stolen wealth,” said Cohen. “Our country should not be a shelter for these corrupt individuals.”
Another arrow in the expanding legislative quiver, the Countering Russian and Other Overseas Kleptocracy (CROOK) Act, would take anti-corruption efforts abroad. If passed, the CROOK Act would create a new anti-corruption fund, overseen by the State Department and directed toward countries undergoing democratic transitions (such as Moldova a few months ago, or Ukraine in the aftermath of the 2014 EuroMaidan Revolution). The fund would draw on some 5 percent of the fines levied under the FCPA—good, presumably, for millions of dollars annually—while also setting up anti-corruption points of contact in American embassies. Back in Washington, the CROOK Act would also establish an interagency taskforce to direct related American anti-corruption efforts—effectively creating a first-responder system for rooting out corrupt networks abroad.
Plenty of other bills round out America’s sudden sprint toward transparency, from those barring visas for officials demanding bribes from Americans to others incentivizing whistleblowers with insight into the kinds of kleptocratic networks degrading democracy elsewhere. None of the aforementioned bills have yet become law, and their success isn’t assured. But there’s a reason so much optimism continues to swirl in the world of anti-kleptocracy efforts.
The biggest reason: nearly all of these bills have come with bipartisan backing. In an era of partisan rancor, thwarting kleptocratic networks remains a shared goal. Much as expanding sanctions regimes on Russian actors commands near-unanimous support among legislators, the anti-kleptocracy bills in motion have found a deep well of support on both sides of the political spectrum. That breadth of support stems from the conscious efforts of those pushing the legislation—especially the Helsinki Commission, which is composed of equal numbers of Republican and Democratic legislators.
But the sense that the dam has broken on long-stalled anti-kleptocracy efforts isn’t limited to Congress. The current Administration, for instance, has expanded FinCEN’s Geographic Targeting Orders (GTOs), a program aimed at identifying those behind anonymous real estate purchases in a number of American locales. And contenders for the 2020 presidential election have begun placing anti-corruption policies front and center in their campaigns. Former Vice President Joe Biden has called for an end to anonymous shell companies, while Senator Bernie Sanders (I-VT) has already announced that he would back legislation like the Foreign Extortion Prevention Act. Senator Elizabeth Warren (D-MA) has likewise pledged to “crack down on tax havens” and called for increased “transparency about the movement of assets across borders.”
All told, America still remains a—perhaps the—major hub for all and sundry looking to conceal their ill-gotten wealth. Not only does it remain easier to set up an anonymous shell company in the United States than it is to get a library card, but any number of investments—from hedge funds to real estate—remain exempt from even basic anti-money laundering due diligence. A litany of issues continues to plague efforts to combat America’s role in facilitating kleptocracy, and the bills above shouldn’t be considered a panacea.
But they are, taken together, a remarkable step forward—and a sign that anti-kleptocracy efforts are no longer relegated to a small number of academics and activists trying to sound alarms. We haven’t crossed the finish line yet, but it’s there, in sight. And with the passage of the bills mentioned above, the United States can put an end to its role as a global bastion of kleptocratic anonymity—setting a model for allies and sending a message to corrupt adversaries everywhere.
The post America’s Anti-Kleptocracy Breakthrough appeared first on The American Interest.