Peter L. Berger's Blog
August 17, 2017
Charlottesville and Our Crisis of Political Legitimacy
In some ways, the grimly telegenic events in Charlottesville last week tell us less about the state of the country than meets the eye. Even according to the less-than-neutral Southern Poverty Law Center, membership in white nationalist groups has been in steady decline since 2011. It appears to have been less a “coming out party” for a rising force in American politics and more a desperate bid for continuing relevance. One proud white supremacist bragged to a Vice reporter at the time of the rally that it was the biggest of its kind in over two decades, a boast that sounds ominous on its face but in fact is less impressive after a second of reflection. For one, the event was organized over the internet, with some participants coming to Charlottesville from as far away as Canada; this suggests that the number of proudly and openly racist people in the United States (or even in North America as a whole) remains vanishingly small. For another, if this rally was in fact eclipsed by a larger one in the 1990s, I struggle to conjure up any memories of the earlier demonstration; this also suggests that the felt impact of this rally had little to do with its size and everything to do with a transformed media landscape.
Still, Charlottesville may yet end up as a clarifying moment for the United States. We increasingly seem to have forgotten that one of the keys to a well-functioning democracy is a strong shared sense of purpose—a common identity that unites all our citizens. Cultivating unity these days often takes a back seat to a determined effort to emphasize our differences. And in de-emphasizing unity, all too often it seems that we blithely assume that our “mature” democracy can easily cope with whatever we throw at it. The 2016 election was to an astonishing degree defined by identity politics, and the outcome of such a vote both made something like Charlottesville inevitable and turned our collective reaction to it into a litmus test for the health of our Republic.
Now of course, in one sense, identity politics are nothing new. A violent form of reactionary identity politics has flourished in America ever since Lincoln freed the slaves. The 1960s, however, saw identity politics gradually emerge as a revolutionary force on the Left. Starting with Civil Rights, through feminism, and on to LGBTQ activism today, with each successive breakthrough the logic of the movement has become embedded in the thinking of an ever-wider segment of the Left, to the extent that today it is taken for granted by many. These movements have sought justice for oppressed groups by increasingly relying on mechanisms gleaned from a radical postwar political philosophy explicitly intended to serve as a critique and rejection of the Enlightenment in the shadow of the Holocaust: postmodernism.
One can easily get lost in the minutiae of these philosophies and forget the bigger picture, which is that the politics of postmodernism are ultimately incompatible with liberal democracy. Born as a radical form of literary criticism, postmodernism is a philosophy of competing “narratives” that sees dominant ones violently suppressing weaker “others” as part of an endless zero-sum competition that leaves no room for meaningful political compromise. The struggle ends up being not between ideas, but between groups that have to varying degrees been repressed, each with its own set of contingent “truths.” To challenge any of these truths on objective grounds represents a mortal threat, an attempt by “hegemony”/”patriarchy”/”capital” (take your pick) to “silence” the weak, to deprive them of their very ability to exist. Even at its least violent—when it is not calling for the overthrow of the dominant “narrative” but rather asking for the space to have a thousand (identity) flowers bloom—postmodernism doesn’t allow for any kind of positive, constructive politics. Everything boils down to an absolute struggle between oppressor and oppressed. There is no room for a common positive vision in such a Manichean world.
The above rough sketch cannot be perfectly fair to the life works of the often-confounding continental philosophers we lump together as “postmodernist.” But I hope it at least presents a recognizable snapshot of the way politics are increasingly done by both sides in America today. Yes, both sides—President Trump was essentially correct in calling attention to that fact in his remarkable presser earlier this week, although not in the way he may have thought.
Donald Trump’s presidency represents the most recent mutation and metastasis of an intellectual cancer that has thus far been mostly confined to the revolutionary Left. Many have asserted that Donald Trump’s ascent paralleled the rise of Bernie Sanders, ascribing both politicians’ success to a resurgent populism grounded in economic grievance. While there is some truth to this, it’s perhaps more accurate to see Trump as the perfect counterpart to Hillary Clinton—a champion of the identity politics of white males, an increasingly threatened and marginalized group that had finally adopted the toolset honed over decades by the groups that Hillary Clinton had come to represent. Political scientists talk about increasing polarization in the American electorate; another way to describe the same phenomenon is to say that identity politics is displacing a democracy grounded in a shared sense of purpose. People are having trouble finding a middle ground precisely because identity politics does not admit of compromise.
Charlottesville was itself not an expression of postmodern identity politics. On the Right, it was an attempted show of strength by reactionaries who trace their foul lineage back to Reconstruction, not the sixties. And the counter-protesters on the Left, even smaller in number, had among them a violent fringe calling itself “antifa” that traces its intellectual roots to Marxism, anti-capitalism, and anarchism—all solidly “modern” (as opposed to postmodern) antecedents.
The broader reaction to the event, however, is what deserves our close attention. The first punches may well have been thrown by antifa activists, and it’s this that probably prompted President Trump to say in his initial remarks that “hatred, bigotry, and violence” were coming from “many sides.” But it’s not only Trump’s equivocation as the President of the United States that is notable. It’s the kind of gleeful support his relativism received in conservative media, which saw his stance primarily as a blow against the cultural hegemony of the Left—as standing up for an increasingly “othered” class of white people.
Unlike either Donald Trump or Hillary Clinton, President Barack Obama seemed to understand the dangers of identity politics instinctively, and he ran his campaigns accordingly. Instead of running to be America’s first black President (as Hillary shamelessly did to be America’s first woman President), Obama consciously chose to run only as an American, seeking to represent all of America (“not red… not blue…”). Especially early on in his presidency, he took flak for this non-identitarian positioning from both sides of the political spectrum. From the far Left, he was accused of being “insufficiently black,” or of being too deferential to white sensitivities—an “Uncle Tom” President. On the fringes of the Right (the sewers that Donald Trump comfortably called his home), Obama’s very identity was called a lie; he was accused of being a foreign-born Muslim, and his claims to being an American-born Christian were presented as a vast conspiracy between Democrats, the media, and other shadowy elites to foist an illegitimate President on the nation. Obama’s politics unambiguously tended to the Left, but his politicking avoided identity. The vital importance of this sensibility to the smooth functioning of our democracy is best felt in its absence.
One could easily imagine a President responding to Charlottesville in a way that disavowed both white supremacists and far-Left anarchists, without equivocation. “Racism has no place in the kind of society we are trying to build together,” such a President might say. “Loud displays of bigotry and hatred by those marching in Charlottesville, and their attachment to a past we have long been struggling to overcome, are a sad sight that shows us how far we still have to travel.” “At the same time,” the President might continue, “free speech is a value we hold as dear as equality itself. Those resorting to violence in order to stop its exercise are no heroes. They themselves are un-American.”
That was, more or less, the gist of President Trump’s prepared remarks on Monday, before the fateful press conference, when he tried to walk back his initial statement about “many sides.” He read from a teleprompter through gritted teeth. His delivery was not credible, and neither his supporters nor his opponents were mollified. In any case, on Tuesday Donald Trump doubled down on the postmodern approach, firmly casting his lot with the identitarians, and banking his political prospects on the premise that identity politics is the future.
Would a President Hillary Clinton have fared any better? She would have denounced the white supremacists immediately, and may have tried to seize the moral high ground a few days later by paying lip service to the importance of free speech. Superficially, the trauma of Charlottesville would have been dealt with more quickly, and pundits would have followed up in the ensuing weeks with pious op-eds about how Clinton had managed so expertly to heal the nation’s wounds, and thus become the President of all America.
But would that really have been true? I have my doubts. Given that she would have been elected on an explicit appeal to identity politics, her opponents, too, would be up in arms. The media would try to write it off as indecent partisan grousing, but the truth is our collective crisis of political legitimacy would be no less real—just more submerged. The ideal of a unifying identity still exists as an abstraction in most Americans’ minds, but increasingly it feels like neither side can admit that its opponents can credibly speak for it. That’s not a good place for our democracy, or our Republic, to be.
Racism, of course, is as old as the country itself. But before emancipation, slaves were not part of the political system. Reactionary identity politics, especially in the South, only became possible with the enfranchisement of all.
The post Charlottesville and Our Crisis of Political Legitimacy appeared first on The American Interest.
A Mexican Oil Renaissance Could Thaw US Relations
America’s southern neighbor seemed on the verge of turning a corner three years ago when its then newly elected president Enrique Peña Nieto began rolling through some much-needed reforms for the country, starting with an overhaul of the energy sector. Mexico’s problem was its state-owned oil company, Pemex, which had owned and presided over the country’s oil resources for three-quarters of a century. Inefficiencies grew, as they often do in state-owned enterprises, so there was plenty of excitement both within and outside of Mexico when Peña Nieto began his privatization push.
Unfortunately, things didn’t start out smoothly. Initial lease auctions produced meager to middling results, and Mexico’s reform-minded president started to feel the pressure of denationalizing the country’s oil reserves—resources with deep cultural significance—without much to show for it. This summer, Mexico’s energy prospects brightened considerably after an international consortium of private companies hit it big with a “world-class” find in a shallow water region in the Gulf of Mexico that they had won the right to explore through a government auction. Premier Oil, Talos Energy, and Sierra Oil & Gas estimate that the Zama field they discovered may contain between 1.4 billion and 2 billion barrels of oil. There were two other encouraging signs for the fledgling privatization movement in Mexico on that very same day: Mexico auctioned off 21 of the 24 offshore oil blocks on offer, while the Italian firm Eni upped its estimates of a March discovery to more than 1 billion barrels.
Naki Mendoza, the director of energy for the Council of the Americas, recently wrote about that historic day for the FT with an eye for future growth in Mexico, and for stronger Mexican-American relations:
Since July 12 the government has been anticipating a boost in interest. The National Hydrocarbons Commission has already pushed back the date of Mexico’s next deepwater auction by one month to January 2018 to allow companies more time to analyse the blocks on offer. […]
The discoveries also come at an auspicious moment for Mexico to advance broader political discussions. Announced one month before the start of Nafta renegotiations, they provide a strong impetus to embed the same guarantees of open investment and cross-border commerce that are underwritten by the country’s energy reforms into a revised trade agreement.
President Trump has strained the Mexican-American relationship, to put it mildly, but growth in the newly-privatized Mexican oil industry could help bandage those wounds. Plenty of American producers already have experience drilling for crude in the Gulf of Mexico, and can bring that expertise to bear in underutilized offshore Mexican formations. Mexico has shale hydrocarbons, too, so there’s potential for U.S. frackers down south as well. There’s a path forward here that could help everyone in North America win.
August 16, 2017
Israeli General: We Hit Hezbollah Nearly 100 Times
Every once in a while during the Syrian civil war, a relatively secure sector under regime control will unexpectedly get blown up. Almost inevitably the casualties or apparent target of the blast will be a significant Hezbollah leader or convoy, as in the 2016 explosion at Damascus’ airport that killed Mustafa Badreddine. The strikes have long been assumed to be Israel’s doing, but only now are Israeli officials beginning to confirm the strikes and reveal the scope of Israeli intervention. As Haaretz reports:
Israel has attacked convoys bringing arms to Hezbollah and groups on several Israeli fronts dozens of times over the last five years, a top Israeli military commander has confirmed for the first time. The number of Israeli attacks on such convoys since 2012 is approaching triple digits, said Maj. Gen. Amir Eshel, the outgoing commander of the Israel Air Force. […]
“An action could be an isolated thing, small and pinpointed, or it could be an intense week involving a great many elements. Happily, this goes on under the radar,” Eshel said. Aside from the direct achievement of destroying weapons designated for attacks on Israel, “there is another thing that I believe is very significant,” he says: “We had the good sense not to drag the State of Israel into wars.”
“Escalation to war is trivial in the Middle East,” Eshel told Haaretz, and Israel could have easily been dragged into one if the military made mistakes in its campaigns to foil arms smuggling to Hezbollah. “It is no great trick to be a bull in a china shop. When Israel has a vested interest, it acts irrespective of the risks. I think that in the view of our enemies, as I understand things, this language is clear here and also understood beyond the Middle East.”
Israel’s implicit language—its red lines, you might say—is remarkably clear. Aside from occasional strikes on high-value targets like Badreddine, the majority of reported strikes have targeted arms depots and weapons convoys assumed to be transporting missiles and rockets from Iran or Syria to Hezbollah in Lebanon.
The number of airstrikes involved makes clear that the Israelis aren’t deterring Hezbollah’s efforts to acquire these weapons. Just as overwhelming Israeli firepower hasn’t deterred Hamas, so Hezbollah will continue to try to smuggle in more weapons. As in Gaza, this is a grass-cutting exercise. How effective it has been so far is unknown to the public; certainly Hezbollah’s rocket and missile capabilities have increased significantly in recent years, whether or not that’s the result of transfers from Syria and Iran. But it would be easy to imagine that the arms shipments would have made Hezbollah even more powerful over the course of this war had they not been destroyed by Israel.
The repeated Israeli intervention in Syria also makes the deceptions from former Obama Administration officials about the potential consequences of any military action in Syria in response to President Obama’s red line all the more apparent. In a 2016 essay in Politico defending the decision not to strike Assad, Derek Chollet, former special assistant to President Obama, suggested that a military response to Assad’s chemical weapons attacks would have resulted in “absolute hysteria” and required “deploying substantial numbers of American troops to Syria to secure those remaining chemical weapons depots over which Assad likely would have lost control.” Blasting through strawmen just as fast as he can build them, Chollet goes on to suggest that those who supported a military response to the red line were calling for “overwhelming force” or even taking out Assad altogether.
Of course, we now know that Assad retained significant quantities of chemical weapons. We know that he continued to be willing to use them. And we now know, after President Trump’s missile strike on the Syrian airbase that launched a chemical weapons attack, that it’s possible to punch the Assad regime in the nose without being drawn into an escalatory quagmire ending in regime change.
There is still the dangerous possibility of escalating conflict involving the United States in Syria. But the threat comes almost entirely from our ground presence and support for the Kurds in the east, policies launched by the Obama Administration itself. The chemical weapons red line episode was an embarrassing one for the United States and its allies. Nonetheless, it’s possible that the deal to remove the bulk of Assad’s weapons stockpiles remains defensible—after all, Assad’s use of chemical weapons since then has been limited. But the example of Israeli strikes against Hezbollah makes clear that fears of the United States getting inexorably sucked into the conflict by a military response to the red line were not, and are not, a tenable defense of President Obama’s inaction in Syria.
Charlottesville, Trump, and Our Bitter Politics
Like so many, I’m saddened and deeply troubled by what happened in Charlottesville this past weekend, and its aftermath. And I also worry that more argument about it at this point is unlikely to do much good and may even do harm. Yet silence, somehow, feels cowardly.
Let’s review the basic story to date. An innocent woman lies dead, murdered. Far-right hate groups, for decades essentially exiled from anything but the most marginal participation in our public life, are now being discussed around the world (whether accurately or not) as a viable and perhaps growing presence among us. And the polarization of our society, much of it stoked by our market-share obsessed media—the rancor, the bitterness, the frantic hyperbole, the relentless either/or framing of issues, our fear of and anger at each other—appears only to have been increased by Charlottesville and its aftermath.
I agree with, or at least can understand with some sympathy, many things President Trump said. Left-wing provocateurs do exist; and they, too, use telegenic violence to recruit new members and raise money. Labels such as “alt-right” or “neo-Nazi” probably don’t describe everyone who showed up for the rally. There is more than one side to the issue of the Confederate statues and monuments; indeed there are at least three sides, since some African-American members of a Charlottesville city commission formed by the Mayor to consider the issue favored keeping the statues partly as “teaching moments” for the future.
The President also said yesterday that neo-Nazis and white nationalists “should be condemned totally,” a sentiment for which I’m grateful and with which I fully agree—but which also seems both forced and late.
But here’s the heart of the matter, for me. The great majority of Americans on both sides of the political aisle recognize that, in this land we all love and want to make better, racism exists. It’s deep and it’s serious. It dishonors us, and we need to do everything we can to erase it and put it behind us.
In that light consider: The rally in Charlottesville was planned and carried out by openly racist groups in pursuit of openly racist objectives. These facts should and do cause the great majority of Americans to feel distress, embarrassment, regret, shame, remorse, anger, and a renewed determination to do all that we can to minimize this terrible thing that crawled out of the fever swamps this past weekend to hijack our attention. Almost all of us—liberals and conservatives, Republicans and Democrats—know this and feel this in our bones.
Yet President Trump seems neither to know it nor to feel it. This or that occasional and usually elliptical remark (such as the one cited above) notwithstanding, he seems to have almost no willingness or ability to speak to this topic. Most of the time, he seems almost proud to display what appears to be a kind of insouciance, or willful ignorance, about it. It’s as if, for him, for the world he lives in and envisions, the topic doesn’t truly exist.
Why this is, I can’t say. Perhaps it’s because he’s not very interested in any topic other than himself. Perhaps he believes that addressing the issue forthrightly isn’t smart politics—better to appear always to be on top, always a “winner.” Perhaps he worries about upsetting some who voted for him. Or perhaps he’s simply a racist. I think a credible case can be made for each of these hypotheses.
I do know that, in the late 1950s and early 1960s, William F. Buckley, Jr., the founder of National Review and for many years a leading light of the American conservative movement, made a determined and largely successful effort to expel from the ranks of American conservatives the very type of pathetic fringe hate groups that organized the Charlottesville rally this past weekend, and which the current leader of the American conservative movement, who is also the President of the United States, seems fundamentally willing to tolerate and perhaps even welcome back into the fold.
An innocent woman is dead and some hate groups got our attention. And thanks in large measure to President Trump’s behavior since Saturday, our public discussion of these already-difficult issues is becoming shriller and more hysterical each day, the polarization among us continues to grow, the hate groups appear to be feeling pretty good, and the specter of yet more politically motivated violence haunts us.
You Probably Think This Art Is About You
Walter Benjamin, writing in 1934 about the new medium of moving film and the role of the audience, captured a dynamic that applies to today’s social media users:
“In the end, [the actor] is confronting the masses. It is they who will control him. Those who are not visible, not present while he executes his performance, are precisely the ones who will control it. This invisibility heightens the authority of their control.”
Benjamin wrote those words as a warning about how mass media could be harnessed by the fascist powers ascending at the time. Yet today’s social media user has more in common with Benjamin’s actor than the person she was in the pre-digital era. Whether on Facebook or Instagram, we are all actors performing for an invisible audience, our actions influenced by an amorphous collective that is as likely to include one’s boss as one’s dog walker.
The power of this invisible influence struck me on a recent visit to the Hirshhorn Museum in Washington, DC. I was standing in line to enter an “infinity mirror room” within Yayoi Kusama’s blockbuster retrospective exhibit. The trademark creation in Kusama’s 75 years of artistic output, the cube-shaped infinity rooms immerse visitors in a range of ethereal dreamscapes. Despite filling just 130 square feet, the mirror-lined interiors create a recursive effect that surrounds the visitor in an ever-expanding starry galaxy or field of glowing pumpkins.
When I reached the front of the line to enter an infinity room, a staff member armed with a stopwatch paired me with two teen girls for our foray into “The Souls of Millions of Light Years Away.” The two girls looked crestfallen. They had expected to have the room to themselves, and now their selfie opportunity would be photo-bombed by a stranger in a dreadfully pragmatic skirt suit.
The staff member shut the door, and our allotted 30 seconds in the room began. My two new friends immediately began posing and taking selfies together. I tried to shrink my presence and avoid ruining their photos—a difficult feat in a room of mirrors—as the thirty seconds proved as limitless as the horizon. While I had expected large crowds at the exhibit, I wasn’t expecting large entitled crowds. These girls had arrived with a single-minded sense of purpose, taking selfies as one might pick up a gallon of milk at the supermarket.
Kusama’s artwork focuses on the notion of “self-obliteration,” yet self-aggrandizement would become the defining narrative of the exhibit. Countless visitors, mobile phones in hand, have repurposed Kusama’s magnum opus into window dressing for their personal brands. The ubiquity of this new behavior has inspired numerous guides on how to take the “perfect infinity room selfie,” and even a cautionary article in Insurance Journal, warning museum insurers about the spike in damaged art due to selfie-takers. During the exhibit’s run at the Hirshhorn, social media accounts from DC to New York were flooded with Kusama selfies. The trend will continue as the exhibit begins a five-city tour.
A selfie can be more than just a selfie. In the gallery setting, selfie-taking subverts a pact that has existed between museums and visitors since the Enlightenment Era. Museums offer a transformational experience and communion with creative genius in exchange for the focused attention of their visitors. But when we walk through a gallery today, we are accompanied by our invisible audience and the lure of self-presentation in the digital era. The average museum visitor spends seven seconds in front of an artwork—how you choose to spend each second counts.
The first public museums were created to embody nationalist ambitions and the Enlightenment values of reason, progress, and universal rights. The British Museum was founded in 1753 to house the collection of curiosities amassed by naturalist Sir Hans Sloane, some 71,000 artworks, specimens, and antiquities. The vast collections stood as a testament to man’s genius, nature’s diversity, and the acquisitional might of the British Empire. At the time, most similar collections were privately held and required a letter of introduction for admission. The British Museum, on the other hand, welcomed “studious and curious Persons” of all ages and social classes.
Across the English Channel, the young revolutionary government of the French Republic followed suit. Months after the execution of Louis XVI in 1793, the gates of the Louvre palace were thrown open. Visitors were treated to the vast collections of art formerly owned by the church and royal family, and, within a few decades, the treasures that Napoleon Bonaparte plucked from his conquered nations. The Louvre seemed to reify the particularly French assertion that true authority was no longer founded in doctrine mediated by the church, but in objectively verifiable truths cared for by l’état.
These early museums, with their treasures preserved in massive cathedral-like spaces, stood as the Enlightenment’s secular rejoinder to the religious reliquary. Germain Bazin, a 20th-century curator at the Louvre, observed that the visitor enters a museum hoping to discover “those momentary cultural epiphanies” that give him “the illusion of knowing intuitively his essence and his strengths.” While the collections catered to the prevailing trends of intellectual curiosity and civic pride, the expectations that typically accompany sacred rituals were transferred to these secular spaces. Museums still relied on an inherently religious model of experience to convey the significance of their collections. In exchange for a visitor’s complete attention and renunciation of outside concerns, museums promised a deeply transformative experience.
In her classic essay “The Art Museum as Ritual,” Carol Duncan characterizes the early museums as resembling medieval cathedrals. A hushed atmosphere, with few windows and reminders of the world beyond, insulates the visitor in a heightened space created by the artist and the curator. Citizens entered the public museums as equals, given a respite from the pressures and stratification of everyday life, just as a church might welcome all worshippers. Similar to a pilgrim following the path of a relic-studded ambulatory, a gallery visitor would follow a prescribed path that encouraged moments of rest and contemplation.
Goethe, visiting the Dresden Gallery in 1768, describes his visit in terms that could be used to describe the Met or National Gallery of Art today: “…the well-waxed parquetry, the profound silence that reigned, created a solemn and unique impression, akin to the emotion experienced upon entering a House of God, and it deepened as one looked at the ornaments on exhibition, which, as much as the temple that housed them, were objects of adoration in that place consecrated to the holy ends of art.”
Nearly 250 years later, Goethe’s “holy ends of art” remain the primary justification for why we while away our afternoons at museums. While “artistic genius” may be defined differently today—compare Rembrandt’s The Night Watch to Maurizio Cattelan’s golden toilet at the Guggenheim—today’s art institutions still claim to serve the same function in society. We uphold the Enlightenment Era notion that art offers society a reservoir of profound meaning and a conduit to self-knowledge.
Deeply transformative experiences do not come for free, however. They require the abdication of self, a notion deeply embedded in most religious texts and rituals. Christianity’s view of spiritual salvation hinges on the sacrifice of one’s body, heart, and soul to God. Buddhists believe that we’ll all be much happier if we release our ego and its attendant fears and desires. “With asceticism, wisdom bears fruit,” said Ali bin Abi Talib, known as Shi‘i Islam’s first imam. Millennia of tradition assert the same thing: if we’re going to receive the profound lessons of our mystic guides, we must first shed our cloaks of conceit and vanity.
Mobile technology encourages us to forego the Enlightenment Era experience and its accompanying promise of profound self-knowledge. With the invisible audience of social media always lurking in our mobile phones, we are tempted to permanently affix a scrim of personal narrative over the artwork we see and experience.
Do art selfies correlate with lower levels of engagement with the artwork? I examined a sample of public Instagram posts taken by visitors to the Hirshhorn, comparing 100 photos from the viral Kusama exhibit to 100 photos from the concurrent Hirshhorn Masterworks exhibit. How would Kusama stack up against the best works by Joan Miró, Edward Hopper, Gerhard Richter, and Constantin Brancusi?
At the Kusama exhibit, I discovered that self-representation was the overwhelming motivation behind Instagram posts, at the expense of any deeper exploration of Kusama’s artwork. A full 70 percent of posts tagged with #InfiniteKusama were art selfies. Only one-quarter of Kusama art selfies included an observation about the exhibit; the other three-quarters featured the kind of self-referential Insta-speak that requires minimal reflection, such as #currentmood or #vibes. Visitors to the Masterworks exhibit, on the other hand, were 14 times less likely to post art selfies than Kusama Instagrammers. These visitors posted selfies in only 5 percent of their Instagram posts, and were twice as likely to supplement their selfies with observations about the artwork.
Selfies present a catch-22 of conflicting intentions. Increasingly, we take selfies to signify that we’re in the presence of something extraordinary. A Paul McCartney concert is more likely to warrant a selfie than an afternoon spent scrubbing the bathtub. We are motivated by a desire to let the invisible audiences of social media know that we are affiliated with an exalted object, person, or place. Yet in the process of taking a selfie, we degrade the object of our respect by subordinating it to our self-promotion. We push the artwork behind us in a literal act of giving it the cold shoulder. Upon the altar of an artist’s thoughtful creativity, we offer our own banality. What requires less intellectual engagement, less genuine consideration, than smiling and looking into a camera?
By taking selfies and turning art into accessory, we’re not only robbing ourselves on a personal level—we are shaping the collective experience and depriving our friends as well.
Not long ago, our first impression of a new exhibit would have been influenced by words rather than images. A carefully written preview in The New Yorker or a friend’s comments would be all the advance guidance needed before going to an art gallery. Seduction is predicated on mystery, and a good gallery preview (or word from a friend) might sketch the contours of the exhibit while leaving enough unrevealed to tantalize the imagination.
The joy of personal discovery is the mechanism that imbues art with power. Anatoli Mikhailov, the former rector of a liberal arts university shut down by Alexander Lukashenko, describes what we gain from being present with art. “We can then experience the wonder and surprise that accompany the immediate experience of an artwork, thereby enlarging our horizons and contributing to our development as human beings.”1
Today, when an art exhibit goes viral and sparks a Mass Selfie Event, we have little choice but to absorb image after image displayed in our social media feeds. This advance guard of photos leeches away the “wonder and surprise” we might feel, as every corner of an exhibit is revealed and overexposed. Two weeks into the Kusama exhibit’s three-month run at the Hirshhorn, the rote repetition of selfies had already transformed the exhibit into an “insert-your-face-here” cliché. Before we take one step into the gallery, we have already seen what we have come to discover.
Blockbuster artistic events expose the tension between a collective experience shared on social media and the vivid impact of individual discovery. In other contexts, however, it seems we have found a solution. Popular television shows like Game of Thrones or House of Cards have been accompanied by new social conventions that seek to protect the watching experience for the collective. By providing “spoiler alerts” (advance warning) before revealing juicy details, we acknowledge that the viewing experience relies on the power of those unanticipated moments of storytelling.
On social media, art exhibitions should be handled with the same care as an eagerly awaited episode of television. To relish each art-going experience, we have to protect those moments of joyful discovery and respect the creative progression laid out by artist and curator. How differently would we watch the series finale of The Sopranos, if we knew its abrupt ending even before Tony set foot in the diner? Could we still lose ourselves in the episode’s slowly building tension if we knew that the repeated dinging of the diner’s doorbell and crescendo in Journey’s “Don’t Stop Believin’” marked the swan song of a mafia don?
On the question of “to post, or not to post,” social media applies opposing pressures simultaneously. As with Game of Thrones or The Sopranos, we are capable of developing social conventions that reward restraint over display, to protect the collective viewing experience. What leads us to set aside such conventions and treat gallery-going not as a protected experience, but as the virtual equivalent of a conspicuous luxury item?
Evolutionary psychology reveals the motivations that tempt us to take art selfies and the social pressures introduced by social media. Anthropologist Aimee Plourde, in examining the notion of prestige and its function in society, notes that the act of signaling prestige allows a person to cement or advance her status in a social group. When our friends engage in prestigious activities, it signals that we should “pay particular attention to the details of their behaviors and opinions, to maintain proximity to them, and to desire their friendship.”
Art selfies are the latest tech-enabled manifestation of our search for social status, which according to Plourde, can be attributed to three societal factors. The pressure for prestige is intensified by an enlarging group size; an increase in the complexity of skills demonstrated in the social group; and technological innovation, which advantages those who master the new system. Courtesy of the digital age, we are presented with these three pressures in ever-increasing levels. Walter Benjamin’s “invisible audience,” with its ambiguous membership that we can only conceptualize but never fully know, is now the backdrop to these additional social pressures. Art gets caught up in this never-ending race, which, thanks to the internet’s limitless horizons, takes place in its own infinity room.
Cultural institutions should not rest easy with the selfie trend, even though many do. Public art museums see mobile phone technology as a route to higher levels of visitor engagement that will justify grant dollars and corporate sponsorships like never before.
At the blue-chip Saatchi Gallery in London, a current exhibit reflects the struggle of art’s legitimizing institutions to find purpose in the narcissistic impulse. In an exhibit titled “From Selfie to Self-Expression,” snaps of Kim Kardashian keep company with Gustave Courbet and Rembrandt. Nigel Hurst, the gallery’s CEO, described the exhibit in the same manner one might comment on the questionable cooking of a loved one: “The selfie represents the epitome of contemporary culture’s transition into a highly digitized and technologically advanced age.” It is more of an observation than an exaltation, and damns the exhibit with faint praise.
For better and for worse, by embracing the habits created by technology, galleries can reach new funders. Saatchi’s selfie exhibit was sponsored by Huawei, a Chinese technology producer with a pesky alleged habit of installing spyware in the hardware it sells to Western companies. Huawei’s sponsorship also coincided with the debut of its new camera phone, further complementing the “zero boundaries” theme.
Yet social media and museums do not have to be an unholy alliance, and a series of recent social media experiments are reframing attentiveness and the museum-visitor pact in ways not possible during the Enlightenment Era.
The minds behind the Instagram account @Artwatchers_United are exploring new contours of the artwork-visitor relationship by inverting the art selfie dynamic. Rather than depicting art as a glamorizing accessory, visitors are accessories to the artwork. The account’s “curators” choose photos submitted by their followers, either candid or posed, of gallery visitors locked in a visual and mental interplay with a work of art. A visitor’s gestures, clothing, and expressions all contribute to a Fred-and-Ginger level of human-artwork simpatico, the two locked together in a heightened realm beyond the “invisible controlling masses.” The overall effect is a love note to attentiveness and respect for the creative process.
Some cultural institutions are drifting toward adopting “personal identities” on social media, just as an individual presents a carefully curated account of her life on Instagram. Through the growing trend of #MuseumInstaSwaps, museums are acknowledging the notion that their social media accounts represent their distinct identities, reflecting the interests, priorities, and history of their museum and staff. During an Instaswap, the social media staff of two museums swap the collections that they represent and share online.
Russell Dornan, a social media manager in London who initiated the first Instaswap, describes the responsibility he feels: “Our own interests and skills affect how we talk about the museum activities, the aspects we choose to focus on and the way we decide to interrogate or interpret them.” In one instance, the social media staff of the Cooper Hewitt Museum of Design took over the Intrepid Sea, Air & Space Museum’s Instagram account, and highlighted the beauty and functionality of a wirecutter from the Vietnam War. MoMA’s social media staff wandered the halls of the American Museum of Natural History, choosing to recognize the masterful work of its diorama artists, who painted natural habitats so expressive they lend remarkable vitality to an eighty-year-old taxidermied bear.
For the museums involved in the Instaswaps, and for the gallery-goers chronicled on @Artwatchers_United, the key to success is the same. Self-representation online is anchored in hyper-attentiveness, and a self-awareness of who they are and what their relationship to the art (or museum object) signifies. Acts of curation, on an individual or institutional scale, reflect deliberate choices to enhance certain qualities and omit others, a form of manipulation that can achieve thoughtful ends or obscure them.
As for art-selfie takers eager to be seen, it’s worth noting that you’re being watched in more ways than you think. Artists, those chroniclers of the human condition, have also begun to take notice of the art selfie phenomenon. At this year’s Frieze Art Fair in New York, I wandered the white-walled booths representing over 200 international galleries, observing how attendees interacted with the artwork. Anything that was reflective, brightly colored, or high contrast became a target for selfies.
I spotted street photographer Daniel Arnold near an artwork resembling a large concave mirror, impassively blending in with other gallery-goers. A member of the “invisible audience” made real. I recognized Arnold from his Instagram account, where his photos capture two types of humans: those so genuine they can’t help but be themselves; and those so wrapped in pretense they seek to become anyone but themselves. He spends most days wading through the morass of Midtown, capturing a world that’s a bit inelegant, disjointed, poignant, and captivating. The popularity of his account proves that many people would gladly take his version of normalcy’s strange beauty over the calculated artfulness governing most of Instagram.
Arnold also has an unerring eye for human behavior, capturing unguarded moments of pain, joy, or conceit. Standing next to him and his mirrored lure, I asked what he was doing at Frieze. With a wry smile and camera in hand, he responded: “I’m waiting for people to behave badly.”
1. Anatoli Mikhailov, “The Language of Art: A Saving Power?” in David Breslin & Darby English, eds., Art History and Emergency: Crises in the Visual Arts and Humanities (Yale University Press, 2016).
The post You Probably Think This Art Is About You appeared first on The American Interest.
The Incredible Shrinking President
Given the chance, most Americans would stand in line for hours to shake the hand of the President of the United States. It is a rare treat to engage the leader of the Free World entrusted with America’s fate. Folks like to meet a homegrown guy with gumption, for it takes a brave and steady hand to discharge the broad demands of America’s most important political office. Without effective leadership, the United States is just one country among many.
That’s why President Donald Trump’s lame-footed behavior can, from time to time, be so disappointing. This scion of a New York real estate family does not seem to realize that his role is different now—that he is supposed to lead the country and not endanger its democratic liberties. The recent attempt by young racist hooligans gathered in Charlottesville, Virginia, to use metal pipes and speeding cars as weapons against our common polity is a shocking event. An American President is honor bound to support the Constitution, not to undermine our country’s promise by cosseting racist hooligans who choose to murder their fellow citizens. One would expect that a thoughtful President would call to order the town meeting that is America.
To be sure, Mr. Trump was reared in the wilds of New York—where this writer also was brought up. There is a jocularity and informality familiar to native New Yorkers, a form of verbal card tricks on the sidewalk, a delight in acting as corner man to political wrestlers, an amusement in bluffing and calling a bluff. But folks from Gotham also tend to have a keen sense of fairness and a willingness to see the world for what it is.
So it is flummoxing—indeed, impossible to understand—how any President of the United States could demonstrate such indifference to the grotesque violence and spewed racial hatred seen during the last several days in and around the beautiful university campus of Charlottesville, Virginia. Why would any American President seek to airbrush away the searing sight of snarling proponents of racial discrimination—a crowd of malicious young men focused on beating and even killing peaceful protestors? And how could a President elected by the people of the United States take it to be within his remit to allow the mounting of scarring attacks and mob violence against innocent people—including students, the elderly, women, and visitors—whose only “offense” was to announce that black lives matter?
And more to the point, how could any American President fail to loudly denounce the dangers of hooligan youth movements that are founded on racial prejudice and violence?
The university town of Charlottesville, Virginia, was morally ransacked by the invading “Alt-Right” roustabouts seeking to advertise their contempt for people of color. And while they were visiting, the “Alt-Right” brigade thought it might also be amusing to assault peaceful protestors with bats, rods, and speeding cars. In their evil tour de force, one of their ranks murdered a young woman by crushing her body with an automobile.
Without care, this could become the ultimate symbol of Donald Trump’s presidency. That’s some legacy for a guy who wanted to make it to the big time. His sullen indifference to violence and the suffering of victims will dog his reputation for the rest of his term as President, and ever after.
But there is an added responsibility thrust on the shoulders of the rest of us as well. The so-called “Alt-Right” is a relatively new political concoction offered by a ragtag crew that lacks any proper rearing or productive purpose in life. But it will take serious work to contain the spread of their dangerous bacillus. There are plenty of grievances that working people might have in the present condition of the American economy. A mammoth number of the factory jobs that once provided economic support to American families have now gone offshore to Asia and elsewhere around the globe. The American working class is in desperate straits, a matter that neither political party has addressed. Many states have no form of public assistance for people out of work—so unemployment can bring real despair. If you can’t find a job, your only option may be to beg.
But the entitled young men who wielded clubs and cudgels last weekend in Charlottesville, Virginia, deserve no indulgence. Their talents in hooliganism were fully on display, along with their disregard for public safety and decency. This was an ideological rampage, rather like the fascist riots that drew young Germans into the hellfire of World War II and the Holocaust. The usurpation of a college town by club-wielding ideological homeboys from across the country ranks as one of the most disgusting and cowardly acts in recent American history.
And gun control will surely get another look after this rank hooliganism by the angry young men who marched with torch lights in Charlottesville, Virginia. Open-carry laws were never supposed to allow thugs toting assault weapons to intimidate law-abiding citizens. The right of assembly does not allow hotheads and bigots to have a field day cracking skulls.
American Presidents are judged both by contemporaries and by history. From this vantage, President Trump’s initial indifference to the tragic moral harm caused by this fascist field exercise in ugly and scarring violence betokens something even more troubling: namely, nonchalance about the duty of protection owed to every American.
You don’t need a soothsayer to predict how this presidential lapse will be described in the retrospective view of American history books. The presidential candidate who urged the audience at campaign rallies to “rough up” protestors may finally realize that he was assaulting his own historical reputation.
August 15, 2017
Muqtada al-Sadr Visits the UAE
Two weeks after his trip to Saudi Arabia, the Iraqi Shia cleric Muqtada al-Sadr met the crown prince of the United Arab Emirates on Sunday. Like their partners the Saudis, the Emiratis are looking for alternative options in Baghdad, as Reuters reports:
The United Arab Emirates signaled its desire to strengthen ties with Iraq during weekend talks with influential Iraqi Shi’ite cleric Moqtada al-Sadr as part of efforts by Sunni nations of the Middle East to halt Iran’s growing regional influence.
Sadr met Sheikh Mohammed bin Zayed al-Nahayan, crown prince of Abu Dhabi and deputy commander of the UAE armed forces on Sunday in Abu Dhabi, according to a senior aide of the cleric.
Sadr also discussed ways of improving understanding between the Sunni and Shi’ite branches of Islam, at a meeting on Monday with a prominent Sunni cleric in Abu Dhabi.
The news comes as Saudi Arabia confirms that it will be re-opening its border with Iraq for trade for the first time in 27 years.
While closer ties between Saudi Arabia, the UAE, and Iraq would be great news for the Gulf in the abstract, it’s clear that this maneuvering has to do with Iran. With Iran in a dominant position over Iraq, Saudi Arabia and the UAE are looking for potential Iran-skeptic partners there, and Sadr, whose political movement has moved towards partnering with Iraqi nationalist elements, fits the bill.
Even if the UAE and Saudi Arabia have made too little of an effort too late in the day to make much difference, it’s possible that this relationship with Sadr could at least be a positive outcome for Iraq. Sadr has been a vicious sectarian, and has an awful lot of blood on his hands, but if the Saudis and the UAE can meet with one of the world’s most infamous Shi’a clerics, perhaps Sadr may be willing to become a more “normal” political figure in Iraq. His supporters’ repeated storming of the International Zone and ransacking of the Iraqi parliament last year certainly did not paint a good picture of Iraqi stability.
We’ve written before about how Syria after ISIS will face a potentially deadlier crisis. Having cleared Mosul, the Iraqi security forces are now gearing up to assault Tal Afar, perhaps the largest city still completely under ISIS’ control. Iraq after ISIS will face potential crises, notably the Kurdistan independence referendum in September. But perhaps most importantly, Iraq desperately needs to rebuild. It will take $1 billion just to repair basic infrastructure in Mosul. The total reconstruction costs of Iraq, which had to be retaken from ISIS mile by mile from practically the outskirts of Baghdad to the Syrian border, will be immense.
If the UAE and Saudi Arabia want to compete for influence with Iran they’ll need partners like Sadr. But contributing the resources and expertise of their construction conglomerates might be how to win Iraqi hearts and minds.
Diesel Is Driving Election Narratives in Germany
There’s an election campaign underway in Germany right now, and one of the biggest issues the candidates are wrangling with has to do with something as seemingly mundane as a transportation fuel. Diesel cars, foundational to the German car industry, are no longer the eco-darling they once were, and earlier this week Angela Merkel, who is running for a fourth term, conceded for the first time that Germany will need to move towards banning diesel cars in the near future. That has set off a political furor in the country.
It’s hard to believe that diesel was once considered the greener fuel option, but 20 years ago countries across Europe—Germany included—began to push carmakers to increase sales of diesel vehicles over their gasoline-powered variants. Because they generally get higher mileage, diesel cars were perceived to be the greenest option, and were seized upon in a part of the world where the budding market for environmentally conscious consumers was most fertile. Unfortunately, diesel has a drawback: its tailpipe emissions include far more local air pollutants, and many of Europe’s major cities are battling a surge in smog as a result of the continent’s switch to diesel vehicles.
The diesel problem is especially notable in Germany, which was ground zero for the emissions test cheating scandal that rocked Volkswagen back in 2015 (and a number of other German car manufacturers since then). Carmakers in Europe have a long history of gaming vehicle emissions tests—by using special lubricants, taping up door panels, removing side mirrors and car stereos, and getting creative with tire pressure, companies have been able to “juice” the mileage numbers of their vehicles. That chicanery reached a crescendo when it was revealed that VW had installed special software in its vehicles that could detect when a car was being tested, and make performance adjustments in order to decrease tailpipe emissions and boost mileage.
It was against this backdrop that Merkel made her comments supporting a diesel ban in Germany. Her biggest electoral foe—the Social Democrats’ Martin Schulz—was quick to weigh in, claiming that “the zig zag course of Mrs. Merkel has unsettled unions and has left it unclear that those who caused this crisis will be held responsible.” Schulz criticized Merkel for supporting a diesel ban because of its potential effects on consumers and autoworkers. In a veiled attack on the cozy relationship between Berlin and the German auto industry, he said that German consumers “need certainty that they will not be made to pay for the mistakes of the carmakers and their wheeling and dealing with political organizations, or even agencies of the German government.”
Industry malfeasance occurred under Merkel’s watch, true, and it remains to be seen how much blame for this mess—and the potential impact it might have on hundreds of thousands of German jobs—the German voters will lay at her feet. We’ll find those answers soon enough, but the more important question is how Berlin will balance its strong industrial economic concerns against its environmental idealism.
Diesel turned out to be a major policy mistake: higher mileages are being paid for in thousands of premature deaths due to air pollution, and all the while unscrupulous carmakers are gaming the system to their benefit. But it won’t be cheap or easy to transition away from the fuel, especially in a country that has built an entire sector of its economy around surging demand for this variety of vehicle.
After the Health Care Sideshow
The multiyear effort to reform the U.S. health care system finds itself today in a puzzled swoon. We are stuck between the ever more obvious debilities of the Affordable Care Act—deranged insurance markets, rapidly escalating costs from induced industry over-consolidation and other causes, worse-paid doctors and poorer care overall, and more besides—and the pathetic inability of the Republican-dominated Congress, together with a Republican White House, to do anything about it.
Worse in a way, both the ACA and the efforts to fix or repeal it are not really about health care reform at all. They are a mere sideshow almost entirely about the question of paying for health care—about the insurance system—which is not even close to the same thing. Just as having car insurance doesn’t make anyone a better driver or navigator, having health insurance says nothing about the quality of health care that the insured person is liable to receive. Our political class, which to all appearances has failed even to identify, let alone deal with, the causes of cost inflation in health care, has managed to delude both itself and anyone who pays too much attention to it into missing this simple but critical point.
There is no reason for a medical doctor to belabor again in TAI what’s right and mostly wrong with the ACA—Scott Atlas does some of that here, and Robert Pearl did it definitively several months ago. And there is no point either in replaying the details of the abject failure of the Republican majority in Congress to act on the issue. That exercise would only show that the Republicans are no more capable of devising a policy now than they were when Barack Obama was the one holding the veto pen. The only thing that has changed besides their capture of both the Legislative and Executive branches is that many Republicans are again wary of the President, albeit for entirely different reasons.
Some observers think that, given the present impasse, Congress now has no choice but to work out a bipartisan fix for the worst of the immediate problems we face, even in the absence of significant reform—and that, its current rhetoric of brinksmanship notwithstanding, the White House will have no choice but to approve whatever Congress comes up with. I’m no expert in American politics—just an ophthalmologist—but allow me to voice some skepticism about that scenario. Both parties have used the health care issue to engage in what Ronald W. Dworkin justifiably called political shenanigans—essentially offloading the liabilities and risks inherent in our present dilemma onto their own core constituencies as well as others. That is what the party leaderships have been doing for more than a decade; it is not clear that they know how to stop, let alone how to do anything else. And allow me to repeat that, in any case, no imaginable fix coming out of this Congress will actually address the essence of the challenges we face.
Let us instead look beyond legislative possibilities. What is likely to happen in the real world to the existing American health care system in the absence of significant legislative action? Things will not just stay the same; the ACA and other factors have set an array of trends in motion, most of them negative. And what, in the fullness of time, should happen?
Lost in the funhouse of legislative flailing over health care is the fact that by law the U.S. Secretary of Health and Human Services—former Representative Tom Price, sworn in this past February despite fierce resistance from Senate Democrats—has a good deal of discretionary authority. Ironically, perhaps, the ACA assigns more than 1,200 discretionary powers to the Secretary, far more than before. We have some idea what Secretary Price will do with those powers.
A former orthopedic surgeon, Secretary Price has long taken an interest in the problems of the ACA and of the health care sector in general. In his Empowering Patients First Act of 2015, for example, then-Congressman Price proposed: more reliance on the market by increasing Health Savings Account (HSA) allowances; a tax credit for purchasing private insurance; and state-administered pools for high-risk patients who can’t find affordable insurance elsewhere. The idea here, supported by several independent analysts, is to essentially make government financially responsible for the small percentage of sick people who use a wildly disproportionate percentage of available health care resources so that the private insurance market can again become solvent—that being, in his view, the most efficient way to deal with the insurance aspect of the problem. His bill also would have protected those with pre-existing conditions from being dumped by their insurers.
So what will Secretary Price do now that he has become, by default, the main change agent in the health care picture? He can’t repeal any of the ACA’s major clauses, so he cannot by fiat eliminate its unwise individual mandate, taxes, and cost-shifting expansions. But he can be less enthusiastic about enforcing the individual mandate to purchase insurance. He can allow insurance companies to provide less rich coverage of the ACA’s mandated ten essential health benefits. He can resist subsidizing insurance providers for low-income citizens from general revenue funds. (Subsidies currently being delivered are the subject of a lawsuit brought by congressional Republicans, who claim that President Obama unlawfully appropriated general revenue funds without congressional approval.) President Trump is likely to unilaterally exercise his authority to drop the subsidy, since it will save $430 billion over the next ten years. That alone will hasten the demise of the ACA under the hallowed precept that “the worse it gets the better it gets.”
There is more Secretary Price can do. Trump and House Speaker Paul Ryan had proposed a three-stage revision of the ACA. Two of the major provisions were reform of medical liability on a national basis and opening up insurance company policy sales across state lines in order to stimulate more competition and hence get lower premiums. A third seemed about to flow from Trump’s expressed shock at the high prices of pharmaceuticals, suggesting Administration support for government negotiation of prices. Price cannot snap his fingers and make these things happen, but he can feed the Republican majority in Congress some insights and some data to help them craft legislation to make them happen even in the absence of any explicit ACA repeal.
Of course, very influential lobbies are pitted against all three of these proposals. Nevertheless, given the long lead times required to get anything significant done in the U.S. Federal legislative system, it would not hurt to do some serious thinking about these kinds of reforms now in preparation for the day the situation becomes more pliable. And it has to become more pliable, for the U.S. health care system is speeding toward crisis. As the engineering-inspired aphorism has it, something has got to give.
Ultimately, we will have no choice but to face the real issues in health care reform. On the bright side, some dramatic scientific-technical breakthroughs in diagnostics and treatments are coming, and so are a variety of care delivery innovations, some enabled by new communications technologies and others by better managerial and business-model designs. But to take advantage of the former and to enable some of the latter to scale up quickly, we have to get a better grip on costs.
Health care has exhibited a strong tendency toward cost disease in recent decades. This is not the place to analyze all the factors involved, but the resultant dilemma is clear. Aaron Carroll, a professor of pediatrics at Indiana University Medical School, has cited the “iron triangle” of health care, meaning that health care provision involves a balancing act between cost, quality, and availability. We can improve one or two sides of the triangle, but only at the expense of one or two of the others. As things stand now, health care quality in the United States is very high at its best. Availability is not universal, even with the ACA, but the supply of health care is more than adequate. Cost is the distinguishing problem we face, and it is a very big problem.
In 2015 U.S. annual health care expenditure stood at $3.2 trillion, representing 17.1 percent of GNP. While most of the factors behind rising health care costs are common to advanced societies, the United States is an outlier when it comes to the cost side of the iron triangle. Compared to our 17.1 percent, the health care sector represents 10-11 percent of GNP for most first-world countries, and only about 8.8 percent in Britain.
The health care systems in other first-world countries are varied, but all achieve their cost containment results by being more socialized than the United States. The result in nearly every case is a two-tiered medical system: basic care for the great majority, and a private quality-medical sector for those who can afford it when they need it. This condition is the very rough equivalent of dividing the insurance pool into separate parts, though not in the same way that Congressman Price’s bill would have done: Government-controlled systems in two-tiered health care environments avoid paying for a lot of expensive care that takes place on the private side of the divide, aiding the struggle for solvency. But the question remains: How much and in what ways do the other two sides of the iron triangle—quality and availability—suffer from the cost-containment strategies of socialized medical systems?
Again, there is no need to repeat here what others have made manifest. The basic data you need to answer this question can be found in Scott Atlas’s essay. The gist is that the quality of care is demonstrably worse in the public sectors of socialized systems, and the availability and timeliness of care is usually much worse. This means that if we want to preserve the quality and availability of health care in the United States, we have to get at the cost problem in ways other than by increasing the role of government—state as well as Federal.
How can we do that? The place to begin is to establish the facts as they exist and as they will doubtless trend in the absence of any major changes in the law beyond what Secretary Price can do with the stroke of a pen. But it is no simple matter to track where U.S. health care sector money goes. Donald Trump may be surprised to find out how complicated all this is, but no one else should be.
Health care economists typically divide the sector into five subsectors: physicians and other health care providers; hospitals; insurance companies; pharmaceutical companies; and long-term care providers such as nursing homes, to include expenses on durable goods like wheelchairs. In 2015 the hospital share was 32 percent, physicians and other clinical providers 20 percent, pharmaceuticals 10 percent, net cost of insurance 7 percent, and home health care and nursing homes about 8 percent. Other expenditures included investment (5 percent), government administration and services (4 percent), and other health care comprised of research, personal care, dental services, medical products and equipment, and public health activities (14.8 percent). Physicians experienced an 11.5 percent fall in profit margins from 2014 to 2015, while nursing homes had a 2 percent profit margin decrease. Health plans and hospitals did better, with a 3-4 percent profit margin increase. Drug companies experienced a 21.6 percent surge in profit margins.
These trends are affected by asymmetrical regulation. Physicians, hospitals, nursing homes, and durables providers are price-regulated by the government, which is more likely to result in lower rather than higher prices. Insurance companies must get rate increase approvals from state governments, but they are usually approved due to the insurance companies’ market power. The government by law cannot set prices or negotiate prices with pharmaceutical companies, so that helps explain some of the data cited just above. But whichever side is right about the propriety of pharmaceutical companies hauling in such large profit increases, government-negotiated drug prices would, at a maximum, cut national health care expenditures by 5 percent, or roughly $150 billion per year. Since hospitals are responsible for a third of all medical expenditures, more significant savings might be available here. However, most for-profit hospitals run on a small margin, and anyway 62 percent of hospitals are non-profit, while 20 percent are government run. Perhaps the small size of the private sector here is part of the problem: If 82 percent of hospitals are non-profit, what incentive do they have to increase efficiency, decrease cost, and increase productivity?
They and for-profit hospitals are also fenced in by layers of regulations that have only grown thicker over the years. Public-sector unions have also been able to entrench themselves, draining valuable resources while retarding innovation and productivity enhancements. Some of these unions have become corrupted, a phenomenon most easily seen in the problems with the Veterans Affairs system. Price controls for Medicare and Medicaid are also a problem for hospitals.
In some cases, hospitals have enough oligopolistic power to negotiate favorable Medicaid rates with state governments. Those that do not have an incentive to merge or to form alliances that enable them to negotiate favorable rates, not just with Medicaid but also with the largest insurance companies—which are themselves oligopolies or effectively monopolies in some regions. Since the 82 percent of the hospital market that is non-profit or government-run charges high rates in part to cover various inefficiencies, private hospitals have little incentive to compete with them on price. They charge the same high prices as the less efficient non-profits and, if necessary, compete for market share through advertising and other marketing techniques. Any net income from efficiencies left over after local taxes (which non-profits and government-run hospitals don’t pay) goes to shareholders.
Hospital oligopolies, and the high prices they command, are further reinforced by local and state laws requiring any proposed competing hospital to obtain “a certificate of need.” Although the certificate of need is granted by a government agency, existing hospitals essentially must agree to allow the creation of a new competitor. Naturally, they do not always agree, further limiting competition. Furthermore, physicians are often specifically prohibited from starting competing hospitals. So as far as cost disease goes, the hospital sector is a real mess, but more regulations, more government ownership, and more price controls are not likely to improve the situation, as these are already major parts of the problem.
Physicians and other health care providers are the next-largest sector. Physicians took an 11.5 percent pay cut in 2015 compared to 2014. Some readers may be less than sympathetic because they perceive physician salaries to be too high. Suffice it to say, this is not how doctors see the world.
Consider that, for example, Medicare policy has methodically reduced physician pay, “frog-in-boiling-pot” style, for the past decade and a half. About eight years ago a prominent economist on the Medicare Payment Advisory Commission, the board that advises the Centers for Medicare and Medicaid Services (CMS) on Medicare rates, told me that they regularly reduced reimbursement by 4 percent each year to take advantage of increased physician productivity. Physician productivity was at that time rising at 6 percent per year; instead of rewarding increased physician productivity they were punishing it. They could get away with this because productivity was growing two points faster than reimbursement was being cut, so physicians would still see a modest net increase in revenue.
At that time, too, a physician advisory panel often had significant influence in how the pie was divided up among various medical specialties. But in 2015 the bureaucrats at CMS had to make much steeper cuts in order to defund Medicare by $575 billion, largely to shift those dollars into Medicaid. So they disbanded the advisory panel and cut reimbursements deeply in many surgical specialties.
The net effect of this is that large numbers of physicians have been driven out of private practice by low reimbursement. Some who could afford it simply retired. Most others have become employees of hospitals, universities, and other institutions, where they lose autonomous decision-making power. Some are also forming or joining groups that can charge higher fees and have more leverage with insurance companies. Lower reimbursement levels force physicians to become more rushed, since they must see more patients per hour to make the same revenue. Add to that the data-clerk demands of electronic medical records, and the experience of being a physician not only becomes more impersonal; it also often leaves patients with poorer care and too many unanswered questions.
Meanwhile, a doctor could get the impression that, for any practical purpose, the public health experts advising CMS want to stop rewarding doctors so much for taking care of sick patients and reward them more for documenting that they have tried to keep the patient healthy. If newer physicians get reimbursed as much or more for just recording the fact that they told someone not to smoke or to lose weight than they do for trying to heal and comfort the ill, why focus on healing and comforting? Preventive care is a fine idea, and it can save lots of money, but people will still get sick and need doctors to care for them. So it seems foolish to skew the incentive structure too far away from traditional norms.
All in all, being a physician has become a less rewarding profession in recent years, and financial reimbursement is perhaps the least of the reasons. Price controls and weak market power compared to hospitals and insurance companies have turned most physicians into mere pawns. It is not clear that doing this to the people most knowledgeable about how to deal with illness is a good idea in the long run.
Speaking of insurance companies, they collect and pay out huge sums of money but net only 7 percent of the health care sector—roughly $210 billion per year. Of course, insurance companies are notorious and newsworthy for handing out huge payouts and stock options to executives, so the popular perception differs rather significantly from the reality. Many people think that as long as the American medical system is immersed in profit motives on the part of various actors, prices will always be skyrocketing because greed is part and parcel of human nature. This view is a guaranteed applause line in some circles, but that doesn’t make it true. Consider that the American medical system has always been immersed in for-profit environments, but skyrocketing prices are a relatively recent phenomenon. If the profit motive is really the guts of the cost problem, how to explain this? Don’t strain yourself: You can’t.
In stable markets insurance companies are just like casinos: the law of averages always works out well for them. The ACA, however, has sharply destabilized the market environment by mandating new forms of politically induced risk. The companies must confine administrative fees to no more than 20 percent of premiums (15 percent in large markets), refunding any excess. They must cover unlimited medical expenses for all patients. They must insure medically catastrophic patients even if those patients paid no previous insurance premiums. They must cover the ten essential (and expensive) benefits for every insured. And the varieties of insurance they can offer are by fiat standardized to only three levels and can be adjusted for individual risk only slightly.
Not surprisingly, the companies had a hard time breaking even on this silly scheme despite raising premiums, instituting larger deductibles, and increasing copays where legally possible. Insurance companies have, very predictably, largely lost interest in the ACA, which has only concentrated the industry further. That has left the larger companies to exert their oligopolistic—and sometimes monopolistic—powers. The “big boys” have been able to get away with this to some extent because the companies are regulated by state insurance commissions and need not face out-of-state competition.
The McCarran-Ferguson Act, which exempts insurers from U.S. anti-trust laws, helps too. In other words, local insurers can sit together and decide what they will pay for physician and hospital services. Hospitals and physicians can’t do this. A local group of surgeons that tried to band together to negotiate with the insurance companies prompted the insurers to call the Department of Justice anti-trust division, which scared the presumptuousness right out of those surgeons. So the reason insurance companies are raising premiums left and right is not the profit motive, but the wildly distorted and now thoroughly destabilized market they must navigate in order to survive.
Finally in our list of niches, nursing homes and home health care don’t sound like very promising areas for cost reduction. Their profit margins fell 2 percent from 2014 to 2015, and there is little evidence of industry concentration, meaning there is a lot of competition even in such a relatively small sector. More likely, things will get much worse with regard to cost disease in this sector, for two reasons: Large numbers of Boomers are retiring with inadequate savings, and the homecare industry, where wages are relatively low, is at risk of being unionized. No big savings in view there.
So where does this survey lead us? It tells us that, if we value the quality and access to health care most Americans have, we can get a grip on the cost disease problem only by making real changes in the structure and market environment of all five key sectors of the health care industry. As things stand, Congress and the electorate are polarized on two opposing remedies. Democrats and “progressives” favor single-payer insurance, “Medicare-for-all”, and what some call “Canadian Medicine.” Republicans and conservatives favor introducing more competition, limiting some benefits, and delegating insofar as possible the administration of medical allocation decisions to the states and individual consumers.
At the heart of this difference, probably, is an ethical disagreement: To what extent is health care a universal human right, and to what extent is it a service one may choose to buy according to one’s means, like going to a hair stylist? I would guess that far more people take the former position today than was the case even a decade or two ago, but that just raises another question: Just how much health care, as with just how much education, does that entail? You can get wide agreement that there should be some basic minimum level of care that all citizens receive, but defining what that basic minimum is remains elusive on account of an even deeper philosophical disagreement. Democrats and progressives would set that minimum very high, on the grounds that accidents of birth and luck determine a person’s life station. Republicans and conservatives would set that minimum lower, on at least the implicit grounds that merit and character still matter in determining a person’s station in life.
Philosophy aside (since we will never arrive at a meeting of minds on that level), a more practical consideration should weigh on a judgment about these competing remedies: More Federal government control over the various intersecting markets for medical care will result in more bureaucracy, and with it more regulation, subsidies, higher costs, and corruption. It will also stifle trial-and-error, experimental reform efforts at industry and state government levels. Worse, as already noted, the usual result of greater government control in other countries has been a reduction in some combination of quality and availability in an attempt to control costs.
We can already detect this process in the United States, because we are already at least halfway to having a form of socialized medicine—an in-between place that is the worst of all worlds from a cost-control perspective. Government programs pay for about two-thirds of all health insurance. Medicare, Medicaid, and veterans’ health care account for about 48 percent, and government workers’ insurance programs—not least Tricare for the military—and a range of other subsidies push the total to about 66 percent or higher (these are not and cannot ever be scientifically precise calculations).
Moreover, amidst this two-thirds, and with heavy implications for the rest, government sets reimbursement prices for physicians, hospitals, nursing care, and various medical equipment and supplies. The reimbursement levels do not distinguish quality of service among different providers, because they cannot. After more than a decade of price controls, actual reimbursements for various services now bear little if any relationship to actual patient demand. The Federal government also sets treatment guidelines in Washington despite considerable regional variations in perceived need and the availability of services offered.
It has also imposed mandatory electronic medical recordkeeping. Despite meager financial reimbursement for early implementation, each practitioner in my office lost roughly $330,000, according to a net present value calculation, the day they signed the contract for the computer program. This is a regulatory mandate in the sense that a practitioner could lose up to 9 percent of government reimbursement for not having the required computer system. There are thousands of regulations at all levels of government requiring expensive and time-consuming compliance, coupled with heavy financial, civil, and criminal penalties.
As a result of all this, we by no means have anything approaching either a free or lightly regulated market in health care. Not that markets can solve all problems, but there simply cannot be a rational allocation of health services or any incentive to reduce costs consistent with rational allocation under a system that suffocates markets as much as the present system does.
What to Do?
It takes a book, at least, to spell out in detail the range of things we can do to heal the sickness in our health care markets. So let me just briefly telegraph some of my personal favorite incremental changes to lower costs, increase availability, and improve the quality of U.S. medical care.
First, for insurance, let individuals rather than companies get the tax breaks Congress serves up; let all insurers compete in every state; and provide low- or no-income patients with Medicaid HMO-style care, with increasing choice as their income and insurance payments increase.
Second, for hospitals, let anyone build a hospital as long as it passes inspection; government, “non-profit,” and for-profit hospitals should pay the same taxes and have access to the same bond facilities.
Third, for pharmaceutical companies, change the law so that the government can negotiate prices for those domains of medical care it takes responsibility for; make it legal for citizens to purchase medications from abroad; and allow foreign companies to get FDA certification as to the purity of their medications in order to protect the consumer against counterfeits, impure medicines, and deceptive claims.
Fourth, for physicians, make medical liability no-fault, with retraining or license restriction for offenders; make compliance with complex regulations no-fault, with retraining or restriction if compliance becomes a serious problem; and allow physicians to decide in advance which kinds of patients they will accept under Medicare, rather than face the current all-in-or-out requirement.
Can we really do any of these things in the face of opposition by major lobbies? It isn’t easy. Various corporations and interest groups spent $3.15 billion on lobbying at the Federal level in 2016, and the health-related sector ranked first of all lobbying sectors with expenditures of more than $151 million. Yet there is hope. Over time the cumulative effect of blatant rent-seeking behavior in a democracy can cause voters to rise up, as we now see with the pharmaceutical industry. Congressmen are investigating excessive price increases in generic and off-patent medicines, such as with the EpiPen scandal, and the State of Maryland recently passed a law barring “unconscionable” increases in the price of generic drugs. This kind of democracy-induced pressure hasn’t reached the insurance sector yet, but if enough of the electorate experiences the limited availability or complete absence of ACA insurance carriers, it will—and then Congress will be able to act more decisively against the droning of the lobbyists.
I am persuaded that if we can move back toward a more free-market orientation, costs would go down. Patients with choices who are spending more of their own money would admittedly be faced with making difficult decisions as to what health care services were worth to them, but it is not beyond the ability of medical professionals to counsel such people effectively. Competition would allow the supply and demand curves to more rationally, if still imperfectly, determine prices.
For patients who lack enough money to buy, or simply refuse to buy, health insurance, a non-choice government-conceived and operated system will still be required to provide a minimum of care. But since such patients aren’t spending any of their own money, they can’t participate in market decisions and have forfeited any ability to make choices. Medicaid already serves this function, even if—as every serious analysis of Medicaid outcomes shows—it isn’t very effective.
The choices are ours, and we will make them because eventually we will have to. Whether the ACA will be repealed or not has always been a mere side issue of a much larger set of problems. Repealed or not, the ACA cannot remain unaltered because the reality to which it has contributed is not sustainable—not mainly because of what the ACA addressed, but because of what it left unaddressed.
The post After the Health Care Sideshow appeared first on The American Interest.
August 14, 2017
Iran’s Banks Face Trouble from Within and Without
Banking is perhaps the most profound example of Iran’s economic isolation from the world. Iran remains one of the few countries where international electronic payments are impossible. Foreign tourists (flocking to Iran in ever greater numbers) have to carry cash. Iranian businesses struggle to set up shop abroad, while foreign businesses are wary that they won’t be able to get their money out. That’s in large part a remnant of nuclear and other sanctions against Iran, but it also has a lot to do with the dysfunction of Iran’s own banks. As the Financial Times reports:
“With the adoption of [International Financial Reporting Standards], many major banks have suddenly realised they can easily have losses in their accounts and do not know how to deal with that,” said one senior banker. “Iranian bankers do not understand the concept of compliance and transparency.”
On top of that, interest rates as high as 30 per cent have contributed to a high rate of non-performing loans — for some banks, this can run to as high as 40 per cent of their loans. Without government support, bankers say, many banks would go under. […]
Also contributing to the uncertainty in the sector is the growth of small credit institutions that account for about a quarter of all banking activity, bankers say. Most of them are not regulated by the central bank or any other body. These are often officially or unofficially affiliated to political, religious and military power centres including the Revolutionary Guards. There is no exact figure on their numbers but they proliferated under former president Mahmoud Ahmadi-Nejad, a populist who was in power from 2005-13, and authorities estimate they account for about a quarter of the country’s banking activities.
Opponents of the Iranian regime ought to be able to take pleasure in the struggles of their banking sector. Unfortunately, our schadenfreude is spoiled by one of the most insidious elements of the Iran nuclear deal.
The unspoken critical element of the Iran nuclear deal is that 15 years after its implementation—when most of the restrictions on Iran’s nuclear development are set to expire—the regime will be sufficiently integrated into the global system that it will be willing to negotiate an extension to the limitations. The most optimistic version of this is that Iran by 2030 will have chosen President Obama’s “different path” that would see Iran become a normal international actor. Philip Gordon, former special assistant to the President and White House coordinator for the Middle East, North Africa, and the Gulf region under President Obama, argued as much here in The American Interest in 2016:
We do not know what Iran will look like in October 2030, 15 years from the day the nuclear deal was officially implemented. But we can be fairly certain that the current Supreme Leader (now 75 years old) will no longer be in power and a new generation of Iranians—perhaps less marked by the conflicts of the past—will be in charge. There seems to be at least the possibility that such a new leadership will have chosen the “different path” President Obama referred to.
It’s why all of Obama’s key advisors on Iran—Antony J. Blinken, Jon Finer, Avril Haines, Philip Gordon, Colin Kahl, Robert Malley, Jeff Prescott, Ben Rhodes, and Wendy Sherman—wrote and signed an op-ed in Politico highlighting the election of Iran’s President Rouhani against “hardliners,” by implication casting Rouhani as a moderate or reformer, as he is so often falsely described.
Which brings us back to Rouhani’s banking reforms.
Even if we take the most plausible version of the Obama administration’s defense of the 15-year sunset provisions, the only reason why Iran would be willing to extend the limitations is if the benefits of sanctions relief are so great, and Iran is so closely drawn into the global economy, that its leaders wouldn’t possibly want to risk returning to the status quo ante. For that to happen, Iran’s economy, including its banks, needs to boom.
That’s how you end up with the perverse and embarrassing scene of the United States Secretary of State John Kerry acting as an Iranian trade envoy encouraging European bankers to do business in Iran. It’s also why the Iranians kind of have a point when they say that non-nuclear sanctions violate the nuclear deal.
Though there is some dispute about whether or not Iran is in full compliance with the deal, the IAEA and other nuclear non-proliferation experts and monitors continue to state that Iran remains in compliance. The deal, while poorly negotiated, ought to remain in place so long as they continue to do so. That’s especially true given that President Trump—despite saying in an interview with the Wall Street Journal last month that he thinks he’s going to declare them non-compliant during the next certification review in September—hasn’t laid any of the groundwork in Congress, with our partners in Europe, or with China and Russia for an alternative to the deal.
In imposing new sanctions for Iran’s other bad behavior, Congress will have to thread a very narrow path: punishing Iran without leaving America unsupported if the nuclear deal collapses, while preserving enough economic incentive that Iran will want to extend the deal after 2030. It won’t be easy.
The post Iran’s Banks Face Trouble from Within and Without appeared first on The American Interest.