Peter L. Berger's Blog
February 4, 2019
A Show Trial, and The Death of Putin
On a snowy Wednesday last week in Moscow, a 32-year-old Russian Senator named Rauf Arashukov was rushing to an emergency session of the Senate, accompanied by his personal bodyguards; an extremely important question was to be voted on that day. Russia’s upper chamber of parliament was set to strip Arashukov of his immunity, but he did not know that yet. He was running a few minutes late, but he finally made it to the gloomy Soviet-era building in downtown Moscow. The Senate’s speaker, Valentina Matvienko—the Iron Lady of the Putin regime—announced that the session would be held behind closed doors, and gave the floor to a set of special guests present in the chamber. Attorney General Yury Chaika took the stage and read out a request to strip Senator Arashukov of immunity, charging him with ordering two murders, stealing natural gas from Gazprom, and staging an attempted coup in his native region of Karachaevo-Cherkessia, in Russia’s Caucasus.
The Investigative Committee’s head, Aleksandr Bastrykin, and his deputy sat in the front row, coldly listening to Chaika, very much unlike Senator Arashukov. Having heard the criminal charges, the young Senator jumped from his seat and bolted for the exit, where, alas, law enforcement officers were already waiting, blocking the doors. The strict voice of Madam Matvienko ordered the youngster to come back and sit down. Arashukov obeyed, as if hypnotized, and walked over to Bastrykin, taking a seat next to him. Needless to say, the vote was almost unanimous: only one brave soul defied the consensus by abstaining.
After he had been stripped of his Senatorial privileges, Arashukov was arrested along with his bodyguards and his driver. The siloviki apparently got so carried away executing their duties that they even arrested the wrong person—another Senator’s driver—who was in a car parked next to Arashukov’s. The poor man was later released.
Arashukov is not the first Russian Senator ever to be arrested; some twelve Senators have been charged with criminal offenses in the past ten years, and there has even been one previous case of a Senator being stripped of immunity. But never in modern Russian history has a Senator been arrested so theatrically, right in the middle of a session, with the two highest-ranking silovik officials presiding over the arrest. This was a scene fit for Kim Jong-un’s Pyongyang, not Putin’s Moscow.
The show trial was probably meant to scare or at least impress the public; instead, the performance only made everyone laugh (well, everyone except the other 169 Senators). It was like a scene from Armando Iannucci’s The Death of Stalin, a black comedy about the last days of the Soviet dictator and the power struggle that ensued—a film that Russian censors ended up banning. Having initially issued a distribution certificate for the film, the Ministry of Culture changed its mind two days before the scheduled release date. Officials called the film an “unfriendly act by the British intellectual class” and part of an “anti-Russian information war.” But anyone who saw the film understood the real reason: it depicted Joseph Stalin lying face-down in his own urine as his Central Committee henchmen schemed and intrigued over who would become the next Soviet leader. A slew of high-profile arrests followed. The parallels to Putin’s regime, and the uncertainty surrounding succession, were impossible to ignore. God forbid Russians even contemplate what is forbidden!
And as if they were auditioning for a role in a Death of Stalin sequel, Russian celebrities rushed to distance themselves from the now-toxic Senator, a playboy who had lured them into his inner circle with his lavish and ostentatious lifestyle. A former opera singer, a former ballerina, a Russian pop star, a famous journalist—all had either performed at or been spotted attending Arashukov’s family events as friends, and all now claimed they did not know him and had never even heard of him. Iannucci could not have plotted the scene better: Bastrykin himself was spotted spending quality time at Arashukov’s luxury spa hotel in Karachaevo-Cherkessia. The Investigative Committee told the press that these photos of Bastrykin “do not correspond to reality.”
What certainly does correspond to reality is that Arashukov’s arrest is yet another twist in the ongoing power struggle among the siloviki in Russia. What’s different today, as opposed to a few years ago, is that there have been so many of these struggles that it no longer makes much sense to establish who is against whom and why. It’s everyone against everyone now, with participants picking and changing sides as the angles open up. Arashukov represented a lot of interests: he is considered a close friend of the Chechen dictator Ramzan Kadyrov, and his father was for many years the CEO of a regional Gazprom subsidiary. We may never know whom Arashukov crossed, but what is certain is that law enforcement in Putin’s Russia works mostly as a tool of the powerful. It will only investigate a murder (both of the murders that Arashukov is accused of ordering were committed in 2010) if doing so can bring down a rival.
And that is the sad reality behind the sometimes-comical theatrical flourishes of the Putin regime. The only good news here is that we will probably get to enjoy another black comedy about Russian dictators sometime in the future—hopefully not too long from now.
February 1, 2019
The Arrogance of Public Health Advocacy
Early in my anesthesiology career, I took care of an elderly man who needed knee surgery but who smoked like a chimney. The surgical team feared he would suffer the usual smoker’s complications, so we told him to stop smoking a week before his operation. He refused. In the end we reached a compromise: On the day before surgery he would get by with nicotine patches.
After surgery we noticed him moving all around in bed, craving nicotine, as he had yet to receive his patch that morning. This was actually a good thing, as he risked forming blood clots if he remained immobile. We decided to deny him his patches for a few days to keep him jumpy until he could start physical therapy. Unfortunately, he tricked us: He found a way to sneak cigarettes into his room. His renewed smoking probably caused his incision to heal poorly, since the carbon monoxide in cigarette smoke interferes with oxygen unloading in the tissues. Nevertheless, when I took away his cigarettes he told me to go to hell.
The story is a metaphor for today’s counterproductive policy toward e-cigarettes. Companies like JUUL Labs have created an e-cigarette substitute for smokers to “vape.” Although the substitute contains nicotine, it lacks the carcinogens and carbon monoxide found in “real” cigarette smoke. Despite this improvement over traditional cigarettes, many public health experts oppose vaping, thinking it represents more of a gateway to cigarettes than a liberation from them. Nor does the FDA allow e-cigarette makers to advertise their products as being safer than traditional cigarettes. This has caused the public to mistakenly view both products as equally bad.
E-cigarettes are like the nicotine patches in my patient story: While it is best for people to abstain from cigarettes altogether, it is better that they use the less dangerous form. The public health activists are like the surgical team that denied the man his patches: In their quest to bring about perfect health, they sometimes end up causing worse health. The average American is like my patient: resentful toward those who tell him or her how to live.
Yet my story is also a lead-in to a major difference between doctors and public health activists that has consequences for our politics: Doctors tend to be far more humble than public health activists about what science can accomplish.
The other doctors on the surgical team referenced above and I soon recognized our mistake. We forgot that the foolishness of human beings is limitless; so is the malevolence of chance. The unexpected always happens. In hindsight, we should have just given the man his patch.
Real life often pushes doctors to be practical in this way. Doctors respect science, and most of what they do is anchored in science; but they will ignore science if the situation demands it. In my anesthesiology practice, for example, patient attitudes often force my hand in ways that science would consider suboptimal. In one case I used a breathing tube instead of a facemask to give anesthesia because the patient feared the mask’s pressure on her face would give her wrinkles. The patient had a history of asthma, which made a breathing tube risky, yet she was so nervous about her appearance that I relented. Human beings have certain repetitive characteristics, without which practicing medicine would be impossible; yet each patient has his or her own psychology and even physiology, and this sometimes makes the constancy and logic that one hopes for in medical practice impossible.
Ironically, while public health has a weaker link to science than anesthesiology, it is less humbled by science’s limitations. Indeed, lack of humility has emboldened public health to insert itself into practically every conceivable public policy debate. Along with its traditional menu of concerns, including sanitation and immunization, the public health field now voices opinions on such issues as gun control, mental health, drug abuse, domestic violence, social justice, gender equality, sustainability, wealth redistribution, children’s day care, and foreign policy.
This is arrogance of the long-sighted kind. Public health activists drape themselves with the scientific method, declaring, “Why should not a method of investigation that has succeeded so well in solving problems in medicine be used to improve people’s well-being in a social, ethical, and political sense?” Because public health has a bona fide link to science, through medicine, which no social scientist can lay claim to, it has transformed itself into science’s emissary to policy debates once thought far removed from science. The fact that almost every life problem spills over into the public realm eventually, while also touching on somebody’s physical or mental health, makes public health’s portfolio potentially limitless. Not even social science claims such a range.
Yet public health advocates also reveal arrogance of the shortsighted kind. Vaping is a case in point. Public health experts rightly express concern over vaping’s new popularity among teenagers, as well as over the tardiness of e-cigarette manufacturers in addressing the problem. But while e-cigarette use has increased among young people, regular cigarette use has declined. Perhaps a world without any e-cigarettes would have produced perfect success: a decline in regular teenage smoking and zero teenage e-cigarette users.
Then again, everything I know about human nature, culled from my experience as a doctor and as a teenager decades ago, tells me that a fixed percentage of teenagers will partake in vice. Saint Augustine wrote as much 15 centuries ago in his Confessions when he described how he and his friends perversely stole and destroyed good pears just for the fun of it. Without e-cigarettes, the rate of regular cigarette use among teenagers would probably have stayed the same; one vice was simply replaced with another—albeit a safer one. Yet rather than work with this reality of human nature, the public health establishment continues to fight vaping in the spirit of a crusade.
Ideology and Arrogance
Public health’s special position in health care has given rise to this crusading spirit and to its new aggressiveness in public policy. Let me explain how.
In 1767, Sir George Baker became the first exemplar of modern public health. Using the scientific method, he traced the origins of a colic epidemic in Devonshire, England, to lead poisoning conveyed through the common cider people drank. Baker’s discovery highlights how public health differs from most other professions now orbiting medical science: Public health’s connection to science existed at the outset.
Nursing, social work, and clinical psychology all lack this early connection. Until the second half of the 20th century, the nursing profession championed compassion and selflessness over science. Social work in its early years championed the volunteer with a “good heart.” Clinical psychology’s status rose only after mid-century, when the field embraced the scientific model of mental illness and demanded doctorates of its practitioners. Long before these other fields had discovered science, or even started working with data, public health experts were on par with doctors, refining the science of epidemiology and applying bacteriology to prevent epidemics.
Public health is unique in a second way: nurses, social workers, psychologists, and doctors deal with individuals; public health experts deal with whole populations.
Individuals, like all real things, have resistance; they do not reliably conform to abstract principles or universal categories. Every nurse, social worker, psychologist, and doctor knows this limits science’s applicability. Because public health experts deal with whole populations, they are less likely to see how abstract scientific principles can fail. A public health expert might say, “We must fight cigarette addiction to improve health.” The phrase can be taken for truth because it evokes no precise image, and because the expert who utters it does so in good faith. But the policies the phrase inspires do not necessarily end cigarette addiction. Why? Because there is a divergence between words and things, between the scientific principle and the reality of individual human behavior. A simple phrase does not represent with sufficient exactitude the complexity of addictive behavior expressed by any one person—as most social workers, psychologists, nurses, and doctors can attest.
These two historical tendencies in public health combine to make the field both arrogant and ideological, relatively speaking.
Arrogant because public health experts do not watch their science fail on a daily basis. Because they work with large populations rather than with individual cases, public health experts often think with words—for example, the American Public Health Association’s (APHA) goals to “reduce global childhood mortality” and “support global food security.” Goals like these are easy for the thinker with words; the delay between error and the serious consequences of error—a very short timespan for an anesthesiologist—is too long for the public health expert to learn humility or even responsibility. After articulating a principle, the public health expert sees nothing go right or wrong for years, if ever, and so the value of the words can only be judged by their good intentions. When the entire planet becomes a platform for action, and the desired goals verge on being utopian, the issues themselves start to lack physicality. The public health expert is thus tempted to believe that everything has been done when only words have been spoken.
In addition, among sociologists, social workers, and politicians, public health experts are often the only people in the room who can claim a real connection to science. Because many Americans think science is the last word on the art of thinking, public health’s historical connection gives the field cachet.
Public health is ideological because all ideologies contain an element of hope and aspiration that can only be dampened by contact with reality. An ideology is a big set of ideas, a sweeping philosophy relevant on a large scale and for a long period of time. It thrives by ignoring details; it is so simple in its explanations that a single slogan can sum it up. Individual cases with particular details detract from the smoothness of an ideological system. Because public health experts do not manage individual cases, reality is less likely to quash their ideological enthusiasm.
Ideology plays a role in other health care fields. For example, in the 19th century, the new social category of “childhood” led to the establishment of pediatrics. Family practice arose as a specialty in the 1960s when laypeople pressured doctors to recreate the cozy physician generalist of yesteryear. But the very nature of what doctors do—care for individual patients—curtails their range of action. It makes no sense, for example, for a doctor to expound on foreign policy. Public health, on the other hand, has potentially no limit to its portfolio. The only limit is imposed by ideology itself.
During the 19th and first half of the 20th centuries, two ideologies limited public health’s range. The first ideology was the division between prevention and cure, which drew from the larger well of modern ideas that divided state and individual, public and private, politics and economics, and fact and value. Public health became synonymous with prevention, through state action, in the form of sanitation measures, garbage disposal, and quarantines. According to the ideology, since the state prevents mass armies of soldiers from invading the body politic, it should do the same against mass armies of germs. In other words, “prevention” is legitimately concerned with issues beyond any one individual’s control. “Cure,” on the other hand, involves that aspect of medicine that individuals can (supposedly) control, such as heart disease or broken bones. Curative medicine, or “private health,” became the preserve of doctors. For most of the 19th century, public health experts carefully stayed on their side of the line.
“Negative freedom” was the second ideology to limit public health. It defines freedom as letting an individual do whatever he or she wants. For example, letting a person go to McDonald’s to eat a Big Mac, because he or she wants to eat a Big Mac, is an act of negative freedom. Based on early modern political thought, the ideology restrained government from telling people how to live. During the 19th century, public health experts were wary of violating the concept.
In the second half of the 20th century, the two ideological bolts holding public health in check exploded. The rigid division between prevention and cure, which made little sense from a medical point of view in any case, collapsed along with the other divisions in modernity. Society’s health and an individual’s health became the proper concern of public health.
Meanwhile, negative freedom gave way to the idea of “positive freedom,” which says that people are free only when they are being true to themselves. According to the new ideology, a man going to McDonald’s to eat a Big Mac is not free; he is a slave to his desire for Big Macs, which he knows are bad for him. By keeping the man from going to McDonald’s, public health experts insist they are making him free.
The stage was set for public health to insert itself into almost every policy debate imaginable, and to claim to know what is best for every person.
Public Health Everywhere
Several years ago, I met a young woman in a public health program who was starting up a research project. Although she had spent most of her life studying in school and, before that, hanging out at the mall with her friends, she wanted to demonstrate the benefits of marital counseling on “family dynamics” and “mental health.” She asked me if I knew any people who might serve as subjects for her study. Teasingly, I replied yes, that I knew a man who was a doctor like me; he was also a poet; his name was Zhivago. Although he was married and had a child, with a second one on the way, he was having a fling with a woman named Lara. I noted that Zhivago’s life situation was pretty hectic, and that his neighborhood was going through some rough times.
The public health student brightened and said these people sounded perfect for her study. She thought Zhivago might be experiencing “situational anxiety” common during a “mid-life crisis,” which would explain his need to go outside of his marriage for “validation,” while Lara, she said, probably had “self-esteem issues.” She felt certain that Zhivago’s wife was “depressed.” All in all, she believed counseling would help them get their lives back on track.
That the young woman had never heard of the book (or movie) Doctor Zhivago surprised me. Yet her supreme confidence surprised me even more. With the few mental health principles she had learned in public health school, she felt capable of solving the great intractable love triangle of 20th-century fiction (Zhivago, Lara, and Tonya) that was carried on amid that “pretty hectic” situation known as the Russian Revolution.
Although she knew the basic principles of mental health classification, her interest in isolated phenomena and the particular details of the individuals was conspicuously absent. Nor did she have any real life experience of her own to offer. Still, she was more than ready to slot Zhivago, Lara, and Tonya into diagnostic categories before hearing more about them. When I told her about Lara’s husband being away in the army and Lara’s first affair with a much older man, her eyes glazed over. Indeed, the more singular the phenomena, the more she lost interest. It was a symptom of the ideological state of her mind, and of her arrogance.
One finds more of the same on the APHA policy webpage. Peruse the topics and almost every major policy debate today is joined eventually, including debates over tax rates and the Israeli-Palestinian conflict. Public health feels entitled to comment on all of them, since each issue touches on someone’s health eventually, much the way that every person, one way or another, causes someone else trouble by dint of simply existing. Much of the commentary suggests ideology more than expertise, usually of the progressive kind. But progressiveness is not the problem. The problem is the false syllogism that encourages public health experts to speak out on all these issues.
For example, many economists spend their careers studying poverty and the ramifications of state-directed wealth redistribution. It’s a complicated issue. Public health cites mental health studies to enter the debate. One study says that, “poverty taxes people’s brainpower,” or, as the APHA newsletter puts it, “It’s just thinking about concerns about financial issues that leads to cognitive impairment [among the poor], so that itself is very powerful.” This is a stilted way of saying that poor people have a lot on their minds.
This simple truism does not need verification; nonetheless public health verifies it in studies structured according to the scientific method. Public health even uses a scientific name for the stressed out mind: “cognitive impairment.” This gives the truism the aura of science. Public health then frames the poverty debate as a question of health, which commands attention. In the end, public health insists that its favored solution, wealth redistribution, is the most medically sound.
This is shallow thinking. Misplaced pride accounts for some of it. The scientific method is based to a high degree on intentional ignorance, as investigators purposely isolate certain details and leave out all the rest. This is why astronomy, physics, and chemistry are so amenable to the method. The quantities involved are so vast or so tiny that many details must be left out, making the experiment clean. Still, while supposing such isolation to be accurate, investigators suppose what is false. This is not a problem in hard science—an unreal condition is created; a formula results; the formula is then tested under conditions that replicate the original state of ignorance. But when human beings are involved, the details cannot be shut out so easily—and there are many such details. An infinite array of feelings, drives, and memories prevents the artificial isolation needed for the scientific method to work. Poverty alone is a complex state of mind. This is why the scientific method cannot reliably predict human behavior or any person’s response to a given stimulus. And yet the APHA says that its policy approach “reflects the latest available scientific research.” By pretending that poverty can be approached scientifically, public health pretends what is ridiculous.
In a second example, the 2018 APHA policy statement calls violence a public health concern. Violence has been a social problem for many thousands of years, but we are led to believe that the APHA has found the way forward. It calls for “encourag(ing) community health programs to start programs that detect and interrupt the transmission of violence using professionally trained workers,” addressing violence “using a trauma-informed and culturally competent approach,” calling on “federal, state and local governments to invest in public health approaches to violence prevention,” and “establish(ing) an active surveillance system for monitoring violence in communities.” The policy statement also includes references to “marginalized populations” and “law enforcement violence.”
The APHA may speak of a link with science in other parts of its platform, but here they do not choose their words according to science; they choose them according to the effect they wish them to have. The turgid clauses; the words themselves, heavy on the Latinisms; the pretense of calm, matter-of-fact omniscience—all these carry the reader forward. The words possess the flavor of erudition, and by combining them with ideological catchwords and occasional references to science, the policy proposal reaches its object, which is to overwhelm readers and give them confidence that violence is a fixable problem, just like dirty drinking water.
The APHA webpage discusses other problems such as equity, gun-related suicides, and clean energy. The paragraphs devoted to these issues share the same structure as the one on violence. The phrases seem to interlock spontaneously; and while they are all formed on a similar model, they are subtly adjusted to make each policy pronouncement seem fresh. The whole document presents in logical order a list of ills that have plagued humanity for more than 2,000 years.
Yet despite the document’s polish it carries one great risk: the risk of unreality. Such ills cannot be fixed easily if at all, which most people not mesmerized by a caricature of science recognize.
Public health activists are clever enough to understand the scientific method, but they are not clever enough to understand its limits. Their minds are crowded curiosity shops where science, ideology, and hubris all find a place.
Smoking, Again
A college student drinks gin and soda and gets a headache. Then he switches to rum and soda and gets another headache. Perversely, he blames the soda for causing his headaches. In fact, the soda was only associated with his headaches. The real cause was the alcohol.
It is ironic that public health activists fight vaping, since regular smoking is one of the few lifestyle factors convincingly established as a cause of common and serious disease. Smoking causes lung cancer and heart disease, sun exposure causes skin cancer, and sexual activity that spreads the papilloma virus causes cervical cancer. Most other causations mentioned in connection with lifestyle are mere associations—not unlike the college student’s soda-induced headache.
Although epidemiologists admit this, public health as a field rarely advertises the point. There is hypocrisy at work here. Public health emphasizes its link to science, but when it wants to unnerve people with a mere association, it conveniently plays down the association-causation distinction.
This happened during the 1990s debate over silicone gel breast implants. An association between silicone gel breast implants and suicide (and substance abuse) was advertised. The public health establishment disliked breast implants for a variety of reasons, many of them ideological—for example, the notion that women were putting themselves at risk in the service of the “beauty myth.” When people believed implants caused suicide and substance abuse (and public health experts rarely disabused them of this notion), public opinion hardened against the product. In the end, the implants were restricted for a decade, until new evidence vindicated them. Public health experts never stated in so many words that they were being ideological—they said they were being scientific. They spoke of everything but that; and yet, they were that.
When it comes to smoking, regular cigarettes cause disease. There is real science here. If vaping has serious risks, those risks have not yet been shown, let alone shown decisively in the form of causation. Then why the obsession with vaping, especially when cigarette smoking is clearly so dangerous? Writers have mused that public health is in bed with the tobacco companies, which fear vaping will cut into their cigarette sales, or Big Pharma, which peddles its own nicotine products.
There may be a simpler explanation. Public health activists know the difference between fact and fantasy, but they believe the fantasy—that vaping is as bad as smoking—because they themselves have invented and ornamented it. And who is so strong as not to believe his or her own invention? Even expert scientists have difficulty. All that is in agreement with one’s personal desires seems true; all that is not makes a person angry. This is simple, garden variety confirmation bias at work. Although public health activists have a strong link to science, they are also invested in ideology more than most health care professionals; unlike scientists, who may question their hypotheses, they feel allegiance to a set of ideas and are stirred more emotionally as a result. Science works best when it is indifferent to the system it invents; it works worst when it clings to it passionately. Ideology and arrogance prevent the necessary disinterestedness that is the hallmark of good science.
Public health activists need to change their mindset and hence their ways, and not just on the issue of vaping. Because they have the scientific method, they have logic on their side, which emboldens them to think they can speak authoritatively on almost any issue. Data is collected, numbers get punched in, scientific-sounding terms are invented, and all the while activists fail to realize that they are making no progress despite their efforts. The scientific method gives them an agility that others lack, but it also gives them the bad habit of believing that all is accomplished when they have indulged in a process of reasoning that has the aura of truth.
For a discussion see Curtis Brainard, “What’s Healthy? Don’t Ask Scientists, or the Press Either,” Columbia Journalism Review, September 19, 2007. In the piece, writer Gary Taubes reports: “The appropriate question is not whether there are uncertainties about epidemiologic data, rather, it is whether the uncertainties are so great that one cannot draw useful conclusions from the data.”
Even the Easy Parts Are Hard
American politics have become so dysfunctional, thanks in part to the special interests’ capture of our political class, that even the parts of the policy agenda that should be easy to deal with are hard. We now endure a situation in which normal politics have become toxic: When Congress fails to act on public policy problems, the problems almost always get worse; when Congress does act on them, they also get worse.
Case in point: the so-called December 2017 tax policy reform. How anyone could append the word “reform” to a plutocratic giveaway that changes the structure of the tax code not one iota eludes me, except as an example of either shameless spinning or the full-frontal deterioration of the proper use of the English language. Or both.
Some policy problems actually are hard because the subjects are genuinely complicated: immigration reform, healthcare, gun control, and others we can all list. Even some of these, however, have relatively simple solutions in theory.
To take just one example, healthcare insurance premiums could be made affordable again for the vast majority of Americans if the three categories of the most expensive generic cases were carved away from the main insurance pool: the very ill elderly; trauma cases; and treatment for chronic and progressive diseases like diabetes. These cases could be handled by a secondary insurance market, just as secondary insurance markets operate in many niches of a modern economy. Even if some combination of state and Federal government were to heavily subsidize insurance premiums for these classes of cases, it would still be simpler and cheaper than the Affordable Care Act.
But never mind how we pay for healthcare—which, just by the way, is not the same problem as reforming healthcare itself (another example of the massive confusion of language we suffer). The latter really is complicated, which everyone paying attention has known for years. I can think of at least three cases of simple fixes for non-trivial problems that, in normal times, would be gimmes. That they’re not gimmes illustrates the fact that the current two-party monopoly of Congress—with some notable and frustrated exceptions—is an idea- and deliberation-free zone with little to no interest in public service.
So, first, we have a problem over early voting that, in some states, has become a major point of contention. Democrats seem to want to extend the period of early voting basically forever, while Republicans would constrain it to about 15 minutes if they could. Everybody knows why: People who work at jobs in lower socio-economic echelons have a tougher time getting to the polls, and those people tend to vote Democratic.
I dislike early voting in general because things can change between when someone “early” votes and when election day happens to be. Ideally, everybody should be voting in the same situational context, which is also why announcing election results in eastern states before polls have closed out west is a bad idea. The solution to the early voting problem is obvious: Make election day a Federal holiday—in both presidential election years and in midterm election years. Do that, and the rationale for early voting goes away in the blink of an eye.
That would mean adding two holidays every four years, but a compensatory change is readily available: Get rid of Columbus Day, and Veterans Day too.
Columbus Day amounts to Italian-American day, which is nice but unnecessary. Worse, lately Columbus Day has been connected to Indigenous Peoples Day. That is not without reason, but the trajectory now in motion is likely to turn Columbus Day into another divisive symbol in American society and politics. Seems to me we can do without that.
Another reason to get rid of Columbus Day is that it’s become a class-discriminatory holiday. Pretty much only bankers, bureaucrats, and mail carriers get the day off; most people who actually work for a living usually don’t, and public schools are usually in session as well. That creates havoc for lots of families with minor children, which we could also do with a little less of these days.
As for Veterans Day, it began as Armistice Day, commemorating the veterans of World War I. Don’t look now, but we’ve plum run out of living veterans from World War I. So Veterans Day should be combined with Memorial Day, and the holiday should be renamed Veterans and Memorial Day. Pretty simple, I think. We certainly should honor the men and women who have put themselves in harm’s way in defense of our country, but we don’t need separate holidays for those who lived and those who died in the process.
If we did this, we would add two holidays every four years but get rid of eight: a net gain of six work days every four years, or an average of a day and a half added to each calendar year. That’d be good. From the beginning America has always been about the nobility of hard work. We have wandered too far from thinking of all honest labor as noble, so all real Americans would cherish those added productive days. (It would, of course, be great if we had a holiday where bankers and bureaucrats had to go to work but the rest of us didn’t. That’ll never happen….)
Second, let’s take a look at the Social Security trust fund. We’ve known for many years that our demography is leading to an insolvency train wreck—more healthy retirees and fewer workers to support them. Every expert who follows this problem agrees that at current rates of tax inflow and disbursements we have at most seven years before the entire system implodes. It’s hard to think of a clearer example of the irresponsibility of Congress—or perhaps we should be honest and call it cowardice—than the fact that this can has been kicked so far down the road.
Of course something has been done in recent years to slow the train engines, but what has been done has been both Band-Aid-like and prejudicial, again, to people who work with their bodies for a living: The retirement age for both men and women has been pushed back several years. This is class prejudicial because the data clearly show that people who do physical labor for a living have shorter lives than those who do not. So extending the retirement age amounts to a reverse Robin Hood: taking from the poor to give to the rich.
None of this is necessary, because if Congress had any courage it would do the simple and honest thing: Simultaneously remove the cap on income subject to the Social Security payroll tax and means-test benefits. Back during the Depression, when the Social Security Administration was created, the basic concept was that of an insurance pool for the elderly: Those who could not save for retirement would be helped by those who could. Back in those days life expectancy was so much shorter than today that not a lot of people actually drew a lot of benefits. But the psychology of the time was one in which we, as a hurting nation, thought that society had a moral obligation to bring the less fortunate along with the rest of us. There is an iconic line from It’s a Wonderful Life that expresses perfectly the zeitgeist of the era. Mr. Potter is trying to drive the Building & Loan out of business, and George Bailey (Jimmy Stewart) pleads with the agitated crowd not to break ranks: “We can get through this thing all right. We’ve got to stick together, though. We’ve got to have faith in each other.” That is exactly how most Americans understood the spirit of Social Security during the Depression, and for some years thereafter during wartime.

Then somehow the conception changed, first into a forced savings account for those too intemperate to save for their own retirement, and then into an exclusively personal nest egg even for those who were forethoughtful enough to plan. Mr. Potter has prevailed. So now even retirees whose yearly income exceeds $1 million draw Social Security benefits. Is that more obscene than insane, or more insane than obscene? It’s a toss-up as I see it.
Means-testing benefits should be prorated or graduated so that above a certain threshold of post-retirement yearly income a retiree would receive, say, 80 percent of benefits, and then above a higher threshold 60 percent, and so on until, above quite a high threshold, benefits would be zero.
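To make the proration concrete, here is a minimal sketch of how such a graduated schedule might be computed. The specific dollar thresholds and the intermediate 30 percent step are illustrative assumptions of mine; the essay itself names only the 80 and 60 percent tiers and a top threshold above which benefits fall to zero.

```python
# A minimal sketch of the graduated means test described above.
# The thresholds and percentages below are illustrative assumptions,
# not figures proposed in the essay.

# Each tuple: (post-retirement annual income floor, share of full benefit paid)
BRACKETS = [
    (0,         1.00),  # below the first threshold: full benefit
    (100_000,   0.80),
    (250_000,   0.60),
    (500_000,   0.30),
    (1_000_000, 0.00),  # above quite a high threshold: no benefit
]

def prorated_benefit(full_benefit: float, retirement_income: float) -> float:
    """Return the Social Security benefit after applying the graduated means test."""
    share = BRACKETS[0][1]
    for floor, bracket_share in BRACKETS:
        if retirement_income >= floor:
            share = bracket_share  # income reaches this floor, so this tier applies
        else:
            break
    return full_benefit * share

# Example: a retiree entitled to $30,000 a year who has $300,000 of other income
# would receive $18,000 under these illustrative thresholds.
print(prorated_benefit(30_000, 300_000))  # 18000.0
```

An actual proposal would presumably phase benefits down smoothly rather than in steps, but the bracket table captures the “and so on” logic of the idea, and the thresholds are exactly the knobs that could be adjusted every few years as the demographic data change.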
Simultaneously lifting the tax cap and means-testing benefits would allow virtually any demographic situation to be handled without fear of system bankruptcy. The relevant data could be reviewed every few years and modest adjustments to the thresholds made as required by changing circumstances. In other words, enacting this reform would permanently fix the problem.
And if we’re going to actually fix this right, we should at the same time ban Congress from repeatedly ransacking the fund, as it has done several times in the past, in yet another serial act of cowardice.
Of course, wealthy Americans would pay for this solution in two ways: higher taxes now and fewer eventual benefits later. They would, however, still be very affluent people by any historical standard or relative measure. I much prefer this approach to a reverse Robin Hood one, although many selfish wealthy individuals doubtless would not. Too bad: Let’s have a healthy debate over this and see where the chips fall. I doubt that, given a viable alternative, most voters would tolerate a situation in which the political power of the very wealthy causes the Social Security system to tank to the massive disadvantage of the great majority of Americans.
Third, if you haven’t noticed, we’ve just suffered yet another government shutdown. It’s not the first in recent years and unless we do something about the generic problem it won’t be the last.
The main underlying problem is that shutdowns and threats of shutdowns have become political footballs in which many ordinary Americans suffer. The two-party monopoly political class doesn’t care that ordinary Americans suffer, or if they do care they’re certainly showing it in a strange way. They care only about how to manipulate partisan optics in their favor.
This puerile gaming constitutes a completely self-inflicted wound. Lots of other democracies occasionally have trouble passing a budget in a timely fashion, but they never suffer government shutdowns because they have legislated automatic continuing resolutions. This fix, simple enough to write into law on a single sheet of paper, keeps the government open at the same basic funding level as before the budget policy crunch until a solution is reached on a new tax and spending package.
Many parliamentary systems like that of the United Kingdom use the Westminster method—no amendments allowed. So they rarely miss passing a budget in a timely fashion, for otherwise the government would fall and elections would ensue. Budgets pass because governmental majorities based on election results guarantee it.
So some have suggested that the United States adopt a no-amendments budget rule, similar to the method used in the past for BRAC and fast-track trade authority. But that doesn’t work in a presidential system. Had Congress been unable to amend the first budget put forth by the current Administration, the result would have been even more destructive than the one we did get.
The Westminster method is a bridge too far for us, but legislating an automatic continuing resolution isn’t. So why doesn’t any Senator or Congressman even suggest it? You tell me.
In normal times it would take Congress a single busy week to fix the early voting morass, solve the Social Security problem, and eliminate the damage of government shutdowns. Guess what: These aren’t normal times.
What do We the People intend to do about it? I can barely wait to find out.
January 31, 2019
Bureaucracy vs. Democracy
The daily detonations from the White House are diverting our attention from deeper flaws in modern American government. Donald Trump was elected for a reason: many Americans are fed up with Washington. They see Washington as a kind of alien power, dictating policies and behavior without regard to the feelings or needs of real people.
Neither party has gotten the message of this voter revolt. They continue to rely on the lack of credible political alternatives to take turns in power without taking responsibility for changing things. Instead of presenting positive visions for change, the parties stoke voters’ passions by pointing fingers. Pulling out of this downward spiral requires more than just new leaders; it requires a new vision.
But where can we find this new vision? “There is only one sure way to quiet our populist distempers,” argues the recent Niskanen Center report The Center Can Hold: “that is for . . . democratic institutions to deliver effective governance . . . through successful problem-solving.” Political scientist Francis Fukuyama digs deeper, in his new book Identity, into the innate human needs for belonging and self-respect.
These and other diagnoses of voter alienation converge on one point: a sense among Americans, at every level of responsibility, that they have been disempowered from making practical and moral choices. Almost without our noticing when it happened, bureaucratic structures have crowded out human agency. Nothing much works sensibly, I argue in Try Common Sense, because no one is free to make it work. Of course Americans are angry: Washington is inept, and makes us inept, by entangling public choices in a jungle of red tape.
The modern bureaucratic state must be replaced, not repaired. We must simplify governing structures to liberate human judgment and initiative at all levels of society. No institutions, including democratic ones, can work effectively when people are prevented from drawing on their knowledge, instincts, and experience about how to get things done. Refocusing government on public goals, and away from micromanaging daily choices and interactions, will relieve much of the frustration and anger that drives voters toward populist leaders and extremist solutions.
The current bureaucratic framework in the United States has largely been constructed since the 1960s. It bears little resemblance to the legal frameworks existing before that time. The 1956 law authorizing the Interstate Highway System, for example, was 29 pages long; the most recent transportation bill, passed into law in 2015, was almost 500 pages long, and it must be implemented pursuant to regulations that are themselves thousands of pages long. Environmental review, an innovation of the National Environmental Policy Act of 1969, was intended to make decision-makers and the public aware of significant impacts of a project. The expectation was that reviews would be short (rules suggest no longer than 150 pages) and done in a matter of months. Now reviews can be thousands of pages and take upwards of a decade.
There is no shortage of grumbling about bureaucratic paralysis—indeed, bureaucracy is attacked even by the people in charge of it. But there’s been little skepticism about the assumptions underlying the way we make public choices.
The main premise of modern bureaucracy is that it can prescribe a correct way of doing things. For 50 years, regulation writers have been trying to dictate choices in every area of conceivable public interest. Unlike regulatory goals, which are relatively straightforward, most regulatory detail prescribes the correct way to achieve the goals. It is this assumption—that bureaucratic structures should dictate the details of implementation—that accounts for much of the failure and frustration of modern government.
The imperative to rethink the current bureaucratic structure can be understood by analyzing its failures through the lenses of three separate disciplines: economics, psychology, and legal philosophy. In each case, the bureaucratic structure violates core truths and principles of the discipline:
Clear law, economists assume, promotes growth by eliminating legal uncertainty. In practice, however, so-called clear law leaves little or no room for responsible humans to make tradeoffs, to balance competing interests, or to engage in the trial-and-error process of integrating experience, which is the source of most solutions and innovation. The current “one-size-fits-all” approach to regulation is a variation on central planning, with the added disadvantage that the original planners are often dead or retired and so cannot themselves make adaptive choices.
The psychological assumption of bureaucracy is that people have the cognitive capacity to process detailed rules. But research by experts in cognitive load shows that bureaucracy is ineffective not just because of the cost of compliance but because it shuts the door to the cognitive processing needed for success.
The legal theory of bureaucracy is that law can be objective and should operate independently of individual judgment and values. But most of the critical precepts of the rule of law, including reasonableness, good faith, and fairness, hinge on judgment in the particular circumstances. The modern effort to objectify law into rigid rules and rights has led to an epidemic of selfishness. The main goal of law is social trust; the effect of modern bureaucratic legal structures is to infect social dealings with distrust and defensiveness.
Bureaucracy precludes human choice at the point of implementation. That is its goal. It prescribes choices in advance in order to avoid mistakes by fallible humans, without understanding the far greater mistakes caused by rigidity. In economic terms, modern bureaucracy dramatically reduces the supply of human initiative and innovation. If bureaucratic compliance were an occasional diversion, necessary to achieve institutional stability or coordination, the benefits might outweigh the costs. But the legal requirements and risks bearing down on us are not occasional, but continuous. The endemic inefficiency and ineffectiveness of American healthcare, public schools, infrastructure, and regulation suggest that almost any alternative that allows practical choices would reap enormous public and private benefits.
Bureaucratic rigidity has little to do with the partisan debate over regulation vs. de-regulation. The goals of regulatory programs aimed at the common good are generally valid. (This excludes a large stack of programs which distort markets and public administration with special interest preferences, such as farm subsidies, labor preferences, and corporate tax giveaways.) Just as a reliable rule of law enhances economic prosperity by enhancing trust instead of defensiveness in commercial dealings, so too government oversight of products, services, and safety can enhance freedom by protecting against externalities such as pollution and reducing concerns over adulterated food or unsanitary conditions.
The evidence is overwhelming, however, that even vital regulatory programs are wasteful and ineffective. That is because the failures of modern government are largely failures not of faulty goals but of implementation: Doctors, teachers, officials, business managers, and others find themselves unable to act on their best judgment because of bureaucracy or other legal concerns.
Every President since Jimmy Carter has promised to streamline red tape but, with a few notable exceptions (industry deregulation under Carter, welfare reform under Clinton), their efforts have yielded little, and the bureaucratic burden has gotten progressively heavier. The steady growth of bureaucratic verbiage is a natural consequence of the drive to clarify each new ambiguity or wrinkle. Tangled bureaucracies are the inevitable consequence of the underlying premise: that it is possible to create an instruction manual for correct public choices. This unexamined frame of reference explains why would-be reformers have had little success in taming the bureaucratic behemoth.
The Bureaucratic Imperative to Remove Human Choice
No one designed this bureaucratic tangle. No experts back in the 1960s dreamed of thousand-page rulebooks, ten-year permitting processes, doctors spending up to half their workday filling out forms, entrepreneurs faced with getting permits from 11 different agencies, teachers scared to put an arm around a crying child, or a plague of legal locusts demanding self-appointed rights for their clients. America backed into this bureaucratic corner largely unthinkingly, preoccupied with avoiding error without pausing to consider whether anyone would still be able to achieve success.
The tendency toward detailed rules is rooted in human nature. Humans like the idea of telling other people how to do things. Evolution has wired us to be risk averse, and controlling others’ decisions addresses the risk that people will choose poorly. Just as people don’t go into the ocean for fear of shark attacks, they like rules to prevent a human shark from making abusive choices. Rules also have the advantage of avoiding the personal risks inherent in taking responsibility. David Hume observed that people “are mightily addicted to general rules.” Even bureaucrats with impregnable job protection, Washington Monthly founder Charles Peters observed, cling to rules because of an almost pathological fear of being put on the spot: “The rule made me do it.”
The debate over rules started early in American history. The main dispute around the ratification of the Constitution was the objection of the “Anti-Federalists” that the powers of government needed to be specifically prescribed. The Anti-Federalists argued that the Constitution was “made like a fiddle, with but few strings,” so that those in power might “play any tune upon it they pleased.” The resolution was not to prescribe detailed powers in the Constitution, but to add a Bill of Rights whose goals and principles would be enforced by officials in the Constitution’s system of separated powers.
With some notable exceptions, the organizational literature that evolved following the industrial revolution embraces a control vision of administration. These theorists, such as Frederick Winslow Taylor, believed not just in control but in the idea of one correct solution; if the central organizer was smart and thoughtful, the institution would hum like a perpetual motion machine. The efficiency of assembly lines over craftsmen’s workbenches was living proof of the power of a fixed route of march.
There were also a few notable counter-thinkers in management theory, led by Chester Barnard and Peter Drucker, who focused on individual responsibility and institutional culture. Herbert Kaufman’s The Forest Ranger (1960) described an open structure in which forest rangers made many practical choices without a host of bureaucratic constraints promulgated from distant Washington, DC. Kaufman concluded that a culture of shared values and “guided discretion” provided the key to this successful structure.
The upheavals in the 1960s caused public organizations to veer even further toward legal and bureaucratic prescriptions. Like a hot iron branding the public consciousness, the abuses by institutions and people with authority—from racism and environmental abuse to the Vietnam War and gender inequality—created demand for a governing approach purged of bias and any opportunity for bad judgment.
Rulemaking burgeoned—from a little more than 10,000 pages in the late 1950s, the Federal Register grew to almost 100,000 pages in 2016. While the 1960s reforms are known for introducing new areas of government oversight—such as civil rights, environmental protection, and safety laws—most of the bureaucratic detail had less to do with the scope of regulation than with instructing people exactly how to meet public goals. Whereas forest rangers once did their jobs with reference to a pamphlet of rules, they now had thick volumes of specific instructions. Almost every aspect of the workplace rule-writers could think of was turned into a legal prescription: OSHA now stipulated that stairwells must be “illuminated by either natural or artificial light.” How else can stairwells be lit?
The evil to be purged by this dense bureaucracy was human judgment, especially by people with authority. But thick rulebooks were insufficient. Guarding against unfair choices required something more, and here the fix was to apply due process—the hallowed constitutional protection against arbitrary confinement or confiscation of property—to personnel and management decisions. Teachers would have to prove in a disciplinary hearing that Johnny threw the pencil first; supervisors would have to prove that Ethel didn’t work hard or get along with co-workers.
The expansion of due process had the effect of inverting the hierarchy of authority—the burden was now on the public supervisor to prove the correctness of his supervisory decisions. Public managers no longer had the practical ability to manage public personnel. It is basically impossible, for example, to terminate a public employee for incompetence. Another common effect is a listless public culture; it is difficult to maintain energy and camaraderie in an office where everyone knows that job performance doesn’t matter.
The new bureaucratic machinery to honor due process was imposed in the name of individual rights, but the meaning of rights had been transformed from protection against state coercion to a tool for self-interest against co-workers and the organization. Instead of government making choices as to what best served the common good, it made choices based on the legal demands of each claimant. Instead of being held to a standard of public service, public servants now demanded, in essence, that the public should serve them.
This flash flood of individual rights swept nearly everyone along. Virtually ignoring his own findings in The Forest Ranger, even Herbert Kaufman flipped his point of view and embraced detailed rules and processes in a short book, Red Tape. Kaufman acknowledged the costs of bureaucratic stultification but argued that the rules were essential to avoid any hint of unfairness.
For 50 years since the 1960s, modern government has been rebuilt on what I call the “philosophy of correctness.” The person making the decision must be able to demonstrate its correctness by compliance with a precise rule or metric, or by objective evidence in a trial-type proceeding. All day long, Americans are trained to ask themselves, “Can I prove that what I’m about to do is legally correct?”
In the age of individual rights, no one talks about the rights of institutions. But the disempowerment of institutional authority in the name of individual rights has led, ironically, to the disempowerment of individuals at every level of responsibility. Instead of striding confidently toward their goals, Americans tiptoe through legal minefields. In virtually every area of social interaction—schools, healthcare, business, public agencies, public works, entrepreneurship, personal services, community activities, nonprofit organizations, churches and synagogues, candor in the workplace, children’s play, speech on campus, and more—studies and reports confirm all the ways that sensible choices are prevented, delayed, or skewed by overbearing regulation, by an overemphasis on objective metrics, or by legal fear of violating someone’s alleged rights.
A Three-Part Indictment of Modern Bureaucracy
Reformers have promised to rein in bureaucracy for 40 years, and it’s only gotten more tangled. Public anger at government has escalated at the same time, and particularly in the past decade. While there’s a natural reluctance to abandon a bureaucratic structure that is well-intended, public anger is unlikely to be mollified until there is change, and populist solutions do not bode well for the future of democracy. Overhauling operating structures to permit practical governing choices would re-energize democracy as well as relieve the pressures Americans feel from Big Brother breathing down their necks.
Viewed in hindsight, the operating premise of modern bureaucracy was utopian and designed to fail. Here’s the three-part indictment of why we should abandon it.
1. The Economic Dysfunction of Modern Bureaucracy
Regulatory programs are indisputably wasteful, and frequently extract costs that exceed benefits. The total cost of compliance is high, about $2 trillion for federal regulation alone.
The opportunity costs of ineffective bureaucracy are hard to measure but likely far greater—discouraging initiative and innovation undermines the can-do character of American culture. Hard metrics are not the only evidence of suppressed economic activity. Doctors and nurses who spend up to half their time “doing paperwork” are not caring for patients. Is it a coincidence that we spend almost double what other Western nations do on healthcare?
Bureaucratic delay is far more costly than generally recognized. In the Common Good report “Two Years, Not Ten Years,” I found that a six-year delay in infrastructure permitting more than doubles the effective cost of projects. Moreover, lengthy environmental reviews generally cause significant environmental harm by delaying projects that alleviate polluting bottlenecks. Often there are few or no benefits to offset the substantial costs and delay. For example, the environmental review for a project to raise the roadway of the Bayonne Bridge—a project with little impact because it used existing foundations—ran 10,000 pages, plus another 10,000 pages of exhibits.
There has been surprisingly little effort to redesign regulations to be less rigid and wasteful. Liberals are preoccupied with maintaining a rear-guard action to defend the goals of regulation. Conservative calls for broad deregulation go too far. That’s why they’ve been notably unsuccessful over the past four Republican administrations: voters like Medicare and want government to protect common resources such as clean water.
Ironically, what both sides have in common is a demand for what they call “clear law.” There will be little or no room for either government overreach or business evasion of regulation, the theory goes, when human judgment is replaced by detailed rules. Their allegiance to this automatic, hands-free conception of regulation is what causes waste and frustration—not the goals of government oversight. Bureaucratic rigidity cannot avoid crippling costs and inefficiencies:
The high marginal cost of perfect compliance. Regulation is typically designed in absolute terms, requiring uniform compliance with each rule or mandate. Striving for complete compliance is sometimes important: A pre-flight checklist is essential to avoid a disastrous air accident due to an otherwise small mistake. All restaurants need grease traps to avoid clogging the sewers. But most regulatory goals involve more complex tradeoffs. Often a public goal can be substantially achieved at a small cost by accepting a measure of imperfection. Criminal law doesn’t eliminate all crimes, just as contract law doesn’t eliminate all cheating, but both effectively instill the public trust needed to support our freedom in social dealings.
The goal of privacy laws in healthcare, for example, could be substantially achieved with a few broad principles and protocols, particularly if there were not onerous consequences for the occasional slip up. The main goal of healthcare law should be effective care; spending extra tens of billions for perfect privacy means that those resources are not being used to save lives.
The inability to balance competing public goals. Rigid regulation is commonly counterproductive because it conflicts with other public goals. Safety, for example, is only half the equation. The other half is what we’re giving up to achieve it. Delaying the approval of a new drug in order to achieve perfect understanding of all the side-effects can cause harm to thousands of patients who need treatment now. “Helicopter parenting” stunts a child’s emotional and physical growth.
Discouraging innovation and initiative. The engine of innovation is trial and error. But regulation that comprehensively dictates choices in almost all areas of social and business endeavor doesn’t let people try this and that. Instead, it traps people in a kind of spider web. Often, a person is uncertain whether a new approach conflicts with a rule and lacks the time or resources to find out. When California initiated a program to waive rules for certain innovative school programs, it found, after reviewing the waiver applications, that most applications required no waiver; the schools had merely assumed that the innovations violated some rules. Just as unreliable law dampens economic initiative, so too does uncertainty about what is being regulated.
2. Bureaucracy Causes Cognitive Overload
The complex tangle of bureaucratic rules impairs a human’s ability to focus on the actual problem at hand. The phenomenon of the unhelpful bureaucrat, famously depicted in fiction by Dickens, Balzac, Kafka, Gogol, Heller, and others, has generally been characterized as a cultural flaw of the bureaucratic personality. But studies of cognitive overload suggest that the real problem is that people who are thinking about rules actually have diminished capacity to think about solving problems. This overload not only impedes drawing on what psychologists call “system 2” thinking (questioning assumptions and reflecting on long-term implications); it also impedes access to what they call “system 1” thinking (drawing on their instincts and heuristics to make intuitive judgments).
William Simon tells the story of the bureaucratic clerk who refused to restore welfare to a desperate mother who filed the application late. The clerk asserted to the mother: “There is nothing I can do.” In fact, the error could have readily been fixed by another department; what the clerk meant was that there was nothing that the clerk could do. But in the mental cubicle of bureaucratic rules, all the clerk thought about was her own compliance, not solving the predicament of the young mother before her. This might be called “system zero” thinking—where mental capacity focuses on artificial constraints instead of instincts, norms, or public goals.
Research by psychologist John Sweller and others has demonstrated that “working memory”—the conscious part of the brain—can process only a few things at once. “Long-term memory,” by contrast, is not conscious and holds a vast store of experience and evolutionary instincts that are drawn into working memory as people deal with particular problems. A chess grandmaster doesn’t consciously remember thousands of variations, but is able to draw on past experience, stored subconsciously, to quickly respond to multiple boards before him. Dealing with the needs of a particular person requires a social worker to perceive the emotional and physical state of that client, and make judgments that draw upon instincts and experience about what is needed. When both engines shut down after his passenger airliner took off from LaGuardia Airport, Captain Chesley “Sully” Sullenberger was able to draw on a lifetime of experience to keep the glide angle just steep enough to maintain airspeed and avoid stalling, and then to feather the nose up just before ditching the plane so that it would “plane” onto the river instead of crashing.
Humans are smart mainly because of their ability to draw on long-term memory. In most situations of technical expertise, the skills are internalized in long-term memory and constitute what we think of as “understanding.” Drawing on that understanding requires, however, that working memory be available to receive these signals and this accumulated wisdom. People go back and forth, figuring things out intuitively. Nicholas Negroponte found that children in Ethiopia, having never seen a computer, could figure out how it worked in short order by trial and error. That’s because modern computers are designed with multiple pathways to get to a goal, so people can make them work intuitively.
Bureaucratic constraints are difficult to internalize because they rarely fit together into a broad understanding. They’re more like a shopping list, which must be referred to consciously in order to meet each requirement. This conscious effort to refer to detailed bureaucracy is mentally exhausting.
The fallibility of human judgment has received substantial attention in recent years. Atul Gawande, a surgeon and author of The Checklist Manifesto, has shown how checklist protocols before pilots take off, or before surgeons begin operating, can avoid tragic errors that occur because human cognition is not very good at keeping lists. The virtue of going down the checklist is precisely that it absorbs all the cognitive capacity of working memory to make sure that a person hasn’t forgotten one small detail that might cause a disaster. A checklist avoids error in complicated activities, but it does not itself achieve success. To fly a plane or perform surgery requires, after running down the checklist, going back to human instincts and experience to do the job well.
The functioning of the brain has critical implications for the design of organizational systems. The brain is “minute in its ability to process new material but massive in its ability to process very extensive and complex, previously learned information.” In computer terms, the brain has terabytes of hard drive memory, but eight-bit processors.
Highly functioning modern organizations, such as Toyota, have institutionalized a process of continuous improvement (known by the Japanese term “kaizen”), where workers constantly strive for better ways to do things and then communicate their findings throughout the organization. A mindset of learning from continuous trial and error achieves understanding in each worker, not just mindless compliance, and pays off in results, as Toyota has demonstrated. On the other hand, most people can’t read or understand a computer instruction manual.
Bureaucracy is the world’s largest instruction manual—a self-enclosed system where every detail is laid out in words, whether needed or not, and regardless of its importance to accomplishing a public goal. Bureaucratic rules also tend to be poorly written and contradictory. Instead of providing multiple pathways to a goal and allowing people to find the best ways to get there, bureaucracy imposes one “correct” way, and fails both because people can’t internalize it and because, like central planning, it doesn’t honor the circumstances of each situation.
Bureaucracy also causes alienation. By pulling people away from their natural instincts, bureaucracy imposes severe psychological and cultural costs. Working, as Studs Terkel observed, is about the “search… for daily meaning as well as daily bread.” People derive dignity and satisfaction from the ways they do their job well. In The Mind at Work, his study of waitresses, plumbers, and other manual jobs, Mike Rose exposes the complexity and know-how required to do these jobs well. In The Moral Life of Schools, Philip Jackson, Robert Boostrom, and David Hansen show that successful teachers in fact have little in common; their success hinges on many subtle factors. People are energized by the satisfaction of drawing on all their instincts to achieve good things—teaching students, healing patients, having a cheerful relationship with a customer, or finding and fixing a leak. People who get into a “flow” of doing things in their way find that work is both invigorating and satisfying. They flourish, to use Edmund Phelps’s term.
Burnout is another side-effect of a bureaucratic system that denies people the opportunity to take initiative and solve problems in their own ways. Psychologist Christina Maslach and her colleagues have found that one of the main causes of burnout is “lack of control”: “when workers have insufficient authority over their work or are unable to shape the work environment to be consistent with their values.”
The prerequisite to satisfying work is that people have a sense of ownership in how they do things. That’s what allows them to have the dignity of making a difference. “Few things help an individual more than to place responsibility upon him,” Booker T. Washington noted; “Every individual responds to confidence.” Conversely, as Friedrich Hayek observed in The Road to Serfdom, “Nothing makes conditions more unbearable than the knowledge that no effort of ours can change them.” Broad voter anger and alienation at Washington should not surprise us: Bureaucracy denigrates human self-worth.
3. Bureaucracy Subverts the Rule of Law
The purpose of law is to enhance freedom. By prohibiting bad conduct, such as crime or pollution, law liberates each of us to focus our energies on accomplishment instead of self-protection. Societies that protect property rights and the sanctity of contracts enjoy far greater economic opportunity and output than those that do not enforce the rule of law.
The main mechanisms of law are supposed to be protective, not prescriptive. Law provides legal walls on the edges of a free society that, by guarding against abuse, define and safeguard the field of our freedom. As Isaiah Berlin put it in his essay “Two Concepts of Liberty,” law provides “frontiers, not artificially drawn, within which men should be inviolable.”
Bureaucracy, too, aims to be protective—striving to prevent bad conduct by setting rules in advance. But it does this by supplanting freedom with prescriptions dictating how to do things correctly. Bureaucracy protects the egg by killing the goose. Its suffocating rulebooks and procedures smother the freedom that people need to accomplish their goals.
Modern bureaucracy is built upon three misconceptions about the rule of law:
The myth of clear law. Both liberals and conservatives demand detailed rules as a matter of received wisdom. The motivation in each case is mutual distrust. Liberals see detailed rules as the way to shackle corporate malefactors. Conservatives see detailed rules as the way to prevent officials from abusing their powers. The theory is that detailed law achieves clear legal boundaries. But it doesn’t work: Thousand-page rulebooks do not achieve legal clarity. Sometimes law can be both precise and clear, as with speed limits or effluent discharge limits. But for most human activity, words are insufficient to capture the complexity of a situation. Too many words usually obscure or even impede regulatory goals. No human can comprehend thick rulebooks, such as the 4,000 detailed rules mandated by Federal worker safety law. Studies have shown that perfect compliance with hundreds or thousands of rules is literally impossible, even for large companies with giant legal staffs. J.P. Morgan Chase employs thousands of lawyers, yet constantly runs afoul of regulators. Small business cannot know, much less comply with, all these requirements, leading to a predictable pattern of involuntary noncompliance.
Clarity in law is usually achieved not with precise rules but with goals and principles that people can readily understand and internalize. “Standards that capture lay intuitions about right behavior,” Richard Posner observes in The Problems of Jurisprudence, “may produce greater legal certainty than a network of precise but technical, nonintuitive rules.”
There is “more honest truth in the inspiring generality,” legal philosopher Frederick Pollock observed, than in “many an arid” rule. These general goals and principles are enforced by officials who interpret and apply norms of reasonableness.
The unfairness of uniformity. Most legal scholars also accept as received wisdom that uniform application of rules ensures fairness. To the contrary, by not letting responsible people take into account the circumstances, mechanical application of rules often guarantees unfairness. Disciplining an eight-year old under “zero tolerance” laws for bringing to school a hat decorated with plastic soldiers carrying rifles is absurd. So too is a life prison sentence for someone who stole three golf clubs, under a “three strikes and you’re out” law, because of prior theft convictions. Balancing conflicting public goals is much of what government is called upon to do—playing traffic cop in a crowded society. Rigid law is synonymous with injustice. As Justice Benjamin Cardozo put it, “Justice . . . is a concept by far more subtle and indefinite than any that is yielded by mere obedience to a rule.”
Fear of abusive decisions. Precise law, the theory goes, prevents officials from acting arbitrarily or corruptly. To the contrary: The inability of mortals to comply with thousands of rules puts arbitrary power into the hands of each official. That’s part of why Americans go through the day looking over their shoulders. Is your paperwork in order?
Most officials are not inclined to abuse their authority, but, even so, the effect of too many precise rules means that regulation is enforced arbitrarily. In studying Illinois nursing home regulation, John Braithwaite found that government inspectors focused on only a small percentage of the rules, and which rules varied from inspector to inspector. Studies of corruption conclude that the best protection is to give responsibility to officials to use common sense when making regulatory judgments. When the spotlight shines on decisions by a particular official, he is less likely to act in ways that call attention to his decisions.
The error of legal philosophy that helped spawn modern bureaucracy was that law can function without human judgment. Law achieves trust, and supports practicality, only when applied with human values and understanding. “The first requirement of a sound body of law,” Oliver Wendell Holmes wrote in The Common Law, “is that it should correspond with the actual feelings and demands of the community.” The way law achieves this is that people are able to draw on norms of fairness and reasonableness at the point of implementation. Otherwise law is brittle, and words of law are parsed for selfish purposes. Legal philosopher Jeremy Waldron puts it this way: “The Rule of Law is, in the end, . . . a human ideal for human institutions, not a magic that somehow absolves us from human rule.”
Remaking Law to Support Human Flourishing
Rigid rules work no better for law than central planning does for an economy. The reason is the same: as Hayek put it, “the knowledge of the particular circumstances of time and place” requires that humans be free to make choices on the spot.
Law is different from the marketplace in that it is enforced by coercive state power, not by the aggregate of “decentralized planning by many separate persons,” in Hayek’s words. Law thus requires mechanisms to guard against arbitrary or unfair enforcement. But protecting against arbitrary legal choices, like other choices, also requires “the knowledge of the particular circumstances of time and place.” Just as “zero tolerance” and mandatory sentencing laws basically mandate unfairness, so too barring the judgment of an environmental official over the scope of review will practically guarantee higher costs to taxpayers and cause environmental damage by creating unnecessary bottlenecks.
Freedom is concentric: Your freedom is dependent on the freedom of choice by people up and down the hierarchy of responsibility. Only if an official is free to decide what’s practical in a particular instance will the citizen be free to be practical in the same situation. If the teacher isn’t free to maintain order in the classroom based on perceptions of who is misbehaving, then the disruption caused by the breakdown of authority will deprive students of their freedom to learn. If the supervisor isn’t free to make personnel decisions based on perceptions of who is doing the job well, the energy and camaraderie of the office culture will dissipate like a deflating balloon, as everyone realizes that performance doesn’t matter. If a judge isn’t free to dismiss a lawsuit when Johnny breaks his leg while horsing around at recess, then schools will ban running, or end recess altogether.
There’s no reason for regulatory frameworks to deny people the ability to adapt to the circumstances before them. Find any governing activity that works—say, a successful public school, or an infrastructure project that got built on time and within budget, or successful agencies such as the Centers for Disease Control—and you will find responsible public employees who take ownership for their daily choices rather than mindlessly complying with rules.
Instead of trying to prune the regulatory jungle, as attempted by every President for the past 40 years, the cure for bureaucratic paralysis is to change the premise underlying it: Stop telling people how to do things correctly; instead make people take responsibility for outcomes. In some areas, for instance in many state licensing laws and in antiquated subsidies from the New Deal era, the solution is to eliminate the programs—in other words, deregulation. With most regulatory programs, however, the cure is to replace current programs with an open framework that gives people room to adapt to changing circumstances.
Creating a principles-based framework of regulation is far simpler, in all respects, than striving to dictate choices in every conceivable setting. Australia in the 1980s replaced a detailed rulebook on nursing homes with 31 general principles—for example, to provide a “homelike environment.” Within a short period, the quality of nursing homes materially improved. One advantage was to liberate operators to focus on the residents, not compliance with the rules. They could readily internalize general goals and principles. Instead of spending their days with noses in rulebooks, operators could be sensitive to the needs of the person in front of them.
Whether an official has transgressed some legal principle should not be a question of objective compliance, but should be decided by the judgment and perceptions of other officials up a hierarchy of authority. A simpler legal framework can provide as many checks and balances as seem prudent and practical, but they all must ultimately rest on human judgment. Using a human hierarchy to oversee decisions also reduces the incidence of flawed decisions that Kahneman and Tversky found with isolated individuals. To guard against unfair firings, for example, Toyota convenes co-workers to get their views.
The resulting uncertainty at the edges of legal application tends to incentivize most people to avoid sharp or selfish practices. A gray area of legality will drive most people to operate near the center, further reinforcing social trust and enhancing social capital. With the Australian nursing homes, for example, the operators, regulators, and other stakeholders had different visions of what constitutes a “homelike environment,” but it was precisely the uncertainty of how those disagreements might get resolved that gave all stakeholders an incentive to strive for reasonable accommodations.
What’s different here is not simply abandoning “one-size-fits-all” rules, but, most radical to the modern legal mind, allowing decisions to be made without objective proof to back them up. As in a market setting, people will make choices based on what they feel is appropriate. How do you prove what’s fair in a particular setting? Or which teachers are ineffective because they bore their students? As in other settings in a free society, people must be free to make judgments about other people, and so on up the line.
Giving people back the freedom to act on their best judgment means abandoning the utopian dream of perfect fairness and uniformity. As in the marketplace, giving people responsibility to meet public goals does not guarantee success. Some people will have bad judgment and bad values. People will disagree, on practically every issue. But trying to preempt that disagreement with dense rulebooks enervates democracy. Conversely, democracy is energized by disagreement; that’s why people get involved. The giant bureaucratic blob, by contrast, is immune to the will of the people, makes sensible choices illegal, and drives alienated voters into the arms of strongmen.
Leaving aside totalitarian systems, our options for an operating philosophy for modern government are either a rigid bureaucratic system that ossifies bad choices, stifles innovation, and is morally obtuse, or an open framework of goals and principles that gives people sufficient room to succeed (or fail) in meeting them. The former engenders distrust, alienation, and a popular demand for authoritarian styles of leadership. The latter, so long as voters are attracted to practical leaders, engenders trust, social capital, and the opportunity to flourish.
The best cure to citizen alienation is citizen ownership. Americans must feel free to make sense of their daily choices, to deal with officials who also can be practical, and to elect leaders who can govern in a way that responds to voter desires and needs. That’s why America should abandon modern bureaucracy and rebuild a governing framework grounded in human responsibility.
See Philip K. Howard, “Two Years, Not Ten Years: Redesigning Infrastructure Approvals,” Common Good (September 2015); and Howard, The Rule of Nobody: Saving America from Dead Laws and Broken Government (W. W. Norton, 2014), pp. 9-13.
William Manning, as quoted in Saul Cornell, The Other Founders: Anti-Federalism & the Dissenting Tradition in America, 1788-1828 (University of North Carolina Press, 1999), p. 229.
See generally, Jerry Z. Muller, The Tyranny of Metrics (Princeton University Press, 2018), which argues that overreliance on metrics skews sensible choices by shifting the focus from outcomes to measurements.
H. M. Levin, “Why Is This So Difficult?” in Educational Entrepreneurship: Realities, Challenges, Possibilities, Frederick Hess, ed. (Harvard Education Press, 2006), pp. 173-74.
John Sweller, “Evolution of Human Cognitive Architecture,” from The Psychology of Learning and Motivation Vol. 43, Brian H. Ross, ed. (Elsevier, 2003), p. 215.
Jeremy Waldron, “The Rule of Law and the Importance of Procedure,” from Getting to the Rule of Law, James E. Fleming, ed. (New York University Press, 2011), p. 25.
The post Bureaucracy vs. Democracy appeared first on The American Interest.
Cleaning Up Ukraine’s Energy Sector
Ukrainians have suffered dearly at the Kremlin’s hand. Russian President Vladimir Putin refuses to release the 24 Ukrainian sailors captured in the Kerch Strait last year, thousands of Ukrainian civilians have died in the War in the Donbass, and 2 million more live under Russian occupation in Crimea.
But the threat to Ukraine also comes from within: If Russia ceased its aggression tomorrow, many of Ukraine’s problems—bureaucratic inefficiency, corruption, and lack of transparency—would persist. These problems are particularly prevalent in the energy sector, which is directly linked to economic growth and national security.
As it enters the fifth year after the Euromaidan Revolution, Ukraine must advance its energy reforms to safeguard its economy and sovereignty. This year’s upcoming presidential and parliamentary elections provide the opportunity for candidates to demonstrate their dedication to this task.
Sustained progress would benefit all Ukrainians: Consumers would enjoy better energy services and lower prices; the domestic energy sector would create high-skilled jobs and boost economic output; and the government would secure new revenue streams that could bolster national priorities such as defense and social services. The alternative is the status quo: unrelenting threats from Moscow and sluggish economic growth at 2 percent, which spells continued stagnation.
After Euromaidan, energy reform became central to the new Ukrainian government’s anti-corruption campaign. This was partly due to International Monetary Fund (IMF) conditionality, but also because of the enduring mismanagement of the energy sector, which had generated pernicious budget deficits, jeopardized energy security, and limited economic potential. The National Anti-Corruption Bureau of Ukraine found that corruption is more prevalent in the energy sector than anywhere else in the Ukrainian economy.
Today, the reform process has stalled. For example, Ukraine moved to close the gap between natural gas prices for households and industrial consumers, reducing subsidies that were a major budgetary drain. However, import prices have increased over the last two years while household prices have stayed the same, crowding out other budgetary priorities like social welfare and defense as the government foots the bill. Compounding this problem, illicit arbitrage—illegally selling low-priced household gas to industrial consumers at higher prices—further runs up unnecessary government spending and begets corruption that the Kremlin can exploit. In the past, out-of-control subsidy spending left Ukraine at Moscow’s mercy: In 1993, Ukraine had to cede part of its Black Sea Fleet to Russia in exchange for energy debt relief.
[Figure: Growing Disparity Between Wholesale Natural Gas Prices for Households and Industrial Consumers, in Ukrainian Hryvnia. Source: Naftogaz]
Because of entrenched elite interests, no policy is in place to prevent the government from fully reintroducing subsidies, despite the current government’s recent promises to the contrary. Former Prime Minister Yulia Tymoshenko, who currently leads polls for the presidency, has already declared she would reinstate subsidies, making it a centerpiece of her campaign.
But Ukrainian politics still leave some room for surprise. Tymoshenko’s lead currently gives her only 20 percent of the vote, leaving room for other candidates to enter the race. Volodymyr Zelenskiy, Ukraine’s most famous comedian, announced his bid in a New Year’s address. Reputable polls see the actor second to Tymoshenko or besting her in a run-off vote. Perhaps tellingly, Zelenskiy is best known for his role on Servant of the People, a popular TV series in which he plays a teacher who accidentally becomes President after a video of his rant about government corruption goes viral. Others with name recognition like Svyatoslav Vakarchuk, former Member of Parliament (MP) and lead vocalist for the popular Ukrainian band Okean Elzy, would have an immediate edge if they entered the race.
Regardless of who ultimately wins, the next Ukrainian President will face the same set of domestic and international pressures as current President Petro Poroshenko: namely, balancing macroeconomic stability with the resurgence of economic populism. Simply put, Ukraine must continue to meet the domestically unpopular conditions of its IMF loan without ceding to populist demands to keep utility bills low.
For Ukraine’s leaders, the dilemma is acute. On the one hand, Ukraine has little choice about meeting the terms of its IMF loan, which enshrines energy reforms including fossil fuel subsidy cuts. Although there are myriad examples of countries that flout aid conditionality, few, if any, can do so without holding strategic importance to a major donor country or significant resource endowments. Ukraine has neither advantage, making the IMF’s threat to refuse aid credible. Kyiv has buckled to its demands in the past.
Even absent the IMF, cutting subsidies would strongly benefit Ukraine. Although subsidies are always politically popular, economists widely regard them as a poor form of social assistance because the wealthiest receive the most benefit. Cutting them instead would encourage domestic energy production, reduce opportunities for corruption, and decrease deficits.
On the other hand, cutting subsidies—however wise and necessary—is a politically toxic move amid economic stagnation. In the years after Euromaidan, utility bills have constituted an increasing share of household expenses for the average Ukrainian: as much as one-third of monthly salaries. For these voters, higher utility bills are directly felt, while appeals to the benefits of macroeconomic stability are vague and unconvincing. Steep, indiscriminate price hikes risk a repeat of the Orange and Euromaidan Revolutions.
A successful Ukraine needs both to continue receiving IMF assistance and to acknowledge the concerns of poorer Ukrainians. Happily, there is a politically viable set of policies that any future Ukrainian President—including Tymoshenko—should adopt to advance reform. In a paper published last month by the Council on Foreign Relations, we recommend that Ukraine do so by depoliticizing gas prices, firmly establishing regulatory independence, and carefully dismantling energy-sector monopolies.
Most importantly, Ukrainian policymakers should cede the authority to price natural gas to an independent regulator. It is well documented that countries that attempt fossil fuel subsidy reform without ceding pricing authority reverse course under political pressure—from Namibia, compelled by its powerful constituency of taxi drivers, to France amid the “Yellow Vests” protests today. In cases where governments did cede pricing authority, such as South Africa in the 1950s and Turkey in the 1990s, subsidy reform has remained intact.
To maintain fiscal stability while preserving social welfare, Ukraine should also improve its targeted subsidy program. Instead of subsidizing gas for all households, Ukraine has been attempting to introduce a targeted system for only those in need. However, population-wide subsidies remain intact, and the current “targeted” system, under which 60 percent of Ukrainians qualify, is hardly better than a universal scheme. Further, there is no unified customer registry to verify whether recipients actually qualify for assistance or even exist. As a result, subsidy payments are made to oligarch-run intermediary firms rather than directly to consumers, making the system easy to exploit.
The next Ukrainian Parliament should follow through on legislation that would transfer payments as cash directly to consumers, decreasing the risk of graft. The government should also create a comprehensive registry of social subsidy recipients to verify payments. Actively maintaining, updating, and auditing this database would help reduce subsidy spending without sacrificing social assistance or losing public credibility.
The new Ukrainian government should also focus on public messaging around the benefits of subsidy reform. These include mitigating corruption and inefficiency, cementing independence from Russian gas, and creating jobs and business opportunities in Ukraine’s energy sector. With a public information campaign and visible action on reform, the next Ukrainian President can head off civic discontent.
Besides eliminating subsidies, an admittedly politically sensitive move, the future President can take other measures to crack down on corruption and inefficiency. For example, the President should strengthen Ukraine’s national energy regulator, which was ostensibly redesigned to align with EU energy legislation. In reality, Ukraine has yet to establish a body to effectively and independently enforce market rules that encourage competition. Ukraine should reinforce the commission by ensuring it is funded by user fees, eliminating direct presidential appointments of commissioners, and taking input from civil society and—for a limited time—the international donor community in selecting commissioners. A firmly independent regulator would help instill confidence in Ukraine’s energy market for foreign and domestic private companies looking to invest.
Another problem is that private companies that do attempt to produce gas in Ukraine face a byzantine licensing system that impedes new participants and strengthens existing players. Simplifying licensing would go a long way to helping firms without an inside track enter the market, raising Ukraine’s energy production. Cautious, transparent privatization of state gas production assets would also open more space for independent producers. All these policies would prove popular among the international donor community and those looking to do business in the country. If the new government demonstrates quick action on reform without abridging welfare, these policies could also earn the support of the average Ukrainian.
Ukraine must ultimately decide its own fate, but the United States should support Kyiv by increasing diplomatic pressure paired with technical assistance.
The U.S. government should elevate energy reform to the top of its diplomatic agenda with Ukraine, acknowledging, as former Secretary of State Tillerson put it, that “it serves no purpose for Ukraine to fight for its body in [the] Donbass if it loses its soul to corruption.” Absent action from the Trump Administration, Congress should proactively extend its targeted sanctions program against Russian oligarchs to include their Ukrainian counterparts, and press the IMF for more stringent compliance conditions on loan disbursements.
To complement this pressure, the United States should provide technical assistance. Specifically, the Federal Energy Regulatory Commission (FERC), the Department of the Interior, and the State Department should work with their Ukrainian partners to design independent regulatory agencies, formulate simplified licensing procedures, develop unconventional gas resources, collect royalty revenues, and combat corruption. Assistance on energy efficiency programs would be particularly popular with any future Ukrainian President.
The United States would benefit from a prosperous, energy-secure Ukraine, capable of standing up to Russian aggression. Supporting Ukraine’s energy reforms is a low-risk, high-reward strategy for Washington to counter Moscow’s influence at NATO’s border without overcommitting to military options that antagonize Russia.
But any future Ukrainian policymaker should not expect U.S. support without demonstrated progress. It is important for them to acknowledge that keeping gas prices low allowed corruption to spread and weakened Ukraine’s economy in the first place. Nevertheless, the future Ukrainian President can advance a successful energy reform agenda that satisfies both international and domestic needs. With a push in the right direction from its leadership and U.S. allies, Ukraine can put its energy reforms back on track.
The post Cleaning Up Ukraine’s Energy Sector appeared first on The American Interest.
January 30, 2019
As the EU Loses Steam, NATO Gains In Importance
Many in Europe seem to fret over the solidity of the American commitment to their continent. Every personnel move in the current administration, every tweet and interview, and every rumor is analyzed for signs of a weakening of U.S. security guarantees, and thus of NATO. In itself, such daily agonizing in European capitals is a symptom of a deep insecurity born out of weakness, an insecurity that is not easily assuaged even by facts on the ground—the continued and increased U.S. military presence in Europe, the money and resources expended by Washington on Europe’s security, the discussions on the need to adjust the basing structure on the continent to reflect current security needs, the willingness of Washington to impose costs on the main threats to Europe (Russia, Iran, and increasingly, China). It seems that regardless of what the U.S. does, it will have to constantly prove its devotion to Europe’s security.
European fears of a dramatic U.S. retreat from Europe are not justified. The U.S. is not abandoning Europe and its allies. And NATO is here to stay with the United States at its foundation.
By origins, geopolitical necessity, and design, the United States remains a European power. These three features are deeply ingrained in American grand strategy and, arguably, some have been strengthened by the current administration in Washington.
The origins: the U.S. is a Western power, whose domestic regime, culture, and traditions arise out of Europe. Trade, security concerns, and demographic changes may pull the U.S. toward the Pacific and Latin America, but the sources of American political strength come not from those regions but from Europe. This is why the U.S. is firmly part of a wider Western civilization and has defended it with great expense of blood and treasure. And, unlike in the Obama Administration, there is now a stronger awareness of the necessity to preserve the West as a civilization, threatened by external foes and by internal complacency and doubts. President Trump made this renewed purpose clear in his 2017 Warsaw speech and the National Security Strategy reinforced the idea of being confident in the principles underwriting the West.
The geopolitical necessity: U.S. security does not begin at the coasts of California or New Jersey; it arises from an equilibrium of power in Eurasia. A Europe that is either under the sway of a hostile power or mired in internal squabbles is not in the interest of the United States. As power disequilibria in Europe become more pronounced—with Brexit, the growth of German influence, and squabbles between Rome and Paris—the need for the U.S. to participate in these continental dynamics only increases. As Yale University academic Nicholas Spykman observed in the 1940s, American “isolation was valid when Europe was in balance, as it was valid for Britain when the continent was balanced.” When Europe is not in balance, there is a geopolitical pressure to avoid an isolationism that locks the U.S. into its own hemisphere—a tempting but incredibly costly and dangerous approach.
Moreover, the Trump Administration has clearly recognized great power competition as a key feature of the security environment. It has codified this view in key documents (the National Security Strategy, the National Defense Strategy), setting the course for the various bureaucracies in Washington: these documents institutionalize the need to compete with rivals. This has particular relevance for NATO because the alliance is crucial for the ability of the U.S. to compete with (that is, to deter and, if necessary, defeat) great power rivals in and around Europe.
In fact, NATO was probably at a greater existential risk when the U.S. considered combating terrorism and nation-building as the principal objectives of its foreign policy. With protracted great power rivalries on the horizon, NATO is not a diplomatic luxury but a geopolitical necessity.
Finally, the design: NATO is the main link between the United States and Europe. In the years to come, it is likely to be the principal, if not only, institutional arrangement that supports the key American interest of a stable Europe. Historically, the U.S. implemented this strategic objective by encouraging two parallel paths: an integration of European states that would instill political stability and generate economic growth, combined with a defense treaty and military cooperation geared to deter foreign enemies and to consolidate allied unity. The former was institutionalized in the European Union, the latter in NATO. As the EU loses its capacity to be an ordering force in Europe, the importance of NATO increases.
NATO is the only tool at U.S. disposal to keep Europe safe and stable; there are no alternative means to pursue this core American objective.
But the continuity in U.S. objectives—and thus, the continuity of American support for NATO—should not lead European allies into complacency. Spending more on defense is one of the necessary steps, as President Trump has reminded European allies over and over again, making them uncomfortable. Adjusting the basing structure in Europe to reflect the aggressive posture of a Russia hell-bent on its neo-imperial pursuits is another step that has to happen.
More broadly—and most importantly at this moment—European allies, at least some of them, have to stop pretending that Europe itself can offer an alternative to NATO. French and German leaders in particular have voiced misguided aspirations to establish common EU forces. This is a recurring desire that never translates into reality because of deeply different security interests: France is interested in the Sahel and North African region (clashing in the process with Italian interests); Germany has deep misgivings about sending its own soldiers abroad (and never in combat operations); other Europeans are either uninterested in such Europe-only military projects or, as in the case of Poland and the Baltic states, their main, if not only, preoccupation is Russia. It is a dangerous illusion, therefore, to believe that Europe can have a continent-wide military alliance without the United States in it.
The U.S. is not leaving NATO. On the contrary, it is likely that in the years to come, U.S. foreign policy toward Europe will be grounded in NATO even more than it has been in the past. But for the alliance to be effective, Europe has to do its part too, starting with a clear recognition that some European states should not undermine the strength of NATO by pursuing implausible projects of military cooperation devoid of U.S. presence.
The post As the EU Loses Steam, NATO Gains In Importance appeared first on The American Interest.
How Woodrow Wilson Lost the Peace
In every age, some of the world’s leading thinkers have argued that the trajectory of humanity is a steady, even inevitable, advance toward ever-greater prosperity, peace, and moral enlightenment. In reality, the undeniable progress that humanity has made over the millennia has frequently been disrupted, even reversed, by catastrophe and collapse. In our competitive and anarchic world, the relationships between states and peoples have repeatedly been punctured by horrific breakdowns of peace and security. Societies are upended and even destroyed; human suffering unfolds on an epic scale; the world’s most advanced nations descend into depravity; the accumulated achievements of generations crumble amid shocking violence. From the Peloponnesian War in the fifth century B.C.E. to the world wars of the 20th century, the history of international affairs has often seemed a monument to tragedy.
If tragedy is a curse for those who endure it, it can be a blessing for those who draw strength and wisdom from it. The memory of tragedy has often impelled the building of international orders that have succeeded—if only for a time—in holding the forces of upheaval at bay. In the wake of great geopolitical crackups like the Thirty Years’ War and the wars of the French Revolution, leading statesmen have found the foresight to create new systems of rules to regulate the relationships between states, and—just as critically—to erect the stable balances of power that sustain them. Driven by painful experience, they have accepted the geopolitical hardships necessary to avoid the far greater costs of a return to upheaval. Many of the great diplomatic achievements of the modern era—the Peace of Westphalia, the Concert of Europe, and others—have rested on such an understanding. Ralph Waldo Emerson captured the basic ethos: “Great men, great nations, have not been boasters and buffoons, but perceivers of the terror of life, and have manned themselves to face it.”
There is, however, another kind of response to tragedy. If knowledge of tragedy can have an invigorating effect on those willing to fully profit from its lessons, it can also be enervating, even crippling to effective statecraft. After all, great efforts and prolonged exertions can ultimately lead to exhaustion and cause nations to flinch from the necessary application of power. Too much experience with a tragic world can tempt leaders and citizens to seek refuge in withdrawal, appeasement, or utopianism. Such human impulses are understandable enough after a period of trauma. Yet when they morph into an unwillingness to defend an existing order under assault, the results can themselves be tragic.
In this regard, the aftermath of World War I stands as the cautionary example. That conflict caused a greater spasm of violence than any previous upheaval, and inspired a near universal conviction that such carnage must never happen again. Yet the years thereafter did not see an effective order-building project in the mold of Westphalia or the Concert of Europe. Rather, they saw a well-meaning but quixotic attempt to escape the harsh constraints of power politics, followed by a catastrophic paralysis in the face of rising dangers.
The embodiment of the first tendency was Woodrow Wilson. Wilson was hardly the only person who believed that World War I must be “the war to end all wars”: The rapturous public reception he received in Europe and elsewhere after the war testifies to the widespread popularity of his ideas. But he was surely its most eloquent advocate. Wilson had no lack of appreciation for tragedy, and his vision for the postwar world was deeply rooted in his revulsion at the great horror that had befallen humanity in this “most terrible and disastrous of all wars.” His solution, breathtaking in its ambition, was to create a fundamentally new world order that would allow humanity to break free of the depravities that, he believed, had ushered in such a cataclysm in the first place.
In his Fourteen Points speech in January 1918, Wilson promoted what we would now call a liberal international order—one that sought to address the perceived causes of instability and aggression by promoting national self-determination and disarmament, enshrining a liberal trading system and freedom of the seas, strengthening international law, and creating a global organization that would arbitrate grievances and thwart conquest. Most importantly, Wilson shunned the idea that statecraft should consist of the search for equilibrium and the pursuit of national self-interest, arguing instead that the world’s nations must stand on moral principle and practice collective security. “There must be, not a balance of power, but a community of power,” he told the Senate in 1917; “not organized rivalries, but an organized common peace.”
This “community of power” sounded, at least superficially, somewhat similar to what had emerged after Westphalia and Vienna. It also featured an unprecedented leadership role for the United States not just as the conscience of humanity, but as a coordinator and convener of collective action. Crucially, however, the primary currency of power in Wilson’s new order would shift from military force to reason and morality. “We are depending primarily and chiefly upon one great force, and that is the moral force of the public opinion of the world,” Wilson informed his fellow leaders at the Versailles peace conference. If coercion was required, it would be undertaken on behalf of humanity as a whole through the unanimous action of an international community. There could be no going back, no return to the old ways of secret diplomacy, shifting coalitions, and cold-eyed geopolitical competition. For Wilson, a world in which common rules could be identified and accepted, international moral opinion could restrain threats, and nations could cooperate on the basis of the global good was the prerequisite for escaping future tragedies. Once this true peace was achieved, he promised, “Men in khaki will not have to cross the seas again.”
At Versailles, however, Wilson’s desire for a transformative peace collided both with his own animus against German militarism and with the desires of America’s European allies—namely France—for a more punitive settlement. For French Prime Minister Georges Clemenceau, the cause of World War I was not the existence of the balance of power, but its breakdown under pressure from a rising Germany. The solution was to reduce German power and aggressively enforce that outcome over time. “If we have no means of imposing our will,” he warned, “everything will slip away bit by bit.”
The resulting settlement was an awkward hybrid. The Treaty of Versailles saddled Germany with the blame for World War I, while also seeking to contain future German militarism through restrictive measures. The treaty adjusted territorial boundaries in Europe in an attempt to create geopolitical buffers around Germany, authorized the allied occupation of the Rhineland for up to 15 years, and stripped Germany of its overseas possessions. It called for strict curbs on Germany’s armed forces and required the German government to pay reparations to the Allies.
Yet the treaty was not as harsh as sometimes believed, because it neither permanently dismembered Germany nor permanently crushed its economic capacity. The treaty, moreover, aimed to do much more than just punish Germany, because it reflected Wilson’s spirit and many of his guiding ideas. Among other things, the treaty provided for an unprecedented degree of national self-determination within Europe; it essentially codified the destruction of four European empires by blessing the emergence of smaller independent states. Most notably, the treaty created the League of Nations, a body that built on earlier precedents and ideas yet nonetheless represented a revolutionary effort to forge an international community dedicated to confronting aggressors and preserving the peace. “The treaty constitutes nothing less than a world settlement,” Wilson declared upon his return to America in July 1919. It marked a visionary effort “to get away from the bad influences, the illegitimate purposes, the demoralizing ambitions, the international counsels and expedients out of which the sinister designs of Germany had sprung as a natural growth.” The trouble, however, was that the settlement Wilson did so much to shape contained the seeds of future upheavals, precisely because it—like the President himself—was not attentive enough to the tragic geopolitics he aimed to escape.
The settlement left Germany deeply embittered but mostly intact and therefore only temporarily constrained—a combination that practically ensured future revisionism. In fact, Germany’s geopolitical position had arguably been enhanced by the end of the war. Before 1914, Germany had been surrounded by great powers: the Russians, the Austro-Hungarians, and the French. By 1919, the Communist Revolution in Russia and the breakup of the Austro-Hungarian Empire had left an exhausted France as Germany’s only formidable neighbor. The triumph of self-determination, meanwhile, was simply encouraging German revanchism: first, by surrounding Germany with weak states in the east; and second, by giving its future leaders a pretext for seeking to assert control over foreign lands—in Austria, Czechoslovakia, and Poland—where ethnic Germans were numerous.
For its part, the League of Nations was an indisputably progressive effort to safeguard the peace, but it also suffered from critical flaws. In particular, it left the two most powerful European countries—Germany and the Soviet Union—on the outside of a settlement they had great incentive to disrupt. Moreover, its collective security role hinged on the assumption that its leading members could act unanimously in the face of aggression, a Wilsonian conceit that would prove impossible to realize. Two earlier postwar settlements—the Peace of Westphalia and the Concert of Europe—had proven comparatively durable because they rested on both a commitment to shared values and a stable geopolitical foundation. The post–World War I settlement, by contrast, was biased toward revanchism and instability. “This is not a peace,” Marshal Ferdinand Foch, the Supreme Allied Commander during World War I, declared. “It is an armistice for 20 years.” When the U.S. Senate declined to ratify American participation in the League, in part because of Wilson’s obstinate refusal to accept any conditions on U.S. involvement, the postwar system became more precarious still.
That rejection was the product of another type of American escapism in the interwar era—the tendency to withdraw at a time when there appeared to be no immediate threats to U.S. security. Domestic opposition to the League and other parts of the Versailles settlement arose from a variety of concerns: that they would undermine U.S. sovereignty, usurp Congress’s constitutional prerogatives with respect to declaring war, and abrogate the tradition of strategic non-entanglement in Europe. Underlying all this, however, was a sense of strategic complacency brought on by the fact that, with Germany’s defeat, geopolitical dangers to America seemed to have retreated far over the horizon. Had Wilson been more of a political realist, he might nonetheless have salvaged a compromise with the treaty’s more moderate opponents and thereby preserved a strong, if modified, American leadership role in the order he sought to create. In the event, however, the combination of domestic reluctance and Wilsonian intransigence ensured that the Senate eventually rejected American participation in the League. The United States would stay deeply involved economically in Europe during the 1920s, but it never committed strategically either to the community of power Wilson envisioned or to a more traditional balance of power that might have better underwritten the peace.
These escapist tendencies persisted into the interwar era, with mostly pernicious results. Wilson’s League may have been defeated at home, but his core ideas remained influential both in the United States and abroad. Indeed, leading thinkers often found Wilson’s thesis more persuasive than Clemenceau’s—they argued that the problem was not that the balance of power had collapsed but that such a mechanism had ever been relied upon. They therefore determined to set aside the traditional instruments of statecraft in hopes that moral pressure and communal adherence to liberal principles would make war a thing of the past. This movement was exemplified by the myriad disarmament conferences that followed World War I, and by the signing of the Kellogg-Briand Pact of 1928, which outlawed war as an instrument of national policy. “This should be a day of rejoicing among the nations of the world,” the Washington Star opined after the conclusion of that agreement. War, it appeared, was being banished into illegality.
George Kennan would later describe this period of American statecraft as “utopian in its expectations, legalistic in its concept of methodology, moralistic in the demands it seemed to place on others, and self-righteous in the degree of high-mindedness and rectitude it imputed to ourselves.” War was no longer to be prevented through deterrence, alliances, and the willingness to use force, but through the willingness to abjure precisely these measures. Other Americans, disillusioned by the failure of the postwar settlement to live up to Wilson’s grand ambitions, or simply convinced that the geopolitical sky would remain cloudless for years to come, were happy to “return to normalcy” and steer clear of European security matters. All of these impulses—idealism, cynicism, and disengagement—were understandable responses to World War I. All, unfortunately, did more to weaken than fortify the constraints on future aggression.
The same could be said of another response to the tragedy of World War I—the democratic powers’ unwillingness to forcefully resist growing challenges to the settlement they had created. During the 1920s, memories of the last war were strong, but the dangers of the next one still seemed largely hypothetical. Over the course of the 1930s, the international landscape darkened. The world sank into depression; protectionism ran rampant as international cooperation collapsed and nations pursued beggar-thy-neighbor policies. More ominous still, aggressive authoritarianism returned in Europe and Asia alike.
Radical ideologies flourished in some of the most powerful states on earth; the fascist nations armed themselves and used violence and coercion to alter the status quo from Manchuria to Central Europe. One by one the advances accumulated; slowly but unmistakably the geopolitical balance shifted against the democratic powers. Despite all this, the democracies often seemed frozen, unable to stir themselves to multilateral action or an effective response. The United States remained mostly geopolitically absent as the situation in Europe progressively worsened; the other Western democracies mostly sought to avoid confrontation until 1939, after Hitler had built up great strength and momentum. As Joseph Goebbels, Hitler’s propaganda chief, later remarked, “They let us through the danger zone. . . . They left us alone and let us slip through the risky zone and we were able to sail around all dangerous reefs. And when we were done and well armed, better than they, then they started the war.”
Far from moving aggressively to thwart the revisionist powers, the democracies often handcuffed themselves strategically. The French adopted a military system that made it nearly impossible to use force absent general mobilization; that requirement, in turn, made even the limited use of force nearly inconceivable in the 1930s. The British slashed real defense expenditures to pay for the rising costs of social services. In absolute terms, the money spent on the army and navy hardly increased between 1913 and 1932, despite the vast diminution of purchasing power caused by two decades’ worth of inflation. Into the early 1930s, defense budgets reflected the assumption that no major conflict would occur for at least a decade—a rule that gave London tremendous incentive to avoid such a confrontation.
The interwar statesmen were not cowards or fools. There were many reasons, all seemingly plausible at the time, why the democracies adopted a posture that appears so disastrously naïve and misguided in retrospect. Collective action was hard to organize amid divergent national interests and the economic rivalries caused by depression and protectionism. Feelings of guilt that the postwar peace had been too harsh discouraged confrontation, while budgetary pressures and desires for normalcy inhibited rearmament. There persisted a strong Enlightenment belief in the power of dialogue and diplomacy to resolve disagreements. Even in the late 1930s, British Prime Minister Neville Chamberlain would say that “if we could only sit down at a table with the Germans and run through all their complaints and claims with a pencil, this would greatly relieve all tensions.” And, as is often the case in international politics, citizens and leaders found it difficult to understand how crises occurring in faraway places, or involving seemingly abstract principles such as non-aggression, really mattered to their own security.
Yet the most fundamental factor was simply that all of the democratic powers were deeply scarred by memories of what had come before and seized with fear that another great conflict might occur. Upon returning from Versailles in 1919, Walter Lippmann had concluded that “we seem to be the most frightened lot of victors that the world ever saw.” Throughout the interwar period, the haunting memory of World War I hung over the Western powers, menacing them with visions of new destruction should conflict return.
Central to these fears were the jaded interpretations of World War I that increasingly took hold in the 1920s and 1930s. In the United States, historical revisionism took the form of accusations that the “merchants of death”—the arms industry and the financial sector—had manipulated America into joining a costly war that did not serve its national interests. By 1937, a full 70 percent of Americans polled believed that entering the war had been a mistake. In Europe, a generation of disillusioned observers argued that the great nations of the world had stumbled into a catastrophic conflict that none of them had wanted or fully anticipated, and from which none of them had benefited. As David Lloyd George wrote in his Memoirs, “The nations slithered over the brink into the boiling cauldron of war without any trace of apprehension or dismay.” According to this interpretation, a willingness to act boldly in the face of crisis led not to stability and deterrence but to a deadly escalatory spiral. The implication was that the greatest risk of another awful conflagration lay in overreacting rather than under-reacting to threats.
Indeed, World War I had been so searing an experience—even for the victors—that it convinced many thinkers and statesmen that nothing could be worse than another major struggle. Stanley Baldwin, three times Prime Minister of England between 1923 and 1937, thought that the war had demonstrated “how thin is the crust of civilisation on which this generation is walking,” and he frequently declared that another conflict would plunge the world into an unrecoverable abyss. This attitude permeated Western society and politics in the years preceding World War II.
It was evident in the infamous resolution of the Oxford Union in 1933 that its members would fight for neither king nor country, and in the profusion of antiwar literature that emerged on both sides of the Atlantic in the 1920s and 1930s. “They wrote in the old days that it is sweet and fitting to die for one’s country,” Ernest Hemingway wrote in 1935. In modern war, however, “You will die like a dog for no good reason.” It was evident in the series of Neutrality Acts passed by the U.S. Congress out of conviction that the greatest danger to America was not passivity but entanglement in another European war. It was evident in France’s reluctance to use or even threaten force against Hitler when his troops reoccupied the Rhineland in 1936, despite the extreme weakness of Berlin’s position at that time.
Finally, it was evident in the crippling fear that the result of another war would be to lose another generation of soldiers in the fields of France and a great mass of civilians to indiscriminate terror attacks from the air. British Foreign Secretary Lord Halifax put the basic attitude bluntly in explaining the government’s reluctance to push Germany too hard, stating that “he could not feel we were justified in embarking on an action which would result in such untold suffering.” Or as Neville Chamberlain stated, more infamously, at the time of the Munich crisis, “How horrible, fantastic, incredible it is that we should be digging trenches and trying on gas masks here because of a quarrel in a faraway country between people of whom we know nothing.” Tragedy, for the interwar generation, was not a source of resolve in the face of danger. It was an inducement to an inaction that contributed, in its turn, to still greater horrors to come.
The great order-building achievements of the modern era have flowed from the fact that leading powers were able to turn an acquaintance with tragedy into the mixture of diplomatic creativity and strategic determination necessary to hold dangerous forces at bay. The great failure of the interwar period was that the democracies were too often paralyzed by the past. Donald Kagan concluded his sweeping book On the Origins of War and the Preservation of Peace with the declaration that “a persistent and repeated error through the ages has been the failure to understand that the preservation of peace requires active effort, planning, the expenditure of resources, and sacrifices, just as war does.” This is a lesson that too many in the interwar era forgot in their efforts to escape, rather than confront, the tragic patterns of global politics. In doing so, however, they helped ensure that their post–World War II successors would not make the same mistake.
The post How Woodrow Wilson Lost the Peace appeared first on The American Interest.
January 29, 2019
Witnessing Putin’s Rise
After 19 years in power, Vladimir Putin’s reign can seem in retrospect an inevitability. Yet when Boris Yeltsin made his surprise announcement appointing Putin acting President on December 31, 1999, few Russians knew anything about the former KGB officer, and fewer still anticipated the scope of the crackdown to come. Ever since, debate has raged over who is most responsible for Putin’s rise, and whether his rule was the natural continuation of the Yeltsin era or a sharp break from it.
Entering the fray now is acclaimed filmmaker Vitaly Mansky, whose new documentary Putin’s Witnesses recently debuted on RFE/RL’s Current Time network. Mansky, a former filmmaker for Russian state television who now lives in exile, has assembled a cinematic time capsule that pairs rare archival footage of the Yeltsin-Putin transition with his own present-day voiceover, reflecting on events he both witnessed and participated in.
We sat down with the filmmaker when he brought the film to Washington, discussing its origins, the debates it has re-ignited in Russia about Putin’s rise, and why Mansky intends it as both a personal repentance and a cautionary tale.
Sean Keeley: Putin’s Witnesses is a chronicle of “Operation Successor,” the process by which Vladimir Putin was installed as Russia’s leader in the final days of Boris Yeltsin’s tenure, and the early days of his rule. Nearly two decades have passed since Yeltsin’s surprise resignation. Why revisit the subject now?
Vitaly Mansky: I was not interested in the topic until 2012, because in general I think that before 2012 Putin acted more or less within the law. I know not everyone agrees with that, but nevertheless he served two presidential terms and ceded power to Medvedev, and until the cynical statement the two made—that it had all been agreed upon in advance—I didn’t give much thought to the events of 1999. But when he returned for his so-called third term, I started coming back to those events that I had witnessed, and of course the subsequent events left me no choice about whether to make this film. And after the events of 2014 in particular, this dimension of the film became clear to me: the necessity of my own repentance, and an appeal to society to reckon with its own participation, or non-participation, in those events.
Karina Orlova: If I understand correctly, then, before the infamous rokirovka [“castling,” the Putin-Medvedev switch – ed.] in 2011, you, like many of us, hoped that Putin wouldn’t return for a third term? Some now say that it was all predetermined from the beginning.
VM: I thought that Medvedev would stay for at least two terms, and that things would somehow resolve themselves after that. I didn’t idealize Putin, and to me everything was clear by 2003, but I still didn’t perceive Putin as eternal. I was personally affected by the amendment of the Russian Constitution that extended the presidential term from four to six years, and by the rokirovka and the way it was carried out. It was offensive to everyone.
KO: Is it fair to say that although the Moscow mass protests of 2011-2012 were provoked by fraud in the 2011 parliamentary elections, the real reason was Putin’s comeback?
VM: Of course. I don’t see those protests as anything else. But my pessimism is largely based on the outcome of that same Bolotnaya opposition: it turned out to be not so powerful and quite short-lived, and, worst of all for me, it was the last large-scale resistance. The total absence of public resistance has made me the ultimate pessimist. It also made me leave Russia in 2014. Of course, the events in Ukraine were the primary reason, but the public’s lack of will stripped me of any chance for optimism.
SK: Let’s return to your film. This movie is composed almost entirely of archival footage filmed from 1999 to 2001, affording viewers a rare, intimate glimpse into the lives of Putin and those who facilitated his rise. How and why were you allowed to capture such footage? What was your original assignment for Russian television at the time?
VM: It might be hard to believe now, but the situation was as follows. I worked as the head of documentaries at the state TV channel Rossiya. In 1999 I was making a film about Mikhail Gorbachev. Yeltsin was still in power and Gorbachev was on the blacklist, so making a film about Gorbachev happen on a state-owned TV channel was quite a hard task. And when I started production on that film, the channel made a political decision to also produce a film about Yeltsin. I heard the “I’m tired, I’m leaving” speech along with the rest of the country; I didn’t know about the resignation in advance. And on January 2, 2000, even though it was just the beginning of the long national New Year holiday, and without seeking approval from the channel’s CEO, I started production on a film about Putin, who had been announced as Acting President of Russia.
The film was supposed to answer the question “Who is Mister Putin?” for a Russian audience. I didn’t plan to film Putin himself. I tasked a production team in Saint Petersburg with finding and filming the testimonies of people who had lived in the same neighborhood as Putin, studied in the same classrooms, gone to the same gyms. Among the first footage I received was a very long and emotional interview with Vera Gurevich, Putin’s German teacher, which made it clear that she had had a serious influence on him. It was also clear that she was in a tough economic situation at the time. All of that led me to pass the videotapes of the interview to Putin. I acted out of purely humane motives. A couple of days later the CEO of the Rossiya channel let me know that I had been invited to his office at the Kremlin, where Putin welcomed us. He asked us who else we were filming, and other details.
That is when I first met Putin. Strictly speaking, he invited me to a meeting. What I’d like to underline here is that a film about the Acting President was launched without any prior arrangement with the Administration. This is a telling detail about how things were in 1999.
KO: And how did you get access to Boris Yeltsin and his daughter Tatiana? Why were you allowed to film them?
VM: I knew Tatiana a little from when I worked for the REN-TV channel. I was known through my work; I had a certain social status. And basically, they discussed who would direct the film. Essentially, our communication started informally, without protocols. It might seem strange, and I recall some of my colleagues asking me while I was filming whether I was scared. But at the time it was clear. And this is a general principle of my work: I treat the participants of my films very informally, whoever they might be: homeless people, babushkas from a village, the Dalai Lama, or whoever.
I think, too, that this was part of their doctrine at that time. Yeltsin had been filmed by many Russian directors before me, and before that Valentin Yumashev (his PR advisor, and later son-in-law – ed.) had carefully crafted Yeltsin’s official public image. That is actually how Yumashev emerged. He was a journalist at the Komsomolskaya Pravda newspaper, and he invented a Yeltsin who for some reason took a tram to get to a medical center—never mind that the tram ran only in the opposite direction, so the image was absolutely fake, and Muscovites know perfectly well that there is no such tram route. But “Yeltsin on a tram” came across as a very down-to-earth guy who shared the burdens of the people.
Of course, Operation Successor was engineered mainly by Yumashev. Putin was his creature. So I think it was Yumashev who decided that a documentary picturing Yeltsin in an informal setting would be better than an old-fashioned documentary. But all of that is my conjecture, because I never discussed it directly with anyone.
SK: Speaking of Yeltsin, one of the film’s most memorable scenes unfolds in his private quarters on election night, as he watches Gorbachev comment on TV. You say in voiceover that Yeltsin was surprised to see Gorbachev because Yeltsin “knew perfectly well what happens to ousted leaders in Russia.” Soon after, Yeltsin finds himself on the outs, when he awaits a phone call from President Putin that never comes. Did you have a sense even then, on election night, that Yeltsin worried he would be quickly forgotten or repudiated like Gorbachev was?
VM: No, I’d say that Yeltsin didn’t understand anything at that time. And to be precise, the day after the election Putin, accompanied by TV cameras, came to him with flowers and followed protocol completely. Moreover, had Putin known that Yeltsin was being filmed while awaiting his call, he would undoubtedly have phoned him.
But it’s not so much a question about Yeltsin as about Putin. I’m almost certain that by the end of Putin’s first year in power Yeltsin, if not disillusioned, did have questions about Putin, at least judging from what I witnessed while filming in their apartment. There’s a shot in the film where Yeltsin is watching Putin’s first address to the nation on TV, and the camera catches Tatiana looking not at the TV but at her father, watching his reaction. This silent scene speaks volumes about the nerves in the family at that time. We know that Yeltsin learned Putin’s name from Valentin, through Tatiana. That’s why, after the election results came out, Tatiana made the first phone call to Valentin, to connect her father with him.
KO: And what doubts or questions about Putin did Yeltsin have at that time? It was probably not concern that Russia was straying from its democratic path. What did Yeltsin care about? Preserving the wealth of his family and clan?
VM: I think that Yeltsin expected Putin to be more manageable. I think they all thought they had created a puppet. And Yeltsin had certain principles; he had his basic idea that he should be consulted on some issues, that his opinion should count. It becomes clear from my short dialogue with Putin about reinstating the old Soviet anthem that those decisions were made without Yeltsin, and that this was absolutely not the Family’s plan. Of course, the written agreements were formally upheld. And indeed, until the end of Putin’s first term the government was not changed; the Prime Minister, Mikhail Kasyanov, was the Family’s man.
KO: You mentioned that they thought Putin would be their puppet. And here is my next logical question: Where is Boris Berezovsky in your film? He is nowhere to be seen: a person who brought Putin to power, created the Unity political party, and won the parliamentary election in 1999 that made Putin’s win possible.
VM: That’s not true, not true.
KO: Why?
VM: Because Berezovsky was kept in the dark, and he was tricked. In fact, even before Putin was announced as acting President, he was out of the game.
KO: But the Unity Party, the election to the Duma?
VM: Of course, but that was a year before.
KO: But could Putin have won otherwise?
VM: The thing is, everyone saw that Berezovsky was aspiring to too big a role, and nobody wanted to grant it to him. I was not present when all those talks took place, but I saw Berezovsky’s reflection in various situations I witnessed, and it was clear that he was out of the game. The timeline is laid out very nicely in Petr Aven’s book about Berezovsky. He was basically convinced that everything was tip-top, he was nudged into taking a trip with his mistress to a sea resort, and when he got back he was already a nobody.
KO: You are talking about the book by Alfa Group’s co-owner Petr Aven, The Time of Berezovsky, which diminishes the role of Berezovsky in politics in 1999 and 2000. The book has been criticized for its deliberate and unfair erasure of Berezovsky from those events, mainly by Echo of Moscow chief editor Alexey Venediktov and by Sergey Parkhomenko, a journalist, publisher, and senior advisor to the Kennan Institute.
Why is there now such a passion to rewrite the role of Berezovsky? Why is it that people suddenly claim that Berezovsky was a nobody?
VM: First of all, I am not rewriting the role of Berezovsky, because I never wrote it in the first place. But what I saw, what I felt at that moment, was that Berezovsky was out of the game. Basically, Berezovsky’s person in Putin’s campaign was Ksenia Ponomareva [the campaign’s deputy chief and former CEO of a TV channel owned by Berezovsky – ed.]. She was the only person whom Berezovsky could rely on, and the first person to be thrown out of Putin’s group. Others left quietly over a long period of time. Ksenia was gone by the summer of 2000.
KO: I understand that your film was conceived from the start as an assemblage of old footage, meaning nothing could have been added later.
VM: Foreseeing your question, I’ll say that of course I had the idea of talking to the participants of those events today. But it is completely obvious that no one, even if they had agreed to talk, would have given me more than 10 percent truth and 90 percent personal interpretation. By the way, those who criticize me blame me for passing judgment retroactively, from today’s vantage point. But if you listen to my voiceover, there is no analysis at all, except for some biographical facts about the fate of those who are in the film and my final words, a short monologue. I do not give any opinions. This is an unemotional account of the events of that time that I witnessed.
KO: You just said that no one would have told you the truth if you were to interview them today. Well, then, Petr Aven’s book must be a pack of lies, since he interviews exactly the same people we see in your film, and his book is a series of interviews.
VM: They didn’t tell the truth, but they established the timeline of events. And those are two different things, truth and timeline. It’s not even so much what they say as what the author adds: the additional data he provides about the events described. We know on what day Berezovsky wrote an open letter to Putin in the pages of his own Kommersant newspaper, where he addresses Putin as “Volodya,” and so on. These are all facts; their descriptions of the facts are interpretation. And the interpretations they provide are, let’s put it this way, dodgy.
SK: There is a scene in the film where you note all the members of Putin’s team on election night who would later go into opposition or exile: Gleb Pavlovsky, Mikhail Lesin, Ksenia Ponomareva, Mikhail Kasyanov. The implication seems to be that Putin was surrounded by liberals, or perhaps democrats, who only later saw his true colors and grew disillusioned. Was that your impression? What would you say to critics who argue that these people were never liberals in the first place?
VM: First of all, we need to define our terms. Can we say that Anatoly Chubais [the Yeltsin-era businessman and official in charge of privatization – ed.] is, was, and will be a liberal democrat? Well, he was more liberal than others. And certainly, in the group of people who orchestrated the successor operation there was no Ozero Cooperative, no almighty Igor Sechin [Chairman of Rosneft and leader of the Kremlin’s siloviki faction – ed.]. There were none of the people who are the faces of Putin’s elite today. And the people who worked for Putin then were in one way or another members of Yeltsin’s team. Can we consider Yeltsin a liberal? In the context of the whole civilized world, of course not. But for Russia he was more liberal than his main rival, Zyuganov [the Communist Party candidate for President in 1996 – ed.].
KO: You single out the Ozero Cooperative [a group associated with Vladimir Putin’s inner circle – ed.] in a peculiar way, and seem to distinguish between them and the so-called Yeltsin oligarchs who brought Putin to power. Do you really think that there is a difference between, on the one hand, “Putin oligarchs” like Igor Sechin, the Kovalchuks, and the Rotenbergs, and on the other, “Yeltsin oligarchs” like Abramovich, Fridman, Aven, and Deripaska? I have argued that they are all cut from the same cloth.
VM: Well, actually, I think there is a big difference. I’ve known some of them, though not Deripaska. But Fridman is no Sechin, that’s for sure.
KO: He is not, but that did not prevent him from facilitating a sketchy deal between TNK-BP and Rosneft, allegedly with Putin’s personal approval, in which Fridman and company mysteriously received up to $8 billion from the Russian budget. How are these people any different from the almighty Igor Sechin?
VM: They are more liberal. And therefore less horrible. Do you want me to say that all of them are scoundrels? Absolutely, to a certain extent everyone is a scoundrel, but this is how the system has been constructed. Because Russia did not have time to build up a civil society that could influence the country’s political climate. In the absence of civil society, the political climate shapes itself at random, in whatever direction suits it best. And nothing constrained this development in Russia; there were no restraining institutions in the state. That is why, by the end of his second term, Yeltsin had transformed into anything but a liberal President.
And when in 1996 we all agreed that we would trample on democratic principles for the sake of saving democracy, well, Putin’s succession resulted precisely from that year. Some trace it back to 1993 but I disagree, I think that the shelling of Russia’s White House was a different story. But democracy in Russia was finally deflowered, and in a very perverted way, in 1996. In 2000 it started sleeping around without sticking to any principles.
KO: You might want to agree, then, with one of the critics of your film: Boris Vishnevskiy, a Saint Petersburg State Representative and member of the liberal Yabloko party. Writing in Novaya Gazeta about your film, he says that “Putin’s system is not a rejection of Yeltsin’s but is its direct continuation. Today’s constitutional autocracy was built precisely by Yeltsin’s team.”
VM: Well, in this sense we might say that death is a continuation of birth. Nevertheless, 1999, for all its shortcomings, mistakes and failures, was a more liberal time than 2018. I have to insist on this definition of more liberal. At least in 1999 there was political competition, for what it was worth. There were political clans, officially named or not, that competed with each other. Those clans owned rival media outlets, there was NTV and ORT. There was the Puppets political comedy show where politicians were criticized heavily.
KO: And nevertheless it all led to Putin.
VM: Because the civil society that would prevent Operation Successor did not exist. I would like to remind you that when it was announced to the country that Putin became Acting President, basically the following was announced, too: there will be no more presidential elections, and we are not interested in your opinion, because we, with our narrow circle, have decided what happens in Russia for at least the next four years.
Those two statements, had they been made in a country with a functioning civil society, would have provoked an immediate, severe, and explicit reaction. In Russia no one took to the streets, no one protested. Not a single person did. Even in 1968, when Soviet tanks invaded Prague, eight people came out with posters to protest. In 1999 not a single person came out. Everyone just swallowed it. And by the way, I’m convinced that Putin has become the Putin he is today because he met no public resistance on his path to power. Had he faced resistance at any turn, he would have had to slow down. But there was no resistance, as there is none now.
KO: Returning to our discussion of the 50 shades of liberalism of the Yeltsin and Putin clans: the people who really brought Putin to power are, first of all, very well established and still rich. They are also relatively young: Abramovich, Fridman, Aven, Chubais, Surkov, Volodin, Deripaska. They might indeed look not just more liberal but very liberal compared to Putin’s siloviki.
VM: Thank you! That is exactly what I meant.
KO: Yes, but if the transition of power were to happen now—and Putin’s regime will come to an end, sooner or later…
VM: We will all come to an end, so that is not an argument. Salazar lay unconscious in bed, being brought specially printed newspapers, and went on being the leader of Portugal.
KO: The Russian people are known for their sudden unpredictability…
VM: That is our only hope.
KO: Exactly. And if this hope is to be fulfilled, those people, the so-called Yeltsin oligarchs, who have the money, the power, and their people in the highest positions of government, may once again put their man in the Kremlin. Should we allow that to happen a second time? Is it fair to draw this, in my opinion artificial, contrast between the terrible siloviki and the Yeltsin clan, rather than measuring them against real examples of Western liberalism and democracy for once?
VM: I absolutely agree that it is better to be healthy and wealthy, than poor and sick. What’s there to debate?
KO: The debate is whether we should objectively assess the people who were behind Putin’s arrival in order to prevent them, the very same people, from performing Operation Successor 2.0.
VM: Am I right that you think that I idealize the team that brought Putin to power and, moreover, call for sympathy for them because they were later kicked out?
KO: My point is that you made a brilliant film, and if the events were to repeat, we would probably enjoy another great documentary of yours.
Those who argue about your film lived through the 1990s as adults; they were conscious witnesses to that time. Do you think that those who did not—either because they were too young, or because they have never been to Russia, like many Hill staffers here in DC who will shape future policy—will get a full picture of those events from your film?
VM: Honestly, I wouldn’t want to see any film that attempted to give a full picture of anything. But we’ve screened this film for audiences that had absolutely no idea about Russia’s realities, zero. We’ve screened it in Australia, Brazil, and Canada, for ordinary people, not political experts. And to me their reaction was very telling, because what they took from the film was documented evidence of the anti-democratic, absolutely corrupt, and informal codes that govern relations inside the Russian political elite. In Russia, among audiences with knowledge of the 1990s, some would want the film to be more propagandistic, others more oppositional. But for non-Russian audiences, the film is more than enough to convey the downfall of political morality in Russia. It is a very clear and convincing testament to that.
KO: You’ve said yourself in previous interviews that your film is a repentance. What do you personally repent of?
VM: I personally repent that I took part in Operation Successor, the unconstitutionally performed transition of power. I don’t think it’s even important to define the scale of my role. The percentage of my participation makes no difference. The very fact of my participation is enough. I also think that every other citizen took part in those events, in one way or another. Even if it was an action as simple as turning off the TV and not paying attention.
Yes, indeed, such an action is a million times less significant than what others did, including myself. But all together we allowed this absolute abuse of democratic principles and every one of us today bears some guilt for what happened to the country. This is the main narrative of my film. And when some Russians, including very respected ones, now claim that “No, I was part of a deeper resistance, I was bombing bridges, I was poisoning wells, I put up a protest flag on the city hall building in my village,” this all comes from the devil. Because none of them, none of us, did anything to resist the process of anti-democratic seizure of power. Period.
Parkhomenko commented to TAI that Aven’s book indeed altered history to minimize the role of Berezovsky in lobbying for Putin’s appointment, albeit before the events depicted in Mansky’s film. For more on the controversy about Aven’s book, see Karina Orlova, “Putin 4.0: It’s All in the Family,” The American Interest.
For further detail on this deal, see Damir Marusic and Karina Orlova, “The Great Oligarch Whitewash,” The American Interest, May 30, 2018.
See Karina Orlova, “Putin 4.0: It’s All in the Family,” The American Interest.
The post Witnessing Putin’s Rise appeared first on The American Interest.
January 28, 2019
How Russia Plans to Win the “Hybrid War”
As Russia settles in for a lengthy period of tension with the United States and the West more broadly, how do Russians see the standoff shaping up? What might their long-term strategy be, and what sort of denouement might they envision for what Carnegie Moscow Center scholar Dmitry Trenin has dubbed the “Hybrid War”? After all, the Russians saw it coming long before we did: Putin’s memorable speech at the 2007 Munich Security Conference offered us a glimpse of Moscow’s mindset long before the Ukraine crisis erupted in 2014. Russia didn’t stumble inadvertently into its conflict with the West, and the Kremlin betrays no sign of anticipating defeat. Presumably Russia’s leaders have given some consideration to how their country might come out on top.
My thoughts on this question take the form of an imaginary memo addressed to President Putin’s foreign policy adviser, Yuri Ushakov, from one of his senior subordinates. It does not purport to present any actual, existing policy of the Government of Russia, but rather speculates on the thought processes in Russian governing circles as they contemplate the country’s geopolitical trajectory at a time of enormous opportunities and grave dangers.
* * *
Esteemed Yuri Viktorovich!
As you prepare for your meetings next week, I wanted to provide you with some of my thoughts, as well as ideas from the experts in my department, regarding the international situation and the challenges we face over the coming decades.
On paper, the state of affairs is not encouraging. The United States, NATO, the European Union, and all of their various satraps and hangers-on are arrayed against us, striving in every way to prevent Russia from rising from her knees and resuming her rightful place as a great power and an independent center of power in a multipolar world. As many people (including the defeatists in our own country) like to point out, by any measure—population, military strength, economic might, or soft power—we are at a huge disadvantage.
However, as you’re well aware, this simplistic bookkeeping perspective on the balance of forces obscures Russia’s decisive strengths as well as the Enemy’s critical weaknesses, and I think it worthwhile at the outset to enumerate them.
Above all, Russians are united in rejecting the inferior status that the West has sought to foist on us since 1991. We are a great nation with glorious traditions and a world-class culture, and we will never accept the role of playing second fiddle in some American-led orchestra (since we’re talking about American leadership, perhaps “vaudeville ensemble” would be a better analogy). All their political, economic, and military pressure cannot overcome our principled determination to reject Washington’s diktat.
The sting of the 1991 humiliation has not abated with the passage of time. By 1985 the correlation of world forces had shifted decisively and irrevocably (or so it had seemed) in favor of the socialist camp, yet a mere six years later, everything lay in ruins. It would have been one thing to lose the Cold War to a nation like the Germans or the French, with their rich histories, imposing martial traditions, and magnificent cultural achievements. Even the English would have been tolerable; they may be Anglo-Saxons, but at least they’re real ones—not like those ersatz Anglo-Saxons across the ocean. However, to lose the Cold War to a nation of cowboys that has never produced a writer of the caliber of Dostoevsky, or a composer on a par with Tchaikovsky, but instead has blanketed the globe with its tawdry advertising and its third-rate Hollywood pop culture—this was the greatest ignominy imaginable. It was Rome being beaten by the barbarians. Russians will not soon forget it—or forgive it.
In addition, unlike our adversaries, Russians are planners, accustomed to taking the long-term view; we are as well a nation of chess-players who instinctively think several moves in advance. Starting nearly 15 years ago, when it became evident that the imperialists would not allow Russia her rightful place in the sun, our leadership set in train a number of processes: the reform and restoration of our military, nationalization of the elites, the strategic leveraging of our hydrocarbon resources, and the purposeful, sustained weakening of anti-Russian regimes in the post-Soviet space. All of these prepared us well for 2014, when the West dropped any pretense of cooperation with Russia and instead assumed a policy of undisguised hostility.
The Westerners failed to take account not only of our preparedness, but of our resilience. A nation that could withstand the full onslaught of the Wehrmacht will not be broken by some pitiful patchwork of Western sanctions—especially in view of the skill and professionalism of our macroeconomic team.
Finally, our centralized decision-making gives us enormous tactical nimbleness. We were able to liberate the Crimea before the lumbering, slow-witted Westerners even realized what was happening. We are well placed to take advantage of other such opportunities as may present themselves over the coming decades.
In contrast to our unanimity, agility and grim determination to prevail, the Westerners cut a sorry figure. Notwithstanding their fierce hostility toward us, their divisions—national, ideological, political, and even personal—prevent them from bringing their preponderance of wealth and might fully to bear on Russia. With a few minor exceptions like the Balts, the conflict is simply not existential for them as it is for us. I’ll admit that I did not predict either the severity or the tenacity of Western sanctions since 2014. Nevertheless, I am confident that we can outlast the imperialists, and that they will weary of the standoff or become distracted long before they can compel us to acquiesce to their vision of Russia’s third-class status in the world.
I don’t presume to instruct you, who have so much first-hand experience with the Americans, about the state of play with our principal adversary. However, let me make a couple of points in connection with the larger geopolitical picture.
While I was happy enough to see Clinton defeated in the 2016 American elections, I was never, as you will recall, delirious with joy over Trump. Any President who seeks to “make America great again” with a massive military buildup and increased U.S. hydrocarbon production could never be, from our perspective, an unalloyed good.
Still, as we have all recognized, Trump’s bombastic and combative character creates a propitious environment for Russia to drive wedges among our adversaries. In particular, the deep antagonism between Trump and European elites holds out the prospect of a radical, definitive Transatlantic decoupling. NATO has always been a grotesque, unnatural alliance, with a group of cultured nations allowing themselves to be led by a country of bumptious yahoos. Indeed, it can only be a matter of time until Western Europeans recognize that their best interest lies in making common cause with a kindred European civilization—namely, Russia.
Nevertheless, I must warn against excessive optimism on this score. All the Transatlantic tension in the world will do us little good if it falls short of an actual breakup of NATO. In fact, if Washington manages either to shame or to frighten the Europeans into taking their own security more seriously, Russia could end up worse off than before. I note as a cautionary tale the fact that security cooperation has never been closer or more fulsome between Washington and certain congenitally anti-Russian countries such as Poland, Romania, and the Baltic States, as well as the American puppet regimes in Ukraine and Georgia.
While it is psychologically deeply satisfying to see a large portion of the American elites convinced that we have the power to sway their elections, our bumbling propagandists don’t deserve the credit they’re receiving. If they’re so talented, why couldn’t they have prevented the expansion of NATO and the alarming growth of Western influence in the post-Soviet space more generally? Were they lulling the Enemy into a false sense of security, waiting until Russia’s back was to the wall before revealing their awe-inspiring prowess at flipping Western elections? I think not.
On the other hand, I hope that our special services have drawn the appropriate conclusions from the unimaginable impact of the Steele dossier, which has sown incomparably more discord and division than any of the puerile pabulum that our Internet Research Agency has generated. If I may make an outside-the-box suggestion, we should ensure that in 2020 there is a comparable “salacious and unverified” dossier on each American presidential candidate. In fact, let there be multiple, mutually contradictory dossiers on each candidate—both to dovetail with the general post-truth tenor of our “active measures,” and all the better to get the Americans wrapped around the axle of their own domestic political antagonisms. If we can pull it off, it should be quite an entertaining show!
Of course, that assumes the bunglers in our vaunted special services could start thinking creatively, instead of persisting in the sorts of ham-handed operations that merely give the West further pretexts for sanctions against us. But you’re already aware of my opinion on this rather delicate matter, so I won’t belabor the point.
Let me turn to the topic of alliances. In any “balance sheet” assessment of Russia’s conflict with the Americans, the juxtaposition of NATO’s formidable assets with our own admittedly paltry, disappointing CSTO is counted as a major advantage of the imperialists. However, the NATO alliance is a double-edged sword. The tendency toward lowest-common-denominator, consensus-driven policymaking can lead to indecision and paralysis. A related shortcoming is a navel-gazing preoccupation with the “state” or “health” of the alliance in the abstract, instead of a focus on the urgent tasks at hand. It is always amusing to observe the Westerners fretting among themselves about whether or not they all “share the same values”—as if mutual security depended on everyone holding identical opinions about climate change, progressive taxation or same-sex marriage! The sheer audacity of our Crimea operation is completely beyond such people.
In contrast to their ossified, unwieldy alliance, we have a) our freedom of action; and b) a group of like-minded countries that, though not formally allies, see the world much as we do. China, North Korea, Iran, and increasingly Turkey agree with us—for their own specific reasons, of course—regarding the injustice of the world order imposed by Washington. We don’t need to coordinate every action with them. We don’t need to ensure that we all have a common political structure, economic model, or set of social policies. We merely need to chisel away, each in our own fashion and at our own pace, at the tottering supports undergirding the Liberal World Order, and eventually we will bring the whole repulsive edifice crashing to the ground.
The by-now almost complete defection of Turkey from the American camp has been one of the most significant developments of the past few years. While astute Russian diplomacy has played a role, the principal reason has been sheer objective reality. Dread of Soviet might was the only glue that really bound Turkey to the Atlantic Alliance in the first place, and those ties began to unravel as soon as the USSR collapsed. Since then Turkey and Russia have shown themselves to be kindred spirits, with Turkish abhorrence of the West and dissatisfaction with the Liberal World Order every bit as intense as our own. The Turks have a Herrenvolk complex nourished during the centuries when the Ottoman Empire spanned three continents and crushed everyone in its path. They fancy for themselves a leadership role in the post-Ottoman space, and presume that American jealousy and hegemonism are the only things preventing Turkey from resuming her “natural” status as a great power. With that psychological backdrop, when the Americans rather unwittingly found themselves allied in Iraq and Syria with the Turks’ mortal enemies, the Kurds, the fate of the Turkish-American alliance was sealed.
Once again, a word of caution is in order. Like the chess-players that we are, we should always be thinking a couple of moves ahead. Just as Turkey’s Western orientation did not survive the collapse of the Soviet Union, our current entente with Turkey will not survive the collapse of the American hegemonic order, when objective reality will once again intrude. And the objective reality is that Turkey’s “near abroad” overlaps considerably with our own, and Ankara’s neo-Ottoman pretensions are almost diametrically opposed to Russian interests in the Balkans, the Caucasus, Crimea, Central Asia, and the Middle East. Once NATO falls apart and the Americans are back across the ocean where they belong, we must be prepared to make a quick pirouette to check Turkish ambitions in our backyard. Happily, the odious legacy of Ottoman rule should actually help us regain some of our lost positions in the Balkans, Georgia, and the Arab world.
It is worth bearing in mind that our strategic partnership with China is situational as well, and it will not survive in its present form once we have dispatched the Liberal World Order to the scrapheap of history. As you know, I am not one of those fools who imagine that China poses a greater threat to Russia than the United States. Nevertheless, I find it irksome that the Chinese seem quite content at the current time to let Russia do all the heavy fighting, and take most of the return fire, in our common battle against American unilateralism. China has gotten away with its encroachments in the South China Sea, while Russia has been grievously sanctioned merely for reclaiming what is rightfully ours in Crimea and the Donbass. The fact that Washington beats us mercilessly while letting the Chinese off practically scot-free is further proof—as if we needed it—of America’s special animus toward Russia.
All the same, our demographic and economic weakness with respect to China in the Far East is an objective fact that will not soon be remedied, and there is no point tempting fate by assuming eternal Chinese benevolence. Once the Europeans send the Americans packing, we should extend the hand of friendship and cooperation to the Germans, French, and Italians (the Poles, Romanians, and British are an altogether different story) to create a united front of European civilization to ward off any possible depredations against our common European home by either the Turks or the Chinese. And while the departure of the Americans from Europe cannot come too soon, I would not be at all sorry to see them remain engaged in the Far East, where they do us little harm currently—and might even make themselves useful to us someday. Hard to believe, but the Americans might turn out to be good for something after all!
While I am bullish on Russia’s prospects in the current conflict with the West, and confident of our ability to hold our own in a post-American multilateral world order, I would be remiss if I failed to note two nagging concerns.
First, in the metaphorical besieged fortress in which we find ourselves, our financial reserves are the sustenance required to see us through the siege—and corruption is like an army of rats quietly consuming our provisions. In the existential struggle in which we find ourselves, Russia simply doesn’t have the luxury to see its precious resources plundered in this manner. And while our current President has exercised admirable discipline in building up our financial reserves, I fear lest any successor lack his foresight and allow our vital resources to be squandered through myopic fiscal populism or outright theft.
Second, I must acknowledge a certain sense of discouragement at the degree to which our natural protégés—the other states in the post-Soviet space—seem inclined to distance themselves from Russia. I’m not just talking about the shameless treachery of the Georgians and Ukrainians. Even people like the Belarussians, Armenians, and Tajiks—who hypocritically accept our generous economic and security assistance and constantly wheedle us for more—apparently lack the basic human decency to provide even rhetorical support to their selfless Russian benefactor in her hour of need. Moreover, all our post-Soviet neighbors are engaged to varying degrees in a willful policy of falsifying our common history, minimizing Russia’s crucial role in bringing the light of civilization and the blessings of technology, education, and prosperity to these previously benighted regions.
Nevertheless, I cannot believe that such flagrant ingratitude will continue indefinitely. Either the brazen local elites who seized power after the collapse of the Soviet Union will realize the error of their short-sighted anti-Russian policy, or a new, more discerning generation will grasp the wisdom—indeed, the imperative—of aligning their nations with Russia. I remain convinced that one way or another—perhaps with a little judicious persuasion here and there—the little brothers will all find their way back into our happy Eurasian home. Together with our other partners we will smite the arrogant Americans and end their blood-soaked unilateral hegemony once and for all.
Our cause is just—victory will be ours!
The post How Russia Plans to Win the “Hybrid War” appeared first on The American Interest.
January 27, 2019
Untangling Venezuela’s Authoritarian Web
The presidency of Venezuela is now technically vacant, and the country’s erstwhile dictator, Nicolas Maduro, who had again installed himself as leader after a rigged vote, is—for the moment—out on a limb. As massive protests rock the capital, Juan Guaidó, President of the National Assembly, has declared himself Interim President and has been recognized by 14 Western Hemisphere nations, including the United States and Canada. Striking back, Maduro has given U.S. diplomats 72 hours to leave Venezuela, while the United States has countered that its core personnel will remain because an illegitimate government cannot break diplomatic relations. Once those three days have passed, it is most likely that Maduro will remain at the helm, surrounded by the remnants of his repressive, corrupt narcostate.
Recognizing Guaidó as the legitimate President is a bold strategy, but one that is unlikely to work in the long run. U.S. policymakers are still uncertain whether the protests in Venezuela are motivated by personal support for the relatively unknown Guaidó or by general opposition to Maduro. There is also a much deeper problem: Maduro has transformed the institutions of Venezuela’s government into deeply ramified networks channeling corruption and criminality, and connecting him to other authoritarian regimes such as Cuba, Russia, and China, backsliding states such as Nicaragua and Turkey, and enablers within the United States and other established democracies, such as bankers and realtors who help launder illicit funds.
A comprehensive approach to reconstructing democracy in Venezuela requires uprooting the illicit networks and combatting “authoritarian learning,” the process by which dictatorships share “best practices” in corruption and repression. Venezuela began developing and hardening networks with Cuba during the Chavez era, but Maduro’s growing relations with other actors like Russia, Turkey, and transnational criminal organizations play an important role here, too. Even Hezbollah came out with a public statement of support for Maduro.
For the United States and its partners striving to return Venezuela to the community of nations, the right call is to back up with serious policy what until now has been a game of thrones. That begins not with veiled threats of “appropriate actions” (whatever those are), but with concerted support for the remaining diplomatic personnel and installations. One possible counterattack for Maduro would be to deploy his Russian-style paramilitary colectivos against those installations (with plausible deniability). Another option, outlined by Diosdado Cabello, head of the Maduro-controlled Constituent Assembly, would be to cut off electricity and starve out U.S. diplomats—a strategy at which the Maduro regime excels.
It’s key to understand the deeper forces at work when facing a challenge like Venezuela’s. Authoritarians such as Maduro survive by ensuring that those close to them are fat and happy, while keeping those who oppose them hungry and weak. They do whatever it takes to maintain a monopoly on power: bribing foreign and domestic officials, pilfering the state-owned oil company (PDVSA) to the tune of $350 billion and laundering the profits abroad, creating a “sovereign” cryptocurrency (the “Petro”) to evade sanctions, and contracting with criminal organizations to cover their tracks along the way. This stands in stark contrast to democratic systems, which disperse power through institutions and govern through imperfect (yet transparent) public policy.
In the highly unlikely event that Maduro steps down voluntarily, the authoritarian-corruption nexus into which he has thrust Venezuelan institutions and elites will continue to work to co-opt the highly fractured opposition in the Assembly. Some Assembly members are already lost, others implicated in corruption, and many unwilling to bear the serious risks of confronting Maduro—imprisonment, exile, and even death.
U.S. policy should make it clear to Maduro that he ultimately has three choices: go into self-exile in the land of one of his authoritarian patrons, hold new and legitimate elections that he would surely lose, or face another onslaught of U.S. sanctions and indictments, including cutting off U.S. imports of Venezuelan oil and issuing Interpol notices for the arrest of regime insiders. Levying sanctions and hunkering down in the American Embassy may feel like mere stalling, but it buys the United States time to map out a strategy to unravel the tangled web that could keep authoritarianism alive long after the dictator is gone.
Fortunately, the constellation of center-right governments now in power in Latin America allows the United States to lead a deepened regional commitment to dismantling the illicit networks that have hidden Maduro’s assets and helped him maintain power. Along with Colombia and the new Brazilian government, which is eager to become the epicenter of anti-Chavismo, the United States can play a key role in helping the opposition establish new transitional institutions and deliver humanitarian aid to suffering Venezuelans.
The United States must begin thinking long-term. Toppling Maduro requires more than a challenger, and democracy is about far more than changing the leader. Saving Venezuela will require humanitarian aid, patient institution building, and a transition from reliance on illicit networks to a formal and more transparent market economy.
The post Untangling Venezuela’s Authoritarian Web appeared first on The American Interest.