Peter L. Berger’s Blog
November 8, 2018
Piece of Work
The Job: Work and Its Future in a Time of Radical Change
Ellen Ruppel Shell
Currency, 2018, 416 pp., $30
Journalism professor Ellen Ruppel Shell has many strong views about the way people earn a living. Accordingly, her new book, The Job: Work and Its Future in a Time of Radical Change, weighs in on the nature of work, its future, its purpose, its meaning, how people prepare for it, government assistance to the poor, unions, income inequality in the United States, and much more. Shell is an engaging writer, and she does a good job weaving together stories of Ivy-educated MBAs working at Whole Foods with references to Karl Marx and Friedrich Nietzsche. She writes interesting and detailed profiles of her fellow college professors, and is a keen observer of higher education. She relays creative ideas about the future of workers’ organizations in the United States, as well—all useful and meritorious.
Yet The Job is flawed, propagandistic, and sloppy. Even where it offers well-observed reporting, innovative ideas, and interesting analysis, the book mostly fails to interrogate its own findings. Moreover, Shell declines to confront data that challenge her worldview, so she ends up telling only part of the story. Above all, The Job is fundamentally crippled by a confirmation bias that inevitably affirms progressive nostrums; in the end, the book is most notable as a missed opportunity.
At the core of Shell’s argument are the interrelated ideas that the nature of work has changed radically in the recent past and will change even more in the future. Further, she argues that nearly all of these extreme changes have been for the worse and, combined with globalization, have shrunk the American middle class and enriched a sometimes-undeserving few. As such, she contends a radical reinvention of the American system is in order because markets are fundamentally a problem to be overcome rather than a better-than-the-alternatives tool for ordering the world. Or, as she puts it, “work is far too vital a human need to trust to the vagaries of a fickle global marketplace.”
To right these wrongs, Shell suggests a laundry list of progressive ideas: to end America’s default system of at-will employment (which she calls “human rental”); to replace a social assistance system largely intended to support the working poor with stronger support for those who do not wish to take jobs that, according to her, actually serve to keep them poor; to revive organized labor in new forms; to increase worker ownership and control of businesses; to give business far less input on the content of educational curricula; to engineer policies intended to create the kinds of jobs she deems desirable; and to redistribute wealth away from the well-off who “don’t always deserve the lion’s share they take.” It is a set of progressive-leaning policy prescriptions that will seem familiar to many readers. And indeed, it is possible to come up with evidence and even decent arguments in favor of doing nearly all of the things she suggests.
What often goes missing, however, is the data to support her claims—and what Shell does cite is poorly used. For example, at the core of the argument is the contention that the world of work is in trouble because “temporary and contract work and self-employment have grown faster than have permanent, full-time jobs,” while there has been a “rise of contingent ‘gig’ work.” To be fair, many people assume this is true. But nearly all available data flatly contradicts this and many other of her assumptions as well.
In particular, there’s little evidence that work is actually becoming less stable in the United States, that gig work is increasing, or that self-employment is becoming more common. The opposite appears to be the case. The most recent assessment undertaken by the Bureau of Labor Statistics concludes that “self-employment has trended down.” Further, the BLS’s most recent survey of Contingent and Alternative Employment Arrangements (released in June 2018 and covering 2017) finds that “the last time the survey was conducted” (in 2005), measures of contingent employment like temporary and contract work “were higher.” While the later study may not have been available when Shell was writing (although she does include other recently released pieces of information about things like Amazon’s Prime service), it merely affirmed a wealth of already-available BLS data that my R Street colleague R.J. Lehmann collected as early as 2015.
Moreover, the BLS also finds that the number of people in part-time jobs almost always tracks the economic cycle (rising in bad economies and declining in good ones) and that part-time workers overwhelmingly want to work part time; economists sometimes refer to them as “threshold earners.” Additionally, tenure at the same job—employer loyalty to employees, which Shell continually implies has somehow eroded—rose during the Great Recession, and now seems to have stabilized at levels slightly higher than those of a decade ago.
Furthermore, the health and retirement benefits that Americans have are also, on average, improving. According to the Census Bureau, the percentage of Americans with health coverage is at an all-time high, largely because of the Affordable Care Act. Finally, as the American Enterprise Institute’s Andrew Biggs has shown, per capita retirement savings also sit at an all-time high.
This doesn’t mean that no support exists for Shell’s worldview, only that most authoritative sources contradict it. For example, companies offering retirement products produce dozens of white papers pointing to a retirement crisis and many academics agree with them. Using definitions far broader than those of the BLS, some academics—for example, Lawrence F. Katz of Harvard and Alan B. Krueger of Princeton, both of whom Shell cites—argue that contingent employment is actually rising and find some data to support their claims. However, they do so by using a far looser and broader definition than the BLS’s and by outright ignoring the ongoing decline in self-employment. But Shell herself doesn’t even enter the debate here. Given that such contentions about the changing nature of work are absolutely central to her thesis of “radical” change, it’s a major problem that she simply assumes away well-documented authoritative data that directly contradict her thesis.
Even when the data do support Shell’s ideas, she uses them poorly. For example, it’s inarguably correct that the percentage of Americans in the middle of the middle class has shrunk while poverty rates have budged only a little over the past fifty or so years. The latter is a significant problem worthy of serious public policy attention. But she never engages the data showing that the shrinkage of the middle-middle class results mainly from the growth of the upper middle class and wealthy. In other words, most people falling out of the middle class fall up, not down. That has a lot to do with demography, combined with the income and wealth fluctuations people experience over a typical lifetime; after all, when one compares middle-class cohorts from, say, 1980 and 2010, one is not talking about the same individuals. (It’s not clear that Shell understands this.) Yes, that so many people remain poor is a problem (and never mind for now that the definitional goalposts have been moved); that many previously in the middle have grown rich isn’t.
Such oversights are not surprising given the shallowness of Shell’s research. For example, while most studies agree that job satisfaction in the United States has declined somewhat in recent years, Shell shows it declining from near universal to a minority of the population by splicing together sources that measure satisfaction differently, sampled at apparently arbitrary points in history. This type of cherry-picking borders on propaganda.
Likewise, her repeated claims that we’re on the verge of a great disruption in the nature of work that will cause jobs to disappear also don’t stand up to scrutiny. She admits that the manufacturing jobs she sometimes romanticizes have rebounded sharply in the United States since the end of the Great Recession. This actually undermines her idea of “radical change”: many of the jobs coming back are not significantly different from those that vanished during the downturn, since labor productivity growth has been quite slow in the 2000s. (That is, at least, if we assume that the industrial-age metrics we use to make such assessments still measure accurately in the current IT-infused age.) And if jobs are being displaced, how does one explain the facts that gross employment is at an all-time high, that job openings exceed the number of unemployed, and that unemployment is at its lowest level in decades?
Shell does discuss this topic in passing as she looks at declining workforce participation. But, as might be expected, her treatment of the subject is woefully incomplete. Today’s workforce participation rate is indeed lower than it was between the 1980s and early 2000s and, as she repeatedly stresses, is lower than in other developed countries. But even here, in her desire to show “radical change,” Shell dramatically overstates her case. Today’s workforce participation rate is actually higher than it was in the 1950s and 1960s. So if working now “doesn’t seem worth the trouble,” as Shell says, things must have been much worse in 1955, when about 59 percent of working-age adults held jobs (as opposed to about 63 percent today).
She also seems to have missed the work of the Congressional Budget Office showing that “the vast majority of the recent decline in the overall rate stems from the aging of the baby-boom generation into retirement.” She does rightly note that the trends for prime working-age men are concerning and worthy of attention, but this is a long-term problem that has evolved over decades: The workforce participation rate for men has never had a sustained uptick since data collection started in 1948. A 70-year trend is a chronic condition, not a radical change. In short, it’s troubling that Shell doesn’t even try to explain how and why such a definitive lack of radical change in the observed data about work necessitates the radical changes in public policy she proposes.
Furthermore, with great frequency, the book’s claims are simply sloppy and misleading. For example, Shell writes that Amazon.com “needs relatively few people,” which is a very odd claim to make about America’s second-largest private employer. It’s even stranger because she has to know that it’s false when she acknowledges Amazon’s own significant workforce expansion in order to talk about pay and working conditions of which she disapproves.
Elsewhere, she claims that unspecified “neoliberal policies of the Ronald Reagan Administration” somehow “buried” a trend toward worker ownership of business in the 1970s—but then goes on immediately to quote Reagan himself praising the idea of worker ownership. In another place, she says that the World Wide Web “went public” in 1991. In fact, 1991 is the date that a few computer scientists in labs began work on what later became the web we know today. It was late 1992 before there was a practical web browser and several more years before the Internet came into wide use.
She also claims that “hundreds of thousands” of call-center jobs have vanished, while the BLS actually says that they’ve grown and predicts that roughly 100,000 new ones are on the way. She’s dismissive of Enterprise Rent-A-Car’s mandate of a college degree for most hires but appears unaware of its purposefully unusual business model, which demands huge numbers of small-scale location managers and thereby helps people advance into management ranks with real profit-and-loss responsibility more quickly than almost any other large, nationally known company. Indeed, this practice has landed Enterprise on “best places to work” lists.
At other times, Shell just takes cheap shots. For example, she identifies long-time IBM CEO Louis Gerstner, who headed RJR Nabisco before joining IBM, as notable for popularizing former cigarette mascot Joe Camel. In fact, Gerstner is known and remembered almost entirely for his work at IBM and worked for IBM at the time he chaired an educational effort of which Shell disapproves. As I read further, far from being convinced by Shell’s argument, I simply found myself beginning to wonder whether I could trust any of her claims as complete, correct, or faithfully represented.
When it comes to Shell’s relationship with data, the most unintentionally revealing part of the book may be a portrait of Princeton University’s Kathryn Edin, who is treated as a be-all-end-all source of information about the nature of America’s welfare system. Tellingly, Shell finds Edin admirable because the professor “turned her attention beyond data” to hear “stories” about the poor in America. Relying on Edin, Shell blithely assures readers that the poor don’t actually work less than everyone else, a claim flatly contradicted by actual data that show (unsurprisingly) that higher-income households spend much more time at work than those with lower incomes. This difference may well not be entirely a result of deliberate behavioral choices or moral failings, but it is also hard to contend that the data support Shell’s suggestions that systemic factors rather than individual behavior are the main cause of poverty or lower incomes. The “progressive” search for victims whose circumstances have nothing to do with their own choices truly never ends.
Further, on the basis of an interview with Edin, Shell argues that programs intended to support the working poor (like the earned income tax credit) are bad ideas because they encourage jobless people to take the first available position. In other words, while work is important, it has to be work of a sort that Shell defines as “good.” There are reasonable criticisms of the EITC and other programs intended to support the working poor, but Shell doesn’t engage them in any serious way, let alone give credit to the enormous benefits they provide, including the simple fact that they encourage otherwise idle people to work.
This unwillingness to intellectually engage sometimes obvious objections is made further apparent in a passage where Shell calls for an end to the system of at-will employment in the United States and for its replacement with something else. She concedes that at-will employment “leads to more jobs being created more quickly” in the United States than in other developed countries but nevertheless dismisses it because, according to her, the very existence of at-will employment weakens employees’ bargaining power and somehow makes them take jobs that are not good or satisfying enough. This simply doesn’t jibe. If work is as hugely important to people as her central project claims it is—and it is—then isn’t a system that creates more jobs better than one that doesn’t? And how does a system with higher mobility between jobs—and more jobs overall—necessarily make workers weaker? Isn’t the easy ability to switch jobs the best and strongest bargaining chip any employee has? And isn’t that asset much stronger in a system that has more open jobs? Maybe not; life is often trickier than it first appears. But it’s at least worth considering such obvious contradictions, which Shell so often fails to do.
Despite such flaws, Shell’s role as a respected journalism professor and prolific writer makes the reader expect excellent reporting—and she sometimes delivers. An obvious example comes in her description of a visit to Berea, Kentucky and its famous work college, which offers a genuinely interesting and sometimes even heartwarming portrait of an Appalachian community that is doing well in many respects, despite difficult circumstances. And although it is overly romantic and sometimes shallow, a similar anecdote about Finland (which she sees as a model for America’s labor market and the nation in general) is also an alluring piece of reporting.
But even here, there’s a huge oversight: While Shell devotes some space to acknowledging problems in Finland and differences between the small Nordic nation and the United States, she fails to mention that it’s not particularly good at providing the higher-wage jobs she says she would like to see. While it does have a somewhat lower poverty rate than the United States Finland also has a significantly lower median income; slightly below the OECD average. Its median household brings in less money each year than the median one in Alabama, Madison County, Kentucky (where Berea is) or for that matter, Appalachia as a whole. The OECD also reports worse housing and other conditions in Finland relative to the United States and the country does worse on many other measures as well. So if we want a strong, economically prosperous middle class in the United States—as Shell claims to—then the income of those in the middle is the best measure of this. Perhaps the greater leisure time, self-reported life satisfaction, and government-provided benefits enjoyed by Finns could be judged a worthwhile trade-off, but it’s problematic to say that “the standard of living in Finland is among the highest in the world” without pointing out that the median household there would be considered, at best, lower-middle class and probably “working poor” in the United States.
Shell’s reporting is lacking in some other ways. One of the major flaws in this regard is the absence of a diversity of voices and perspectives that would be key in a study like the one attempted in The Job. The bulk of Shell’s interview subjects are her fellow college professors, qualified only to comment on their specific areas of interest. (Several people identified by skills or backgrounds—a “roboticist” for example—actually appear to earn their incomes from university faculty appointments).
Further, she rarely cites scholarly work by these people, preferring mostly to quote from interviews. Even Kathryn Edin, a prolific scholar, is mentioned in the text for her doctoral dissertation rather than one of her books or academic papers. Moreover, Shell usually provides far more background on a variety of her academic peers than she does on the relative handful of ordinary workers with whom she spoke directly. Some of these people are identified by characteristics like a hairnet or an offer of brownies rather than names. And although this may be to protect their anonymity, for someone who seems to want to champion the interests of the American working class, Shell does much less than might be expected to let them speak for themselves. Alas, class bias bubbles up even when it’s unwelcome.
For all of its sloppiness of method, poorly thought-out conclusions, and dubious analysis, The Job partly redeems itself with some good ideas. A chapter on education, job training, and colleges is particularly noteworthy in this respect.
Shell describes ways that well-intentioned and well-designed efforts to align education with the predicted future needs of industry don’t always meet the needs of the people involved. Accordingly, she suggests that education should aim for broader and sometimes more humanistic goals. She’s right that businesses are almost always self-interested and, in many cases, don’t necessarily even know what they themselves might need in the future. Business-directed central planning, in other words, doesn’t work any better than central planning directed by a government and, since it lacks democratic input, may even be worse.
Her treatment of unions and other worker organizations also deserves praise. Drawing on the ideas of highly successful labor organizer David Rolf (whom she interviewed but whose written work she neglects to cite in the text) and Freelancers Union President Sara Horowitz, Shell suggests that firm-based collective bargaining is not the wave of the future and instead calls for the creation of more flexible and innovative worker organizations that operate over the range of an industry or profession. Also to her credit, she looks at a variety of ways this might happen. Her exploration of some Hollywood guilds, though brief, is particularly enlightening.
More than anything else, however, Shell’s book is a 416-page exercise in confirmation bias: She dislikes American capitalism, the American education system, and American culture more broadly. Worse, though she likes to talk about the working class, it’s the educated elite she allows to take center stage. She would like the United States to be more like Europe, where, she claims (without evidence), schoolchildren are “taught to revere poets and philosophers” rather than people involved in the sordid business of inventing things and making money. And in making these evaluations, she has sought out anecdotes that confirm her prior beliefs, even as she ignores copious data that undermine them.
To be sure, she diagnoses some problems correctly like declining job satisfaction in the United States, businesses with excessive influence over the education system, the decline of worker organizations (particularly for the less skilled), and the persistence of a sizable group of poor people in a wealthy country. But she also assumes, implies, or asserts a number of things that simply aren’t supported by the data and, throughout, seems to exalt her casual observations above empirical reality. While there is some good to be found in Shell’s book, it is a deeply flawed project, and a misleading one for those who would depend upon it.
[This article has been updated since initial publication.]
See Tyler Cowen, “The Inequality that Matters,” The American Interest (January/February 2011).
For more detail see Neil Gilbert, “The Inequality Hype,” The American Interest (January/February 2017).
The post Piece of Work appeared first on The American Interest.
Hack Job
The Job: Work and Its Future in a Time of Radical Change
Ellen Ruppel Shell
Currency, 2018, 416 pp., $30
Journalism professor Ellen Ruppel Shell has many strong views about the way people earn a living. Accordingly, her new book, The Job: Work and Its Future in a Time of Radical Change, weighs in on the nature of work, its future, its purpose, its meaning, how people prepare for it, government assistance to the poor, unions, income inequality in the United States, and much more. Shell is an engaging writer, and she does a good job weaving together stories of Ivy-educated MBAs working at Whole Foods with references to Karl Marx and Friedrich Nietzsche. She writes interesting and detailed profiles of her fellow college professors, and is a keen observer of higher education. She relays creative ideas about the future of workers’ organizations in the United States, as well—all useful and meritorious.
Yet The Job is flawed, propagandistic, and sloppy. Even where it offers well-observed reporting, innovative ideas, and interesting analysis, the book mostly fails to really analyze its own findings. Moreover, Shell fails to confront data that challenges her worldview, so she ends up telling only part of the story. Above all, The Job is fundamentally crippled by a strong confirmation bias that inevitably affirms progressive nostrums and, in so doing, establishes itself most notably as a missed opportunity.
At the core of Shell’s argument are the interrelated ideas that the nature of work has changed radically in the recent past and will change even more in the future. Further, she argues that nearly all of these extreme changes have been for the worse and, combined with globalization, have shrunk the American middle class and enriched a sometimes-undeserving few. As such, she contends a radical reinvention of the American system is in order because markets are fundamentally a problem to be overcome rather than a better-than-the-alternatives tool for ordering the world. Or, as she puts it, “work is far too vital a human need to trust to the vagaries of a fickle global marketplace.”
To right these wrongs, Shell suggests a laundry list of progressive ideas: to end America’s default system of at-will employment (which she calls “human rental”); to replace a social assistance system largely intended to support the working poor with stronger support for those who do not wish to take jobs that, according to her, actually serve to keep them poor; to revive organized labor in new forms; to increase worker ownership and control of businesses; to give business far less input on the content of educational curriculums; to engineer policies intended to create the kinds of jobs she deems desirable; and to redistribute wealth away from the well-off who “don’t always deserve the lion’s share they take.” It’s a pretty standard—though lamentably predictable—agenda for a diehard progressive who sits somewhere between Elizabeth Warren and Noam Chomsky on the ideological spectrum. And indeed, it is possible to come up with evidence and even decent arguments in favor of doing nearly all of the things she suggests.
What often goes missing, however, is the data to support her claims—and what Shell does cite is poorly used. For example, at the core of the argument is the contention that the world of work is in trouble because “temporary and contract work and self-employment have grown faster than have permanent, full-time jobs,” while there has been a “rise of contingent ‘gig’ work.” To be fair, many people assume this is true. But nearly all available data flatly contradicts this and many other of her assumptions as well.
In particular, there’s little evidence that work is actually becoming less stable in the United States, that gig work is increasing, or that self-employment is becoming more common. The opposite appears to be the case. The most recent assessment undertaken by the Bureau of Labor Statistics concludes that “self-employment has trended down.” Further, the BLS’s most recent survey of Contingent and Alternative Employment Arrangements (released in June 2018 and covering 2017) finds that “the last time the survey was conducted” (in 2005), measures of contingent employment like temporary and contract work “were higher.” While the later study may not have been available when Shell was writing (although she does include other recently released pieces of information about things like Amazon’s Prime service), it merely affirmed a wealth of already-available BLS data that my R Street colleague R.J. Lehmann collected as early as 2015.
Moreover, the BLS also finds that the number of people in part-time jobs almost always tracks the economic cycle (rising in bad economies and declining in good ones) and that part-time workers overwhelmingly want to work part time—those economists sometimes refer to as threshold earners. Additionally, tenure at the same job—employer loyalty to employees, which Shell continually implies has somehow eroded—rose during the Great Recession, and now seems to have stabilized at levels slightly higher than those of a decade ago.
Furthermore, the health and retirement benefits that Americans have are also, on average, improving. According to the Census Bureau, the percentage of Americans with health coverage is at an all-time high, largely because of the Affordable Care Act. Finally, as the American Enterprise Institute’s Andrew Biggs has shown, per capita retirement savings also sit at an all-time high.
This doesn’t mean that no support exists for Shell’s worldview, only that most authoritative sources contradict it. For example, companies offering retirement products produce dozens of white papers pointing to a retirement crisis and many academics agree with them. Using definitions far broader than those of the BLS, some academics—for example, Lawrence F. Katz of Harvard and Alan B. Krueger of Princeton, both of whom Shell cites—argue that contingent employment is actually rising and find some data to support their claims. However, they do so by using a far looser and broader definition than the BLS’s and by outright ignoring the ongoing decline in self-employment. But Shell herself doesn’t even enter the debate here. Given that such contentions about the changing nature of work are absolutely central to her thesis of “radical” change, it’s a major problem that she simply assumes away well-documented authoritative data that directly contradict her thesis.
Even when the data do support Shell’s ideas, she uses them poorly. For example, it’s inarguably correct that the percentage of Americans in the middle of the middle class has shrunk while poverty rates have budged only a little over the past fifty or so years. The latter is a significant problem worthy of serious public policy attention. But she never engages the data showing that the shrinkage of the middle-middle class results mainly from the growth of the upper middle class and wealthy. In other words, most people falling out of the middle class fall up, not down—and that has a lot to do with demography combined with the income and wealth fluctuations of people over typical lives—it being understood, of course, that when one compares middle class cohorts from, say, 1980 and 2010, one is obviously not talking about the same individuals. (It’s not clear that Shell understands this.) Yes, that so many people remain poor is a problem (and never mind for now that the definitional goalposts have been moved); that many previously in the middle have grown rich isn’t.
Such oversights are not surprising given the shallowness of Shell’s research. For example, she cites data from the website payscale.com to support an assertion that zookeepers are underpaid. But the number she provides is merely the first hit on a Google search. The definitive data from BLS on “animal caretakers” provides evidence of even lower wages than she suggests; something that would have strengthened her argument. Moreover, while most studies agree that job satisfaction in the United States has declined somewhat in recent years, Shell shows it declining from near universal to a minority of the population, by using different sources of data that measure satisfaction differently at apparently arbitrary points in history. This type of cherry-picking borders on propaganda.
Likewise, her repeated claims that we’re on the verge of a great disruption in the nature of work that will cause jobs to disappear also don’t stand up to scrutiny. She admits that the manufacturing jobs she sometimes romanticizes have rebounded sharply in the United States since the end of the Great Recession. This actually undermines her idea of “radical change”: many of the jobs coming back are not significantly different from those that vanished during the downturn, since labor productivity growth has been quite slow in the 2000s. (At least if we assume that the industrial-age metrics we use to make such assessments still measure accurately in the current IT-infused age). Also, if jobs are being displaced, then how to explain why gross employment is at an all-time high, job openings exceed the number of unemployed, and unemployment is at its lowest level in decades?
Shell does discuss this topic in passing as she looks at declining workforce participation. But, as might be expected, her treatment of the subject is woefully incomplete. Today’s workforce participation rate is indeed lower than it was between the 1980s and early 2000s and, as she repeatedly stresses, is lower than in other developed countries. But even here, in her desire to show “radical change,” Shell dramatically overstates her case. Today’s workforce participation rate is actually higher than it was in the 1950s and 1960s. So if working now “doesn’t seem worth the trouble,” as Shell says, things must have been much worse in 1955, when about 59 percent of working-age adults held jobs (as opposed to about 63 percent today).
She also seems to have missed the work of the Congressional Budget Office showing that “the vast majority of the recent decline in the overall rate stems from the aging of the baby-boom generation into retirement.” She does rightly note that the trends for prime-working-age men are concerning and worthy of attention, but this is a long-term problem that has evolved over decades: The workforce participation rate for men has never had a sustained uptick since data collection started in 1948. A 70-year trend is a chronic condition, not a radical change. In short, it’s troubling that Shell doesn’t even try to explain how and why such a definitive lack of radical change in the observed data about work necessitates the radical changes in public policy she proposes.
Furthermore, with great frequency, the book’s claims are simply sloppy and misleading. For example, Shell writes that Amazon.com “needs relatively few people,” which is a very odd claim to make about America’s second-largest private employer. It’s even stranger because she has to know that it’s false when she acknowledges Amazon’s own significant workforce expansion in order to talk about pay and working conditions of which she disapproves.
Elsewhere, she claims that unspecified “neoliberal policies of the Ronald Reagan Administration” somehow “buried” a trend toward worker ownership of business in the 1970s—but then goes on immediately to quote Reagan himself praising the idea of worker ownership. In another place, she says that the World Wide Web “went public” in 1991. In fact, 1991 is the date that a few computer scientists in labs began work on what later became the web we know today. It was late 1992 before there was a practical web browser and several more years before the Internet came into wide use.
She also claims that “hundreds of thousands” of call-center jobs have vanished, when the BLS actually says that they’ve grown and predicts that roughly 100,000 new ones are on the way. She’s dismissive of Enterprise Rent-A-Car’s mandate of a college degree for most hires but appears unaware of its purposefully unusual business model, which demands huge numbers of small-scale location managers and thereby helps people advance into management ranks with real profit/loss responsibility more quickly than almost any other large, nationally known company. Indeed, this practice has landed Enterprise on “best places to work” lists.
At other times, Shell just takes cheap shots. For example, she identifies long-time IBM CEO Louis Gerstner, who headed RJR Nabisco before joining IBM, as notable for popularizing former cigarette mascot Joe Camel. In fact, Gerstner is known and remembered almost entirely for his work at IBM and worked for IBM at the time he chaired an educational effort of which Shell disapproves. As I read further, far from being convinced by Shell’s argument, I simply found myself beginning to wonder whether I could trust any of her claims as complete, correct, or faithfully represented.
When it comes to Shell’s relationship with data, the most unintentionally revealing part of the book may be a portrait of Princeton University’s Kathryn Edin, who is treated as a be-all-end-all source of information about the nature of America’s welfare system. Tellingly, Shell finds Edin admirable because the professor “turned her attention beyond data” to hear “stories” about the poor in America. Relying on Edin, Shell blithely assures readers that the poor don’t actually work less than everyone else, a claim flatly contradicted by actual data that show (unsurprisingly) that higher-income households spend much more time at work than those with lower incomes. This difference may well not be entirely a result of deliberate behavioral choices or moral failings, but it is also hard to contend that the data support Shell’s suggestions that systemic factors rather than individual behavior are the main cause of poverty or lower incomes. The “progressive” search for victims whose circumstances have nothing to do with their own choices truly never ends.
Further, on the basis of an interview with Edin, Shell argues that programs intended to support the working poor (like the earned income tax credit) are bad ideas because they encourage jobless people to take the first available position. So, while work is important, in other words, it has to be work of a sort that Shell defines as “good.” There are reasonable criticisms of the EITC and other programs intended to support the working poor, but Shell doesn’t engage them in any serious way, let alone give credit to the enormous benefits they provide, including the simple fact that they encourage otherwise idle people to work.
A similar unwillingness to engage intellectually with sometimes obvious objections is apparent in a passage where Shell calls for an end to the system of at-will employment in the United States and for its replacement with something else. She concedes that at-will employment “leads to more jobs being created more quickly” in the United States than in other developed countries but nevertheless dismisses it because, according to her, the very existence of at-will employment weakens employees’ bargaining power and somehow makes them take jobs that are not good or satisfying enough. This simply doesn’t jibe. If work is as hugely important to people as her central project claims it is—and it is—then isn’t a system that creates more jobs better than one that doesn’t? And how does a system with higher mobility between jobs—and more jobs overall—necessarily make workers weaker? Isn’t the easy ability to switch jobs the best and strongest bargaining chip any employee has? And isn’t that asset much stronger in a system that has more open jobs? Maybe not; life is often trickier than it first appears. But it’s at least worth considering such obvious contradictions, which Shell so often fails to do.
Despite such flaws, Shell’s role as a respected journalism professor and prolific writer makes the reader expect excellent reporting—and she sometimes delivers. An obvious example comes in her description of a visit to Berea, Kentucky and its famous work college, which offers a genuinely interesting and sometimes even heartwarming portrait of an Appalachian community that is doing well in many respects, despite difficult circumstances. And although it is overly romantic and sometimes shallow, a similar anecdote about Finland (which she sees as a model for America’s labor market and the nation in general) is also an alluring piece of reporting.
But even here, there’s a huge oversight: While Shell devotes some space to acknowledging problems in Finland and differences between the small Nordic nation and the United States, she fails to mention that it’s not particularly good at providing the higher-wage jobs she says she would like to see. While Finland does have a somewhat lower poverty rate than the United States, it also has a significantly lower median income, slightly below the OECD average. Its median household brings in less money each year than the median one in Alabama; in Madison County, Kentucky (where Berea is); or, for that matter, in Appalachia as a whole. The OECD also reports worse housing and other conditions in Finland relative to the United States, and the country does worse on many other measures as well. So if we want a strong, economically prosperous middle class in the United States—as Shell claims to—then the income of those in the middle is the best measure of this. Perhaps the greater leisure time, self-reported life satisfaction, and government-provided benefits enjoyed by Finns could be judged a worthwhile trade-off, but it’s problematic to say that “the standard of living in Finland is among the highest in the world” without pointing out that the median household there would be considered, at best, lower-middle class and probably “working poor” in the United States.
Shell’s reporting is lacking in some other ways. One of the major flaws in this regard is the absence of a diversity of voices and perspectives that would be key in a study like the one attempted in The Job. The bulk of Shell’s interview subjects are her fellow college professors, qualified only to comment on their specific areas of interest. (Several people identified by skills or backgrounds—a “roboticist” for example—actually appear to earn their incomes from university faculty appointments).
Further, she rarely cites scholarly work by these people, preferring mostly to quote from interviews. Even Kathryn Edin, a prolific scholar, is mentioned in the text for her doctoral dissertation rather than one of her books or academic papers. Further, Shell usually provides far more background on a variety of her academic peers than she does the relative handful of ordinary workers with whom she spoke directly. Some of these people are identified by characteristics like a hairnet or an offer of brownies rather than names. And although this may be to protect their anonymity, for someone who seems to want to champion the interests of the American working class, Shell does much less than might be expected to let them speak for themselves. Alas, class bias bubbles up even when it’s unwelcome.
For all of its sloppiness of method, poorly thought-out conclusions, and dubious analysis, The Job partly redeems itself with some good ideas. A chapter on education, job training, and colleges is particularly notable in this respect.
Shell describes ways that well-intentioned and well-designed efforts to align education with the predicted future needs of industry don’t always meet the needs of the people involved. Accordingly, she suggests that education should aim for broader and sometimes more humanistic goals. She’s right that businesses are almost always self-interested and, in many cases, don’t necessarily even know what they themselves might need in the future. Business-directed central planning, in other words, doesn’t work any better than central planning directed by a government and, since it lacks democratic input, may even be worse.
Her treatment of unions and other worker organizations also deserves praise. Drawing on the ideas of highly successful labor organizer David Rolf (whom she interviewed but whose written work she neglects to cite in the text) and Freelancers Union President Sara Horowitz, Shell suggests that firm-based collective bargaining is not the wave of the future and instead calls for the creation of more flexible and innovative worker organizations that operate over the range of an industry or profession. Also to her credit, she looks at a variety of ways this might happen. Her exploration of some Hollywood guilds, though brief, is particularly enlightening.
More than anything else, however, Shell’s book is a 416-page exercise in confirmation bias: She dislikes American capitalism, the American education system, and American culture more broadly. Worse, though she likes to talk about the working class, it’s the educated elite she allows to take center stage. She would like the United States to be more like Europe, where, she claims (without evidence), schoolchildren are “taught to revere poets and philosophers” rather than people involved in the sordid business of inventing things and making money. And in making these evaluations, she has sought out anecdotes that confirm her prior beliefs, even as she ignores copious data that undermine them.
To be sure, she diagnoses some problems correctly like declining job satisfaction in the United States, businesses with excessive influence over the education system, the decline of worker organizations (particularly for the less skilled), and the persistence of a sizable group of poor people in a wealthy country. But she also assumes, implies, or asserts a number of things that simply aren’t supported by the data and, throughout, seems to exalt her casual observations above empirical reality. While there is some good to be found in Shell’s book, it is a deeply flawed project, and a misleading one for those who would depend upon it.
See Tyler Cowen, “The Inequality that Matters,” The American Interest (January/February 2011).
For more detail see Neil Gilbert, “The Inequality Hype,” The American Interest (January/February 2017).
The post Hack Job appeared first on The American Interest.
The Iceman Cometh
“Liberal International Order” rates some 100,000 entries in Google, typically with “collapse” or “R.I.P.” appended. The doomsayers have a point. Born in 1945, the LIO was an American project secured by American power. Now, it is being undone by America, as Donald Trump is putting the axe to what his 12 predecessors since Harry S. Truman had safeguarded.
In his address to the 2018 UN General Assembly, Trump took to the chainsaw. “America will always choose independence . . . over global governance.” We will “never surrender America’s sovereignty to an unaccountable global bureaucracy.” We “reject the ideology of globalism and embrace the doctrine of patriotism.” So good-bye to the LIO, which is just another label for “globalism,” now demoted to a four-letter word by Trump.
What is—or was—the LIO? First of all, it is a set of global institutions—an alphabet soup of acronyms. At its core lies the UN, enshrining the sovereign equality of all nations, banning interference in domestic affairs, and prohibiting force except in self-defense.
IMF, OECD, GATT and its successor WTO underpinned the liberal economic order. Its cornerstones were free trade, multilateralism, and the provision of capital to feed global growth. Dissed by Trump, the World Trade Organization was a historic first. Trade conflicts would no longer be resolved by gunboat diplomacy, but under universally binding rules. To encourage free trade, the U.S. opened its vast markets. The infamous Smoot-Hawley Tariff of 1930 had topped out at 59 percent. Today, Trump is flinging punitive tariffs across the world.
The strategic foundation of the LIO was an American-sponsored network of alliances to defend those who could not defend themselves—from NATO for Europe to security treaties with Japan and South Korea in Asia.
Why dredge up ancient history? Because this order, Made in U.S.A., has blessed its members with the longest peace ever, as well as burgeoning trade and economic growth. So no more global war, nor another Great Depression.
Why would the United States want to mess with that order, especially when it costs the U.S. taxpayer just 4 percent of GDP for defense? In World War II, it was ten times more. This modest investment fetched a tidy return. On top of strategic stability, it bought the U.S. a richesse of political benefits: authority and influence, agenda-setting and convening power. Never has a hegemon done so well for itself by doing good for others.
Donald Trump is not going for retrenchment like Barack Obama, nor for isolationism as after World War I. His game is to switch from institutionalism to power politics, which the Founding Fathers had abhorred as the devil’s work. In trade, Trumpism is not win-win, but “I win if you lose.” It is “fire and fury,” escalating tariffs, contempt for hallowed Western institutions like NATO and the G-7. It is “America first” and damn the rest. It is John Ford’s pistolero Liberty Valance—no longer Spider-Man whose uncle counseled, “With great power comes great responsibility.”
What world order next—if any?
Westphalia 1.0 through 3.0
Not gifted with prophecy, scholars like to look to the past for inspiration. A favorite model is the “Westphalian System,” the first stab at world order. In 1648, World Order 1.0 ended a century of religious mayhem that had wiped out one-third of Central Europe’s population.
Westphalia launched a thoroughly modern process in interstate politics—what we praise as “multilateralism” today. For the first time in history, a mammoth congress sealed the deal: 109 delegations traveled to Münster and Osnabrück, twice the number of those who founded the UN in 1945.
There, they crafted a rules-based system—another first. Like the UN nearly 300 years later, it enshrined the principles of state sovereignty and inviolable borders. What princes and potentates did at home was nobody’s business, hence cuius regio, eius religio—whose realm, his religion.
So no more crusades in the name of the Almighty, and no more “regime change.” Keep God out of it. That did not end wars, by any means. But it took the boundless fury out of the state system that had, during the Thirty Years’ War, wiped out eight million souls for using the wrong prayer book.
1.0 held for the next 150 years, until the French Revolution. Suddenly, “God” was back, this time in the guise of the secular democratic faith. This belief in the righteousness of “liberty, equality and fraternity” again fueled war to the max. A wondrous “force multiplier,” the democratic catechism propelled Napoleon to Moscow and Cairo. The price was again a million-fold death.
So on to “Westphalia 2.0.” At the Congress of Vienna in 1815, the Greats laid down new rules. One was the “balance of power”—no nation must be strong enough to gobble up the rest, as Napoleon had done. The other norm was “dynastic legitimacy”—no revolutionary fervor would ever again be allowed to topple Europe’s royals.
Henry Kissinger wrote the book on this grand bargain, aptly titled A World Restored (1957). In a recent work, World Order, he defined the term. It designates “a set of commonly accepted rules that define the limits of permissible action and a balance of power that enforces restraint where rules break down,” preventing one state “from subjugating all others.”
Compared to Europe’s next “Thirty Years War,” aka World Wars I and II, the 19th century was sheer bliss, with short campaigns and limited objectives. The next try was “Westphalia 3.0,” the post-1945 system built around the UN. Like 1.0 and 2.0, this compact did not launch Kant’s “perpetual peace.”
But a miracle unfolded nonetheless. While some 250 major conflicts with 50 million victims have ripped through the world, the last 70 years have been mercifully free of Great Power wars. America and Soviet Russia remained on their best behavior. This longest peace was a gift the world had never seen.
What Is the Present World Order?
Was 3.0 sturdier than its predecessors? The Westphalia and Vienna systems certainly did not stop the usual suspects from fighting: the Habsburgs, France, England, Russia, Prussia, then Germany. So what made the difference?
Go back to Kissinger’s two conditions of world order. One is a set of rules defining “permissible action.” The other is a balance of power that prevents “subjugation.” What is more effective—norms or power?
Rules are made of paper, balances from steel. Hence realists steeped in history put their money on balance. Balance does not prevent Great Power war. But in the end, superior counter-force stopped those who would crush the rest, from the Spanish Habsburgs in the 16th century all the way to Hitler’s Germany in the 20th. What about the Soviet Union? Credit goes first to the overwhelming power of the U.S., and then to something completely new under the sun: an ultra-stable balance in the deadly shadow of nuclear weapons.
The fantastic stability of World Order 3.0 has held in spite of the endless strife taking place beneath the overarching “balance of terror.”
Think Korea, Algeria, Vietnam, Afghanistan, Iraq, Serbia, and Libya, alongside routine mayhem in the Middle East and Africa. In the past, such “little wars” had invariably triggered Great Power wars, never mind Westphalia and Vienna. Nor did the UN secure the peace. The miracle rested on a single unwritten rule in the age of the atom: whosoever shoots first, dies second. As a result, no nuclear power has ever attacked another; Pakistan’s foray into India in 1999, one year after going nuclear, is the exception that proves the rule.
There is a catch, though. Balances don’t form by themselves. There has to be one player who harnesses coalitions against the imperialist du jour. Until World War II, that role belonged to Britain, the mighty island power. It masterminded all those alliances that laid low the world’s tyrants from the emperors of Habsburg-Spain to Der Führer of the Third Reich. After 1945, the mantle fell on the United States, which held in check the Soviet Union and China while chastening a slew of lesser despots.
Why the United States? Its unmatched economy and military deliver only part of the answer. For the first time, Number 1—historically the greatest threat to the balance—did not go for empire, which invariably provokes “ganging up” on the part of Numbers 2, 3, 4. Instead, this Gulliver produced on a global scale what economists call “public goods.” Such goods are the free movement of products and capital, open sea-lanes, institutionalized conflict resolution, and security systems like NATO.
Once these goods exist, anybody can enjoy them, like a city park or a public school. The U.S. provided the startup capital and financed most of the maintenance—but not out of sheer altruism, let alone, as Trump thinks, because Uncle Sam was suckered. For fabulous were the returns in terms of loyalty and legitimacy. The genius of U.S. diplomacy was to push its own interests by serving those of others. Yet for Trump, “globalism” ranks right next to Beelzebub.
Wrecking the House America Built
If 45 brings down the house that America built, what’s next—“Westphalia 4.0?” Maybe negotiated in Beijing or Moscow? Not likely, because it takes liberal states like Britain or the U.S. to craft a liberal order. In centuries past, absolutist regimes like those of Spain, France and Tsarist Russia sought to impose despotism wherever they conquered. Bolsheviks and Nazis never dreamt of free trade or peaceful adjudication, let alone liberal democracy.
Luckily today, there are no new Napoleons or Stalins on the horizon, despots who would bring secular salvation to the world by vanquishing it. Russia and China no longer pray to the God of Marxism. They are revisionist, not revolutionary powers. They want a bigger pile of chips; they don’t want to overturn the table, let alone wreck the casino. They play an opportunistic game: let’s see how far we can go at a calculable risk. They will place modest bets, testing U.S. resources and resolve.
As Trump is swinging his axe, could the U.S., the housekeeper, lose out in this one-plus-two world? Ironically, Trump’s what-do-I-care machismo seems to work—for now. With his trade and sanctions wars, Trump has gotten the attention of Europe, China, and Russia; he may yet soften up North Korea and Iran. Best of all, the oldest game of world politics—balancing against Mr. Big—has not kicked in against the U.S. America’s rivals are not organizing global hunting parties to bring down this rogue elephant. Not yet.
Such reticence may be testimony to the overwhelming power of the U.S. How do you win a trade war against the mightiest economy that controls the channels of world trade and finance? How do you best a giant still embedded in a globe-spanning system of alliances, when you have none of your own?
Maybe Trump will be history by 2020. Maybe America will resume its role as benign hegemon, defanging malefactors, rewarding friends, and securing the LIO Made in U.S.A. But if Trumpism—might makes right—is the future, the U.S. will not flourish. Its commercial rivals will band together to unseat the almighty dollar that underpins Trump’s orgy of sanctions. They might raise ever more trade barriers against the U.S. while excluding it from regional free-trade areas more important than the Trans-Pacific Partnership, which Trump has spurned in favor of one-on-one deals. They will form coalitions to isolate the U.S. in the diplomatic arena.
This dark scenario comes with a warning to the Demolition Man. He is chopping away at the very order that has granted America a lifetime of primacy at a reasonable price. Rogue elephants don’t have friends; they provoke fear and defiance. You can’t punish those whom you need to harness against Iran. As history shows, world order does not form ex nihilo. It demands a sponsor and housekeeper. Does the U.S. really want to live in a world where Russia and China end up making the rules? Neither does Europe, nor the rest of the world.
The post The Iceman Cometh appeared first on The American Interest.
November 7, 2018
Three Moral Economies of Data
On October 24, Tim Cook, the CEO of Apple, gave an epochal speech to a conference of European data officials. Many outside the technology industry have warned that we are sleepwalking our way through a vast transformation of politics, economy, and society. Our world is being remade around us by data and by algorithms. Tools and sensors that gather data on people’s behavior and dispositions are increasingly pervasive. Our cars secretly upload information about the music that we listen to, and where we listen to it. Our televisions, like the telescreens in 1984, can listen as well as speak. Mountain of data is piled upon mountain. The obdurate heaps are sieved, winnowed, and harvested by machine-learning algorithms, vast unthinking engines of calculation and classification that allot us to categories that may redefine our lives while remaining incomprehensible to human beings, and that ceaselessly strive to predict and even manipulate our actions. Together, these technologies are commonly known as “artificial intelligence” (AI)—and the implications for politics and economics are vast.
Cook’s speech spoke eloquently to the problems of AI. What was remarkable was not what he said but that it was Cook, the leader of a major technology company, who was saying it. Cook told his audience that AI needed to be subordinated to human values and not allowed to displace human ingenuity and creativity. Of course, Cook was far better positioned to make such an argument than the CEO of Google or Facebook would have been. His company’s fortunes rely on selling physical products to consumers, instead of selling consumers to advertisers. Yet what he did was to bring the battle over the relationship between technology and morality to the heart of Silicon Valley.
The speech was a bold and very consciously political move. The technology guru and intellectual entrepreneur Tim O’Reilly has observed that if data is increasingly the central source of value within a corporation, then how that data is managed and monetized is likewise central to how value will be captured and distributed within and across national economic units. This is a vast transformation of our economy. To give some sense of its scale, 2014 was the first year in which the value of international data exchanges exceeded the value of traded goods.
Of course, how value is distributed across an economy has enormous political implications. The economic implications are portentous, to be sure, but the social consequences are equally far reaching. Data collection is reshaping individual privacy, while predictive and manipulative algorithms have profound implications for how we think about the autonomy, or agency, of the individual—both, again, quintessentially political questions.
To understand these politics, we need to think about the moral frameworks that they are embedded in. Google and Facebook’s model, in which individuals come to know themselves and be known through data, is not just driven by greed. It also exemplifies a deeply felt morality: both companies see themselves not just as businesses, but as evangelists bringing the true faith to the unredeemed. Cook’s speech represents a different and incompatible morality, in which technology is harnessed so that it enhances rather than transforms our capacity for moral judgments.
Yet it is not just the clashing moral visions of companies that vie with each other for supremacy. It is the clashing morals of national and supranational societies. Cook applauded the European data officials that he was addressing, telling them that Europe’s General Data Protection Regulation (GDPR) was leading the way, and that countries such as the US should follow. Many in the US—companies such as Google and Facebook, Republican and Democratic senators and members of Congress, think tanks, academics and public intellectuals—will sharply disagree, elevating instead what they see as a different and better understanding of technology and society. Understanding these varying moral visions—how they clash, reinforce each other or influence each other—will be key to understanding the politics of the twenty-first century.
We propose here a moral economy framework for thinking about how governments and societies are choosing to address the political, economic, and social implications of data and data-driven artificial intelligence. By “moral economy” we mean the combination of values and norms that underpin how state and private sector actors work together to allocate, produce and distribute various goods and services. Every set of economic arrangements implicitly embeds and promotes some set of moral suppositions about how the participants in that economy should relate to one another; they define, in other words, their rights and responsibilities.
The moral economy of free-market capitalism, for example, is rooted in the belief that markets allow people to make their own choices and thus promote liberty, and also that markets encourage the efficient use of resources, thus increasing aggregate prosperity; on the flip side, free-market capitalism is less concerned with issues of equality or the promotion of community. Socialist economies have their own underpinning moral systems, as do those characterized by feudalism, and so forth. The point is not that some economies are more or less moral than others; it is rather that all economies have a specific underlying morality. This is also true for emerging economies of data.
Emerging data economies enable new moral economies—values that people are supposed to hold, and standards that are supposed to shape their behaviors. Sometimes these values and standards are so universally accepted as to be invisible; other times they are deeply contested. When they are contested they become very visible indeed. Either way, they shape the boundaries of what people think is acceptable and possible.
A given new technology does not automatically result in a specific moral economy, but can enable many different possible moral economies. The outcome depends on the existing political, economic, and social systems with which it intersects, and how individual and collective actors interpret and guide the collision. Furthermore, just as technology reshapes moral economies, the moral economy can reshape how technology develops, creating pressures that push toward some paths of development and away from others. Different societies, with different moral economies, can also interact with each other, using political tools to try to reshape how technologies are used, and seeing their own moral economies change as technology leaks across political, social, and moral borders. As tech insider Kai-Fu Lee says, “It’s up to each country to make its own decisions on how to balance personal privacy in public data. There’s no right answer.”
While prediction about technology development is a hazardous business, we believe that the next two decades are likely to be shaped by the interactions within and between three distinct approaches to the moral economy of data: in the United States, China, and the European Union. While all countries and cultures will have their own reactions to data and its uses, these three matter most not only because they are the primary sites where data-driven technologies are being developed, but also because each of the three is developing a distinct moral economy of data. Clashes among these three moral economies over how these technologies should be deployed are inevitable.
The U.S. moral economy of data is perhaps the best understood of the three. On the one hand, the U.S. moral economy of data treats personal information as an economic commodity that corporations are free to treat as largely distinct from the individual from which it is drawn. Data can be agglomerated, packaged, traded, and utilized in ways that are similar to, but in some ways more sophisticated than, the bundling of mortgages and other financial instruments. From this approach emerges a multitude of classifications, which not only shape the advertisements people are exposed to, but increasingly the market chances and opportunities they have. One group of people will see attractive offers of finance; another group, predatory loans.
On the other hand, there is a sharp difference between the liberality with which the U.S. moral economy of data treats market exchange, and the skepticism with which it treats the state’s use of personal data. While the surveillance capabilities of the U.S. national security state are enormous and have increased dramatically, these are tolerated only insofar as they are focused on targets outside the United States. Areas where external and internal information overlap (as necessarily happens, given the nature of bulk collection) are the focus of sharp-edged suspicion and legal contestation. In both its permissiveness toward corporations and its suspicion of the state, the U.S. moral economy of data reflects the libertarian taproots of American internet culture.
The European moral economy of data is less well understood. Instead of treating personal data as a commodity “in the wild” that corporations can harvest and sell, it treats it as an aspect of individual human dignity, in principle inseparable from the human being with whom it is associated. On May 25, 2018, the General Data Protection Regulation (GDPR), which epitomizes this approach, came into effect.
The GDPR approach has implications for both the economy (where data can only travel with permission of the individual to which the data refers, and with a strong associated set of rights), and the state (where security-related uses of data need to be limited and purpose specific). Over a period of two decades, this understanding of the moral economy of data was embattled and subdued, thanks both to the market power of e-commerce companies (and the willingness of the U.S. state to support them) and the desire of security agencies within the European Union to have untrammeled access to the information they believed they needed to do their job. Now, as the emergence of the GDPR shows, both sets of constraints have weakened, thanks both to constitutional and legal changes in the European Union, and the enthusiasm of judges, regulators, and non-governmental organizations to protect and, if possible, extend the European moral economy worldwide.
The Chinese moral economy of data differs again. In contrast to both the United States and European Union, no very strong distinction exists between the state and commercial sectors, which tend to blur into each other, especially where large companies are concerned. There is furthermore little concern with formal rights of the kind that shape both the U.S. approach to government use of data, and the EU’s general approach. Instead, a strong emphasis on the value of data both for profit and social stability takes pride of place. Both the private sector and the government (often hand-in-glove) are gathering enormous amounts of data, on everything from online behavior to how people walk down the street, with relatively little oversight and control.
The development of the so-called Social Credit System, which is meant to be a kind of FICO score for your entire life, represents the direction all this is headed. This is not, as of now at least, creating the Panoptic Leviathan suggested by some overwrought Western commentary (usually written from the point of view of the American moral economy of data). Many of these schemes are not additive but instead represent the initiatives of specific parts of the state or commercial sector. While the state is powerful and well-staffed, exactly because it interpenetrates society, it is interpenetrated by society’s political and economic conflicts as well. This gives rise to a new moral economy that combines (a) easy access to data (including forms of data that are controversial in democracies) with (b) economic dynamism and aggressive willingness to explore the entire possibility space of profitable opportunities and (c) the general concern of the Chinese state with political stability and the continued rule of the Communist Party.
Now, these three distinct moral economies are themselves evolving. In particular, two of the actors, China and the European Union, have evolved in historically consequential ways over the last 10 to 15 years.
Since the emergence of the data economy at the start of the 21st century, the United States has remained in the X-No/Y-Yes camp (distrustful of government control of personal data, but trusting of corporate control). Though there has been serial outrage over internet companies’ failure to safeguard customer data, and much ire directed at social media companies over their role in stoking political divisions, as of now there has been no concerted attack on the right of internet companies to exploit the data they collect on their users. This may change in the future. Some people on the left attack large internet firms as a new expression of corporate power. Think tank intellectuals associated with Senator Elizabeth Warren are beginning to articulate an antitrust case against the data leviathans. Yet none of this has yet translated into a coherent and deep-rooted movement for reform.
By contrast, Europe and China a dozen or so years ago both began in the diametrically opposite quadrant from the United States—that is, trusting the government with data but suspicious of corporate control. The reasons for their suspicion of corporations and support for their governments were quite different, however. The chariness of continental Europeans in particular reflected a more aggressive regulatory approach to companies and a suspicion of untethered profit motives. Often Europeans were willing to support counter-terrorism programs after violent attacks, even when they were deplored by privacy officials. For the Chinese, it reflected the twin artifacts of the unapologetic political hegemony of the CCP and concomitant suspicion of foreign companies gathering data about its citizens.
These different reasons for being in the X-Yes/Y-No camp in turn shaped how each has evolved. The Europeans have become even more suspicious of companies, and even more committed to regulatory management, all the while growing increasingly skeptical of counter-terrorism arguments favoring governmental access to these data. This was reinforced by the post-Snowden perception—only partly accurate—that anti-terrorist surveillance was primarily being pushed by the United States. The past two years have seen the formation of a loose consensus across regulators and politicians that antitrust, privacy law, and constraints on the sharing of information should work in harness to protect this moral vision; meanwhile, the ability of security agencies to use personal information has been limited by European court judgments on data retention. The result is that the EU has largely migrated to the No/No camp—suspicious of both governmental and corporate control over personal data. The GDPR and looming antitrust actions are the fruits of this shift.
On the other hand, the Chinese dealt with their position by essentially replacing the American internet companies with homegrown varietals, all of which are firmly subordinate to, if not in direct partnership with, the regime; as long as the state controls the companies, the state is happy to give the companies free rein to innovate. The result is that China is now in the Yes/Yes camp—suspicious neither of governmental nor of (what is close to indistinguishable now that the companies are all Chinese) corporate control over personal data. The fruit of this is the Social Credit system, which is openly and unapologetically being implemented by companies in partnership with regional and municipal authorities to improve social service provisioning—including the social service of monitoring and responding to citizen unhappiness, and of course in tandem “managing disruptive behavior.”
These transitions have created an interesting geopolitical inflection point. A dozen years ago, the Chinese and the Europeans were in some ways aligned in their suspicions of the United States, which had made diametrically opposite “moral economy” decisions about data privacy from what they both preferred (albeit for different reasons). Since then, their positions have evolved in opposite directions, with the result that the Chinese and the Europeans are now more sharply at odds on this topic than either is with the United States.
Indeed, the advent of Donald Trump has reinforced this dynamic. On the one hand, Trump has stopped U.S. jaw-jawing at the Chinese about human rights (he harasses them on trade and currency issues, but not on data privacy or its social-control deployment), while the Europeans have become far more critical of China’s data policies. Interestingly, the European Union is probably at this point the most aggressive of the three players in attempting to extend outward its own moral economy of data. As described by one European participant in these debates:
Europe really wants to take its role seriously and become the global gold standard setter and also the global regulator for these issues on monopoly. And in a way, to find a European way. The Silicon Valley or Washington approach is that they do what they want and then move fast and break things and then see what happens, and if they make money it’s fine. The Chinese approach, on the other side, they basically control everything, including the content, and have the social rating system and stuff like that. We don’t want that. We are having much broader support for a European approach, that tries to regulate technology, to regulate technology companies, to regulate the platform and what have you, based on our European values, on privacy, on freedom of information and the rule of law.
Tim Cook would like to harness this assertiveness for his own purposes. “We at Apple believe that privacy is a fundamental human right,” he declared, echoing the idiom of the European moral economy of data. But his elaboration on the point still assumed that any change in the United States would build on the existing U.S. model—the right of privacy articulated in Justice Brandeis’ famous dissent, rather than the encompassing notion of privacy favored by the Europeans:
We at Apple are in full support of a comprehensive federal privacy law in the United States. There, and everywhere, it should be rooted in four essential rights: First, the right to have personal data minimized. Companies should challenge themselves to de-identify customer data—or not to collect it in the first place. Second, the right to knowledge. Users should always know what data is being collected and what it is being collected for. This is the only way to empower users to decide what collection is legitimate and what isn’t. Anything less is a sham. Third, the right to access. Companies should recognize that data belongs to users, and we should all make it easy for users to get a copy of…correct…and delete their personal data. And fourth, the right to security. Security is foundational to trust and all other privacy rights.
In other words, while Cook was willing to move some ways toward acknowledging the European view that data integrity should be treated as a “human right,” he is still hedging by insisting that data be treated as property, and therefore fundamentally commodifiable.
It is important to emphasize that none of these moral economies is internally monolithic, any more than any moral economy ever completely ossifies. Each combines moral verities that are more or less unchallenged with areas of sharp contention. Furthermore, the coexistence of these moral economies in a globalized and highly interdependent world means that internal fissures are likely to interact with external pressures in complex ways. What happens if, for example, machine-learning techniques based on large-scale individual surveillance data percolate from China to the United States? How might U.S. platform companies change their business model (with associated implications for the U.S. moral economy of data) if they are obliged by European courts to provide far stronger data rights to citizens? How might China respond if the United States and the European Union work more closely together to try to bind international data exchange to individual liberties against the state? And so on. We anticipate that, as with Tim Cook, there will be efforts to translate preferred actions from the idiom of one moral economy to another, but that, as with all translations, substantive differences will remain between the different frameworks. A good translation is not a transposition; it is a rearticulation of some of the animating spirit of the original work in an alien vernacular with different referents.
To get our heads around what all these divergent and evolving moral economies may mean, much further inquiry is needed. First, we need much better maps of the different moral economies of data, their implications, their agreed-on verities, and their areas of conflict, as has already been done for the United States. This should also include looking at the moral economies of data beyond “the big three”: India, Japan, and even Israel will be important technological players in the development of data-intensive information systems, and are likely to have differing views of how privacy, security, state interests, and corporate independence should be balanced in the development of data-intensive applications.
Second, we need to work from these improved understandings to assess the areas in which they respectively reinforce and impede the abilities of different actors in the state, business, and civil society to achieve their objectives (as well, often, as implicitly shaping those objectives).
Third, we need to chart out the reciprocal ways in which these moral economies shape different trajectories of technological development, respectively favoring certain lines of research while disfavoring others. Fourth and finally, we need to examine the interaction between these three different understandings of the moral economy in an interdependent world, where none can prevail entirely, and each is obliged to press back against or to accommodate moral imperatives that do not originate internally.
1. Definitions of artificial intelligence, machine learning, deep learning, and even data itself reside in highly contested terrain. At the least, a consensus is forming around the idea that there are five stages of AI. Our definition here does not attempt to parse these distinctions. It refers broadly to the emerging suite of data-driven computing technologies that impinge on various cultural frameworks. The standard textbook here is Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, which has sold hundreds of thousands of copies.
2. For a cognate analysis, see Diane Francis, “Three Glimpses of the Future,” The American Interest (June 2018): https://www.the-american-interest.com/2018/06/20/three-glimpses-of-the-future/.
3. Each of these approaches is internally contested, yet in each the contestation involves a distinct set of moral orientations and associated forms of political, economic, and social organization. These three moral economies are moreover organized according to distinct and sometimes contradictory logics, which means that there is likely to be continued contestation within as well as between them.
4. A wide body of scholarship and writing addresses the U.S. moral economy of data, but it is most directly addressed in Marion Fourcade and Kieran Healy, “Seeing like a market,” Socio-Economic Review 15:1 (2016), pp. 9-29.
5. Telephone interview with Ralf Bendrath, former senior policy adviser for Jan Philipp Albrecht, June 2018, quoted in Henry Farrell and Abraham L. Newman, Of Privacy and Power: The Transatlantic Struggle over Freedom and Security (Princeton University Press, forthcoming 2019), pp. 174-175.
The post Three Moral Economies of Data appeared first on The American Interest.
Chronicles of the Meme War
LikeWar: The Weaponization of Social Media
P. W. Singer and Emerson T. Brooking
Eamon Dolan / Houghton Mifflin Harcourt, 2018, 416 pp., $28
The word surreal appears so many times in LikeWar that I lost count, but each time its use was appropriate; indeed, it would have been relevant myriad other times. How not? The book’s subject is social media, and how those media have been transformed into vehicles manipulated by loathsome villains to brainwash the unsuspecting and wreak chaos, hatred, and even violence. (Social media are sometimes used for decent ventures by decent people). LikeWar is scrupulously researched, deftly written—surprising in a dual-authorship book—and well worth reading. Its depiction of a world being driven crazy, or worse, by a unique new communications instrument constitutes a ghastly dystopian vision.
Despite the word weaponization in LikeWar’s subtitle, P. W. Singer and Emerson T. Brooking spend little time on specific military uses of the internet. But social media campaigns that augment military operations have their place in the book, as do the extraordinary range of activities initiated to undermine democracies, strengthen dictatorships, demonize numerous ethnic minorities, and pamper Taylor Swift’s fans (the book has its lighter moments). The latter notwithstanding, the sheer nihilistic rage detailed in LikeWar sometimes has a numbing effect. But the authors aren’t exploiting the “cruel surreal spectacle[s]” that they examine, to borrow their own description of the online extravaganza that accompanied the Islamic State’s 2014 war in Iraq. They’re on the side of the angels; they want the internet cleaned up.
To give the Devil his due, the vile individuals, cabals and governments presented in this book seem much more creative at utilizing social media than the good guys (who usually appear to be playing catch-up). “Half of the world’s population is online,” the authors assert, and most of us, the book suggests implicitly, are vulnerable to heinous cyber-machinations. The malicious tactics can take a multitude of forms, which the book explores at length.
The Islamic State’s invasion of northern Iraq in 2014, for instance, was accompanied by “a choreographed social media campaign to promote it.” The goal of the cyber project was to “sow terror, disunion, and defection,” and not just in Iraq. One might wonder if the invaders—religious fanatics dedicated to their cause—would have succeeded even without the cyber exercises against less-than-competent Iraqi troops. But that question is almost incidental: Because those ruthless zealots carved out a “caliphate” (including the city of Mosul), terrified a lot of Americans by expertly staging executions on the internet, enlisted 30,000 new recruits, and prompted “lone wolf” terrorism throughout the world, the Islamic State’s online crusade became a role model for the dregs of humanity everywhere—and some of those dregs wielded real power.
China didn’t need any inspiration. Under President Xi Jinping the country gives new meaning to the Cold War term brainwashing. “Through the right balance of infrastructure control and enforcement, digital-age regimes exert remarkable control over not just computer networks and human bodies, but the minds of their citizens as well,” the authors write. “No nation has pursued this goal more vigorously—or successfully—than China.” The Chinese government, ruled by the all-powerful Communist Party, ensures that “[a]lthough Chinese internet users [can] build their own websites and freely communicate with other users inside China, only a few closely scrutinized strands of cable [connect] them to the wider world.” The modern equivalent of the nation’s Great Wall is the “Great Firewall.” But Xi and his colleagues haven’t stopped there:
Chinese authorities also sought to control information within the nation. In 1998, China formally launched its Golden Shield project, a feat of digital engineering on a par with…the Three Gorges Dam. The intent was to transform the internet into the largest surveillance network in history—a database with records of every citizen, an army of censors and internet police, and automated systems to track and control every piece of information transmitted over the web. The project cost billions of dollars and employed tens of thousands of workers. Its development continues to this day.
Such passages make the surveillance state of 1984 seem quaint by comparison.
Countries and religious movements aren’t the only entities that employ the internet for sinister purposes. There are cyberpunks of all kinds and allegiances, whose goals may include influencing elections, menacing perceived enemies, promoting bigotry, raising hell for the sheer sadistic joy of it—or all of the above. The authors maintain that social media “[have] revolutionized white nationalist, white supremacist, and neo-Nazi groups, spiking their membership and allowing their views to move back into mainstream discourse. In the United States, the number of Twitter followers of such groups ballooned 600 percent between 2012 and 2016.” LikeWar devotes several pages to the metamorphosis of a cartoon character, the hapless Pepe the Frog, into a meme (defined as “the vessels by which culture is transmitted—and a crucial instrument by which LikeWar is fought”) and a right-wing symbol. It’s an outlandish story, ludicrous in a way, but in another way disquieting for what it says about the brio and cunning with which contemporary fascists successfully orchestrate virtually anything to needle (or worse) their foes. Poor Pepe; poor us.
Fascist ideology—I use the term loosely—essentially comes down to a reveling in the ecstasy of violence. But fascists aren’t the only ones who are buzzed by mayhem. Enter the cyberbully. The book recounts a dreadful story from 2006 in which a 13-year-old girl was driven to suicide by a malicious sockpuppet account (a phony online persona) controlled by the mother of an ex-friend. The circumstances induced major social media companies, including Twitter, Google, and Facebook, to “[ban] personal threats or intimidation, soon expanding to include a more general ban on harassment. . . . These rules seemed simple. They’d prove to be anything but.” That is, of course, inevitable when First Amendment issues are addressed in this country. Lawyers and pundits have historically been highly adept at parsing censorship regulations in order to neutralize or at least weaken them. The giant online media companies, which like most of us are both supportive of free speech and appalled by cyberbullying—and, of course, interested in chasing profits—find themselves constantly entangled in excruciating legal, political, philosophical, and, yes, literary dilemmas concerning how to police their domains. (“Each new rule required more precise, often absurd clarification.”) In LikeWar, anecdotal evidence suggests that these quandaries will continue for a very long time.
At the beginning of LikeWar, the authors claim that “this is not a book about the Trump presidency.” But inevitably our pernicious President pervades the book like acid corroding metal. During his presidential campaign, “he was a literal [online] superpower. He had by far the most social media followers. […] He deployed this network to scale, pushing out the most messages, on the most platforms, to the most people. Importantly, Trump’s larger follower pool was made up of not just real-world voters but . . . a cavalcade of bots and sockpuppet accounts from around the world that amplified his every message and consequently expanded his base of support.” Trump’s “crucial, deciding force was a new group: a cohort of mostly tech-savvy angry, young, white men who inhabited the deepest bowels of internet culture.”
Why did these anomic Caucasians love Trump? Alienation from the economy and culture are mentioned. “But most of all,” the authors say, “they liked Trump because in the fast-talking, foulmouthed, combative billionaire, they saw someone just like them—a troll.” The trolls whom Singer and Brooking describe—defined as “internet users who post messages that are less about sharing information than spreading anger” and whose “specific goal is to provoke a furious response”—sound like the thugs in A Clockwork Orange given new toys to play with. We’ve come a long way since we at least paid lip service to bestowing political allegiance based on rational considerations, civilized decorum, and what’s best for the country.
Even more disturbing is LikeWar’s inspection of the Kremlin’s internet interventions in the 2016 presidential campaign. The authors survey the mind-boggling sockpuppet campaign created by the Russians to interfere in the 2016 election. Russian-controlled sockpuppets used three techniques: “One is to pose as the organizer of a trusted group. . . . The second . . . is to pose as a trusted news source. . . . Finally, sockpuppets pose as seemingly trustworthy individuals: a grandmother, a blue-collar worker, a decorated veteran, providing their own heartfelt take on current events (and who to vote for).”
Did the unscrupulous incursion work? It certainly disseminated its reports to an extraordinary number of people: “By cleverly leveraging readers’ trust, these engineers of disinformation induced thousands—sometimes millions—of people each day to take their messages seriously and spread them across their own networks via ‘shares’ and retweets.” Followers of the mainstream media are familiar by now with the Russian conspiracy on behalf of Trump, but LikeWar tweaks even those generally trustworthy media by pointing out that they were also successfully breached by Putin’s dirty-tricks agents. However, the most unnerving aspect of this Russian cyber-warfare, as it is presented in this book, is the intimation that the shenanigans were influential.
P. W. Singer is a strategist at the New America Foundation, consults with the military, and is the author of a number of previous books, including the well-regarded Wired for War, a study of the modern intersection of technology and the military. Emerson T. Brooking was a research fellow at the Council on Foreign Relations and has written for The Atlantic and Foreign Policy. These gentlemen have all the right professional credentials for writing LikeWar and, unusually for public intellectuals, they are neither cynics nor advocates for a particular political cause. Despite their bona fides, though, there is reason to question certain suggestions they make for countering the web’s dark forces. “We’re all part of the battle,” the authors declare, and offer advice for how enlightened folks can restore civility and sanity to the internet, and thus to our society. Some of their counsel: governments must “take this new battleground seriously”; “information literacy is no longer merely an education issue but a national security imperative”; “When someone engages in the spread of lies, hate, and other societal poisons, they [sic] should be stigmatized accordingly.” “Silicon Valley must accept more of the political and social responsibility that the success of its technology has thrust upon it.” There’s more, but I believe readers should discover it.
The guidance appears sound enough, but is it viable? Can any course of action, any reform however shrewd and wise, decontaminate cyberspace? I fear that the answer is no. As I see it, Singer and Brooking have succeeded too well in carrying out their muckraking, in delineating a grotesque, savage cyber world that taints nearly everyone. The (almost certainly unintended) subtext of their reporting is that there is no longer any boundary between the internet and the “outside” world. Nearly all of us are sockpuppet memes, characters in a perverse, garish electronic landscape—the ultimate video game, perhaps—craving the exhilaration of hazing and razing our enemies. The authors’ thoughtful correctives won’t be implemented, and wouldn’t work anyway. Too many people are turned on by this funky nightmare.
For an excellent account of this topic, see Fred Kaplan’s Dark Territory: The Secret History of Cyber War (Simon & Schuster, 2016).
The post Chronicles of the Meme War appeared first on The American Interest.
November 6, 2018
The Criminal Responsibility of Opioid Addicts
In 2016, Julie Eldred, 26 years old, was arrested for stealing and fencing jewelry. This was her second theft offense and she had stolen both times to buy fentanyl, to which she was addicted. She was quickly arraigned and admitted the crime. The judge placed Eldred on probation with the condition that she remain drug-free and submit to randomized drug testing. This is a routine condition of probation and parole for addicted criminals and criminal defendants diverted to drug court programs.
Ten days after being placed on probation, Eldred tested positive for fentanyl. No in-patient hospital treatment bed was available, so the same judge revoked her probation and sent her to prison. After 11 days in prison, an in-patient bed was found, and she was discharged from prison to the hospital.
After her release, Eldred challenged the constitutionality of imprisoning her for her probation violation. She claimed that because she was an addict and relapse is part of the course of the disease of addiction, it was cruel and unusual punishment to imprison her for conduct that she was helpless to avoid.
The Massachusetts Supreme Judicial Court quickly decided to hear the case, which immediately attracted intense interest from the local and national media and from professional groups. The New York Times ran an almost full-page editorial about it entitled “If Addiction Is a Disease, Why Is Relapse a Crime?” (Of course, no one claimed that violating probation is a crime, but newspapers must be sold.) Virtually all the interested professional groups, such as the Massachusetts Medical Society, the American Academy of Addiction Psychiatry, and the Massachusetts Society for Addiction Medicine, supported Eldred’s claims by filing amicus briefs. The only exception was the National Association of Drug Court Professionals, whose members’ jobs depend on courts making staying clean a condition for release. If diverted defendants fail this requirement and are dropped from the specialty court program, they are returned to the criminal justice process for resolution. (Full disclosure: Three other addiction experts, Gene Heyman, Ph.D., Scott Lilienfeld, Ph.D., and Sally Satel, M.D., and I co-authored another amicus brief in support of Massachusetts’s position that revocation did not violate the constitution, a brief joined by seven other addiction experts.)
Claims like Eldred’s have a long constitutional history. In a 1962 case, Robinson v. California, the United States Supreme Court was asked to decide whether punishing someone for being an addict was cruel and unusual punishment. The Court held the statute unconstitutional, but there were seven separate opinions and the basis of the holding was unclear. Two interpretations were dominant. The first was that punishing someone for a status, such as having red hair or being an addict, was unconstitutional because no harmful act was involved. The second was that addiction was involuntary behavior, and it is unconstitutional to punish people for conduct they cannot control. There was also a third view that the state cannot punish people for having a disease, but that view has been conclusively shown to collapse into one of the first two.
The debate was settled by Powell v. Texas in 1968. Leroy Powell was a chronic and severe alcoholic who had been arrested for being drunk and disorderly in public about 200 times. Tried yet again for that offense, Powell used expert witness testimony to establish that being drunk and disorderly was a compulsive symptom characteristic of his disease of chronic alcoholism. He was convicted, fined $50 and appealed immediately to the Supreme Court of the United States because no other court had appellate jurisdiction. (This was clearly a test case.) His claim on appeal was that public drunkenness was compelled by his disease and therefore it would be unconstitutionally cruel and unusual to punish him for this allegedly involuntary behavior. In short, Powell was asking the Court to create a constitutionally-mandated defense of involuntariness or compulsion that would apply in all cases because there is no way in principle to limit the defense to those behaviors that are the basic criteria for the disease. Any criminal behavior that is genuinely involuntary for any reason would have to be excused.
Powell’s claim was plausible on its face, and he was a sympathetic defendant, but a plurality of the Supreme Court rejected his argument in an opinion written by Justice Thurgood Marshall. Justice Marshall gave many good reasons, including hesitance to impose a one-size-fits-all constitutional defense on the states, which are usually granted a great deal of deference by the Court concerning the definitions of offenses and defenses, and the fear that the liberty of people with alcoholism might be more impaired if they were responded to by the civil justice system. These are still valid worries. For our purposes, however, more important was Justice Marshall’s conclusion that the medical science was insufficient to justify characterizing Powell’s conduct as a compulsion symptomatic of his disease.
Justice White concurred in the holding on the ground that Powell was distinguishable from Robinson because in the former the defendant was being punished for conduct independent of Powell’s alcoholism despite Powell’s claim to the contrary. He also noted that if it is unconstitutional to punish someone for the status of being an addict, it seemed equally objectionable to punish them for the conduct—using the addictive substance—that is a fundamental criterion for characterizing the person as an addict.
Powell thus established that Robinson was limited and no state or federal court since has been willing to establish a general “involuntariness” or “compulsion” defense, whether or not the alleged compulsion is produced by an underlying disease. Some appellate judges have been willing to grant such a defense, including for more serious crimes such as armed robbery, that supposedly may be part of the pattern of the defendant’s disease of addiction, but these judges have never been able to command a majority opinion. This is where the law on the responsibility of addicts has stood since 1968. Addiction or any of its arguable criteria, such as compulsion, is not the basis for a defense to criminal conduct.
Julie Eldred attempted to succeed where other addicted defendants had failed. Her claim was largely indistinguishable from Powell's. True, the behavior she was being "punished" for, relapsing by using again, is not independent of her disorder, whereas Powell's conduct was, but the claim of compulsion based on a disease is identical. She tried to upend legal doctrine that had been settled for half a century on the ground that the current science of addiction is finally able to establish that addiction is genuinely a disease and that addicts are compelled to relapse. In the language of the National Institute on Drug Abuse (NIDA), the Federal government's foremost addiction authority and research funder, addiction is a "chronic and relapsing brain disease" and relapse (use) is a compelled symptom of it, a characterization Eldred embraced.
A unanimous Massachusetts Supreme Judicial Court rejected Eldred's constitutional claim in the summer of 2018. In a legally narrow opinion, the court held that, on the state of the judicial record before it, it could not conclude that Eldred's use of fentanyl was involuntary in the legal sense, and thus her constitutional claim had to fail. The court seemed to leave open the possibility, however, that a future case might build a sufficient record to support such a claim. But for now, the settled law is still settled.
There is no doubt that other addicted defendants will attempt to use the current science of addiction to justify a claim that addicts are not responsible for the actions of possessing and using the substances to which they are addicted. Have Federal and state courts, Congress, and the state legislatures been as unfeeling and harsh as the New York Times editorial cited above suggests? Is the legal treatment of addiction fair and optimal social policy? Should future addicted defendants prevail, or is the present policy reasonable?
To answer these questions, the first issue to be addressed is the meaning of addiction. The American Psychiatric Association’s influential Diagnostic and Statistical Manual of Mental Disorders, DSM-5, does not use the term, but instead lists individual “substance use disorders” depending on the problematic substance involved. Nonetheless, most addiction researchers define the term to mean something like persistent drug seeking and using, especially “compulsively” or with craving, in the face of negative consequences (without being clear whether these consequences are subjectively recognized or simply objectively exist). There are no validated biological criteria for addiction.
The conclusion that addicts can't help using, that they are compelled to use, underlies the claim that they are not responsible and should be excused. How can it be fair to blame and punish people for conduct they cannot control? But what if the conclusion about loss of control (or any of its synonyms, such as having "no choice") is highly contestable? Then perhaps legal policy would not look so objectionable.
The meaning of compulsion is unclear, and the attribution of it often rests on a common-sense inference. The addict's persistent seeking and using is accompanied by craving (but not always), by negative interpersonal, medical, occupational, and legal consequences (but not always), by subjective feelings of wanting but not liking the substance (but not always), and by the addict's claim that he wants to quit but cannot. After all, why would he continue to use under these dreadful circumstances? It is concluded that the use "must be" compelled and that the addict is therefore unable to quit.
Observe, first, that the primary, utterly necessary criteria for addiction are intentional actions: persistent seeking and using of substances. These are not pure mechanisms, like spiking a fever in response to an infection or the metastasis of a primary tumor. Injecting or inhaling a controlled substance, for example, is not a muscular spasm. These are intentional human actions, and human action, unlike a pure mechanism, can always be evaluated morally. We don't blame a hurricane for the destruction it wreaks or the infected person for spiking a fever, but we do blame people who commit arson. Why can't we blame and punish addicts for using substances when, unlike mechanisms, they are people who have choices and control over their actions? It may be fearsomely difficult for addicts to control their use, much as it is difficult to break many strong bad habits, but don't we expect people to exert control even over difficult choices when they have good reason to do so?
At this point, proponents of excusing addicts for at least possessing and using make the following major counter-argument. The gist, already alluded to above, is that addiction is a disease—indeed, a chronic and relapsing brain disease—and using is an “involuntary” sign of it. I believe that various parts of this argument are essentially contestable and sometimes flat out wrong.
Even if addiction can be usefully considered a disease, and many dispute this, it is not a disease like any other. Further, it begs the question to say that actions that are a sign of addiction "must be" involuntary just because they are a sign of a disease. Although intentional human action can contribute to both the cause and cure of many diseases, most diseases cannot be "cured" by a simple decision to stop the disease process, because most signs and symptoms of diseases are not actions; they are mechanisms. In contrast, if the addict decides to stop using and acts on that decision for a non-trivial period, the person is no longer diagnosable as an addict. My intentional action of taking an antibiotic may cure an infection, but I didn't cure the infection directly by my action. The addict directly cures the addiction by intentionally not using. Addiction may be a disease, but that does not mean that addicts have little or no choice over the action of using. Lack of choice or control must be proven independently, not simply asserted. Even if addiction is a disease and use is a sign of it, perhaps addicts do have substantial choice about whether to use, even if giving up using is a hard choice.
If persistent use of substances changes the brain, doesn’t this mean that addiction is a brain disease, after all, and that addicts should therefore be excused? Of course, persistent drug use changes the brain, but every experience changes our brains. Reading this article or learning a new language changes your brain. If brain changes were indicative of disease states and lack of control, all human behavior would be the symptom of a disease and no one would be responsible for any behavior. This argument proves too much.
But what if the changes are of a specific nature that makes stopping difficult, say, by usurping the usual reward systems that make activities necessary for survival like eating and procreation pleasurable and recruiting these systems for drug use? We know that it is not easy for addicts to stop using and it would be entirely unsurprising if some of the difficulty stemmed from altered neural anatomy or physiology. When addicts don’t quit using, is it because they cannot stop or simply will not stop? The empirical evidence on this question strongly suggests that most and perhaps all addicts have substantial choice about whether to use.
After a number of unsuccessful attempts to quit using, most addicts quit permanently without addiction treatment, although they may be assisted by family and friends and by organizations such as AA. Although the evidence for why they finally quit is anecdotal, it all points to their discovering a good enough reason to give up using, such as shame, the inability to look after family, or the desire to live a better life. The high rates of spontaneous cessation, coupled with the reasons given for it, are very inconvenient facts for the chronic and relapsing brain disease model.
Virtually all the studies showing high rates of relapse involved addicts in treatment for addiction, but these are not a random sample of addicts. Addicts in treatment disproportionately have another psychiatric diagnosis, so it is impossible to know whether addiction alone accounts for the relapse. The same subjects are also the database for the studies that show differences between the brains of those with and without addiction, so once again we don't know whether addiction alone accounts for the brain changes.
Addicts also respond to incentives. Imagine that I give a heroin addict really good stuff and the means to inject it, such as a clean needle, but credibly threaten to kill him immediately if he uses. The addict won't use (unless he also desires to die). Try stopping the unfortunate Parkinson's disease sufferer from shaking by threatening to kill him if he does shake. The usual response to such arguments is to concede that of course addicts have "some" control, but to insist that the amount of such control is effectively trivial. Again, the evidence does not support this assertion. Some treatment programs, such as those for addicted physicians and airline pilots, as well as probation and parole programs and drug courts, use the threat of sanctions to deter addicts from relapsing, and most of these addicts do not relapse; the sanction gives them a good enough reason to abstain. Addicts are not automatons. They are acting human agents who can respond to reason despite their addiction. Finally, even if there are some addicts who are otherwise responsible but cannot control their substance use, we cannot reliably identify this sub-category.
Leroy Powell himself furnishes an excellent concluding example. On the morning of his trial, Powell had been given a drink, presumably by his counsel to help him avoid the shakes. Powell’s expert psychiatrist had testified that although he had some control over taking a first drink, once he had that drink he was powerless to stop drinking. Powell’s cross-examination at trial, which was quoted by Justice Marshall in his opinion, disclosed, however, that Powell was not drunk at his trial. When asked why not, Powell responded that he didn’t keep drinking because he knew he had to come to court and that he would have been unable to do so if he kept drinking.
There are attractive theories suggesting that at the time of possessing and using, when addicts are in states of peak desire for the substance, many may not be responsible for their conduct. But these theories run afoul of the following consideration. When not in states of peak desire, when quiescent, addicts are fully responsible agents who know that if they don't seek help or take other measures to deal with their addiction, they will use and get into trouble again. It is their duty at that point to take such steps, or they deserve to be held responsible if they use later in a state of non-responsibility. A person suffering from epilepsy who knows that his seizures are not well controlled by medication should not get behind the wheel of a car. If he seizes while driving and causes a fatal accident, he will be held responsible even if he was "blacked out" at the time. Now, some addicts may be so completely mentally disabled by their lives of addiction and consequent deprivation that they are simply not responsible most of the time and should not be held responsible for use (or most anything else that they do), but most addicts are not like that. Most addicts can fairly be held responsible when they possess and use because they had the capacity to avoid doing so through earlier steps.
In short, the available evidence does not support the oft-repeated claim that addicts cannot control their use and that it would therefore be unfair to blame and punish them for it. Perhaps future discoveries by the various disciplines that investigate addiction will challenge this conclusion, but for now Justice Marshall's dictum in Powell still applies to the claim by addicts like Julie Eldred that they must be excused for using. They are asking for too much based on too little evidence.
In the current state of understanding, courts should not limit legislatures by imposing a one-size-fits-all constitutional excuse for addicts. As the legislatures struggle to respond to addiction, including the opioid epidemic, criminal blame and punishment is one potential tool. Furthermore, imposing the defense might well have unintended negative consequences. For example, it would be difficult to limit the defense to use by those on probation and parole or to defendants charged with possession or use alone. Recall that sympathetic judges in the past have thought the logic of the excuse should apply to any criminal behavior that is a compulsion symptomatic of an individual defendant's addiction. The effects on the plea-bargaining process, which adjudicates roughly 98 percent of Federal criminal cases and 94 percent of state cases, would be immense and uncertain. Addicted defendants might be treated even more harshly.
Having defended the current legal regime of permitting the state to blame and punish addicts for possession and use, let me be clear that I don't think the current regime is optimal. For example, sending Julie Eldred back to prison, where substance treatment services are extremely limited if available at all, was not the best response for her or for society. At the very least, there should be adequate treatment services in jails and prisons, including methadone maintenance and medication-assisted treatment (MAT) in general. And a hospital bed would have been a better place for her.
More broadly and basically, I think legislatures would be wise to decriminalize possession of small amounts of substances for personal use, whether by addicts or others, and to adopt a public health stance toward addiction. This is already beginning. Marijuana possession is being decriminalized by legislation and by permissive law enforcement practices. Very few inmates are in prison for simple possession of any substance. Finally, to fully support the argument that most addicts can fairly be held responsible because they have a duty when quiescent to take steps to prevent further use, the state has a duty to make more adequate treatment services available in the community so that addicts who need assistance really do have viable alternatives. In addition to strengthening the argument, it is also the right thing to do.
Much more could be said about the optimal legal response to the immensely complex problem of the opioid addiction epidemic and of addiction in general, but excusing addicts is not a justified policy. Addicts are not puppets controlled by their neuronal strings. As those who treat addicts know, addicts must be encouraged to take responsibility for themselves and to exercise the self-control they undoubtedly possess.
The post The Criminal Responsibility of Opioid Addicts appeared first on The American Interest.
Behind the Khashoggi Affair
On November 2, Turkish President Recep Tayyip Erdoğan took to the op-ed page of the Washington Post to insist that more pressure be put on Saudi Arabia to answer lingering questions about the death of Jamal Khashoggi. Erdoğan’s op-ed is symptomatic of the way his government has maneuvered to gain maximum benefit from the Khashoggi case. Erdoğan hits several important points: He emphasizes that Turkey is a “responsible member of the international community” and a NATO ally, and refers to Khashoggi as a “kind soul” and “honorable man.” He claims Turkey has “moved heaven and earth” to get to the truth of the case, and has shared evidence with the U.S. government to make sure others “keep asking the same questions.” Most importantly, Erdoğan writes that he does not believe “for a second” that the Custodian of the Two Holy Mosques—that is, King Salman—would have ordered the murder.
Deconstructing these arguments tells us a great deal about Turkey’s interests in the case. Rarely in the past several years has the embattled Turkish President been able to wear a robe of righteousness in leading Western media. Now, given the intense media interest in the Khashoggi affair, the Turkish President sees a chance to present his country in a positive light. It is hard to blame him for seizing that opportunity, given that Turkey faces more than $200 billion of debt coming due after its currency lost almost half of its value in the past year. Ankara has no alternative to striking a deal with the IMF, which will require a rapid improvement of Turkey’s relations with Western powers.
Similarly, Turkey’s indignation over a movie-like murder plot executed on its territory is understandable. But the irony of Erdoğan shedding crocodile tears over the fate of a journalist is impossible to miss. As many an analysis of the situation has noted, Turkey is the world’s leading jailer of journalists, and the country has dropped to 157th place out of 180 countries on the World Press Freedom Index (still, it should be noted, 12 spots higher than Saudi Arabia). Erdoğan’s supposed outrage also must be seen in the context of growing Turkish intelligence involvement abroad. As the New York Times reported this past April, Turkish intelligence agents are believed to have seized close to 100 political opponents in 18 countries since the 2016 attempted military coup against Erdoğan, generating diplomatic scandals from Kosovo to Mongolia. European states have grown increasingly alarmed at the brazenness of Turkish covert activities on their soil.
Thus, there is obviously more to Turkey’s response than sincere horror at Saudi Arabia’s behavior. A number of observers have noted the masterful way in which Ankara has stage-managed the release of information surrounding the Khashoggi case, one even likening Ankara’s drip-drip release of uncorroborated allegations to Turkey’s blockbuster serialized television dramas. Indeed, lurid stories—of audio and video recordings of Khashoggi’s death, of bone saws being brought into the Embassy, of a body dissolved in acid—have dominated the news coverage. None have been backed up by publicly available evidence; all have been traced to unnamed Turkish officials.
Less obvious is why Ankara is capitalizing on this story to land blows on Saudi Arabia. True, the Saudis and Turks are regional rivals for leadership in the Muslim world, and Turkey came to Qatar’s rescue when Saudi Arabia started a campaign against the small Emirate in June 2017. More questionable is the frequently made assertion that Ankara and Riyadh are on opposing sides of a spectrum pitting “popular demonstrations and democratic politics” against “stability” and “authoritarian powers.” Turkey and Saudi Arabia are indeed on different sides of a geopolitical and ideological divide in the Middle East. But a contest between “democracy” and “authoritarianism” has nothing to do with it.
Getting to the bottom of all this requires asking what Jamal Khashoggi was doing in Turkey in the first place. And that, in turn, leads us back to the role of the Muslim Brotherhood in the geopolitics of the region.
The reason Jamal Khashoggi entered the Saudi consulate in Istanbul on that fateful day in October was to obtain paperwork to marry his fiancée, Turkish national Hatice Cengiz. Cengiz graduated from Istanbul University’s Faculty of Theology in 2013. She then pursued postgraduate studies on the Islamic traditions of Oman at the small, private Sabahattin Zaim University. The university’s namesake, Sabahattin Zaim, was a leading specialist on Islamic banking and one of the founders of the Society for the Dissemination of Knowledge (Ilim Yayma Cemiyeti), which founded the university in 2010. Ilim Yayma, established in 1951, was among the key organizations in the emergence of political Islam in Turkey. As the late investigative journalist Uğur Mumcu documented in his groundbreaking 1987 book Rabıta, Ilim Yayma and the circles around it benefited for decades from Saudi largesse traceable to the Muslim World League. A declassified 1953 CIA report on the Muslim Brotherhood’s activity in Turkey describes Ilim Yayma as “the cover name of an Arab secret organization which has as its purpose the establishment of secret schools to train Imams and preachers.” This, of course, was when the Kingdom and the Muslim Brotherhood were still allied.
Following her graduation in 2017, Cengiz moved to set herself up as an independent expert on Gulf issues, and published a number of articles and studies for the think tank INSAMER, the “Humanitarian and Social Research Center” run by the Turkish Humanitarian Relief Foundation, IHH. That organization gained fame in 2010, when it served as the chief organizer of the Gaza Flotilla against Israel, prompting 87 U.S. Senators to ask the Obama Administration to consider designating IHH a terrorist organization. While IHH has considerable humanitarian activities, it has also been implicated as an instrument in Turkey’s covert support for armed groups in Syria connected to the Muslim Brotherhood. Cengiz also serves as a coordinator of the Center for Turkish-Arab Relations, a role in which she met Khashoggi in May 2018.
Khashoggi is known to have been a member and advocate of the Muslim Brotherhood for decades. These sympathies were obvious from an August 28 piece he wrote for the Washington Post titled “The U.S. Is Wrong about the Muslim Brotherhood.” In this full-throated defense of the Brotherhood, Khashoggi framed the organization solely as a defender of democracy and called the Egyptian crackdown on the Brotherhood “nothing less than an abolition of democracy.” “The choice,” he added, is “between having a free society tolerant of all viewpoints and having an oppressive regime.” What Khashoggi conveniently omitted from his analysis is that Muhammad Morsi’s tenure in Egypt was far from democratic, and involved a brazen attempt to grab power in extra-constitutional ways. By no means did it involve the “toleration of all viewpoints”: As Eric Trager and others have documented, it involved a rapid descent into authoritarian rule and intolerance of dissent.
The rise and fall of the Muslim Brotherhood fundamentally altered the nature of Middle East geopolitics, and pitted Turkey and Saudi Arabia against each other. This had not always been the case. Back in 2007, when King Abdullah visited Turkey, President Abdullah Gül and then-Prime Minister Erdoğan departed from diplomatic protocol when they went to visit the King at his hotel suite. This caused considerable uproar in Turkey, where it was widely interpreted as two Islamists groveling at the feet of the Custodian of the Two Holy Mosques rather than having the foreign head of state pay respect to them, as is common practice. Then, when the conflict in Syria escalated, Ankara and Riyadh joined forces to support the Sunni armed groups against the Assad regime, which was boosted by Russia and Iran. For a brief time, the Middle East seemed to align neatly along sectarian lines, with a Sunni bloc led by Turkey and Saudi Arabia pitted against an Iranian-led bloc supported by Moscow.
This did not last long, because the Sunni bloc itself fractured deeply over the crisis in Egypt. The rise of the Brotherhood to power in 2011-12 was greeted warmly by Ankara, not least because the Brotherhood is a core ideological influence on Erdoğan’s AKP. In fact, Turkish leaders saw the Arab upheavals as a historic chance to establish Turkish leadership in the Middle East, using the Brotherhood and its affiliated groups from Tunisia to Syria as the vehicle. Qatar, with close ties to the Brotherhood, took a similar stance. Conversely, Erdoğan was outraged and alarmed by the Egyptian military’s overthrow of Morsi’s government, not only because it hurt Turkey’s regional leadership ambitions, but because he saw it as part of a broader regional plot against his own power. This also explains Erdoğan’s decision to pull out all the stops to come to Qatar’s rescue. Sources close to the Turkish leadership make clear that Erdoğan views the failed coup against him in July 2016 as part and parcel of the same conspiracy that unseated Morsi and sought to overthrow the Emir of Qatar. Behind this conspiracy, in Erdoğan’s mind, stand not just the Gulf monarchies but “world Zionism.”
But the monarchies of the Gulf—with the exception of Qatar—saw matters otherwise: They viewed the Brotherhood as a dangerous, subversive, and revolutionary organization that could threaten stability in the region as well as their own hold on power. The United Arab Emirates took the lead in designating the Brotherhood a terrorist organization, something wholeheartedly endorsed by Saudi Crown Prince Muhammad bin Salman.
Both the Crown Prince and the Brotherhood advocate reform, but the types of reform they have in mind are diametrically opposed. Where the Brotherhood wants to entrench an Islamist political ideology that is inherently anti-Western, anti-Israeli, and mildly sympathetic to Iran (which supports Hamas, the Brotherhood’s Palestinian wing), the Crown Prince has embarked on a process of modernization and reform that would turn Saudi Arabia into something entirely different. He views Iran as the Kingdom’s archenemy, the United States as his chief ally and partner, and Israel as a de facto ally. There is nothing democratic about his reforms: As has been widely noted, he has brought greater personal freedoms on a societal level but further restricted the boundaries of political expression. In other words, Muhammad bin Salman has embarked on a traditional process of top-down authoritarian modernization. For Turkish Islamists, what the Crown Prince is trying to do is reminiscent of the top-down authoritarian secular reform process of their archenemy, Mustafa Kemal Atatürk.
No wonder, then, that Turkey’s pro-Erdoğan media reacted with alarm to the Crown Prince’s reform program. The tone taken by the chief AKP mouthpiece, the Islamist daily Yeni Şafak, was telling: In October 2017, its editor-in-chief Ibrahim Karagül blasted the Crown Prince’s announcement that Saudi Arabia would move toward “moderate Islam” tolerant of all religions as a “very dangerous game” instigated by the United States and Israel. Never able to refrain from hyperbole, Karagül went on to claim that the Crown Prince’s announcement was part of an American plan whose final aim was to occupy Islam’s holy sites, Mecca and Medina.
Erdoğan himself chimed in: Reacting to Muhammad bin Salman’s talk of “moderate Islam,” he retorted that “there is no moderate and immoderate Islam: there is one Islam.” Going further, he castigated the Crown Prince for adopting a Western idea: “The trademark of ‘moderate Islam’ does not belong to you, it belongs to the West. . . . Why did this emerge again? It is about weakening Islam, weakening our religion.” Not to be outdone, Muhammad bin Salman called Erdoğan part of a new “triangle of evil” with Iran and terrorist organizations (presumably referring to the Brotherhood), and accused him of trying to build a new “Ottoman Caliphate.” In statements after the Khashoggi killing, Erdoğan has studiously avoided mentioning the Crown Prince, but has repeatedly cited his respect for King Salman. His aim appears to be to enlist the West to force the King to sideline his ambitious son, and to replace Muhammad bin Salman with a more pliable and less assertive figure.
Against this background, it is clear that the conflict between Riyadh and Ankara is in part about geopolitics and leadership of the Middle East. But it is equally about ideology and even survival. For a brief time between 2011 and 2013, Erdoğan and the Muslim Brotherhood were on the offensive, and sought to establish a new regional order. But they then faced a severe setback in Egypt, and both Qatar and Turkey were put on the defensive. Erdoğan and the Emir even saw their very hold on power threatened. Meanwhile, the prime beneficiary of this intra-Sunni conflict is Iran, which has exploited it to consolidate its positions in Syria, Lebanon, and Iraq.
The Khashoggi affair has allowed Erdoğan to present Turkey in a positive light. But the conflict between Turkey and Saudi Arabia is deeper and more complicated than it is made out to be. As the United States determines how to respond, nothing less than the balance of power across the Middle East is at stake. If the episode has any lessons, the first is that there may be no “good guys” in the triangle of rivalries in the region. Most importantly, the experience of the past decade suggests that forces that advocate revolution in the name of “democracy” very often have something entirely different in mind. For them, “democracy” is an instrument that allows them to enlist Western support for their ambitions of power. But under the surface, it has become clear they have a view of the world that is fundamentally incompatible with Western values and directly hostile to Western interests.
The post Behind the Khashoggi Affair appeared first on The American Interest.
November 5, 2018
Symbols on Horseback
With Halloween over and the full implications of Election Day looming, let us take a moment to imagine a future in which America’s political norms and institutions have collapsed, allowing an authoritarian party to seize power. Among the nightmares conjured by such imagining, let us focus on one: the possible fate of America’s public monuments. As David Blankenhorn wrote here recently, “We humans saturate the physical landscape with intentional displays of moral meaning [that] … cry out to the living and the unborn: ‘This is who we are, where we stand, and how we aim to be remembered.’” There’s no need to specify which side in today’s culture war would be quicker to summon the bulldozers.
Relevant here is a 1985 essay by historian Richard Stites about the cultural upheavals following the Bolshevik Revolution, in which he distinguishes between “mindless vandalism,” of the sort committed during peasant revolts throughout Russian history, and “iconoclasm proper,” meaning the “self-conscious removal or demolition” of “hated images and artifacts of the past” in order to replace them with “new symbols and emblems.” Contrary to official Soviet history, Stites does not impute to angry peasants trashing the property of landowners the same ideological motives driving revolutionaries to attack imperial eagles, czarist statues, and Orthodox churches.
Americans experience more mindless vandalism than iconoclasm proper, so this distinction can be lost on us. For example, last August, when a splotch of red paint appeared on the statue of Robert E. Lee on Monument Avenue in Richmond, the Richmond Times-Dispatch did not use the term “iconoclasm,” much less “iconoclasm proper.” Of course, no sane newspaper would use such a word. But neither did the paper highlight the political nature of the act. Instead, it was reported as “vandalism,” even though the “vandal” signed the splotch “BLM,” the acronym for Black Lives Matter.
If the blue-state authoritarians were driving the bulldozers, Lee and his fellow Confederates would soon bite the dust. But iconoclasm proper is harder to pull off in a democracy, even a struggling one like Macedonia. Consider the latest news from Skopje, where a 27-year conflict with Greece has found a focus in one highly visible monument.
Since 1991, when the Republic of Macedonia declared its independence from the former Yugoslavia, its efforts to join the European Union and NATO have been stymied by Greece, on the ground that by calling itself “Macedonia,” the new nation was staking a claim both to the northern part of Greece that goes by the same name, and to the classical heritage associated with King Philip of Macedon and his son, Alexander the Great. In 2015 former Prime Minister Nikola Gruevski spent €7.1 million on a 50-foot-tall statue of a wavy-haired hero wearing a buff muscle cuirass and seated on a rearing stallion atop a 30-foot column. Currently known as “Warrior on a Horse,” the hero holds a sword in his right hand, and with his left pulls so tightly on the rein that his mount seems in danger of toppling. Not only that, but while warrior and horse are reasonably well sculpted, the work as a whole seems amateurish. Artists without classical training tend to make heads too big, and that error is discernible here. The stallion’s forequarters are proportionately larger than his hindquarters, and the fact that he is rearing only makes the statue more top-heavy.
Alexander Statue in Skopje (Wikimedia Commons)
But if our stallion can hold his pose long enough, his rider may soon have a new name. Subsequent to a referendum held in September, the current Prime Minister, Zoran Zaev, agreed to the Greek demand that his country be renamed North Macedonia. If all goes well, North Macedonia will join the European Union and NATO, and it will be kosher to call the statue Alexander, as long as the authorities add a plaque explaining that Alexander was part of “Hellenic culture,” as opposed to the predominantly Slavic culture of the current population.
Can Americans hope for a similar resolution with regard to Robert E. Lee and other wavy-haired heroes of the Confederacy? I fear not, because, unlike the Macedonians and Greeks, the two sides in our culture war are no longer staking rival claims to the same heritage. On the contrary, the more fanatical defenders of the Confederacy dream of reviving the Lost Cause, while the more zealous “woke” folk dream of razing the entire symbolic landscape.
Have pity, then, on the Office of the Mayor and City Council of Richmond, which this past July released a report by the Monument Avenue Commission, a 10-member group that had spent the previous year trying to reconcile all viewpoints on the question of how best to deal with what Mayor Levar M. Stoney called “one of the most picturesque grand boulevards and urban residential neighborhoods in the world.” The report quotes the mayor’s original charge that, rather than remove the Confederate statues lining the wide, tree-lined mall dividing the street, the commissioners should seek ways to “add context.” But when this charge was shared with the public, the responses were “swift and visceral”: “Why desecrate the beauty and history of Monument Avenue?” “Why is ‘removal’ not a consideration?” “What does ‘context’ mean?”
The first two questions echo the arguments that typically erupt on this topic, and unsurprisingly, the second resulted in the commission adding “removal” to its list of unlikely solutions. More intriguing is the third question, because “adding context,” or the more academic-sounding “contextualization,” is not a notion that springs readily to the mind of the average citizen. This, I suspect, is why Mayor Stoney grabbed hold of it. Gauzily high-minded yet somehow warmly inclusive, “adding context” seems the perfect choice for civic leaders and politicians hoping to please all of the people all of the time.
But what does it mean? In Richmond, adding context is reported to mean “permanent signage” applied to every offending object, in order to provide a more “holistic narrative,” with a future plan to provide “mobile app and video to convey the history of the monuments and what they stand for.” In less cautious cities, contextualizing is celebrated as the deployment of “design provocations” for the purpose of “evoking powerful emotions” in a diverse public presumed to be seeking relief from the oppressive atmosphere exuded by racist and retrograde monuments.
The trouble is, the typical job of contextualizing is done by a well-credentialed artist steeped in postmodern theory, whose approach is long on words and short on visual eloquence of any kind, much less the kind that speaks to the general public. And while the results are usually praised in the local media, the general public tends not to care, either because they have other things to worry about, or because they find the whole business baffling and would rather look at a beautiful statue than read a placard explaining why they should feel offended by it.
Hence Mayor Stoney’s comment to the Richmond Times-Dispatch that “adding new statues, whether it’s the U.S. Colored Troops or Oliver Hill or Doug Wilder, would be an advancement for the city of Richmond.” This was of course done back in 1996, when a bronze sculpture of tennis champion Arthur Ashe was placed on Monument Avenue over the objections of a wide range of city residents—including one from a local arts group that “questioned the quality of the statue and called for a new one to be designed.”
That arts group had a point. Unlike the imposing monuments of Lee and his fellow Confederates, but unfortunately very much like Alexander in Skopje, the statue depicts Ashe with an oversized head and in an ungainly pose—both arms raised, racket in one hand, books in the other—that is supposed to evoke his twin achievements in sports and learning but more immediately suggests that he is about to toss the books in the air and smash-serve them over an invisible net.
More appropriate, but also more provocative in the former capital of the Confederacy, would have been a monument to the 175 regiments of U.S. Colored Troops, escaped slaves and freedmen, who fought on the Union side in the Civil War. Today that idea is under serious consideration, perhaps because there are now several such monuments around the country, the most venerable being the Memorial to Robert Gould Shaw and the Massachusetts Fifty-Fourth Regiment by Augustus Saint-Gaudens. Unveiled in 1897, that masterpiece now stands on Boston’s Beacon Hill, where the regiment marched past the Massachusetts State House on May 31, 1863.
One hundred years after the completion of the Saint-Gaudens, a similar monument was unveiled at the current location of the African American Civil War Memorial Museum in Washington, DC. “The Spirit of Freedom” is a 10-foot-tall cylinder, like a tree trunk, from whose surface emerges an array of black soldiers, sailors, and civilians. Sculpted by the Louisville-based artist Ed Hamilton, the work departs drastically from the standard-issue hero on horseback but without sacrificing the power and expressiveness of lifelike figures—all of whom, I am happy to report, are well-proportioned and have heads no larger than the good Lord intended.
It pains me to say this, but all of this rather suggests that the best resolution of America’s monument wars may be Vladimir Lenin’s. As noted by Stites, the “currents of iconoclasm” were so violent after the Bolshevik Revolution that for a time it seemed nothing would survive. For Stites the question is, “Why did the storm which built up a ferocious power in the early revolutionary years not blow away all traces of the old culture and its makers?” And the answer is found in how Lenin spent the first anniversary of the Bolshevik Revolution on November 6-8, exactly 100 years ago this week:
Lenin participated in the festivities in a way that illustrated in capsule form his views of destruction and preservation. He left the Kremlin, a complex of architecture which he insisted on preserving but from whose premises he had banished the statue of Alexander III [who had Lenin’s older brother hanged in 1887 for a plot against his life], and went into Red Square to attend the unveiling of newly erected revolutionary monuments. His festive mood was marred by the sight of provocative and abstract decorations by the Futurist participants. . .[Lenin] wished to preserve the great historical and artistic monuments of the past, even those intimately and graphically tied to the name of Romanov. Lenin was a traditionalist in artistic taste and a political realist. . .The signs of an effete aristocracy could be banished to the museum; the excesses of ego-asserting Futurists could be toned down or curbed; and the ‘new’ signs of Revolutionary order could be fashioned out of elements of the old.
It may be objected that when Lenin’s cultural policies look good to Americans, we are in serious trouble. Lenin was a ruthless dictator responsible for uncountable deaths. But he was also the one, amid the vandals, iconoclasts, and fanatics of his age, who applied the brakes to the cultural desecration of Russia. Because of him, Stites reminds us somewhat ruefully, “the modern tourist on a successful visit to Leningrad [can] take in the Nevsky Prospect, . . . the Peter Paul Fortress, the Winter Palace, St. Isaac’s Cathedral, the Gold Room, the university, the Russian Museum, the Philharmonia, and the ballets of Tchaikovsky at the Kirov Theater.”
It would be nice to think our struggling democracy could do the same.
“Iconoclastic Currents in the Russian Revolution,” in Bolshevik Culture, Abbott Gleason et al., eds. (Indiana University Press, 1985), pp. 1-2.
Along with Robert E. Lee, the Confederate figures depicted on Monument Avenue are Stonewall Jackson, J.E.B. Stuart, Jefferson Davis, and Matthew Fontaine Maury.
Monument Avenue Commission Report (July 2, 2018), p. 8.
Commission Report, p. 8.
Unlikely because, as the Richmond Times-Dispatch points out, Virginia “state law limits local governments’ power to remove or modify war memorials, and some localities have concluded that the law flatly prohibits taking down Confederate statues.”
1925 Vermont Avenue NW. Last month the museum signed a 99-year lease with Community Three Grimke, LLC, a company planning a $5 million renovation of the historic Grimke School at 1912 U Street NW.
Stites, pp. 16-18.
Going After the Enablers
Charles Davidson for The American Interest: Bill, it’s great to see you again. This will be the third interview you’ve granted The American Interest, and I hope not the last. You’ve come to focus more on the issue of Western enablement of kleptocracy, and I was wondering if you could comment on how the Global Magnitsky Act could, or should, be used to sanction those in the West who are complicit in hiding the money of the “bad guys.”
Bill Browder: Well, before we even get into that, we should talk about who the Western enablers are and what they are doing in different categories. In the Magnitsky case, we’ve had an opportunity to see firsthand how the Western enabling system works, because as the Russians have tried to cover up the murder of Sergei Magnitsky, and cover up legal liability, and cover up the money laundering, they’ve engaged a bunch of people from the West.
The most interesting examples are connected to Natalia Veselnitskaya and Prevezon. Prevezon received proceeds of the crime that Sergei Magnitsky had uncovered. They were prosecuted by the Department of Justice, and as part of their trying to wiggle out of their legal liability, they spent a vast amount of money on Westerners to help them. The most interesting story was the story of lawyers that they chose. Natalia Veselnitskaya chose an American lawyer with a very curious name, John Moscow. You actually couldn’t make up such a name for a story like this. John Moscow . . . but what made John Moscow so particularly interesting in our story was that John Moscow had actually been the lawyer for us, in finding the dirty money, and in introducing us to the Department of Justice. And then, once the money was found, and once we got the Department of Justice to open a criminal case, John Moscow then switched sides and became a lawyer for the Russians.
It’s a perfect example of how people in the West will sacrifice their values, their legal obligations, in order to make money for the Russians. John Moscow wasn’t the only enabler that joined the Veselnitskaya team. You also had Glenn Simpson, the opposition researcher who was an anti-corruption activist . . . or, he claimed to be a Russian anti-corruption activist prior to this whole series of events, and then he started taking money directly from the Russians to try to cover up the legal liability of Prevezon, and to try to blame me for all of the problems.
TAI: Well Bill, you’ve certainly had your experiences with Western enablement, and you’ve certainly had your experiences with kleptocracy, so this is certainly on point. I’m familiar with this story, and know a lot of these people.
Now I’d like to jump, in this regard, to the Global Magnitsky Act, specifically section 1263, paragraph four—the language regarding the authorization of imposition of sanctions on intermediaries, which refers to someone who “has materially assisted, sponsored, or provided financial, material, or technological support for, or goods or services in support of” either human rights abusers or various acts of corruption. Who should be worried by this language? What does this mean?
BB: So the Global Magnitsky Act is really an unbelievably powerful tool. It does several things. The Global Magnitsky Act goes after human rights violators, those people who had been involved in torture and killing of people. It goes after people who are involved in high level corruption, so kleptocrats, people who are stealing massive assets from their governments. And then, most importantly, it goes after two categories of people: People who are acting as nominees for these people, their agents. That’s the first category. And then, it goes after people who have materially assisted, sponsored, or provided financial, or technological support or goods and services, for those people I’ve just mentioned.
TAI: That sounds pretty interesting. What exactly does this mean?
BB: This comes right back to the issue of the Western enablers. Basically, one could sanction, let’s say, a Nicaraguan general, who is involved in this thing, but one, in theory, could also sanction a British lawyer who is hiding the assets of the Nicaraguan general. These powers haven’t been used yet. There have been a number of Global Magnitsky designations which have been focused on the horrific human rights abusers. But I suspect, and certainly I will be involved in, advocating for the use of the Global Magnitsky Act against the enablers in the West.
It doesn’t take many cases of a lawyer, or some type of financier, to get caught in this thing, to create an absolute terror and panic among the entire Western enabler community. Right now, there are a lot of people in the West who choose, consciously, to amorally assist very bad people, who have done very bad things, in exchange for money. They view their situation as having very little downside, and only financial upside. What the Global Magnitsky Act does, particularly paragraph four of section 1263—it goes after the enablers. All it takes is a few of these stories, and we’ll see a sea change in the risk/reward calculus for these people.
TAI: If we look at the Global Magnitsky Act, or what’s now called GLOMAG—I mean, this thing is becoming so ubiquitous that it’s got its own shorthand amongst its proponents and advocates—there’s a bill that’s passed in the UK, I understand that the Baltics have passed something. I’m wondering: How do these other GLOMAG bills compare to the U.S. legislation?
BB: Well, first of all, I should say I hate the acronym. I hate it for one simple reason.
TAI: Good, I do too. It’s inelegant. It doesn’t sound good.
BB: Well, it doesn’t sound good, but also, the whole purpose from my perspective of the Magnitsky Act was to name it after Sergei Magnitsky, the victim. And to sort of bastardize that is very unappealing to me. People in government and elsewhere should continue to call it the Global Magnitsky Act.
TAI: Sounds good to me.
BB: It’s not easy to get laws passed anywhere. In fact, some people would argue that it’s easier to win the lottery than it is to get a law passed in the United States. And so, when we got the Magnitsky Act passed, and we got the Global Magnitsky Act passed, we worked as hard as we could on it, and we were extremely happy with the outcome—and were very surprised that we had such a positive outcome. In other countries, we face the same type of challenges and so, in each country, the law is different. The closest comparable law to the U.S. Global Magnitsky Act is the Canadian Magnitsky Act, which basically sanctions human rights abusers, and people guilty of grand corruption, and it publicly names their names, bans their visas, and freezes their assets.
The British law, in theory, does the same thing. However, when you get out to the Baltics, Estonia, Latvia, and Lithuania, all it does is ban their visas. They haven’t yet created legislation to freeze their assets. Lithuania is the first country among the Baltics to introduce asset freezing legislation to accompany the visa banning legislation.
And so, the way I look at this is we take whatever successes we can, and we try to harmonize these things over time. The big, big prize that we’re now a bit closer to winning is the EU. The European Parliament has unanimously called on the EU, as far back as 2014, to implement a Magnitsky Act, but the bureaucracy inside the EU has been resistant. In particular, a woman named Federica Mogherini, who is head of the European External Action Service, which is their equivalent of the State Department, has rejected all of Parliament’s requests.
However, recently, the Dutch government has put its own proposal together, and on November 20th, there will be the first meeting in The Hague between Holland and the other EU member states, where they’re also inviting the United States, Canada, and Australia to attend, to discuss an EU Magnitsky Act. The major downfall of the Dutch proposal, as it stands now, is that, in an effort to appease the Russians, the Dutch have deleted Sergei Magnitsky’s name from the legislation, which I’m absolutely not going to allow to happen. Dana Rohrabacher, Putin’s favorite congressman, tried to do the same thing in America, and it didn’t succeed; and I won’t let the Dutch succeed in doing that in Europe.
TAI: Very interesting. Certain countries within the EU—can we just very quickly, rifle through a few countries, and where they stand and what might happen there? I’m thinking Germany, I’m thinking France, Spain, Italy, and then outside of Europe, Japan also. Can we just quickly zip through those?
BB: Well, so far we have parliamentary initiatives going on in Holland, Denmark, Sweden, France, and Romania. Germany, of course, is one of the most important countries, and we’re now just gathering parliamentary support there.
TAI: What’s the state of play in Germany?
BB: Well, Germany is a complicated place because on the one hand, there are a number of parliamentarians who are quite supportive of this, and I would even suggest that Angela Merkel is not diametrically opposed to it. But Germany is a country where there is a huge amount of Russian business influence, and Russian political influence, and there’s a lot of people who don’t want to upset Putin.
TAI: Isn’t there some chairman of the board of some Russian company that used to have some high position in German politics?
BB: Well, of course, there’s the famous case of Gerhard Schröder, the previous Chancellor, who was running the country one day, and then the next day, when he finished his job as head of state, he immediately took on a job making huge amounts of money on the board of Gazprom’s pipeline company.
TAI: And he’s now chairman of Rosneft?
BB: Yes. And he attends Putin’s birthday party each year, and he’s openly using the vapor trails of his political credibility to lobby for Putin’s influence in Germany.
TAI: What about France?
BB: Well, France is an interesting country, because the French have been, as a population, historically sympathetic to Russians. The Russians have bought more real estate in France than probably any other country in the world. They love their houses in Saint-Tropez and Cap Ferrat. They all have beautiful apartments in the 8th Arrondissement in Paris. They all take their mistresses shopping in Paris. It’s a real sort of oasis for the Russians. And France really appeals to them because the French really know how to do luxury, and the Russians are big into ostentatious luxury.
However, there’s a real opportunity in France, and it comes down to simple human emotion. Putin, several days before the French election, had his intelligence services hack the emails of Emmanuel Macron, who is now the President of France. As much as he appears as a clearheaded statesman in all of his international meetings, I’m certain, as a human being, that Macron is furious that Putin tried to steal the election from him. And so I think that we probably have more of an opportunity in France than many other countries, for that specific reason.
TAI: Interesting. What about the country of Prosecutor Jose Grinda Gonzalez—Spain? Anything particular to comment on there?
BB: We worked very closely with Jose Grinda on the Magnitsky case, specifically on money laundering in Spain. He’s opened a criminal investigation into money that came from Russia to Spain, that’s connected to the Magnitsky case. In the parliament, it’s a little more complicated, primarily because Spain, among all the European countries—it feels very far away from Russia. The weather is different than Russia, the language is different. And they have a lot of political turbulence right now, so it’s hard to get people to focus on international issues.
TAI: What about Italy?
BB: In Italy, the Northern League now has a seat at the table. The League is a far-right, openly pro-Putin party. I personally won’t travel to Italy, for fear of being arrested. The interior minister of Italy is an open Putin-lover. Which doesn’t mean that we’re not trying to do something in Italy. We have parliamentarians there who are also proposing some type of Magnitsky initiative. Even in a country like Italy, it’s helpful to start putting these things into motion, because it creates an automatic debate. And sometimes outing people on the wrong side of this debate is as useful as anything else.
TAI: Well, that really sucks, not to be able to travel to Italy, I must say. Now, last one and then we’ll move on to other pastures: What about Japan—anything that you can quickly note there?
BB: Japan is on my target list. I have some friends in Japan who are in the political arena, who have offered to connect me to parliamentarians in Japan, and so I’m beginning that process right now.
TAI: As Global Magnitsky spreads—and I think we see that this is an infection that you are encouraging, quite effectively, it would seem—what are the prospects that Global Magnitsky becomes a coordinated, international framework, for fighting kleptocracy, human rights abusers, and other “bad guys”? What are the chances that we get genuine international coordination, that becomes a sort of offensive pushback containment policy, however you want to characterize it, against the general phenomenon of kleptocracy?
BB: I think we’re close to a tipping point, in terms of Global Magnitsky becoming a global standard. When I first started this process, no government wanted to talk to me, because no government wants to be in a position of challenging the kleptocracy or the human rights record of another country. But, as time has gone on, and more countries sign on to this thing, then it becomes shameful not to be signed on to it. I believe that once we get the EU on board, that’s when all the countries start to harmonize their Magnitsky programs, and that’s also when you end up in a situation where it will really be devastating for the bad guys.
But let me just back up for a bit.
I’ve always had this fantasy of making true international pariahs of human rights violators, in the same way as we made terrorists international pariahs—that human rights violations, you know, gross human rights violations, torture, murder, should be punished as badly as terrorism. I don’t think we’re that far away from that being the case. If this standard then gets used liberally, which I hope it will be, then all of a sudden, I’m hoping that it becomes a true deterrent, that people have to weigh out the costs and benefits, in their own countries, of doing stuff, because they realize that the downside for them is that they get stuck in their own countries. The worst downside of getting stuck in their own countries is that—well, a lot of these people don’t last for very long in their own countries. They know they have no place to flee to afterwards.
TAI: Right. Now, so if the world gets GLOMAGed—and we’re going to want a different verb from that, obviously—what’s next? What other legislation, regulation, or policies are needed to supplement Global Magnitsky? And, for that matter, are there any amendments you’d like to see to the existing act?
BB: People ask me that all the time, and I think the Global Magnitsky Act, in its current form, is rock solid in the U.S. I’m extremely happy with how it’s being used, and the effect that it has. I would say that what I’ve seen in my own fight against kleptocracy and impunity, outside of America, when it comes to fighting money laundering, you have a lot of good laws in place. And you have truly incapable prosecutors and police forces, who are unable to understand, identify, and prosecute money laundering. If you can get the people, through Global Magnitsky, and you can get the money through proper law enforcement of money laundering statutes, then it pretty much changes the balance of power between the good guys and the bad guys. But, at the moment, we have absolutely just shameful lack of enforcement in countries like Britain, the Scandinavian countries, various other places in Europe. And it’s so bad that you could pretty much, with almost 100 percent certainty, conclude that anybody who is guilty of money laundering will get away with their crime.
TAI: Well, if we focus just on the U.S., we have Global Magnitsky—is there any legislation, or regulations, or policies? I mean, we certainly could use a lot more enforcement resources in various areas—you and I have discussed that in other contexts. But in the best of all possible worlds, anything else? Or does Global Magnitsky, and the enforcement of existing laws and regulations, do the trick?
BB: Well, I think that there’s a whole world here in the U.S.—of money launderers, enablers, company formation agents, accountants, bankers—that is involved in assisting kleptocrats hide their money. There have been some good starts. For example, in the United States—in Los Angeles, New York, Miami—any property purchased over a certain value, you have to disclose the beneficial owners, and as a result of that, the number of cash transactions has dropped by 95 percent. Limited liability companies can’t have nontransparent ownership structures. All these types of things can greatly reduce the amount of kleptocracy that America supports.
TAI: Bill, you and I are both optimists, but we are surrounded by lots of cynics and naysayers, and opponents. So how do we answer the question as to whether or not we’re at the beginning of a rollback vis-à-vis those who you correctly call the “bad guys”? I mean, is there hope for freedom, democracy, and liberty? Because we live in a world where, by all the Freedom House measures, democracy is in retreat. Freedom is in retreat. It’s been diminishing every year. And authoritarian regimes have been on the upswing. If we look at the last decade, the bad guys have been doing better and better, and the good guys have been doing worse and worse. So, how does this play out? Do we win or do they win?
BB: Well, one could argue that all this work we’re doing—fighting kleptocracy—is like changing the deck chairs on the Titanic. I live in England where we are suffering through Brexit. And there are all sorts of cultural civil wars going on in America. The Brazilians just elected their own dictator. The Hungarians, the Turks, the Egyptians, with their dictators—it all looks pretty horrifying.
However, I wouldn’t give up on any democracies just yet. I think that a lot of bad people have taken advantage of weaknesses in the system, and I think that the vast majority of Americans, Brits, and people of other nationalities are good people who don’t want this to happen. And our systems are not so flawed, as Russia’s is, that we have to submit to it.
But it all looks pretty terrifying right this second.
TAI: Let’s switch gears—to the Danske Bank scandal, or whatever we want to call it. Could you explain it from your perspective, briefly?
BB: My perspective is the perspective of the Magnitsky case. Sergei Magnitsky discovered the theft of $230 million from the Russian government. He exposed that theft, and he got killed for it. For the last nine years, we’ve been tracing that money, to make sure that the people who killed him don’t benefit, and the people who helped launder that money don’t benefit. We discovered where a lot of that money went. Every time we discover where the money went, we then apply to the law enforcement agencies of that country to open a criminal case, freeze the assets, and prosecute the enablers. There are now 16 live money laundering investigations going on in 16 different countries.
In one of those countries, we discovered that a small bank in Estonia, which was a branch of the Danish Danske Bank, had laundered a significant amount of those proceeds—more than $200 million of the $230 million from the Magnitsky crime went through the Estonian branch of Danske Bank. We applied to the Estonian prosecutors, and we applied to the Danish prosecutors to investigate. At the time of our applications, neither of them investigated. They steadfastly resisted opening an investigation. Eventually, a small newspaper in Copenhagen called Berlingske did a big investigative piece, and they discovered that a lot more than $200 million went through the Estonian branch of Danske Bank. They estimated that the number was close to $9 billion. Off the back of their investigation, Danske Bank hired an outside law firm to conduct a proper independent investigation, and that law firm concluded that close to $234 billion of Russian and other former Soviet money went through this one branch of Danske Bank in Estonia.
TAI: What strikes me about this case—among other things—is that the recently departed CEO of the whole shebang, of all of Danske Bank, was in charge of this Estonian branch when he was a regional manager. He was responsible for the part of Danske that included Estonia. I always wonder if there is any way that he could not have known what was going on…
BB: I’ve read the report very carefully, the report that was prepared by this external law firm. The report, by the way, has extremely damning information about what happened. But management paid for the report, and it exonerates him—the CEO—and the board of directors.
TAI: That seems surprising.
BB: So if they pay for the report, and it exonerates them, it doesn’t seem too surprising to me at all. In the report, there are extremely granular details of constant warnings, coming as early as 2006, about money laundering from Russia going through that branch. 2006. In subsequent years, I believe, Citibank and Deutsche Bank cut off their correspondent banking relations with the bank because they were so suspicious of the transactions. There was a whistleblower report. There was our complaint. And none of this seemed to resonate in any way with the CEO of the bank or with the Board of Directors. They allowed this to continue.
TAI: So, as our children’s generation might say, LOL?
BB: Well, this is not a laughing matter. This was money connected to a murder, the torture and murder of Sergei Magnitsky. This is very serious business.
TAI: Could the Global Magnitsky Act be used against ex-executives of Danske Bank, potentially, if they’re found to be guilty?
BB: There’s one crucial aspect of the Global Magnitsky Act, which is that it exists for cases where there’s impunity. And so, in Russia, there was impunity for all the people who killed Magnitsky. The Russian Magnitsky Act sanctioned them. If the Danish authorities refuse to prosecute one of their members of the establishment because he’s so well connected there, then this would be a case for that.
TAI: Now, a few other things, sort of as a coda for our readers, that are a bit lighter. We can maybe have a little more fun with this.
I’m wondering what your take is on the Panama Papers, and the so-called Paradise Papers, and what effect you think this will have on things, and maybe how it’s affected even the work you’re doing with Global Magnitsky—whatever comes to mind.
BB: Well, the Panama Papers provided us with a crucial breakthrough in the Magnitsky case. We had always wondered why Putin was ready to ruin his relations with the West over the Magnitsky case, and we discovered the answer through the Panama Papers. A man named Sergei Roldugin was exposed in the Panama Papers. He’s a cellist. He was exposed as having accumulated two billion dollars. He’s a man who was very close to Vladimir Putin—they were best friends in childhood, and he was the godfather of Putin’s daughter. When the details of his ownership structures were revealed in the Panama Papers, we were able to connect all that directly to the crime that Sergei Magnitsky had uncovered. And so, effectively, from the Panama Papers, we could conclude that Putin was a beneficiary of the crime that Sergei Magnitsky was killed over, which gives him a very strong personal motive to retaliate. Which is what he has done.
TAI: It’s all interconnected, it seems, wherever one goes. What about the Paradise Papers—or the Appleby Papers, as I like to call them? Are there any connections to the Magnitsky cases there, or anything worth noting?
BB: Well, the real frustration with the Paradise Papers is that I’m aware of a number of exposés that got caught in midstream and have never seen the light of day, because Appleby sued The Guardian, and perhaps other news organizations. I know of one specific story, which is truly amazing, scandalous, and upsetting, which still sits underneath a court order not to disclose it.
TAI: Well, it sounds like there is still work to be done.
BB: That’s an understatement.
TAI: Yeah, that’s an understatement. Well, it’s better than being bored, I guess. But I do think we will win eventually. I mean, I think that there is, in fact, a positive history to political reform, and we live in a very cynical age, and we tend to forget the examples of when the good guys win. So, with Bill Browder on our side, I think we will eventually win.
Now, you wrote this book, Red Notice, and I’ve heard rumors that it may become a movie or something. Is it possible to publicly comment on Red Notice in Hollywood?
BB: What I can say is that I’ve had three careers so far in my life. I’ve had a career on Wall Street, a career in Moscow, and a career in Washington, and one deals with some really hairy characters in each place. But the one thing I can say is that I’ve never dealt with as much dysfunction and dishonesty as I have found in Hollywood.
TAI: Last item: your next book, can you give us any sort of preview?
BB: Well, I think that this interview is a good preview of my next book.
My main opponents throughout the last 20 years of my life have been corrupt Russians. But I almost have a certain understanding of why these Russians are the way they are, because they were brought up in a totally brutal, non-religious, amoral society, where good behavior was almost squelched out. And so—I don’t approve of it, but I can almost empathize with how they came to be the way they became.
In the West, we have people who went to the same schools as us, worshiped at the same churches, had nice mothers who didn’t beat them and fathers who paid attention to them, and they grew up, and some of these people decided, very consciously and with all the values that we all believed in, to go over to the dark side and work with the Russians. I have more contempt for those people than I do for the Russians.
TAI: Bill, thank you. It’s been a real pleasure, as always. I look forward to our next interview for The American Interest, and wishing you all the best.
BB: Thank you.
The post Going After the Enablers appeared first on The American Interest.
What Russia Can Learn from the Pecora Commission
As is well known, starting on October 24, 1929 and continuing through October 29, the U.S. stock market plunged off a cliff constructed of jerry-rigged speculation and “irrational exuberance,” triggering what became known as the Great Depression. Less well known is the fact that the eventual reaction to the Crash brought about a new framework of regulation and, in time, a new set of professional norms that went along with it. Decisions taken in the first year of the Roosevelt Administration in banking and securities regulation, as well as labor market and social security reforms, produced not only important pillars of U.S. economic policy that stand to this day, but also many reforms that became the recognized international “gold standard” of efficient and effective modern regulatory systems. As a result, these early New Deal-era U.S. institutional reforms served as a model for many other countries in the following decades.
The question now is whether these reforms can inspire emulation in post-communist countries still suffering institutional deficits as they attempt to build mature, prosperous, and stable market-based economies.
At first blush, the prospect may seem promising. The market crash of 1929-32 destroyed a huge portion of American private savings and investment. As with many other bubbles in the history of stock and commodity market collapses, the Crash was preceded by a period of fast-growing equities prices inflated by the unscrupulous (but not yet necessarily illegal) behavior of many banks and investment companies. A set of scoundrel cascades had developed in what was essentially a positional competition: When one or two niche actors adopted shady or risky behaviors, competitors felt pressure to follow suit lest they suffer a competitive disadvantage. In a “wild west” context that offered few constraints, the situation grew accident-prone, partly because the throngs of new participants in the stock market were unaware of the shady and risky behaviors going on behind ornate closed doors.
The “securities and exchange” status quo in Russia today, as well as in several other formerly communist European countries, in several respects resembles the 1920s-era American “wild west.” The collapse of the Soviet Union, and soon after it of the Russian economy, was in some ways analogous to the Crash. Certainly, lots of Russians lost their life savings; indeed, the post-Soviet Russian economic depression was much deeper than the American one 60 years earlier. Thereafter, what became known as the Russian “oligarchy” engaged in fierce positional competitions, and it is not unreasonable to describe much of the ensuing behavior as a set of scoundrel cascades. At the same time, just as was the case in the United States in the 1920s, the Russian regulatory framework has been insufficiently robust to rein in such cascades. That framework may therefore be ripe for institutional reform as the second generation of post-Soviet economic and political elites seeks a more stable economic environment.
All that said, the differences between America then and Russia now are at least as striking as any similarities. The differences break down into remote cultural differences and more recent path-dependency choices. As to the former, the underlying political cultures of the two nations are very different, with the United States shaped by an early-modern “contract”-based paradigm and Russia by an older and more hierarchical, autocratic-dynastic model. Citizenship does not mean the same thing to typical Americans and Russians, and attitudes toward political authority differ as well.
As to path-dependency disparities, the United States was a market-based economy both before and after the Great Depression; what changed was the relative weight and design of governmental regulation. Russia, on the other hand, had been frozen in a statist command economy for more than 70 years before 1992, and before 1917 Russian economic modernization had progressed only modestly away from what can reasonably be described as a particular form of feudalism.
Additionally, liberal market-friendly policymakers in Russia were discredited almost entirely by the wretched conditions of the 1990s, so institutional consolidation over time has not rested in the hands of those inclined toward institutionalized market systems. American advice has been discredited as well, since the economic debacle of the 1990s is associated in many Russian minds not only with failed policies but also with the American consultants who, for better and for worse, chaotically helped design those policies under tight constraints and with limited financial assistance.
In light of this mixed picture, is there anything useful in the post-Crash American experience that Russia could learn from in the months and years ahead? Possibly, yes; but to properly understand the sharp limits of that utility, a little history comes in handy.
The Pecora Commission
The government-directed and mandated investigations that began on March 2, 1932, during the waning months of the Hoover Administration, uncovered shady practices that had an electrifying effect on American politics. On that late-winter day, the U.S. Senate passed a resolution authorizing the Committee on Banking and Currency to investigate “practices with respect to the buying and selling and the borrowing and lending” of stocks and securities. During its first 11 months the investigation made little progress. But in early 1933, just weeks before the inauguration of Franklin Delano Roosevelt as President, the Committee’s Chairman, Peter Norbeck (R-SD), hired a new, now fourth chief counsel: former New York City deputy district attorney Ferdinand Pecora.
Pecora, who was born in Sicily in 1882, had spent a dozen years accumulating a reputation as an honest and effective prosecutor. During that time, too, he had shut down over a hundred “bucket shops”—unregulated private gambling parlors that took bets on everything from stock prices to a range of commodity prices. The bucket shops of the early 20th century bore more than a trivial resemblance to the derivatives markets of more recent times, except that the size of the latter dwarfs the size and influence of the former. What they have in common is their casino-like nature, which inheres in the fact that there is no need actually to own a stock or a futures contract in commodities in order to profit from changes in prices.
Pecora was in line to become District Attorney when, in early 1929, Joab Banton chose him as his heir apparent. But the Democrats at Tammany Hall refused to nominate him; he was just too honest and effective for their tastes. So Pecora resigned and went into private practice until 1933, when Senator Norbeck brought him to Washington. There Pecora found what we would nowadays call a “target-rich environment.”
Ferdinand Pecora, via Wikimedia Commons
Once Pecora had sized up in some detail what had gone on before the Crash, based in part on his familiarity with Wall Street habits and “ethics,” he decided to apply shock tactics to the case. He subpoenaed high-profile bankers to testify, and used his subpoena power as well to obtain the records of the nation’s largest financial institutions. The key episodes of the public hearings Pecora managed were the testimony of New York Stock Exchange president Richard Whitney and of the chairman of National City Bank (now Citibank), Charles Mitchell. Under interrogation Mitchell recounted not only the dubious financial transactions that allowed him to avoid taxes, but also many of his banking group’s questionable banking and investment practices that led to the loss of a huge portion of the private savings deposited in the National City Bank.
It was not just depositors who had suffered after the Crash, but also investors in bank stock. The National City Bank—and of course many others—had hidden bad loans accumulated on its balance sheet by packaging them into composite securities and pawning them off to (as we call them today) non-qualified investors—sub-prime shenanigans, in other words. Though ordinary shareholders suffered enormous losses on bank stocks, Mitchell and his top officers had set aside millions of dollars from the bank in interest-free loans to themselves. This was perhaps not quite as bad as Goldman Sachs making billions by betting against its own clients before 2008, but it was close. The interrogation of J.P. Morgan, Jr. uncovered evidence that the financier maintained a “preferred list” of friends of the bank who were offered stock at highly discounted rates. Those friends included a former U.S. President and a sitting Justice of the Supreme Court.
The committee hearings continued for nearly a year, and as time passed Pecora learned how to use the media to amplify the revelations that his interrogations uncovered. That helped Pecora to highlight the most shocking episodes, and to parlay that shock into broad congressional support for stricter regulatory oversight of U.S. financial markets. The results were the Securities Act of 1933 and the Securities Exchange Act of 1934, the latter of which established the Securities and Exchange Commission (SEC) to regulate the stock market and to protect the public from fraud. (After the Pecora hearings ended in July 1934, President Roosevelt appointed Pecora as one of the five SEC commissioners.)
The Pecora Commission’s report was also used as background analysis for the separation of investment and commercial banking that was incorporated into the Glass-Steagall Act, passed later in 1933. Also on the basis of the Pecora investigations, the Federal Deposit Insurance Corporation (FDIC) was established to guarantee individual bank deposits. That brought a huge amount of capital out from under the nation’s mattresses and into the banking system, a more efficient mechanism than bedding for getting it into the hands of those who could make best use of it.
And finally, the Pecora hearings set a new standard for Senate committee investigations themselves. As the official record of the Pecora Commission states, “the Committee on Banking and Currency’s investigation of Wall Street established several historic precedents. It regularized the practice of Senate committees subpoenaing materials from individuals as well as institutions. . . . Under Ferdinand Pecora’s direction, the committee staff had set a high standard for thoroughness and offered a model of excellence for future Senate investigators.” The bottom line was that a skilled and relentless man created a bright and shining example of how extrajudicial public investigations could not only identify violations of law but also lead the way to reforming the legal framework itself.
The Path-Dependency Problem
If there are lessons Russian reformers can learn from Ferdinand Pecora, they do not derive from direct application of the Pecora experience. The context is simply too different.
Before the collapse of the Soviet Union, its Constitution did not recognize private property except for what was used by citizens in everyday life. There were no private banks or corporations and no legal framework regulating their activities—so there was nothing to reform. Post-Soviet Russia needed to design its own banking and securities regulation from scratch. Russian lawmakers and experts set about doing just that in 1992, and as they did they analyzed the U.S. regulatory framework and adopted many of its characteristics in drafting Russian laws.
But that, ironically, has become part of the problem. The frameworks that did develop from scratch did not develop fast enough, or grow strong enough, to prevent a great deal of bad behavior that produced enormously bad results for most Russians. Indeed, several episodes over the past 25 years have demonstrated that financial crises resulting in massive losses of private savings can occur regardless of regulations that look good on paper. A little history, Russian history this time, helps to explain the gap between regulatory frameworks on paper and how the real Russian world works.
In March 1990, the tottering USSR adopted the law “On Property,” which allowed the creation of joint-stock companies and gave citizens the right to own shares. Immediately thereafter new companies and banks started to appear in the form of joint-stock companies or limited liability corporations, and these began to attract funds from citizens and companies to form their capital. Because there was no regulatory framework that would allow new companies to undertake large investment projects, these new joint-stock companies were, as a rule, fairly small. In addition, the development of new companies was hampered by the continued system of centralized distribution of the vast majority of manufactured goods in the country, as well as by a continuing state monopoly on exports and the extremely limited access of private companies to the purchase of foreign currency.
Russia inherited from the Soviet Union tens of thousands of industrial, agricultural, transport, and communications enterprises. The planned economy did not envisage competition between producers, so the average size of Soviet enterprises was much larger than in Western countries. Already in mid-1991, six months before the collapse of the USSR, privatization began in Russia, when state-owned companies and banks began to be transformed into joint-stock companies, and their shares were transferred to private ownership. Privatization significantly accelerated in 1992 after the adoption of legislation on voucher privatization, which brought the first truly massive securities to the Russian market.
Changes in the Russian economy were moving rapidly forward, so fast that lawmakers could rarely keep up. Remember that Russia had to adopt the most fundamental of laws because Soviet legislation, laws of a state with a planned economy, had suddenly become obsolete in the new environment. Additionally, the political conflict between the President and the parliament, which in October 1993 turned into an armed insurrection and led to the dissolution of the parliament, obviously slowed the capacity of the legislature to act.
Hence, the first Russian law “On the securities market” did not come about until April 1996; before that, the area was regulated by decisions of the Federal Commission for the Securities Market (FCSM), which was part of the government executive. The activity of the FCSM before 1996 was actively supported by U.S. consultants, so it is not surprising that many norms from U.S. legislation were copied into the Commission’s regulatory protocols. At the same time, legislators could not ignore Russian specifics: for example, they did not introduce the separation of banking and investment activities into Russian practice, because by that time Russian banks had already become key players in the securities market. In addition, the FCSM lacked the powers of a law enforcement agency and had no authority to conduct independent investigations having legal force.
Meanwhile, unlike the United States, Russia opted for the path of consolidating banking and financial regulation and supervision within one body, the Central Bank. Since 1991, the Bank of Russia has regulated the banking system and carried out banking supervision. In 2013, the Securities Commission was attached to the Central Bank. After that, the Bank of Russia began to regulate and control not only everything related to capital markets, but also pension and insurance companies. Since 2017, the Bank of Russia has controlled a majority of votes on the Board of Directors of the Deposit Insurance Agency (ASV), which runs the deposit insurance system. This year the Bank of Russia established the Banking Sector Consolidation Fund, which retains the right to inject its funds into the capital of banks that have lost financial stability and manages their rehabilitation.
Critics in the United States often grouse over the alphabet soup of Federal agencies and congressional oversight committees with a role in financial and banking regulation—and that is not even to mention the role that the states play in this sector. The structure as a whole can look incoherent, and is often accused of serving as an invitation for powerful private actors to arbitrage their own regulatory watchdogs. There is something to the criticism, no doubt, but Russia has the opposite problem: over-centralization, and hence little to no accountability in Central Bank decisions regarding any of its many functions.
These many and important differences explain why copying American legislation after the law on securities markets was enacted did not solve the problems Russia faced in the early years of a market economy. Since the privatization of state assets was the primary source of initial private property, the securities market did not become an important instrument for raising capital. The volume of initial public offerings was extremely small, such transactions were not significant for the market, and neither equities nor investment products based on them became the basis of private savings. At the same time, during the initial distribution of former state property a struggle ensued for absolute control over assets, which implied the emergence of a controlling shareholder holding more than half of a company’s shares. Hence the mathematical witticism of the time: 51=100, and 49=0.
As a result, the main form of dishonest behavior that the state should have been fighting against was the deliberate exclusion of minority shareholders. Usually, this exclusion came in the form of a decision to issue new shares taken at a shareholder meeting held in a remote location, where minority shareholders could not go because of high costs. The number of shareholders participating in the meeting decreased sharply, which allowed the largest shareholder to obtain the majority of votes necessary for making decisions. In cases when several shareholders competed for absolute control, a traditional instrument of unfair struggle was corrupted court decisions prohibiting certain shareholders from voting at the meeting. The reason for such court decisions could be appeals of unknown individuals, not infrequently offering bribes, who filed suits against shareholders demanding interim arrest or the freezing of shares.
Another phenomenon of Russian reality in the first half of the 1990s was the proliferation of financial companies running Ponzi schemes (MMM, Khoper-Invest, Vlastelina, and others). These companies masked their activity as investment funds, conducting aggressive advertising campaigns on national television and attracting huge numbers of participants who were promised super-high returns. Although in time all these companies collapsed, new ones appeared in their place, often with the same founders continuing similar fraudulent activities under new names. The FCSM stated that the activities of these companies did not fall within the scope of its regulation; and no other government agency, including law enforcement, was willing to take any responsibility for limiting such activities.
The fast growth of the Russian economy in the early 2000s led to a rapid increase in household incomes. Sky-rocketing oil prices and the active foreign borrowings by the Russian banks and companies led to a sharp expansion of bank lending, including for mortgages. To own a house became the dream of Soviet citizens who used to have to wait for years and even decades for the state to allocate an apartment for them. Therefore, as soon as private incomes began to grow, the demand for new housing grew even faster.
Numerous homebuilders appeared on the market. But since their financial resources were extremely limited and they could not raise bank financing, the practice of shared financing became widespread: Homebuyers paid a significant portion of the price at the time of signing the contract, often before construction had even begun. Soon it became clear that, for various reasons—fraud, or an inability to plan financial activities properly—tens of thousands of citizens across the country never became homeowners, because construction companies did not complete construction or in some cases did not even begin it. The Russian authorities’ attempt to introduce legal restrictions by adopting a special law in 2005 was in vain, since no government agency was given adequate regulatory, supervisory, or enforcement powers, and the law retained numerous evasion opportunities.
The situation worsened further in 2009, when the demand for housing fell sharply as a result of the economic crisis and many homebuilders could not pay off their bank loans. This led to their bankruptcy and the freezing of construction. In 2014, the law was amended to close many of the gaps, and that in turn drove most small companies from the market. The crisis of 2014-16 then led to a new wave of builders’ bankruptcies, resulting in the collapse of several very large companies, including in Moscow, and leaving tens of thousands of contracts unexecuted. Finally, in mid-2018, a new law was passed prohibiting homebuilders from using the shared financing method in any form during the construction period. According to experts, this is likely to reduce new construction and increase the cost of housing.
Banking
There was no banking supervision in the Soviet Union; there was simply no need for it. Until 1987, there were only four state-owned banks in the country: Gosbank provided settlements and current lending to state enterprises; Stroybank dealt with the financing of investment projects; Vnesheconombank carried out all foreign-related operations; and the Savings Bank consolidated private deposits with no active operations. In 1987, Gosbank became exclusively an emission center, and its settlement and lending operations were transferred to three new banks specialized along sectoral lines, combining current and investment financing. However, the activities and volumes of lending operations were still regulated by Gosplan (the State Planning Committee) and the Ministry of Finance.
Private banks began to appear only in 1988, when the law “On Cooperation” allowed the creation of cooperative banks. In the middle of 1990 the Soviet government permitted the creation of joint-stock companies, and the Russian parliament passed a law on the transformation of regional branches of state banks into independent joint-stock banks. By early 1992, there were already 869 banks in Russia; by the end of the year their number had grown to 2,000. But the banking supervision department at the Central Bank was established only in the autumn of 1994.
Like many other spheres of state regulation, banking supervision cannot arise overnight; its formation requires time to train employees and develop methodologies and technologies for supervision. For several years banking supervision existed in Russia only formally and was unable to adequately assess the risks and sustainability of banks. Therefore, it was not surprising that in the 1998 crisis, the largest private banks went bankrupt for various reasons.
The rapid growth of the Russian economy in 1999-2008 favorably affected the banking system, which became one of the fastest-growing sectors. However, when the global economic crisis began, many large banks faced serious problems. The government and the Central Bank allocated huge funds to save the largest banks (more than 4 percent of GDP), but many large banks went bankrupt nevertheless. The Central Bank explained these problems by blaming the global crisis, but even a helicopter-view analysis showed that many of the problems were related to excessive risks that the banks took on themselves—risks that banking supervision was supposed to see and prevent, but did not.
Refusing to acknowledge the failure of supervision, the Central Bank did not take adequate actions, and when the economic situation began to deteriorate again in 2013, a process called “bankfall” began in Russia. Dozens of large and medium-sized banks began to crumble, and often it turned out that the capital of such banks was negative by the time the license was revoked. Banks lost clients’ funds in huge amounts. Thanks to the deposit insurance system, most private depositors did not suffer losses, but then the insurance system itself actually went bankrupt; as early as 2015, the ASV had used all of its available financial resources and could later pay depositors only at the expense of loans provided by the Central Bank. By mid-2018, the total amount of such loans exceeded 3 percent of the amount of private deposits.
The losses of the corporate sector from the “bankfall” were so formidable that the government and the Central Bank established a special program for the rehabilitation of collapsed banks, which in effect provided for the payment to customers of collapsed banks at the expense of Bank of Russia loans. By mid-2018, the total amount of such loans exceeded 4 percent of the average annual GDP for 2015-17. The collateral damage of this process was the nationalization of the banking system; by mid-2018 the share of state-controlled banks exceeded 75 percent of the Russian banking system (measured by assets).
The Need For Reform
Obviously, the Russian financial sector is in dire need of reform. But it is hard to see how reform can come about when the regime insists on over-centralization for the sake of political control, even though over-centralization is dysfunctional. All else equal, it has made and still makes sense to empanel a Pecora-style parliamentary commission empowered to analyze causes and identify problems. So why has nothing like this happened?
Parliamentary investigations (commissions) were actively used in the last years of the Soviet Union, after the first (and last) semi-free election, which took place in 1989. Those commissions investigated political events: the Molotov-Ribbentrop pact, the invasion of Afghanistan, and so forth. In the 1990s special commissions in both chambers of the Russian parliament were empaneled on the Chechen war, on the murder of General Rokhlin, and on the financial crisis of August 1998. In 2004 a special commission was established by the Federation Council on the hostage-taking in Beslan, in which 333 people died during the assault by special forces. Though the commission’s general conclusion was favorable to the government, several of its members disclosed personal opinions contradicting the official viewpoint and placing responsibility for the deaths of hostages on the FSB (secret police), which was in charge of the assault. This bothered the authorities, it seems, and so in 2005 a special law on parliamentary investigation was adopted in Russia. The result: No investigations have been launched since.
The absence of an independent and qualified judiciary makes it almost impossible for the Russian financial regulator to prevent price manipulation in the securities market; very few cases have been discovered and proved. Meanwhile, the Central Bank is responsible for banking supervision in Russia while at the same time serving as the regulator of the banking sector and the owner of several major banks, including the biggest Russian bank, Sberbank. There is an obvious conflict-of-interest problem baked into this arrangement, and it has resulted in irrational decisions. Additionally, the absence of an independent legislature prevents the creation of a special parliamentary commission dedicated to analyzing the failures of banking supervision, and so the bankfall continues.
One of the most important elements of the 1933 Securities Act defending ordinary citizens' savings was the distinction between qualified and non-qualified investors. Though many norms from the U.S. legislation migrated into Russian law, these key definitions were not incorporated into Russian legislation until spring 2008. No clear explanation was ever given for the delay, but the biggest losses related to their absence involved the state-controlled companies VTB Bank and Rosneft, which issued shares in 2006-2007 and attracted 115,000 and 150,000 small private investors, respectively.
One could go on, but the point is already as clear as it need be: Something like a Pecora Commission cannot emerge in a non-democratic country lacking strong checks and balances, where executive power dominates and controls the legislature and the judiciary. It cannot emerge in a country with no political competition, where parliament members are not elected but are de facto nominated by the Kremlin, and where any uncoordinated or rogue behavior costs a member his seat. It cannot emerge in a country where all national news TV channels are owned by the government or by friends of the President. Clearly, then, unlike in the United States and other Western democracies, the practice of parliamentary investigative commissions in Russia is very limited, and it is important to understand what prevents Russian legislators from using such an instrument.
Some 85 years later, we can distill what enabled Pecora to be so successful. First, he worked within a democratic legal framework in which the Legislative Branch could drive public policy reform, in this case with requisite sympathy from the political party that had just assumed control of the Executive. Second, he acted in the heat of crisis, when the pain of the Crash and its consequences was still fresh and raw. Third, he quickly learned how to use media exposure to pressure the legislature, back when only newspapers and radio were available to him for the purpose. What limited lessons reside here for would-be reformers of post-communist Russia, and of other post-communist countries whose institutional frameworks for regulating financial markets still need work?
First, at the present time Russia's legislature cannot drive major policy innovations on its own, partly because, unlike in the United States, the Duma lacks the power of the purse. Nor would the Russian Executive likely look kindly on such reforms, since they might weaken the political and economic utility of the oligarchs who operate at the regime's pleasure. Second, there is no longer much heat from the meltdown crisis of the 1990s, or from the series of crises that hit typical Russians thereafter. A new crisis is quite possible, but it is not here yet. And third, there is virtually no independent media left in Russia to help a Russian Pecora leverage shock into influence over public opinion and hence over policy.
But things change. After the Putin era, the Duma may become a more independent body, and the judiciary may wriggle into the light of independence as well. Free media, never completely extinguished, may make a comeback. When the planets line up, the history of the Pecora Commission will furnish a tactical guidebook for translating opportunity into regulatory reform: use hearings to lustrate corrupt behavior; deploy the media to spread the word; pressure parliament to act; and make sure the vanguard of the investigations has sufficient experience to guide the framing of new regulations. One day, maybe, this can happen even in Russia. Maybe.
“Subcommittee on Senate Resolutions 84 and 234 (The Pecora Committee),” https://www.senate.gov/artandhistory/history/common/investigations/Pecora.htm.
“State-controlled banks” are here defined as those where the federal or regional governments, the Central Bank or its subsidiaries, or state-owned companies are the shareholders controlling more than 50 percent of the capital.
The post What Russia Can Learn from the Pecora Commission appeared first on The American Interest.