Peter L. Berger’s Blog

October 21, 2018

Trump Trashes Another Treaty

John Bolton, President Trump’s national security adviser, is in Moscow this week and reportedly plans to tell Vladimir Putin that, because of Russian cheating, the United States will withdraw from the treaty on intermediate-range nuclear forces (INF) signed by Ronald Reagan and Mikhail Gorbachev in 1987. Because Trump always disparages agreements reached by his predecessors, we might expect him to start tweeting that he’s saved America from yet another bad deal. But there’s a problem: the INF treaty may be the most one-sidedly good arms-control agreement any U.S. President has ever signed. And unless you start by recognizing this fact, you won’t make the right call about what to do next.

What made the INF treaty so good? It represented 100 percent Soviet acceptance of an American offer that almost no one in Washington thought could ever fly. Moscow agreed to scrap every single one of the new missiles, known as the SS-20, that it had been deploying for over a decade to intimidate European allies of the United States—plus all the missiles that the SS-20 was supposed to replace, plus any and all missiles of the same range deployed in Asia too.  The Soviets accepted all this even though the U.S. counter-deployments that European allies had accepted on their territory were both less numerous and less powerful than the SS-20. Even more astoundingly, the INF treaty imposed no limits whatever on the main nuclear forces—both air- and sea-launched—on which the U.S. defense of Europe has rested ever since.

These one-sided terms do not mean, of course, that Russian violations of the treaty are unimportant.  Both the Obama and Trump administrations have claimed that Moscow tested (and has now deployed) a cruise missile of illegal range. Even if information about these weapons is highly classified and therefore hard to discuss in public, and even if only a handful of them have actually been deployed, any U.S. President would feel obliged to respond. The crucial question is how.

One option—apparently Bolton’s preference—is to withdraw from the treaty and go forward with new U.S. deployments banned under INF. Another is to withdraw but stick to new missiles that the treaty allows (since no new INF-banned missiles will be ready for deployment for several years). A third would be to “suspend” the treaty in some way—declare that its terms no longer constrain us and prepare for withdrawal if Russia does not come into compliance. Finally, without either withdrawing from the treaty or suspending it, the United States could begin a build-up of the many weapon systems the INF treaty already allows.

There are surely things to be said for and against each of these options, and they should be debated fairly. What such a debate will make clear, however, is that only one option—the last one—both allows an increase in U.S. nuclear capabilities in and around Europe and keeps Russian capabilities under legal limits. Yes, Moscow will probably keep nibbling at the edges of the INF deal, but the only way it can launch a big buildup is by withdrawing from the treaty itself—something it clearly hesitates to do.

By contrast, the United States is free under the treaty to move forward with a robust program of new deployments, all the while generating a steady stream of public accusations about Russian duplicity. Moscow’s cheating means this is not a viable long-term solution, but for the foreseeable future, the one-sidedness of the INF treaty gives us the military and the moral high ground at the same time. Unlike the Russians, we don’t have to cheat to come out on top.

The Trump Administration does have a second complaint about the INF treaty that some officials suggest is becoming even more important than Russian cheating. Because the treaty only applies to U.S. and Russian forces, it allows China—without any restrictions whatever—to deploy the very missiles that Reagan and Gorbachev forswore. That was okay when the Chinese were our Cold War confederates against the Soviets, but when they seek (as they now do) to limit the ability of the United States to defend its East Asian allies, the arrangement suddenly seems a lot less advantageous. No surprise, then, that American military planners have begun to view the INF treaty as one-sided in a new way—favoring China.

Military competition between China and the United States will obviously be the Pentagon’s top priority in coming years. But the idea that this need decisively devalues the INF treaty seems—at the very least—premature. Even more than it does in Europe, the United States protects allies in the Western Pacific through its navy and air force—and that includes both air- and sea-launched nuclear systems. It’s not impossible to imagine that over time we and our allies will come to think that medium-range, ground-based missiles—the kind the INF treaty keeps us from having—would add meaningfully to deterrence of China. NATO came to that same conclusion about the Soviet Union in the late 1970s and early 1980s. But this is not a near-term prospect. Today, in fact, virtually every U.S. ally in the region would reject the idea.

To embrace it, moreover, would involve a painful trade-off between the interests of America’s European allies and those of our friends in Asia. The United States may ultimately be driven to sacrifice a treaty that blocks a Russian nuclear buildup in Europe so as to be able to counter a Chinese nuclear buildup in Asia. But it would be foolish to do so when we still have so many ways of countering the latter and when the former has barely begun to take shape.

To date, the pushback against the Trump Administration’s handling of the INF treaty has come mainly from the professional arms-control community. Its spokesmen stress the value of “dialogue” and confidence building, of strategic “stability” and “win-win” solutions. None of this criticism is likely to have the slightest influence on John Bolton, who will treat it as so much peacenik hand-wringing. His view is that arms control agreements tie America’s hands and keep it from defending itself and its allies. He can make that ideological case if he wants to—and if he can find any agreements that prove him right. The INF Treaty is not one of them.


The post Trump Trashes Another Treaty appeared first on The American Interest.


October 19, 2018

Rewarding Work for Those Left Behind

Work is central to both self-respect and earning a living. Yet what we have seen in the United States is the disappearance of the jobs that used to provide decent wages for America’s working class. For those lacking a college degree or technical training in a high-demand field, earnings have stagnated and employment rates have declined, especially among less well-educated men. Poor job prospects have been linked, in turn, to rising mortality rates among working-age men, declines in marriage, and deteriorating communities in small-town and rural America. Many believe this economic story is at least partially responsible for the rise in populism and the election of Donald Trump in 2016. Cultural anxieties have played a role as well but are far harder to get a firm analytical fix on than economic discontent.

How can we once again ensure that there are sufficient jobs and decent wages for those who have been left behind in today’s more knowledge-based, more service-oriented economy? My view is that a big and expensive government program to create jobs would be a mistake. Rather the focus should be on maintaining full employment and rewarding work. Government has a role to play but so does the private sector. Corporations are awash in profits in the wake of the 2017 tax law—a law that should be amended to ensure a broader-based prosperity in which capital and labor are more equally rewarded.

Creating Jobs

Many people look at the disappearance of well-paid jobs in manufacturing or elsewhere and conclude that there are simply not enough jobs to employ everyone who wants to work. Their implicit view of the world is that there are a fixed number of jobs, and that with so many disappearing, it will be impossible to supply everyone with a reasonable livelihood. Economists call this the “lump of labor” fallacy.

Why is this a fallacy? The number of jobs in the economy depends on how much people are spending; that is, on the total demand for goods and services. There are no fixed limits to how much they are willing to spend. Instead of one or two pairs of jeans, we may want a pair for every different occasion. In The Rise and Fall of American Growth (2016), Robert J. Gordon explains how as recently as a century and a half ago, it would have been unusual for most people to have had more than one or two outfits. Now our closets are jammed with clothes. And don’t forget about services. Even if we don’t need more cars, cell phones, or blue jeans, we may want to have access to better health care and education, go to baseball games, or enjoy greener parks and cafés with exotic food. Yes, it may be difficult to teach a steel worker to be a nurse or a gourmet cook, but that’s a different issue than the argument that we don’t have enough jobs.

In this context, almost nothing could be more important than maintaining full employment. There is no better way to get more inclusive growth than by tightening the labor market. As Jared Bernstein has shown, tight labor markets raise wages, hours worked, employment, and thus incomes far more for the working and middle classes than almost anything else we could do. My own analysis suggests something very similar. Tight labor markets cause companies to do a lot more training, leading to a badly needed upgrading of worker skills. Yet in the 1980–2016 period, we managed to keep unemployment at or below CBO’s measure of full employment only 29 percent of the time. That contrasts with 72 percent of the time from 1949 to 1979.

To be sure, as I write this the job market is looking very healthy, with an unemployment rate at historically low levels. That said, there are still reasons to be concerned. Labor force participation rates among prime-age adults have fallen since 2000, especially among less-educated men. One possibility is that the job market is not as tight as it seems and that many of these adults remain on the sidelines. If the Federal Reserve refrains from stepping on the brakes too hard, they might flow back into jobs, leading to a hoped-for higher growth rate.

Despite my view that a high-employment economy is the best solution to the lack of jobs in the aggregate, many will be unpersuaded. As a result, some self-styled progressives are proposing a universal jobs guarantee. One example comes from the Center for American Progress (CAP).

The rationale for a guaranteed jobs program, as articulated by its advocates, seems compelling. By setting a floor on wages and guaranteeing access to jobs, the program would increase the bargaining power of all workers, not just those who take public jobs. The workers could be employed in building or repairing infrastructure, providing child or elder care, beautifying parks and neighborhoods, and so forth. In theory, there is no shortage of worthy projects to be funded. Advocates like to point to the eight million people employed by the Works Progress Administration (WPA) between 1935 and 1943, along with the 650,000 miles of roads, 78,000 bridges, and many other enduring legacies they created.

One big problem with a guaranteed jobs program is that those who want but fail to find jobs in a full employment economy typically carry with them a variety of barriers that give pause to employers, including public or nonprofit employers. Those barriers may include lack of skills or experience, lack of reliability, inability to get along with others, a criminal background, problems with substance abuse, and so forth. Those barriers make it likely that many will be unable to perform well in their public jobs. Building and repairing infrastructure takes skills that many jobless workers don’t have, and taking care of children or the elderly requires a different but no less important set of skills. In addition, taxpayers and those in regular jobs will not like the idea of using their money to fund what many will consider make-work jobs or boondoggles in the public sector. And those taking the jobs may similarly feel as if they are not in “real” jobs that provide the dignity and respect they seek.

Although I think a large-scale public jobs program would be a mistake, two other ideas are more deserving of support. The first is an independent investment bank, capitalized by the government and led by a board and chair appointed by the President, similar to the Federal Reserve. An independent and professional staff would then select for funding investments in infrastructure and basic research. The investment projects would be selected and approved in advance on their merits but could then be timed to align with downturns in the private economy. Those making the selections would be engineers and analysts with the requisite expertise to consider both costs and benefits and, on that basis, prioritize the investments.

The second idea is to provide a limited jobs-based safety net for those needing public assistance. It should be viewed as a way to help people transition to a regular job, especially ex-felons or others who need to demonstrate their competence and reliability, and not as a permanent source of employment. It could be combined with a work requirement for those seeking various government benefits. With such a job on offer, we could discover how many people are unable to find regular employment and how many are simply holding out for better pay and benefits or have personal reasons for not working. My expectation is that the take-up rate for such an offer would not be high, and that the cost would therefore be affordable, funded in part by safety-net savings.

Rewarding Work

There will always be low-paid jobs in services, retail trade, and other areas. We should not give up on upskilling these jobs. More career and technical education is needed. Child and elder care, for example, is a growing source of new jobs. Creating career ladders and better training for workers in these sectors can enhance their wages and prospects for upward mobility. But if we care about the value of work, whether it is done by a fast-food worker or by the person who collects and disposes of our trash, we must also do more to reward low-paid workers for the jobs they perform. One reason they deserve to be paid more is that their jobs are often difficult or disagreeable. There is dignity in all honest work, yes; but some jobs confer more dignity than others.

There are three well-known mechanisms for helping workers in low-paid jobs. The first is raising the minimum wage. The second is worker credits, similar to the existing Earned Income Tax Credit, that boost people’s earnings through the tax system. The third is providing more child care assistance to families with children, especially single parents, enabling them to work and keep more of their earnings when they do. These policies are not only consistent with the value of individual responsibility and the dignity that work can confer, but they also encourage labor force participation and a higher rate of economic growth. They will, in short, produce more broadly shared prosperity. Let’s look at these three options in turn.

A Higher Minimum Wage. The erosion of the real value of the minimum wage since its peak in the late 1960s is one reason for growing gaps between wages at the bottom and those at the top. The big issue, of course, is whether a higher minimum would, by raising business costs, lead to less hiring.

At last count, 29 states and the District of Columbia have raised their minimum wage well above the Federal level of $7.25 an hour. Despite fears that this would depress hiring and employment, there is little evidence of that occurring so far. A much-cited study by the economists David Card and Alan Krueger, looking at fast food businesses in adjoining states with different minimum wages, found no indication that a higher minimum wage reduced employment. A meta-analysis by Hristos Doucouliagos and T. D. Stanley considered 64 different U.S. minimum-wage studies and found that the most precise estimates were heavily clustered at or near zero employment effects.

The Congressional Budget Office modeled the employment and income effects of an increase in the Federal minimum wage and found that raising it to $10.10 would reduce employment by about half a million nationwide but simultaneously improve the incomes of more than 16 million workers. The increased earnings for low-wage workers from a higher minimum wage would total about $31 billion. This is a big net gain—the benefits of the higher incomes for so many families outweighing a very small (and statistically uncertain) reduction in jobs for the least skilled, many of them teenagers.

Increases in the minimum wage have two other favorable effects. First, they encourage higher rates of pay for those just above the minimum. Second, they reduce dependence on government benefits, such as food stamps or the Earned Income Tax Credit, leading to both budgetary savings and more dignity for the recipients, who become fully or partially self-supporting.

These studies of the minimum wage rarely provide guidance on just how high one can raise the minimum before having a substantial negative effect on employment. Indeed, my colleagues Harry J. Holzer and Gary Burtless have both argued that a $15 Federal minimum wage may be too high. Given the significant variation in living costs across the country and the fact that some local labor markets have a disproportionate number of less-educated workers, it seems wisest to allow states and cities to establish their own minimums.

One example is Seattle, which raised its minimum wage in 2015 from $9.47 to $11, and again in 2016 to $13 an hour. A study conducted by researchers at the University of Washington found that these increases did have adverse effects. That study didn’t go uncontested. These kinds of dueling studies make it difficult to know what to believe, but there is likely a wage at which the policy’s negative employment effects outweigh its positive impacts on wages.

Still, the vast majority of research in this field suggests that the impacts of minimum wage increases on employment so far have been negligible. All public policies have benefits and costs. So raising the Federal minimum wage to around $12 an hour and indexing it for inflation seems like a policy whose benefits would outweigh its possible costs.

An Expanded EITC or Worker Credit. Another approach to helping the bottom of the income distribution would be to expand the Earned Income Tax Credit (EITC) to cover more lower-income working households. One of the benefits of the EITC is that it only helps those who help themselves: It is conditioned on work. It simply tops up your wages. That has led such disparate political figures as Ronald Reagan, Bill Clinton, Paul Ryan, Barack Obama, and even the Trump Administration to endorse it at various times.

The EITC is a complicated program. Workers receive a refundable credit (a subsidy that is sent to the family, much like a tax refund). The subsidy is based on an employee’s earnings, family size, and marital status, and it phases out at higher income levels. For a single parent with two children, working full-time at the minimum wage and earning $15,000 a year, the EITC boosted his or her income in 2016 by $5,572. Among childless workers, the benefits are meager.
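To make the structure concrete, here is a minimal sketch of the credit’s phase-in, plateau, and phase-out. The constants approximate the 2016 schedule for a single parent with two children; the real parameters vary by year, family size, and filing status, so treat them as illustrative rather than authoritative.

```python
def eitc_two_children_2016(earnings):
    """Stylized EITC for a single parent with two children.

    The constants below are approximations of the 2016 schedule;
    the IRS publishes the exact figures each year.
    """
    PHASE_IN_RATE = 0.40       # credit accrues at 40 cents per dollar earned
    MAX_CREDIT = 5_572         # the plateau value cited in the text
    PHASE_OUT_START = 18_190   # income at which the credit begins to shrink
    PHASE_OUT_RATE = 0.2106    # credit lost per dollar above that threshold

    credit = min(PHASE_IN_RATE * earnings, MAX_CREDIT)  # phase in, then plateau
    credit -= max(0.0, (earnings - PHASE_OUT_START) * PHASE_OUT_RATE)  # phase out
    return max(0.0, credit)

# The worker in the example above sits on the plateau, so she gets the full credit.
print(eitc_two_children_2016(15_000))  # -> 5572.0
```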

The EITC promotes work and reduces poverty. In combination with a higher minimum wage, a more generous EITC could be a budget-neutral proposition. The reason is that a higher minimum wage reduces dependence on government benefit programs, freeing up resources that can then be devoted to wage subsidies that help people become self-supporting. Liberals have always supported the EITC; some conservatives have as well, but even more might endorse such measures if they understood the role that both minimum wages and wage subsidies play in encouraging work and in reducing dependence on other government programs, such as food stamps or TANF (welfare).

A proposal now on the table from Senator Sherrod Brown and Representative Ro Khanna would both increase the value of the EITC and dramatically expand its reach to childless workers. According to the Tax Policy Center, this proposal would increase the after-tax income received by those in the bottom 20 percent of the income distribution by 6.6 percent.

To be sure, proposals of this sort would be expensive. Researchers from the Tax Policy Center and the Center on Budget and Policy Priorities have shown that something like the Brown-Khanna proposal would cost a little over $1.4 trillion over a decade. As Neil Irwin, who initially suggested an analysis of this approach, wrote in the New York Times, “even if you conclude that a radical expansion of tax credits for working-class Americans is desirable, the politics of paying for it are somewhere between hard and impossible.”

Despite its popularity, the EITC has a number of shortcomings. The first, as already noted, is that it is very complicated and for this reason has been plagued with error rates and some fraud. The second is that it discourages marriage. The third is that, by basing benefits on the number of children in a family, it puts too little emphasis on the responsibility of parents to limit the size of their family to what they can afford.

For these reasons, a much simpler worker pay credit—similar to one suggested by the Tax Policy Center’s Elaine Maag in 2015—is preferable. It would provide a 15 percent raise to every working American up to a maximum of $1,500 per year. The credit would then phase out as earnings rose up to about $40,000 per year. Because this would be based on individual income (and not household income, like the EITC), a couple could earn far more than this by pooling its earnings. This makes the proposal very marriage friendly, in addition to rewarding work. The credit would be delivered as part of an individual’s paycheck, offsetting payroll taxes and reinforcing the idea that it is not only an earned benefit but also a form of tax relief for working families.
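A rough sketch shows how such a credit would work and why it is marriage friendly. The 15 percent rate, the $1,500 cap, and the roughly $40,000 endpoint come from the proposal as described above; the income at which the phase-out begins is my own assumption, since the text does not specify it.

```python
def worker_credit(individual_earnings):
    """Sketch of a worker pay credit: a 15 percent match on individual
    earnings, capped at $1,500 and gone by about $40,000 of earnings.
    """
    CREDIT_RATE = 0.15
    MAX_CREDIT = 1_500          # reached at $10,000 of earnings
    PHASE_OUT_START = 30_000    # assumed; the text gives only the endpoint
    PHASE_OUT_END = 40_000      # "up to about $40,000 per year"

    credit = min(CREDIT_RATE * individual_earnings, MAX_CREDIT)
    if individual_earnings > PHASE_OUT_START:
        remaining = max(0, PHASE_OUT_END - individual_earnings)
        credit = MAX_CREDIT * remaining / (PHASE_OUT_END - PHASE_OUT_START)
    return max(0, credit)

# Keying on individual rather than household earnings means two spouses
# each earning $20,000 keep two full credits -- the marriage-friendly feature.
print(worker_credit(20_000) * 2)  # -> 3000
```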

Paying for the Worker Credit. As noted, a modest expansion of the EITC or of its first cousin, a worker credit, combined with a higher minimum wage, need not be expensive. A much larger expansion, similar to Brown-Khanna, would be. Although Neil Irwin is undoubtedly right about the political infeasibility of such an expansion, there is nevertheless an obvious and compelling way to pay for it. It is by taxing wealth, not work.

Wealth is far more unequally distributed than income. Both the concentration of wealth and the concentration of income have reached record highs in recent years, but wealth more so than income. The Baby Boom generation, the most affluent in history, will, over the next few decades, pass on $30 trillion of their wealth to the next generation.

The inheritance of great wealth is inconsistent with basic American values. The American dream rests on the assumption that we live in a meritocracy, where a combination of skill and hard work, rather than inherited class or privilege, is the road to a better future. The estate tax is one of the few mechanisms available to limit inherited wealth. Many members of Congress are proposing to eliminate it. If they don’t want to be seen as favoring the rich and powerful, and instead want to be seen as favoring work over inherited wealth, they should adjust their stance.

The tax’s power to promote intergenerational mobility has eroded very badly over time. The estate tax is paid by only a tiny fraction of American estates—about two out of every 1,000 deaths in 2017. That contrasts with the 1970s, when there were more than 70 taxable estates for every 1,000 deaths. Most of that decline reflects the rising exemption level. In the 1970s, the exemption was $60,000 (about $280,000 in today’s dollars, or more than half a million per couple). The Tax Cuts and Jobs Act of 2017 doubled the exemption level for the estate tax, so that every estate smaller than $11.2 million (or $22.4 million per married couple) will be exempt. Not only are very few estates subject to the tax, but contrary to what many believe, even fewer of them are small businesses and family farms (about 80 in 2017, and that number will be even lower under the new tax law).

Despite the fact that they will never be hit by the estate tax, most Americans still think that we should eliminate it. This is largely due to misconceptions about who pays the tax. Of surveyed respondents who favor eliminating the tax, about 70 percent believe that it will affect them, and three-quarters believe that it might force the sale of a small business or a family farm.

Republicans have framed the estate tax as a “death tax.” This confuses the timing of the tax with whom it affects. Dead people don’t pay taxes. The main burden of the estate tax is on those who receive bequests. The reason it is important to be clear about this is that it is often argued that the estate tax involves taxing the same people twice. But it doesn’t. In effect, it involves taxing two different people just once.

Moreover, because unrealized capital gains make up a significant and growing share of larger estates, and the cost basis for taxing these gains is stepped up at death, much of this wealth is never taxed even once. And because most recipients of large bequests are themselves wealthy and did nothing to earn their inheritance, these bequests are, in essence, welfare for the rich. Some economists argue that the prospect of making a gift, or the anticipation of receiving one, changes behavior—for example, the incentive to save—but evidence that such changes are empirically important is scant. Although the anticipation of being able to make a bequest might increase saving on the part of donors, the anticipation of receiving one might reduce saving on the part of the recipient. And regardless of its impact on savings, the receipt of large inheritances is a big work disincentive. If we are going to worry about work disincentives in welfare for the poor, we should also worry about work disincentives in welfare for the rich.

Because of the rise in the exemption level and the drop in rates, revenue from combined estate and gift taxes has plummeted from what it was in the 1970s. In 1972, the exemption level was more than half a million per couple in today’s dollars, and the top marginal tax rate was 77 percent, but that rate only applied to estates worth more than $10 million in 1972 dollars (about $58 million today). If the combined estate and gift taxes were to generate the same share of Federal revenue today as they did back then, they would have produced about $85 billion in 2015. The nearly $1 trillion that this could generate over a decade would be roughly enough to provide a substantial pay raise to the working class.
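The revenue claim is simple arithmetic. A back-of-the-envelope check, using round numbers in the spirit of OMB’s Historical Tables (cited in the notes), reproduces the $85 billion figure; the exact receipts totals here are my approximations.

```python
# Approximate figures: combined estate and gift receipts and total Federal
# receipts for FY1972, and total Federal receipts for FY2015.
estate_gift_1972 = 5.4e9
total_receipts_1972 = 207e9
total_receipts_2015 = 3.25e12

share_1972 = estate_gift_1972 / total_receipts_1972   # roughly 2.6 percent
revenue_2015 = share_1972 * total_receipts_2015
print(round(revenue_2015 / 1e9))  # -> 85 (billion dollars)
```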

In combination with a higher estate tax, a worker tax credit would prevent America from becoming more of a class-based society than it already is. It would simply ask each new generation to earn its own way. It would provide bigger paychecks to a group of Americans who have been falling behind. And it would honor the importance of work—not welfare or windfalls—for boosting one’s income.

As Franklin D. Roosevelt declared to Congress in 1935, “The transmission from generation to generation of vast fortunes by will, inheritance, or gift is not consistent with the ideals and sentiments of the American people.” Instead of repealing the estate tax, perhaps it’s time to better use it to reward and encourage work.

Subsidizing Child Care

Another way to encourage work and increase take-home pay is to help families pay for childcare, which is by far the largest expense associated with earning a living. Of those who report paying for childcare, mothers with at least one child under age fifteen spend an average of about $7,000 per year, or 7 percent of family income. For families living below the poverty level, childcare expenses are nearly one-third of income. Families with younger children spend even more—those with children under five spent an average of $9,300 per year on childcare, or more than 10 percent of family income. Reducing the costs of childcare not only increases the disposable income available to these families, it also increases the employment of both married and single mothers.

Currently, there are two major sources of government support for childcare. The first is a Federal block grant program that provides funds to states to cover the childcare expenses of working families with below-average incomes. Eligible families must have incomes below 85 percent of the state’s median income, but the program has never received sufficient funds to serve all families in need.

A second source of funding is a childcare tax credit (CCTC) that allows families to subtract from any income tax liability a portion (typically 20 percent) of their childcare costs (up to $6,000 a year for two children). The credit is only available to parents who are working or in school. The biggest problem with the credit is that it is not refundable. Since most lower-income families don’t have any income tax liability (although they do pay hefty payroll taxes), it doesn’t help them at all. It primarily benefits the affluent. In fact, most of the benefits go to families with annual incomes between $100,000 and $200,000 a year. In 2016, only about 13 percent of families with children benefited at all from the CCTC, and families in the lowest income quintile rarely received any help at all.
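The difference refundability makes is easy to see in a stylized calculation. The sketch below uses the 20 percent rate and the $6,000 two-child expense cap cited above; the real credit’s rate varies somewhat with income, so this is a simplification.

```python
def cctc(childcare_costs, income_tax_liability, refundable=False):
    """Stylized child care tax credit: 20 percent of up to $6,000 in
    eligible costs. A non-refundable credit cannot exceed the family's
    income tax liability; a refundable one can.
    """
    CREDIT_RATE = 0.20
    EXPENSE_CAP = 6_000
    credit = CREDIT_RATE * min(childcare_costs, EXPENSE_CAP)  # at most $1,200
    if not refundable:
        credit = min(credit, income_tax_liability)  # current law's constraint
    return credit

# A low-income family with $5,000 in childcare costs and no income tax liability:
print(cctc(5_000, 0))                   # -> 0      (credit under current law)
print(cctc(5_000, 0, refundable=True))  # -> 1000.0 (with refundability)
```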

We can do much better. I have long argued that childcare subsidies are a good way to encourage work, enhance women’s prospects, and provide safe and stable care for children. They are the policy equivalent of a hat trick. They make it possible for single parents to support their families and for more two-parent families to bring in a second paycheck. Simply making the CCTC refundable would largely benefit families who need it most. The Tax Policy Center estimates that two-thirds of the benefits of refundability would go to families with less than $30,000 in cash income—a group that receives only 6 percent of the total benefits of the existing program. The bipartisan PACE Act of 2017, sponsored by Representatives Kevin Yoder (R-KS) and Stephanie Murphy (D-FL), would make the CCTC fully refundable, raise the credit rate, and index it to inflation. This bill deserves far more support.

Some have argued for an expanded child tax credit, as distinguished from a child care tax credit. The existing child tax credit was expanded to $2,000 as part of the Tax Cuts and Jobs Act. One advantage of the child tax credit is that it gives families the choice to either work or be stay-at-home parents. But that choice exists only for the more affluent portions of the population and not for most lower-income families.

Finally, where it is possible to provide a high-quality preschool experience to the children from such families while their parents work, doing so would enhance children’s prospects as well. Such programs tend to cost more but cover all four bases—work, women’s prospects, safety, and children’s development—not just a home run, a grand slam! There has been some debate of late about whether the long-run effects of such programs on children’s later success merit expanding them, but it’s worth remembering that this is like debating whether they are a grand slam or merely a three-run homer. Obsessing about their long-run benefits seems foolish given their immediate value in supporting work.

A Bigger Role for the Private Sector

Many supporters of the Tax Cuts and Jobs Act of 2017 believe it will have the effect of creating jobs and raising wages. I am highly skeptical.

Advocates believe that corporations will invest more, thereby raising productivity and wages. It’s a new version of trickle-down, in other words. They believe the law will also make the United States more competitive with other countries, encouraging more businesses to locate or remain in the United States. But businesses had trillions of dollars of profits to invest before the law was enacted, and they have been using those profits primarily to purchase their own stock or to provide higher dividends to shareholders. The 2017 law left most corporate tax loopholes untouched and made an already complicated set of rules even more complex and open to gaming. Corporations are now freer than ever to seek out tax havens abroad in a “territorial system,” in which taxes depend on where the income was earned. Finally, the huge increase in debt occasioned by the bill will slow growth over the longer run.

At the same time, these looming deficits make it likely that some corrections will be needed down the road. When President Reagan reduced taxes in 1981, thereby ballooning deficits, it required several additional tax laws to restore some semblance of fiscal health. That fact, together with the lack of any Democratic support for the bill, makes it likely that we will see another tax law to correct the flaws in the recent one. When the law is revisited, an opportunity to nudge the private sector toward a more inclusive form of capitalism will present itself.

These corrections are badly needed. One reason that growth has not been broadly shared in recent decades is because earnings have stagnated. Productivity (output per hour) has improved, but workers’ wages have not increased in tandem as they once did. Instead, the benefits of growth have accrued primarily to those at the top of the distribution, including CEOs and shareholders. Consider what has happened to executive pay. In 2014, on average, CEOs earned $16.3 million annually, more than 300 times as much as a typical employee. That contrasts with only 30 times the salary of the typical employee as recently as the late 1970s. These big increases at the top have come at the expense of pay levels for other workers and often of needed investments in the longer-term growth of a company and the economy. One result is that labor’s share of national income has fallen. Whatever the size of the pie, a much bigger slice than in the past is going to the owners of capital and not to the workers who helped create it.

These facts suggest something is deeply amiss. Under certain assumptions, free markets are the best way to deliver broad-based prosperity. But markets are neither moral nor infallible, and in all cases they exist thanks to government frameworks composed mainly of money creation, infrastructure investment, and law—not because they fell from the sky one day long ago. Incomplete information, imperfect competition, established norms and practices, the sluggish mobility of capital and labor, and other frictions may be the rule rather than the exception. In addition, what’s good for General Motors is not necessarily good for the country. Activist investors may pressure corporate chiefs to focus on short-term profits, but that focus undermines long-term productivity. Underinvesting in worker training, given that workers often leave to go to another firm, may be rational for an individual company, but it is not beneficial for the economy at large. Similarly, if every company thinks its CEO is above average and pays accordingly, it can lead to the upward spiral in executive pay we have seen. So government has a role to play if we want a more productive and inclusive form of capitalism.

Back in the early postwar period, corporations tended to follow what many call a stakeholder—or what I call an “inclusive”—model of capitalism. It involved paying attention not just to shareholders but to other stakeholders as well, including workers, customers, and the community. Granted, businesses in that era didn’t have to worry as much about foreign competition, and unions were far stronger. Half a century ago, the typical worker at General Motors earned $35 an hour in today’s dollars. Compare this to the $11 an hour that the typical Walmart worker earns. As Robert Reich notes, “This does not mean the typical GM employee a half century ago was ‘worth’ more than three times what the typical Walmart employee in 2014 was worth. . . . The real difference was that GM workers a half century ago had a strong union behind them that summoned the collective bargaining power of all autoworkers to get a substantial share of company revenues for its members.” But trade union membership is now a fraction of what it once was, and it is not likely to return to its heyday in the face of global competition, the decline of manufacturing, and an increasingly professional and white-collar workforce. The proportion of workers who are members of unions has fallen from its mid-1950s peak of 35 percent to about 6 percent now.

While it is not widespread, inclusive capitalism remains a successful strategy for many companies. They have showcased what can be accomplished when the private sector focuses on motivating workers—whether in the form of profit sharing, training, or providing a variety of benefits such as health care or paid leave. These companies include Costco, Trader Joe’s, Patagonia, Southwest Airlines, Publix Grocery Stores, and Ben and Jerry’s. Without such an approach, it will be difficult to achieve broadly based economic growth. It would simply require too much redistribution after the fact. Instead, we need a less unequal distribution of market incomes brought about by changing private sector practices. Done right, this can be a win-win for workers and shareholders alike. And, as many forward-looking business leaders now recognize, if current trends continue, the public may demand something far less palatable. Business tax reforms that encourage more profit sharing, more employee ownership, and more worker training could produce the kind of broadly shared prosperity we need.

One common argument against such progressive policies is that they are inconsistent with maximizing profits and serving the interest of shareholders. But there is increasing evidence that this is wrong. Steven Pearlstein has put the argument as follows:


In the recent history of management ideas, few have had a more profound—or pernicious—effect than the one that says corporations should be run in a manner that “maximizes shareholder value.” Indeed, you could argue that much of what Americans perceive to be wrong with the economy these days—the slow growth and rising inequality; the recurring scandals; the wild swings from boom to bust; the inadequate investment in R&D, worker training and public goods—has its roots in this ideology. The funny thing is that this supposed imperative to “maximize” a company’s share price has little foundation in history or in law. Nor is there any empirical evidence that it makes the economy or the society better off. What began in the 1970s and ’80s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting, self-interested dogma peddled by finance professors, money managers and over-compensated corporate executives.

Pearlstein is right, and in my book, The Forgotten Americans, I review the scholarly evidence on these points as well as the important recent work by the management consulting firm McKinsey & Company. These studies show that sharing profits or ownership with workers and investing for the longer term, especially in training the less skilled, need not undermine shareholder value. In many cases they enhance it.

Jobs and wages are important, and we must find ways to improve both. The Earned Income Tax Credit has been a popular response and needs to be both simplified and expanded. But government tax credits to shore up wages at the bottom are not a sufficient long-term strategy. A big new government program may not be in the cards, and even if it were, it would be a mistake not to ask the business community to play a larger role in training and rewarding workers. The private sector needs to get involved for the sake of social cohesion and the health of our democracy. It should be nudged in this direction by a reformed tax system. In many instances, it may even find that what’s good for workers and for society is good for its own bottom line as well.


Bernstein, “The Reconnection Agenda: Reuniting Growth and Prosperity,” Center on Budget and Policy Priorities, March 30, 2015; Bernstein, “The Importance of Strong Labor Demand,” in Revitalizing Wage Growth (The Hamilton Project, 2018).

Isabel Sawhill, Edward Rodrigue, and Nathan Joo, “One Third of a Nation: Strategies for Helping Working Families” (Brookings Institution, May 2016).

See Jeffrey Sparshott, “Skilled Workers Are Scarce in Tight Labor Market,” Wall Street Journal, February 2, 2017.

Isabel Sawhill, “Inflation? Bring it On. Workers Could Actually Benefit,” New York Times, March 9, 2018.

Jeff Spross, “You’re Hired!” Democracy (Spring 2017).

Neera Tanden, Carmel Martin, Marc Jarsulic, Brendan Duke, Ben Olinsky, Melissa Boteach, John Halpin, Ruy Teixeira, and Rob Griffin, “Toward a Marshall Plan for America: Rebuilding Our Towns, Cities, and the Middle Class,” Center for American Progress, May 16, 2017.

See Korin Davis and William A. Galston, “Setting Priorities, Meeting Needs: The Case for a National Infrastructure Bank,” Brookings Institution, December 13, 2012.

Card and Krueger, “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania,” American Economic Review (September 1994).

Doucouliagos and Stanley, “Publication Selection Bias in Minimum-Wage Research? A Meta-Regression Analysis,” British Journal of Industrial Relations (June 2009).

Congressional Budget Office, “The Effects of a Minimum-Wage Increase on Employment and Family Income,” CBO, February 18, 2014.

For additional review of these issues, see Isabel V. Sawhill and Quentin Karpilow, “A No-Cost Proposal to Reduce Poverty & Inequality,” Brookings Institution, January 10, 2014.

See Holzer, “A $15-Hour Minimum Wage Could Harm America’s Poorest Workers,” Brookings Institution, July 30, 2015; Lizzie O’Leary, Paulina Velasco, and Jana Kasperkevic, “Tired of Waiting for Congress, Majority of U.S. States Have Raised the Minimum Wage,” Marketplace, June 30, 2017.

Ekaterina Jardim, Mark C. Long, Robert Plotnick, Emma van Inwegen, Jacob Vigdor, and Hilary Wething, “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle,” NBER Working Paper No. 23532 (June 2017). Specifically, they found that the higher minimum “reduced hours worked in low-wage jobs by around 9 percent,” and reduced the number of low-wage jobs (those paying less than $19 an hour) by 6.8 percent. They found that the higher hourly income did not offset the income lost from working fewer hours.

Michael Reich, Sylvia Allegretto, and Anna Godoey, “Seattle’s Minimum Wage Experience 2015–16,” Center on Wage and Employment Dynamics Policy Brief (Institute for Research on Labor and Employment, June 2017).

Gene Falk and Margot L. Crandall-Hollick, “The Earned Income Tax Credit: An Overview,” Congressional Research Service, January 19, 2016.

Center on Budget and Policy Priorities, “Policy Basics: The Earned Income Tax Credit,” CBPP, October 21, 2016.

Sawhill and Karpilow, “A No-Cost Proposal.”

Tax Policy Center, “T17-0202,” Tax Policy Center Model Estimates, Distribution Tables by Percentile, August 23, 2017.

Tax Policy Center, “T17-0024,” Tax Policy Center Model Estimates, Revenue Tables, May 5, 2017.

Neil Irwin, “What Would It Take to Replace the Pay Working-Class Americans Have Lost?” New York Times, December 9, 2016.

Elaine Maag, “Investing in Work by Reforming the Earned Income Tax Credit,” Tax Policy Center, May 20, 2015; Adam Thomas and Isabel V. Sawhill, “A Tax Proposal for Working Families,” Brookings Institution, January 5, 2001.

See Sawhill and Karpilow, “A No-Cost Proposal.”

According to the Survey of Consumer Finances, 24 percent of income accrued to the top 1 percent, compared to 39 percent of wealth in 2016. Jesse Bricker et al., “Changes in U.S. Family Finances from 2013 to 2016: Evidence from the Survey of Consumer Finances,” Federal Reserve Bulletin (September 2017).

Accenture, “The ‘Greater’ Wealth Transfer” (2015).

The House version of the GOP’s Tax Cuts and Jobs Act would have doubled the estate tax exemption and repealed the tax entirely after six years. The final version of the act did not eliminate the tax, though it did double the exemption.

Joint Committee on Taxation, “History, Present Law, and Analysis of the Federal Wealth Transfer Tax System,” JCT, March 16, 2015.

Tax Policy Center, “Who Pays the Estate Tax,” in Tax Policy Center’s Briefing Book (TPC, 2016).

Frank Newport, “Americans React to Presidential Candidates’ Tax Proposals,” Gallup, March 17, 2016.

Marcus D. Rosenbaum, Mollyann Brodie, Robert J. Blendon, and Stephen R. Pelletier, “Tax Uncertainty: A Divided America’s Unformed View of the Federal Tax System,” Brookings Institution, June 1, 2003.

Robert B. Avery, Daniel Grodzicki, and Kevin B. Moore, “Estate vs. Capital Gains Taxation: An Evaluation of Prospective Policies for Taxing Wealth at the Time of Death,” Finance and Economics Discussion Series, Federal Reserve Board, April 1, 2013.

Lily L. Batchelder, “Reform Options for the Estate Tax System: Targeting Unearned Income,” Testimony before the U.S. Senate Committee on Finance, May 7, 2010.

Jane G. Gravelle and Steven Maguire, “Estate and Gift Taxes: Economic Issues,” Congressional Research Service, January 19, 2006.

Douglas Holtz-Eakin, David Joulfaian, and Harvey S. Rosen provide empirical evidence that the receipt of large inheritances actually decreases the incentive to work. Their results indicate that a single person receiving an inheritance of about $150,000 is four times more likely to leave the labor force than an individual receiving an inheritance below $25,000. Holtz-Eakin, Joulfaian, and Rosen, “The Carnegie Conjecture: Some Empirical Evidence,” Quarterly Journal of Economics (May 1993).

Office of Management and Budget, Tables 2.1 and 2.5 in “Historical Tables.”

Darien B. Jacobson, Brian G. Raub, and Barry W. Johnson, “The Estate Tax: Ninety Years and Counting,” Statistics of Income Bulletin No. 120 (2007).

Franklin D. Roosevelt, “Message to Congress on Tax Revision,” June 19, 1935.

Lynda Laughlin, “Who’s Minding the Kids? Child Care Arrangements: Spring 2011,” U.S. Census Bureau (April 2013).

James P. Ziliak, “Proposal 10: Supporting Low-Income Workers through Refundable Child-Care Credits,” in Policies to Address Poverty in America, Melissa S. Kearney and Benjamin H. Harris, eds., Hamilton Project (June 2014).

Tax Policy Center, “How Does the Tax System Subsidize Child Care Expenses,” Briefing Book.

Ziliak, “Proposal 10.”

Tax Policy Center, “How Does the Tax System Subsidize.”

Elaine Maag, “What Would a Refundable Child Care Credit Mean?” TaxVox: Individual Taxes, Tax Policy Center, May 4, 2017.

“The PACE Act of 2017,” Representative Kevin Yoder and Representative Stephanie Murphy, Bill Summary.

The Lee-Rubio tax reform plan offered in 2015 would have increased the child tax credit by $2,500 per child. It would be partially refundable, but because it would not be phased out at higher incomes, it would largely benefit higher-income families with children that currently receive little or no child tax credit. Elaine Maag, “Reforming the Child Tax Credit: How Different Proposals Change Who Benefits,” Urban Institute (December 2015).



The post Rewarding Work for Those Left Behind appeared first on The American Interest.


October 18, 2018

License to Kill

“You can’t betray [us] and not get punished for it.  Anyone, even those still alive, will reap the consequences.  Anyone.  It is a matter of time.”


—A currently serving murderous dictator

With the astonishingly rapid accumulation of photographic and documentary evidence, leaked intelligence intercepts, and first-rate journalistic reporting, it is increasingly clear that the Saudi state brutally murdered journalist and critic Jamal Khashoggi in its consulate in Istanbul on October 2. The floating of the first lame attempt at a cover story—that Khashoggi, who was shown on surveillance tape entering the consulate but never leaving, had somehow disappeared for his own mysterious reasons—melted like a cube of ice in the Saudi desert.

Any notion that Khashoggi’s murder was a “rogue operation” of interrogation gone awry will similarly melt in the face of withering evidence and logic. You don’t bring the kind of 15-member team of security officials—including a leading expert in forensic medicine—to conduct an interrogation unless you intend it to end with murder that must be thoroughly covered up. You don’t immediately scrub and repaint the scene of the interrogation if there was no crime to cover up. And, if the Turkish intelligence accounts are true, you don’t bring a bone saw to an “interrogation” unless you not only intend to murder the detainee but also to dismember his body to enable inconspicuous evacuation of the remains from the scene of the crime, and indeed the country. Moreover, as former CIA Director John Brennan observed on Meet the Press last Sunday, it is inconceivable that such a complex, heinous and brazen international criminal operation could have been mounted without the knowledge of the now de facto ruler of Saudi Arabia, the audacious and impulsive young man who appears to be making all the country’s key decisions, foreign and domestic—Crown Prince Mohammed bin Salman.

Since being elevated to his current position in June 2017, the 33-year-old Crown Prince has alternately appeared as both reformer and repressor, an agent of badly needed social modernization and a throwback to royal despotism. His efforts to rein in the vast social power of the extreme Wahhabi religious establishment raised the prospect of a gradual but sweeping transformation toward greater openness, rationality, moderation, and innovation.  But his ruthless power grabs—his rapid, driven ascent at barely half the age of royal competitors, breaking all the rules of Saudi royal succession; his detention and alleged coercive interrogation of a wide swath of the Saudi elite shortly after taking power; and his merciless, indiscriminate use of Saudi military force in Yemen, spreading death and displacement on a scale that could amount to war crimes—have suggested a darker, Shakespearian streak: breathtaking personal ambition tinged with total and ruthless resolve.   Well before Khashoggi’s murder, some worried that the international persona of the charming, tech-savvy prince might mask a chilling, psychopathic streak.

The quote atop this article, however, is not from Mohammed bin Salman. It comes from another absolute autocrat who has made a practice of killing his political enemies and rivals abroad. If you just guessed Vladimir Putin (or Kim Jong Un) it was a reasonable conjecture.  But, as reported by the BBC, the words were uttered in January 2014 by Rwandan President Paul Kagame, at a prayer breakfast shortly after Rwanda’s former external intelligence chief, Col Patrick Karegeya, was strangled to death in his Johannesburg hotel room. Karegeya, who had fled to exile in South Africa after breaking with Kagame in 2008, had (according to the BBC) been advising the governments of South Africa and Tanzania as they were sending troops to a UN force in the Democratic Republic of Congo that was battling M23, a rebel group widely believed to be supported by Kagame and his military. Other Rwandan officials also engaged in thinly concealed gloating about Karegeya’s murder, including the defense minister, who declared, “When you choose to be a dog, you die like a dog, and the cleaners will wipe away the trash.”

Notwithstanding the fawning admiration he has earned from international development agencies, and from former Western democratic leaders turned development philanthropists like Bill Clinton and Tony Blair, Kagame has proven himself to be a murderous thug. A stunning investigation by The Globe and Mail revealed, as has other international reporting, what appears to be a systematic campaign of the Rwandan dictatorship to track down and murder perceived enemies at home and abroad, leaving fearful surviving dissidents to hide their whereabouts and frequently change their locations. The campaign has wiped out “a succession of prominent critics and campaigners, judges and journalists,” who “have been beaten, beheaded, shot and stabbed” to death “after crossing Kagame.”

Western democracies cannot hide behind a veil of ignorance. They have known for years of Kagame’s penchant for killing his enemies abroad, and of his destructive covert military interventions in the DRC and elsewhere in Africa. They have even occasionally warned exiled Rwandans at risk. But of how much value are warnings when the legitimating embrace of diplomacy, aid, and international esteem continues unabated? After Scotland Yard tipped him off to a Rwandan assassination plot against him in 2011, a Rwandan dissident told a British journalist, “You just feel anything can happen, especially when nothing is done at the international level against Kagame. It is like he has a license to kill.”

And of course that is precisely what Vladimir Putin perceives he has had, while leaving a trail of vengeance and violence that includes more than two dozen murders and mysterious deaths in at least half a dozen foreign countries. As with Kagame, the list of absurdly suspicious deaths, assassinations, and assassination attempts has piled up with impunity over the now nearly two decades of Putin’s control of the Kremlin and its far-reaching—and clearly revitalized—intelligence apparatuses.

These have featured not only mafia-style methods of execution—people pushed from balconies or gunned down in broad daylight—but also a bizarre, terrorizing succession of attacks by poison, an extremely painful method of elimination that dates far back in Russian history, was perfected in the Soviet communist era, and was then taken to a radioactive extreme under Putin. The known instances have included the 2006 assassination in London of former KGB agent Alexander Litvinenko, whose tea was laced with radioactive polonium-210 by two Russian agents; the 2004 attempted murder-by-poisoning of Ukrainian presidential candidate Viktor Yushchenko, who was campaigning not just for democracy but also to pull Ukraine out of the orbit of Russian domination; the sudden death by suspected poisoning of Alexander Perepilichny, who had fled to the UK after assisting an international investigation of Russian money-laundering; the poisoning attempts on the life of the young journalist and dissident Vladimir Kara-Murza, a leading advocate of the Sergei Magnitsky Act that now empowers the United States to impose targeted sanctions on Russian leaders who violate human rights; and the nearly successful nerve-agent attack on former double agent Sergei Skripal in Britain earlier this year.

As David Filipov of the Washington Post documented in 2017, many of Putin’s other enemies have died in more conventional ways, including: Denis Voronenkov, an opposition politician and fierce Putin critic who was shot and killed in Kiev in 2017 after fleeing to Ukraine the previous year; Boris Nemtsov, perhaps the most potent political opponent of Putin, who was gunned down just steps from the Kremlin in February 2015; Boris Berezovsky, the billionaire oligarch exiled in Britain, who was found dead in the locked bathroom of his British home in 2013 with a noose around his neck; human rights lawyer Stanislav Markelov, who was shot near the Kremlin in 2009; the accountability whistleblower Sergei Magnitsky, who was brutally beaten to death in police custody in 2009; Anna Politkovskaya, a crusading Russian reporter for Novaya Gazeta, who wrote a book accusing Putin of turning Russia into a police state and then became one of its victims, shot and killed at point-blank range in an elevator in her apartment building; and Sergei Yushenkov, a former military officer and then opposition leader who was gunned down in Moscow while gathering evidence he believed would prove that Putin’s government was behind the spectacular 1999 apartment bombings—blamed on Chechen terrorists—that paved the way for Putin’s consolidation of power.

Putin, like Kagame, has made no secret of his determination to take revenge on those who betray him. In 2010, he warned, “traitors will kick the bucket, believe me.” At the same time, of course, Russia has consistently denied any knowledge of or responsibility for the growing pattern of Putin’s critics and enemies dying (or nearly dying) in suspicious circumstances abroad.

In a trenchant and tragically clairvoyant Washington Post column after Mohammed bin Salman’s “Night of the Long Knives” last November (during which he arrested much of the established Saudi elite), Khashoggi compared the Crown Prince to Vladimir Putin. Decrying the country’s staggering corruption, waste, and inequality, Khashoggi welcomed the declared campaign against corruption but warned that (as in Russia) it could only succeed with transparent investigations and the rule of law. The Crown Prince, he insisted, could not be “above the standard he is now setting for the rest of his family, and for the country.”

Now we face a great test of whether an American resident—a journalist, a public intellectual, and a forthright dissident, to whom America gave both refuge and inspiration—can be murdered in a third country with impunity by a ruler and a state that have brazenly acted above the law. With every new incident of international murder and intimidation, our values, our commitment to freedom, and our national interest in a world governed by law are under challenge. We must make it clear not only to the Saudi monarchy but to all the world’s dictators that they cannot murder their opponents with impunity. They must know that there will be consequences, and that we will hold them personally responsible.

This is why the Congress broadened the original 2012 Magnitsky law by passing in 2016 a Global Magnitsky Act that provides for targeted sanctions—travel bans and asset freezes—on gross violators of human rights. These provisions must now be applied to all Saudi officials responsible for the murder of Jamal Khashoggi, to Rwandan President Paul Kagame and his senior leadership, and to other rulers in Africa, Central Asia, the Middle East and elsewhere who make a practice of murdering their opponents at home and abroad. And our democratic allies in Europe—beginning with Britain, which (like the United States) has welcomed into its banks and real estate markets huge volumes of ill-gotten wealth from dictatorships—must join in imposing targeted sanctions.

All of this must be based on evidence, and if the Trump Administration will not seek it and provide it, Congress must demand it and subpoena it.  For a time in the 1970s, it was Congress that took the lead in pressing for a new American resolve to defend human rights in the world and demand accountability for violations.  It made a difference then, helping to ignite a period of sweeping global democratic change. Now, Congress must take the lead again.

It was only when the United States and Europe replied to the Skripal assassination attempt earlier this year with much more vigorous diplomatic and personal sanctions that Putin may have begun to get the message that continuing his reign of murder abroad would cost him dearly. It is still not costing him enough, but the precedent is now there. If dictators can murder any citizen who crosses them any place in the world, then the rule of law is not secure anywhere, and none of us are safe.



Published on October 18, 2018 13:28

“More Studying and Less Sex. That Is Not Something to Be Regretted.”

Aaron Sibarium (The American Interest): Thank you for agreeing to do this, Heather. To begin, why don’t you tell me what The Diversity Delusion is about, and what inspired you to write it at this particular moment.

Heather Mac Donald: The book is about the identity politics and victim ideology that have taken over college campuses. I was inspired to write it out of a combination of sorrow and rage. Sorrow, because I believe so strongly in the humanist mission of universities and the extraordinary privilege of being able to study the greatest works of civilization. And rage, because I see ignorant students being encouraged by faculty and campus administrators to reject the monuments of human thought on such absurd grounds as an author’s gonads and melanin.

TAI: In the book, you talk about this “metastasizing diversity bureaucracy,” saying it’s not just on campus, but that it’s spread to other institutions too. In particular, you talk a lot about businesses and tech companies indulging in identity histrionics. Do you see this ideology affecting national politics too? Has it started to affect not just private businesses and HR departments, but also the day-to-day political realities of America?

HMD: Absolutely. We have just lived through a month of Gender Studies 101 with the hysteria over the nomination of Judge Brett Kavanaugh to the Supreme Court. The tribal victimology that characterizes college campuses is now becoming the currency of a surprisingly large sector of the Democratic Party. Many females have decided that they represent an oppressed class and that such traditional Enlightenment values as due process and the presumption of innocence are expendable. Campus rape tribunals have discarded essential truth-finding mechanisms such as cross-examination in the service of the #BelieveSurvivors mantra. And now that contempt for rational means of proof is entering the public consciousness as well.

TAI: Just the other day there was a really interesting piece in The Atlantic by Yascha Mounk, discussing a study that found that the demographic most supportive of political correctness was affluent whites with a college education. Minority groups, on the other hand, were solidly opposed, as were less wealthy whites. Has the diversity delusion really affected society at large, or has it just affected a very small group within society, but one that controls a disproportionate degree of social power?

HMD: The latter group is the most important as far as influence goes. What matters is the dominant narrative, whether or not the majority of people subscribe to it. That narrative sees white males as the source of most everything evil in the world. The hemorrhaging of lower-class white males from the American economy and civil life, documented by Charles Murray, may be partly influenced by such circumambient contempt.

To further buttress Mounk’s point, the Pew Research Center did a study of so-called gender equity in STEM within the last year and found that the more years of higher education that females had, the more likely they were to say that they had been the victims of sex discrimination.

The reality is undoubtedly the opposite. The more a workplace is dominated by highly educated products of the diversity-obsessed academy, who have been marinated in social justice thinking throughout their schooling, the more its participants will go out of their way to seek diversity throughout the employment ladder. The perception held by the female educated elite of widespread bias against them is ideological, not empirical.

TAI: That brings me to an argument you make at the beginning of The Diversity Delusion. You criticize Jonathan Haidt and Greg Lukianoff’s new book, The Coddling of the American Mind, saying it doesn’t explain the ideological dimensions of campus identity politics. Why don’t you find helicopter parenting a persuasive causal story for what’s happening?

HMD: Over-parenting is a real issue. I’m always nauseated when I see young boys on tricycle scooters in New York City. These are highly stable contraptions with two front wheels, yet their riders, all of four feet tall, are invariably wearing massive bubble bike helmets as if they’re going to somehow crash and split their crania. We are sadly far from the boyhood hell-raising described by H.L. Mencken in Happy Days.

So the risk-aversion on the part of these highly-educated, post-Baby Boom parents is a real problem. But I do not think that that is what is generating the maudlin campus victimology, because the demographics don’t really match up. The brothers of white females are subject to the same overprotective parents, as noted above, and yet they are not, by and large, identifying themselves as an oppressed victim group needing safe spaces and all sorts of reparations. At best, they can present themselves as allies.

Moreover, blacks and Hispanics are, on average—and I’m making a generalization here—not over-parented to the same extent. In fact, there’s often a lack of parenting on the part of fathers. Yet black and Hispanic students are eager to jump on the victim bandwagon. So my alternative hypothesis to the over-parenting, psychological explanation is that this really is an ideological phenomenon.

TAI: That’s fair, but my sense is that at most elite universities many if not most of the black kids come from the sorts of middle to upper-class families that Haidt and Lukianoff are describing. Especially since, as you say in the book, affirmative action overwhelmingly benefits wealthy black kids, not the poorer black kids it was originally designed to help.

HMD: Well there is certainly an effort to get lower-class black kids. But my perception is that black and Hispanic parents are not as ridiculously insane regarding phantom risks, whether regarding vaccines, genetically modified foods, or crashing a tricycle scooter. I may be wrong.

TAI: Fair enough. There does seem to be some evidence of cross-cultural differences even when you control for socioeconomic status. Which I suppose would support your argument, because if you think that there are subtle cultural differences between affluent white and affluent minority households, you would expect affluent white students to identify as victims more than affluent minority students. But they don’t, which suggests the problem isn’t over-parenting; it’s ideological.

And in the book, you trace this ideology back to poststructuralist theory and deconstruction. Is your view that a group of French academics in the 1960s came up with bad ideas that just happened to spread? Or do you think there’s a deeper story here about modern culture, and perhaps even the Enlightenment itself? If ideology is what caused the diversity delusion, what caused the ideology?

HMD: I would by no means rule out influences outside of the academy. Certainly the violent black student protests of the ’60s occurred before poststructuralism really hit its stride. Poststructuralism did not become widespread throughout American universities until the 1970s. I’m also quite skeptical of the anti-Enlightenment story that’s been embraced by the Left, and also by some members of the Right, in response to Steven Pinker’s new book Enlightenment Now, and to a lesser extent Jonah Goldberg’s.

By contrast, I do think the twists and turns of the civil rights struggle played into all this, and radical feminism has for sure. But there are two important ironies here. First, the original poststructuralist thinkers who created the rhetoric of high theory read the Western canon exclusively. Jacques Derrida and Paul de Man, for example, deconstructed Proust and Plato; they never thought to go in search of female or black writers to fill a quota.

Second, one of the most bizarre tenets of deconstruction was that the self was a mere linguistic trope—there was no self, just language play. But in the 1980s, with the rise of multiculturalism, the self came roaring back with a vengeance. Suddenly academic victimologists were defining the self in the most reductive manner possible, in terms of gonads and melanin. The self became the subject of endless study and theorizing—but it was emphatically not a made-up construct.

TAI: It sounds like you’re drawing a link between the individualistic discourse of the self and the more collectivist discourse of identity politics. Is there a relationship there? Are individualism and collectivism just two sides of the same coin?

HMD: Only to the extent that the individual is a member of a group. I don’t think this is some Michel de Montaigne-esque exploration of the twists and turns in one’s own consciousness. It really is a sense of a tribal identity formed out of a collective sense of oppression. To be a member of these highly competitive victim groups is to be the target of unending bias and oppression—a view that, I would say, is at its supreme level of absurdity on college campuses.

TAI: Emma Green has argued that the most militant progressive activists in the Democratic Party are those who lack a traditional religious affiliation. In The Diversity Delusion, you sometimes use religious, theologically inflected language to describe the worldview of campus identity politics. Do you think the decline of religion has led people to embrace these totalizing forms of identity as a kind of substitution effect?

HMD: Well, as somebody who’s a radical secularist herself, I don’t buy the traditional religious argument that without a belief in a personal, loving god, life is meaningless. Because, as far as I’m concerned, we are drowning in meaning. Anybody who can live in a world that houses Mozart, Tiepolo, Twain, and George Eliot and still feel a vacuum of meaning is aesthetically blind and deaf.

TAI: On the subject of meaning or lack thereof, let’s talk about sex. You spend about a fourth of The Diversity Delusion discussing #MeToo, Title IX, and what you call the campus rape myth. What, in a nutshell, is the problem with campus sexual politics today?

HMD: We have a bizarre hybrid of promiscuity and neo-Victorianism, which is characterized by a belief in ubiquitous male predation but which also looks to males to be the unique guardians of female well-being. When you destroy the traditional restraints on the male libido as sexual liberation did—those restraints being chivalry and gentlemanliness on the one hand and female modesty and prudence on the other—you’re unleashing a force that the female libido can rarely match. Sexual liberation was premised on a fallacy that males and females are identical in their sexual drives. They are not. Nor are they identical in their emotional (and hormonal) responses to intercourse.

TAI: You suggest at one point that the only good thing about Title IX is that it is actually remoralizing campus sexuality in a weird way. That it paradoxically results in a more conservative or, as you call it, neo-Victorian sexual ethic.

HMD: More studying and less sex. That is not something to be regretted. Colleges are not primarily for partying and one-night hookups. I’m sure there have been instances of truly unconscionable male aggression towards females, and in those instances the female deserves help.

But just as I am not sympathetic to rape victimology when the girl was patently an equal partner in a drunken hookup, I also don’t feel overwhelmingly distressed by the situation that males find themselves in, unlike many of my fellow conservatives. Just as a female can, with almost 100 percent certainty, avoid becoming what is viewed on campus as a rape victim by acting prudently and not getting blackout drunk, by not taking off her clothes and getting into bed with a guy whom she may or may not know, so, too, can every college male usually avoid the predicament of being falsely accused of rape by walking his girlfriend home after a date, kissing her goodnight, and writing her a love poem back in his own dorm room. If the bureaucratization of campus sex, with campus rape bureaucrats promulgating preposterous ten-page legalistic rules for coitus, results in less campus sex, there is simply no social cost, unlike, say, the over-regulation of natural gas production, which results in less of a socially useful product and activity.

TAI: In the book you express a certain degree of skepticism about the one-in-five rape statistic bandied about by many feminists. Why do you find this and similar numbers unpersuasive?

HMD: There are two main reasons why I find those statistics unpersuasive. The first is the survey instruments and how they’ve been interpreted. The mother of all campus rape surveys was a study published in 1985 in Ms. magazine by University of Arizona professor Mary Koss. Koss found that 42 percent of the college females whom she characterized as rape victims went on to have sex again with their alleged assailant. I propose that that is a behavior that is inconceivable in the case of what most people would understand as rape. Koss also found that 73 percent of the campus females whom she characterized as rape victims, when asked directly whether they had been raped, said they had not. In other words, the feminist claim that we’re living through an epidemic of campus sexual assault depends on doing something that feminists have told us one should never, ever do, which is to ignore what females say about their own experiences.

But the other reason that I reject this narrative about an epidemic of sexual assault is that if it were the case, we would have seen a stampede decades ago to create single-sex schools where girls could study in safety. Instead, the stampede of girls to get into this alleged maelstrom of sexual violence increases in ferocity each year. Highly educated Gen X mothers pay $200 an hour in Manhattan to tutor their toddler girls for the most prestigious pre-K school in order to increase their chances of admission to a highly elite college campus 12 years later.

Unless females are too clueless to look out for themselves and to get the word out: “Don’t go to those frat parties, they are one big gang rape,” one has to assume that this epidemic of sexual assault is not occurring.

TAI: You mentioned something interesting there: Whatever the actual statistic turns out to be, everyone knows that the lion’s share of sexual assault occurs in frats. They’re far and away the worst offenders. So couldn’t someone respond to you, “Yes, it would be irrational for women to go to these schools if they believed there was a one in five chance they would be raped. But they don’t really believe that, because they know that almost all these cases occur within a very small set of social spaces. And presumably they think they will be smart enough to avoid those spaces while in college.” If that were the thought process, the fact that women continue to go to college wouldn’t be evidence that they don’t believe the one-in-five statistic; it would just be evidence that they don’t think they are personally likely to end up among the one-in-five.

HMD: One-in-five is an almost unprecedented level of criminal victimization, suggesting it is hard to avoid campus rape. To repeat, even if it were the case that these alleged sexual assaults occur only in selective parts of campus such as frat parties, I just don’t understand why girls keep going to them. If rape is so pervasive, even if just in frat parties, I would think that there would be a “strong-women-together” shared knowledge of, “Do not go there.”

TAI: I think some of this comes down to one’s priors about human rationality—can we really expect hormonal 18-year-olds to think the way you’re thinking? Of course, some progressives make a different argument entirely. They say rape culture isn’t just confined to frat houses; it’s everywhere, on campus and off, which means women have the same odds of being assaulted at college as they do anywhere else. That is, they think the risk is extremely high no matter what women do.

HMD: They may well argue that it’s not just confined to frat parties, that it’s a campus-wide problem. But again, this is denying any kind of efficacy to females. Whether it’s confined to frat parties or is spread out throughout drunken campus hookups, there are very simple steps that girls can take to avoid getting raped. Do not drink yourself blotto. The drinking that happens on the part of females is done quite often to deliberately lower their sexual inhibitions. Do not get into bed with a guy you don’t know. Don’t take your clothes off. Doing those things sets in motion processes and impulses that are hard to control once you unleash them.

Do we believe that girls are capable of using their reason to evaluate risk and take simple precautionary measures, or not? If they’re not capable of doing that, I don’t know whether they even belong in college.

You say if rape culture is so pervasive, you might as well go to college because it’s going to be everywhere. But you could still have single-sex schools. You could ask the adults to once again say, “No sex in dorms,” instead of saying, “Here’s a 20-page contract modeled on a mortgage to sign before you have sex.”

TAI: That all makes sense. But I still think there’s a tension in your analysis. You say that sexual aggression is motivated by the male libido, and that once you set this force in motion it’s very difficult to stop. But then you say that college campuses are these libertine spaces in which there are no constraints on the male libido at all. In other words, the thing that motivates people to commit rape is sexual desire, and the constraints that used to keep that desire in check no longer exist on college campuses. Absent any statistics, knowing just those two facts, shouldn’t we expect there to be more sexual assault on college campuses than other places? That seems to be the logical implication of your view.

HMD: Yes. In theory, that is a potential contradiction and a good observation. I would say that I don’t agree with the characterization of these incidents as rape, but you do have males acting boorishly and taking full advantage of the drunken hookup culture, in which females are voluntary co-participants. But unless we want to resurrect Victorian values, making the male the sole guardian of female well-being—and believe me, I’m not necessarily opposed to that—unless you want to return there, it makes sense to say females have the power to protect themselves virtually 100 percent of the time.

TAI: It sounds almost as if you’re making a kind of feminist argument for women’s empowerment. That’s the language you’re using—“power.” Have you ever put it in these terms to college audiences, and if so what has the response been? Because although you’re denying a lot of the Left’s empirical premises, you’re also asserting that women have agency—an idea the Left can’t get enough of.

HMD: I have addressed campus rape more in adult situations like the Federalist Society. On campuses, I’ve mostly been speaking about race and policing. But I can predict that their response would simply be an illogical one, which is that we insist that females are identical to males in all ways, but they are helpless victims at the same time.

The other likely response would be, “Oh, you’re blaming the victim.” I have put the question to many a campus rape bureaucrat and said, “If you really believe there’s this epidemic of campus rape going on, doesn’t it behoove you to try to stop it? Shouldn’t your primary responsibility be female safety, and given that a message of female prudence and modesty would be an almost 100 percent prophylactic against what you insist on calling rape, why don’t you send that message of female prudence and modesty?”

And what I’m told by the campus rape bureaucrats is, “Oh, we would never send that message because then people would presume that females are responsible for being raped, and we all know that they’re not.” That means that these bureaucrats are more interested in preserving the principle of male fault than they are in guaranteeing female safety.

TAI: One last thing, before we move on from this topic. My understanding is that the sexual assault surveys you’re talking about often find rates of sexual misconduct are quite a bit worse in the LGBT community, or at least among gay men. Did you look into those statistics at all when writing your book? Do you think they’re accurate?

HMD: I mentioned that the 2015 surveys commissioned by the American Association of Universities on 27 college campuses found that the LGBTQ communities reported much higher rates of sexual assault than everybody else. I don’t, in the book, posit an explanation for that. But it is certainly interesting.

TAI: I ask because you could argue that the social pressures that encourage an over-identification with victimhood among women wouldn’t be quite as strong among gay men—whereas the male libido would be, almost by definition. Which might imply that there really is a lot of rape on college campuses, not just regretted sex, but that the root cause of this epidemic is unrestrained sexual desire as opposed to some nebulous power structure.

HMD: I disagree with your assumption that the LGBT community would be less susceptible to victimhood.

TAI: Among gay men specifically though?

HMD: I don’t think there’s any vast difference between lesbians and male homosexuals on campuses as far as their political clout. There is an entire campus bureaucracy dedicated to the allegedly oppressed status of LGBTQ students, a status that has admittedly been somewhat subsumed of late now that trans is the top victim dog. But until trans came along, being gay on campus probably enjoyed the highest victim ranking.

TAI: We could spend all day talking about sex, but I want to end with a couple of questions about race. One thing you do throughout the book is quote at length from various activists—including PhD candidates in grievance studies departments—to show that, among other things, these kids are really bad writers. It’s not just that their ideas are silly; their grasp of the English language is virtually non-existent. One explanation you identify is affirmative action and mismatch theory: You say ethnic and gender studies departments evolved as a way to pass kids who would otherwise flunk out of the institutions they’re attending. Could you elaborate on that argument a bit? And do you think that there are any other potential causes of the decline in academic standards?

HMD: People are terrified of correcting black students or Hispanic students, because of the chance that they will be accused of racism. If you’re asking me to talk about the problem of mismatch theory and how there has been a push to create whole academic fields where one simply specializes in oneself—as if being female were somehow an accomplishment (it’s not)—that push originated with the problem of racial preferences that admit students into academic environments for which they are not prepared.

This is not a problem that is exclusive to race. Suppose MIT decided that it needed more females in its entering class so that it could appease the diversity gods at the National Science Foundation, currently pressuring every STEM department in the country to produce gender equity by whatever means necessary. If MIT admitted me to its freshman class, and I had a 650 on my math SAT on an 800-point scale, and my peers, by and large, had 800s on their math SAT, I would struggle miserably in my first year. I would not be able to keep up with freshman calculus or advanced calculus, which understandably and unimpeachably would be pitched toward the average level of academic preparedness of my peers. I would flounder. I would very likely drop out of my STEM track, and I would then have two options. I could say I was admitted without competitive scores, and I am now suffering the consequences. Or I could say that I am in a patriarchal environment which is causing me to feel trauma and flounder because I am surrounded by implicit bias.

Not surprisingly, students who are the alleged beneficiaries of preferences tend to choose the implicit bias or institutional racism explanations for their problems. There was a very good study done at Duke University that found that incoming black male freshmen intended to major in a STEM field at a higher rate than white male freshmen. But by the time of graduation, the attrition rate of black males out of STEM majors was enormous, leaving the field almost exclusively to whites and Asians. Meanwhile those black male students gravitated into much easier fields that do not have the same objective rigorous standards. That’s part of what we see with these absolutely abysmal writing examples that I’ve put forward in the book. Not just bad writing but also bad thinking.

TAI: Just now you used the phrase “objective rigor,” in contrast to some of the postmodern thinkers you were discussing earlier who reject the very notion of objectivity. Is part of the problem that we’re afraid to make comparative judgments about intelligence? Because I think that’s certainly something we see whenever someone like Charles Murray gets brought up. It’s not just that there’s an academic skills gap we don’t know how to close; it’s that mentioning the gap itself is verboten—we’re supposed to pretend as if there aren’t any differences in IQ whatsoever. Not just group differences, mind you. The very basic idea that some people are smarter than others, without taking race or gender into account, still rubs many elites the wrong way.

HMD: I couldn’t agree more. This summer, the Trump Education and Justice Departments withdrew guidelines that the Obama Education and Justice Departments had sent out to colleges outlining how they could best implement racial preferences within the confines of the law. The Trump Administration withdrew those guidelines and substituted something from the Bush Administration that was much less enthusiastic about racial preferences.

And predictably, the coverage of the Trump Administration’s actions in the mainstream media was completely silent about why colleges feel compelled to use racial preferences in the first place. There was virtually no mention of the academic skills gap. Indeed, the New York Times framed this as an ongoing fight for equity and integration, using language from the ’60s to imply that schools today are like Ole Miss barring the door to black students and that we still have to force them to integrate themselves. This is preposterous. Every selective school in the country is twisting itself into knots to admit as many underrepresented minorities as possible, via the folly of racial preferences that only sets up students to struggle if not fail completely.

So, yes, it is completely verboten to mention the academic skills gap. It only comes up fleetingly in the context of, “Well, we’re not spending enough taxpayer dollars on schools.”

TAI: Do you think that perhaps part of the reason it’s so verboten is that intellectual ability now carries such a huge premium in the knowledge economy that it has become easier to identify cognitive acumen with moral value, as fallacious as that identification is? Could it be that various economic and cultural forces are pushing us to conflate intelligence with human dignity, which in turn makes it much harder to speak frankly about these differences? We subconsciously worry that to make judgments about intelligence is to make judgments about moral worth, so if we’re committed to moral equality, we have to pretend as though everyone is equally intelligent.

HMD: That’s a very profound analysis. I think that one danger of this universal frenzy, the idea that everybody should go to college, is that it devalues occupations that don’t require high levels of cognitive sophistication and implies that there are certain jobs that are not worth doing. That is a trope you hear with a certain degree of regularity from the New York Times and others in discussing poverty.

In many ways, we’re a more meritocratic society than ever before in human history because we have largely cast aside the traditional kinship rules that would determine who gets hired (the Trump family White House notwithstanding). Yet we also have an incessant assault on meritocracy because of identity politics and the notion that the National Science Foundation has embraced: that the only good science is diverse science. That’s ridiculous. But nevertheless, every STEM faculty in the country is being forced to interview and hire females simply because of their gender rather than their scientific qualifications.

TAI: You paint a fairly pessimistic picture of the academy throughout the book, but toward the end you do offer a couple of glimmers of light. What’s the solution to all this? Is there any hope?

HMD: Well, I do have a chapter on the Great Courses, which are video lectures by professors who are screen-tested to make sure that they are able to present their material in an accessible way. My point was not that this provides a serious alternative to college; rather, it’s just to say that there’s a vast untapped desire for traditional humanistic learning that has not been colonized by high theory and identity politics. Adults feel like they have a gap in their education and hunger for teaching that speaks unapologetically about great literature, great philosophy, and ideas that changed the world, without all the harping on unending oppression.

I’m not sure that the Great Courses themselves can point us out of the dilemma, but certainly they demonstrate an untapped desire. I write about UCLA in 2011 jettisoning its requirement that every English major take one course in Chaucer, two in Shakespeare, and one in Milton. This was an absolutely reasonable requirement given the importance of those authors to the English literary tradition, yet UCLA replaced it with requirements in various identity-based theories. At the time they did this, UCLA had the most popular English major in the country because it was still wedded to a traditional historical approach to the study of literature. This is something that college students themselves want. One of the Great Courses lecturers on medieval history told me that if you ask students what they want to study, they’ll say kings and queens and knights, not the construction of the gendered self.

To a certain extent, schools are betraying their own students by forcing this stuff down their throats. The driving force in this entire enterprise is the idea that America today remains endemically racist and sexist and that any disparity in group representation in any institution is, by definition, the result of bias as opposed to differences in culture, skills, behaviors, and preferences. As long as that idea of endemic racism and sexism remains the dominating force of elite thought in this country, it’s not going to be possible to beat the diversity delusion back.

Then there’s the whole free speech issue, which we haven’t talked about, but which I regard as a mere epiphenomenon of victim ideology. We’re not going to solve that one either, without taking on the structural bias claim head-on. Even if more faculty issued high-sounding statements about the value of free speech, it’s not going to make a damn bit of difference as long as students are told that they are existentially threatened by circumambient racism and sexism and therefore entitled to silence others by force to protect their very lives.

TAI: And of course, the same people who speak of free speech in these lofty terms often accept the premise that American society is irredeemably racist. At one point in the book you mention Peter Salovey, the President of Yale University, who says he supports free speech but then turns around and parrots all the talking points about circumambient oppression, thereby legitimizing the ideological technology that’s used to suppress free speech.

HMD: Absolutely right. Salovey is one of the most appalling examples of a president who kowtows to this destructive ideology. There was an even more recent example, in the wake of the Brett Kavanaugh Supreme Court nomination hysteria: A professor at the University of Southern California public policy school, James Moore, sent around an email in response to calls to “believe survivors.” Moore said, paraphrasing here, Well, if anyone in the future is ever the subject of a false criminal or tort claim, you may find yourselves to be bigger supporters of due process than you are now. Accusers sometimes lie. This provoked an absolute meltdown on the part of the school. The dean of USC’s public policy school, Jack Knott, sent around an email message exactly like Salovey’s, talking about the importance of free speech but then asserting that Moore’s mild email was antithetical to the school’s values and would make it even harder for USC’s oppressed female students to survive. So these administrators pay lip service to free speech, but then go and stoke the furies.

TAI: We’ve been talking for an hour and I don’t want to keep you too long. Nevertheless, I feel I’d be remiss if I didn’t ask one final question: In the lead-up to the 2016 election, some conservatives argued that, as bad as Trump is, he was still a better choice for President than Hillary. And part of that argument was that political correctness and identity politics were overtaking America, and that Trump was our last hope to fight back.

This line of thought seems to have been resurrected the past few weeks with Kavanaugh. Now even some Never Trumpers like Bret Stephens are saying, “You know I gotta hand it to the President, he stood up for due process and didn’t let the Left totally destroy a good man’s life. Yes, he’s crude and he’s an asshole, but at least he’s fighting back.”

Is that true? Has Trump been effective at resisting identity politics, or do you think it’s a lost cause at this point?

HMD: Interesting. Well, is he effective? He’s certainly fighting back. The question is, “Is he fighting back effectively, or is he just going to create more backlash?” Is he inflaming the delusional idea that America is endemically racist and sexist more than he is putting it to rest? Again, that’s an empirical matter. I’m not sure.

I do think it’s salutary to have somebody who, at various moments, has appropriately responded to an excess of political correctness. For example, during the first Republican presidential debate, he refused to take the guff from Megyn Kelly about being a misogynist and said, “I don’t have time for political correctness.” I cheered that response. I view Trump as an incredibly painful dilemma: I support his policies but deplore his personality. I don’t think he’s a racist and sexist. I just think he is the worst possible example of an adult male. He is thin-skinned, gratuitously vindictive, the opposite of magnanimous. I would think it would be very hard to raise a boy today with that as our premier male role model.

Do the ends justify the means? A lot of people I know, a lot of my peers, are fully on board with that logic, and they’ve ended up whitewashing Trump and turning him into an unqualified set of virtues, which is hard for me to stomach. It’s a real question: At what point do you draw the line and say that the corrosive effect he’s having on our public virtues outweighs the good that he’s doing on policy matters such as immigration and policing? I don’t know.



Published on October 18, 2018 12:31

An Intelligent History

The Secret World: A History of Intelligence

Christopher Andrew

Yale University Press, 2018, 960 pp., $40



Spying has always been part of great power conflict. Egyptians chiseled the oldest surviving intelligence reports on clay tablets 3,000 years ago. Even spy-themed entertainment has deep roots. Americans have Homeland. The Greeks had Homer. And yet, for centuries, nearly every aspect of the intelligence enterprise—the recruitment of spies, the making and breaking of encrypted messages, covert operations, analysis, and the inner workings of secret bureaucracies—has lurked in the background of historical studies. Espionage may be as old as history, but standard historical accounts often forget its significance. In international politics, intelligence is both powerful and invisible, pervasive and impenetrable.

Christopher Andrew’s The Secret World seeks to bring intelligence into the foreground of history where it belongs. To say that the book is ambitious understates the case. Covering three millennia across four continents in nearly 800 pages, Andrew aspires to write the first global history of intelligence. A global history of anything spanning 3,000 years has its challenges, and intelligence has more than most, for a very simple reason: Spymasters and politicians prefer that their secrets stay secret. Inevitably, then, the historical record is distorted by variations in governments’ abilities to record their shadowy deeds and their willingness to reveal them over time. Many Cold War histories, for example, discuss American covert operations in Cuba, Iran, Chile, and elsewhere but omit Soviet “active measures” around the world. This is not because the Soviet Union never conducted covert operations but because it was better able to keep them secret. During the Church Committee investigation of the 1970s, the CIA had to answer demands for greater transparency and accountability in a democratic system, something the KGB never had to do. It wasn’t until the Soviet Union collapsed and a former KGB archivist named Vasili Mitrokhin defected to Britain with trunkloads of records he had buried under his dacha that it became clear the KGB had played an even more active global role than the CIA did throughout the Cold War.

These challenges aside, Andrew is uniquely positioned for the mission. An emeritus professor of history at Cambridge University, he has written extensively about the history of intelligence in the Soviet Union, Great Britain, and the United States and formerly served as the Official Historian of Britain’s MI5. He also founded the Cambridge Intelligence Seminar, which for 20 years has gathered international intelligence scholars and practitioners.

The Secret World is a work of magisterial breadth. Andrew starts by examining the portrayal of spies in the Bible. “The first major figure in world literature to emphasize the importance of good intelligence is God,” he wryly notes. The book proceeds in rough chronological order, offering intelligence highlights across geographies that center on four recurring topics: the role of human intelligence (old-fashioned spies), signals intelligence (the making and breaking of coded communications), covert operations (mostly involving assassination plots against foreign leaders), and major intelligence successes and failures (which generally consist of efforts to foil internal plots and foreign invasions). We journey to ancient Greece and Rome; China and India, where The Art of War and the Arthashastra first recognized intelligence as providing strategic advantage; the Islamic world; the Middle Ages and Inquisitions; Renaissance Venice; Elizabethan England; French intelligence during the reign of the “Sun King” Louis XIV; the American Revolution and George Washington’s fixation with intelligence; Czarist and revolutionary Russia; and finally, to intelligence in the 20th century, where most intelligence histories begin. Of the book’s 30 chapters, only two are devoted to World War II, and just one to the Cold War. For Andrew, the arc of intelligence history is very long.

If there is a unifying theme to this expansive work, it is that intelligence has almost always been underrated by the political and military leaders who make history and by the writers who chronicle it. For Andrew, intelligence is the Rodney Dangerfield of international politics, never getting the respect it deserves.

In the ancient world, for example, deception played a pivotal role in one of the most consequential naval victories in recorded history, the Battle of Salamis in 480 BCE. According to Herodotus, Persian naval forces vastly outnumbered those of the Greek city-state alliance and were poised to deliver a decisive blow until the Athenian general Themistocles devised an ingenious ruse. He sent a loyal slave to the Persian camp posing as a traitor with valuable intelligence: The Greek alliance was splintering, and if the Persian navy moved quickly into the Strait of Salamis, the Athenian navy would switch sides. The operation was designed to lure the Persian navy into the narrow strait between Salamis and the mainland, where Themistocles believed the Greeks’ smaller and more maneuverable ships could gain the advantage. He was right. Xerxes, the Persian monarch, watched from his golden throne above the Bay of Salamis as the Greeks sank 200 of his ships while losing only 40 of their own. It was a momentous loss at a hinge in history. Had the Persians prevailed, the Greeks likely would have lost the war and the development of world civilization might have been dramatically altered.

Yet the writings of both Herodotus and Thucydides make clear that Themistocles was an exception. Most Greek generals did not regularly use double agents or intelligence of any kind, relying instead on personal seers who claimed to receive divine guidance by interpreting dreams, the behavior of birds, and the entrails of sacrificed animals. Athenian democratic leaders viewed surveillance and deception as beneath them. Most Roman generals (with the notable exception of Julius Caesar, who was the first military leader to code his communications with ciphers during campaigns) also believed there was more utility in divining the will of the gods than the intentions and capabilities of humans.

The Art of War was the first book to argue that intelligence should play a central role in war and peace. It is believed to have been written by Sun Tzu sometime between 544 and 496 BCE, and its ideas are now canonized in military academies and popularized in all sorts of self-help books: Assess yourself and your adversary; subdue the enemy without fighting; use deception to gain advantage; appear weak when strong, incompetent when competent, fearful when brave. Yet here too, Andrew writes, ancient intelligence wisdom was not fully recognized even by China’s first emperor, Qin Shi Huang, who was more influenced by superstition than Sun Tzu’s admonition to gain foreknowledge from people who know the conditions of the enemy.

The Renaissance marked the turning point when Europe became for the first time the intelligence center of gravity. For this development Andrew credits two inventions: the printing press, which spread ideas at unprecedented speed and scale; and the decision of Italian city-states to begin stationing their Ambassadors permanently in each other’s capitals rather than sending them only on specific missions. Since most resident Ambassadors were expected to gather information as well as represent their governments, the recruitment of spies increased, and intelligence came to be seen as more intimately connected to statecraft in Europe.

Venice became the secrecy capital of the world. By tradition, the custodian of official records was illiterate to prevent him from reading the documents he handled. Letterboxes were placed throughout the city for citizens to submit names of those suspected of subversion. Venice’s master codebreaker, Giovanni Soro, was the international celebrity of his day. Masks were so widely used that in 1608 they were banned by statute except during carnivals. With the exception of signals intelligence, which continued to be handled by specialized offices, there was no bureaucratic distinction between diplomacy and espionage in Western or Central Europe for the ensuing four centuries.

Andrew’s discussion of British intelligence history is particularly gripping and detailed. In his view, the reign of Elizabeth I (1558-1603) marks one of England’s most vulnerable and successful periods. Viewed as an illegitimate heretic by the Catholic powers of Europe, the unmarried Protestant queen faced threats of Catholic subversion and assassination at home and invasion from abroad. In response, the queen appointed Sir Francis Walsingham to serve as both her foreign secretary and intelligence chief, the first time that both roles were held by the same person.

Serving from 1573 until his death in 1590, Walsingham built an extensive network of agents across Europe (including most likely the playwright Christopher Marlowe) and oversaw the revival of English codebreaking. Andrew calls the results “the world’s most sophisticated intelligence system” of the day. Although Walsingham’s private papers have not survived, other records show that he enjoyed daily access to the queen and apparently did not shy from speaking truth to power, on one occasion so angering Elizabeth during a disagreement that she took off her shoe and threw it at his head. Walsingham’s most important achievement was decrypting communications that revealed plans by King Philip II of Spain to invade England. Napoleon’s and Hitler’s invasion plans were also thwarted by British codebreakers. Yet as Andrew notes, historians at Bletchley Park during World War II “had no idea” their codebreaking forebears had protected the country from similar moments of peril. “No other wartime profession was as ignorant of its own past,” he writes.

Much the same can be said of American intelligence history. Benjamin Franklin is widely regarded as a Founding Father of the United States, but few historians note—as the Central Intelligence Agency did in 1997—that Franklin was also a Founding Father of American intelligence. In Paris, Franklin waged effective covert action propaganda and disinformation campaigns to sway French opinion and secure the crucial French-American alliance during the Revolutionary War. Nor is it widely understood that General George Washington was better at intelligence than fighting. He ran a string of agents and double agents, devoted considerable effort to encrypting and decrypting communications, and devised all sorts of deceptions—including forging fake documents with references to fictitious regiments under his command. Washington’s greatest feat may have been convincing the British that his forces were too strong to attack when in fact they were at their weakest.

Also overlooked is the importance Washington placed on intelligence as President. During his first State of the Union address, Washington requested a secret intelligence fund that within three years had grown to over $1 million, or 12 percent of the Federal budget. Today, by comparison, intelligence comprises less than 2 percent of the budget.

And the “special relationship” between the United States and Britain wasn’t so special on the eve of the First World War in 1914. British intelligence went to great lengths to persuade America to join the war effort, including spying on American diplomatic communications, deceiving Assistant Secretary of the Navy Franklin Roosevelt about Britain’s intelligence capabilities and operations, planting stories in American media to whip up anti-German sentiment, and sharing (but hiding the origins of) the now infamous telegram intercepted from German Foreign Minister Arthur Zimmermann offering a German alliance with Mexico that would include assistance to help reclaim Texas, New Mexico, and Arizona territory if war broke out with the United States.

Particularly intriguing for students of history is how one German subversion operation changed the course of the Russian Revolution. In the spring of 1917, the Kaiser approved a plan to transport Lenin from exile in Switzerland back to Russia. Andrew writes that Germany wanted to foment revolutionary chaos, fragment Russian opposition to the Provisional Government, and end the war. Shortly after Lenin arrived in Petrograd, a German representative in Stockholm cabled Berlin: “Lenin’s entry into Russia successful. He is working exactly as we wish.” In reality, however, Lenin’s return set in motion the mother of all blowbacks. While the Mensheviks and Socialist Revolutionaries cooperated with the Provisional Government, under Lenin’s influence the Bolsheviks refused. Instead of fragmenting the revolution, Lenin’s strident position enabled the Bolsheviks to claim that they were the only true revolutionaries as the Provisional Government’s popularity plummeted. “But for German help,” Andrew writes, Lenin “would have remained in exile in Zurich for most of 1917, probably unable to impose his will on his Russian followers.”

Across time and place, Andrew places individuals at the center of the story. The Secret World reads more like a hundred secret worlds, each led by someone who either saw the value of intelligence or didn’t. This approach makes for interesting reading but also some puzzling claims. Among them, Andrew argues that if only Franklin Roosevelt had shown greater interest in U.S. Navy signals intelligence, the U.S. government could have prevented Japan’s surprise attack on Pearl Harbor—even though Roosevelt was an avid consumer of intelligence and a Navy man. Andrew seems unaware that the principal lesson actually learned from Pearl Harbor was the importance of organizational design. Roberta Wohlstetter’s seminal 1962 study, Pearl Harbor: Warning and Decision, finds that signals of Japan’s impending attack existed but got lost in the noise of false alarms and the maze of disjointed intelligence bureaucracies in the War and Navy Departments. Indeed, the Central Intelligence Agency was created precisely to address these coordination weaknesses and “prevent another Pearl Harbor.”

Alas, it has never fully succeeded. Half a century later, my own work and the 9/11 Commission found that organizational deficiencies were once again crucial to understanding why U.S. intelligence agencies failed to prevent a surprise attack. Hobbled by Cold War-era structures, incentive systems, priorities, and cultures, and riddled with coordination problems, the CIA and FBI missed 23 opportunities to penetrate the 9/11 plot. Attributing intelligence failures to individuals may be comforting. But if the goal is learning from past failures to prevent future ones, it is often a counterproductive focus.

The Secret World finds that intelligence has not developed in a linear fashion anywhere. Yet Andrew does not explain any of the patterns he discovers. Why did the great coding capabilities of the Muslim world disappear for 500 years? Why did no American President for 150 years after George Washington utilize intelligence nearly as much as he did? Why did the British allow signals intelligence capabilities to languish for 70 years before World War I?

One possible answer is that the development of intelligence capabilities over time can be understood as a response to shifts in the threat environment. In periods when domestic subversion and foreign invasion threats are great, intelligence tends to be more valued and better developed. When threats recede (or are thought to recede), intelligence capabilities atrophy. Andrew hints that cultural attitudes may also help explain variations in the use of intelligence. The Ottomans, he notes, failed to develop a robust intelligence system when the Venetians did in part because the Sultan thought placing embassies with spies in the capitals of lesser leaders was beneath him. Similarly, he writes that the Russians underestimated the Japanese military in the Russo-Japanese War because of racist views of Japan as a weak and undeveloped civilization. This much is clear: When so many leaders over so many centuries give such little credence to intelligence, chances are that something more systematic is at work.

“Twenty-first century intelligence suffers from long-term historical amnesia,” Andrew observes. Whatever its own shortcomings, The Secret World offers the beginnings of a remedy—though, as might be expected given the very nature of the task, much remains stubbornly hidden.


See my Spying Blind: The CIA, the FBI, and the Origins of 9/11 (Princeton University Press, 2009) and Flawed by Design: The Evolution of the CIA, JCS, and NSC (Stanford University Press, 1999).




Published on October 18, 2018 09:35

October 17, 2018

A Me-First President in a #MeToo Era

In less than a month, many Americans will participate in yet another critical election. I say “yet another” because we have recently experienced an unusual spate of critical midterm contests. For most of the post-World War II period, midterm elections were sleepier, less impactful events. The Democrats controlled the House of Representatives for 40 years up until 1994 and the U.S. Senate for 26 years until 1980. Since 1994, the House has flipped twice and the Senate four times. If the polls and projections are correct, the House will change hands once again.

What makes an election critical? Above all, it is the prospect that the voting outcome will shift political power and policy dynamics in Washington. In a wave election, national forces prevail over local personalities, context and issues, pushing congressional outcomes in one predominant direction. David Brady and Brett Parker have argued that a wave has historically meant a shift of 30 seats or more. This time, however, the Democrats need a mini-wave of 24 seats in the House to take control, but need something more like a tidal wave to take the U.S. Senate. The 2018 Senate races remind us that waves are not only about winning the incumbent party’s seats, but holding onto your own vulnerable seats as well.

The usual sources of negative midterm wave elections (i.e., where the President’s party loses many seats) are bad economic conditions, unpopular Presidents, and policies gone awry, plus the usual decline that the President’s party suffers after a presidential election surge. In this case, we have an unpopular President, buoyed by relatively strong economic conditions, facing a midterm referendum amid increasing polarization. The President is particularly unpopular with Democratic and independent women for obvious reasons. He hopes to offset that with appeals to angry white men.

Adding to the uncertainty about the outcome on November 6, there are different dynamics in the House and Senate races this year. Republicans are defending 25 House districts that Hillary Clinton carried in 2016, versus 13 represented by Democrats and won by Trump. By contrast, 10 Democratic Senators are up for re-election in states that were carried by President Trump two years ago (five of which Trump won by double-digit margins), as compared to only one Republican Senate seat in a state won by Clinton.

President Trump poses a strategic dilemma for vulnerable House Republicans in suburban seats. If they embrace the President, it will likely mobilize the Democratic base in their districts, possibly alienating many independents and some moderate Republican leaners at the same time. If they distance themselves from the President, they potentially demobilize their Republican Party base.

In theory, President Trump, the leader of the Republican Party, should be as concerned about this dilemma as the Republican House incumbents are, but Donald Trump is the ultimate me-first candidate: It is always about him first and foremost. Trump knows he cannot win over any Democrats or quite possibly many independents either. In the past, a strong economy might have neutralized or demobilized some of these Democrats, but as Brady and Parker point out, voters view the economy through an increasingly partisan filter. Whatever the objective reality, the economy looks better to partisan voters when their party is in charge and weaker when the other party is.

Whether by instinct or calculation, President Trump has adopted a strident political strategy, appealing to his base voters, drawing attention to “liberal mobs,” portraying Judge Kavanaugh as yet another innocent male victim of feminist anger, and warning of extreme liberal policies such as “Medicare for All.” None of this partisan bluster will help vulnerable Republican House incumbents in the seats Clinton carried in 2016. But it could be just what the doctor ordered to take down vulnerable Democratic Senate candidates in states like North Dakota, West Virginia, Texas, and Tennessee, where the Republican voting base is large.

Losing the House would of course be problematic for President Trump, as it would trigger numerous investigations and stymie a conservative policy agenda. Losing control of the Senate, however, would be a potentially devastating blow, because it would block the conservative judicial appointments that have bolstered his mainstream Republican support. It would also raise the odds that an impeachment could end in conviction. The President has strong survival instincts based on a lifetime of success despite being reviled by many. He understands the Senate is the key to his political survival. There is clear political logic to his actions even if they seem like madness to many.

The strategic dilemma for vulnerable U.S. Senate Democrats was on full display during Brett Kavanaugh’s Supreme Court confirmation hearings. Any Democratic Senator voting to confirm Kavanaugh would earn the enmity of the Democratic base. But a vote against Kavanaugh would play into President Trump’s mobilize-the-base strategy. While several recent polls show that a majority of the American public opposed Judge Kavanaugh’s elevation to the Supreme Court, that may not matter in a U.S. Senate cycle that happens to feature more vulnerable Democratic than Republican seats.

So what will be the main takeaways if our highly flawed, personally unpopular President survives this midterm referendum, and the Republicans manage to hold onto the Senate? Predictably, many Democrats will focus on the tactical advantages that the Republicans enjoy, such as small state over-representation, voting law restrictions, partisan redistricting and the like. Those are legitimate concerns, but they cloud an important point that Donald Trump recognized: There is a lot of economic hurt outside America’s urban hubs. No one has a good answer as to how we replace the old economy with a new one in many red states, but President Trump continues to reap political advantage by forcefully articulating those fears along with a healthy dose of nativism.

Second, although we have had some very decent human beings as President in recent years, it is not clear that character matters much to voters anymore. The election of a reasonably good person may just be incidental. One reason may be that personally decent Presidents such as Jimmy Carter, Gerald Ford, and both Bushes are regarded in retrospect as weak and ineffectual, while some of our more ethically dubious ones, like LBJ, Bill Clinton, and Richard Nixon, produced significant achievements during their tenures in office.

Taking a longer view of presidential character, the 18th-century ideal of representatives as trustees who did what was best for their constituents has long since given way to the modern ideal of representatives who do exactly what their constituents want. Today, the messenger needs only to deliver the message faithfully, not exercise independent judgment or integrity. Office holders are mere instruments of the popular will, not neutral experts or wise leaders with the responsibility of correcting or checking the people’s judgment. And if constituents on the other side of the political divide are enemies, then what is the use of compassionate and empathetic public agents anyway?

Putting new faces into the same political system with identical incentives and pressures will not restore the role of character to representation. But what will? The problem may be too deeply embedded in our narrow, populist expectations about representation and what it takes to navigate the entrenched material and ideological interests of modern America.




The Virtue of Apprenticeship

The 2016 election heightened the nation’s awareness of the economic woes of many American workers, especially blue-collar workers lacking a college degree. Wage stagnation, while worsened by the slow recovery from the Great Recession, is not a new trend. Men’s long-term earnings have stagnated with each passing cohort, from those entering the workforce in 1967 to those entering in 1983. (Women’s earnings increased 59 percent over the period, but from a low base.) Another concern about declining opportunities in America is the erosion of middle-class jobs due to some still-debated combination of outsourcing and automation.

Commentators and political factions blame these labor market problems on everything from bad trade deals, to declines in manufacturing jobs, to corporate greed, to outsourcing, to an uncompetitive tax and regulatory environment, to lax immigration policy. But there is another contributing factor that receives less attention: the weaknesses of secondary, postsecondary, and job-training systems in preparing students for well-paid jobs and rewarding careers.

U.S. researchers too often equate “skills” with years of schooling, completion of degrees, or scores on tests of math and verbal capabilities. In their well-known book, The Race Between Education and Technology, Claudia Goldin and Lawrence Katz argue that increases in educational attainment have been too slow to yield healthy economic growth and reduce wage inequality. This view of skills is one driver of the expansion of higher education spending over recent decades. In 2014, the United States spent $27,900 per full-time equivalent student in postsecondary education, 81 percent more than the OECD average of $16,400.

Despite increases in years of schooling, added government spending, and the buildup of student debt, U.S. employers report that they still face a serious skills mismatch in various occupations, especially those in technical fields. One survey of a nationally representative sample of manufacturing companies found that “84 percent of manufacturing executives agree there is a talent shortage in U.S. manufacturing, and they estimate that 6 out of 10 open skilled production positions are unfilled due to the shortage.” The skills shortfall in manufacturing is primarily in jobs that require occupational and employability skills and is not necessarily about a shortfall in the general skills that come with many college degrees. In fact, worker productivity depends heavily on occupational competencies and employability skills such as communication, teamwork, the ability to efficiently allocate resources, problem-solving, reliability, and responsibility. Strikingly, in hard-to-fill jobs, firms generally prefer relevant work experience over a bachelor’s degree.

This all raises questions about the near-exclusive focus by policymakers and researchers on schooling and academic test scores in the United States. So, too, does the recognition that many young people become disengaged from formal schooling, leading to weak high school outcomes (as reflected in high rates of enrollment in remedial coursework in two-year colleges) and low completion rates for community college students. Of students starting a two-year community college program in 2012, only 22 percent overall, and only 12 percent of black students, had graduated within three years. Yet spending even more on the “academic only” approach to career preparation—through “free” college—is becoming a political staple.

Meanwhile, the country is seeing a declining share of young people gaining work experience and employability skills. The employment rate for 16- to 19-year-olds dropped from nearly 50 percent in 1979 to about 30 percent in 2018. Even in today’s low-unemployment economy, only about 67 percent of 20- to 24-year-old men are working, down from about 80 percent in 1979.

Making inroads on these problems requires widening our approach to preparing students and workers for careers. The most cost-effective and equitable way to do so is to build a robust apprenticeship system. Apprenticeship programs combine academic with structured, work-based learning under a mentor or supervisor. They involve joint production of skills and output, as apprentices earn wages and contribute to production, while working towards a valuable, occupation-based credential. They make career preparation more equitable by enhancing options for those who learn best by doing and who excel in a team context.

Apprenticeship in the United States has long proved successful in building skills in construction occupations for the industrial and commercial construction industries and in a few others. But the scale of U.S. apprenticeship is minimal not just compared with Austria, Germany, and Switzerland, but even with Australia, Canada, and the United Kingdom. The goal of expanding U.S. apprenticeship has attracted support from Republican and Democratic politicians in Congress, at statehouses, and from Presidents Obama and Trump. In 2015, the Obama Administration funded 46 grantees with $175 million over five years to increase apprenticeships. In the first year of his Administration, President Trump endorsed a goal of 4-5 million apprenticeships. This target might sound impractical, but it would only require that the United States attain the share of apprentices in its workforce that Australia and England have already achieved. The Trump Administration increased the budget for apprenticeship and established a Task Force to chart apprenticeship expansion.

Is the increased emphasis on apprenticeship appropriate? If so, how can the U.S. government, in partnership with private industry, expand apprenticeship sufficiently to cover a wide array of occupations and reach a high share (say one-third or more) of a cohort? How can the United States build a high-quality apprenticeship program that generates sufficient demand for apprentices to match the supply of willing applicants in the labor market?

Why Apprenticeship?

Apprenticeship programs improve the learning process (as students directly apply what they learn), encourage student engagement, increase incentives for students to perform well in academic courses, improve the match between workers’ skills and labor market demands, encourage employers to upgrade their mix of jobs, and widen access to rewarding careers for workers who prefer learning-by-doing over traditional classroom education and the four-year college model. A large-scale apprenticeship system can also yield healthy returns to employers and can slow the seemingly inexorable drive to increase spending on higher education. An apprenticeship credential documents a worker’s competence in a profession and provides apprentices with a deep sense of pride when they complete their program. In Switzerland, where about 70 percent of each cohort goes through an apprenticeship, 95 percent of 25-year-olds have either attained a BA or gained an apprenticeship qualification.

Apprenticeships are distinctive in that they enhance both the worker (supply) side and the employer (demand) side of the labor market. On the supply side, the financial gains to apprenticeship are strikingly high. Studies on U.S. programs indicate that apprentices do not sacrifice earnings during their education and training, and that their long-term earnings benefits exceed the gains to completing a degree at a community college. Reports from the state of Washington indicate that the gains to earnings from apprenticeship programs far surpass the gains to all other alternatives. A study of apprenticeship in ten U.S. states documents large and statistically significant earnings gains from participation in apprenticeship programs.

These results are consistent with many studies of apprenticeship training in Europe showing high rates of return for workers. One Austrian study exploited variation in apprentices’ abilities to complete their programs to estimate the effects of additional years of apprenticeship. The researchers found that apprenticeship training raised wages by about 4 percent per year of training. For workers completing a three-to-four-year apprenticeship, post-apprenticeship wages were thus 12-16 percent higher than the wages of those who did not complete an apprenticeship because their firm went out of business. Because the workers’ costs of participation were often minimal, the study found high overall benefits and modest costs.

Non-economic outcomes are more difficult to quantify, but evidence from Europe suggests that vocational education and training in general is linked to higher confidence and self-esteem, improved health, higher citizen participation, and higher job satisfaction. These relationships hold even after controlling for income. An Australian study found that quality apprenticeships improve mental health.

On the demand side, employers can feel comfortable raising the skill requirements and the complexity of tasks that new hires are expected to accomplish, knowing that their apprenticeship programs will ensure an adequate supply of well-trained workers. Firms reap several additional advantages from their apprenticeship investments. They save significant sums of money in the form of reduced recruitment and training costs, reduced errors in placing employees, and reduced costs when the demand for skilled workers cannot be quickly filled. Other benefits of apprenticeship for firms include reliable documentation of appropriate skills, increased worker productivity, higher morale, and a reduction in safety issues.

Another benefit to firms, rarely captured in studies, is the positive impact of apprenticeship on innovation. Well-trained workers are more likely to understand the complexities of a firm’s production processes, and to identify and implement technological and technique improvements—the two are not the same—especially incremental innovations that improve existing products and processes. A study of German establishments documented this connection and found a clear relationship between the extent of in-company training and subsequent innovation.

The evidence suggests that employers achieve positive returns on their investments in apprenticeship. After reviewing several empirical studies, Muehlemann and Wolter conclude that:


in a well-functioning apprenticeship training system, a large share of training firms can recoup their training investments by the end of the training period. As training firms often succeed in retaining the most suitable apprentices, offering apprenticeships is an attractive strategy to recruit their future skilled work force.


In the United States, evidence from surveys of more than 900 employers indicates that the overwhelming majority believe their apprenticeship programs are valuable and produce net gains. Nearly all sponsors reported that their apprenticeship program helps them meet their skill demands. Some 87 percent reported they would strongly recommend registered apprenticeships; an additional 11 percent recommended apprenticeship with some reservations. A recent U.S. study found 40-50 percent returns for two expensive apprenticeship programs.

Apprenticeships are also a useful tool for enhancing youth development.1 They integrate what young people learn in the classroom with their on-the-job experiences, which particularly benefits hands-on, nontraditional learners. Early apprenticeships can help engage youth and build their identities. Youth who participate in apprenticeships early in their careers also benefit from a longer period of economic returns to training and a lower probability of developing bad work habits. Ideally, teachers and counselors help prepare students to take full advantage of their work-based learning and provide opportunities for students to reflect on the links between classroom and job-based learning.

Apprentices work with adult mentors who not only teach young people occupational and employability skills but also offer encouragement and guidance, provide immediate feedback on performance, and impose discipline. Unlike community colleges or high schools, where one counselor must guide hundreds of students, each mentor deals with only a few apprentices.

Youth apprenticeships can be less costly for employers than programs focused on older workers. Wages can be low because youth have fewer medium- and high-wage alternatives, and because youth have fewer family responsibilities and are better able to sacrifice current for future income. For example, while Swiss firms invest heavily in their apprenticeship programs, they pay their young apprentices very low wages during the apprenticeship period.

Most government human capital programs require government funding each year (per full-time equivalent trainee) and impose a social cost in the form of the foregone earnings of trainees. In apprenticeship programs, by contrast, there is an initial fixed cost for helping employers establish apprenticeships, but subsequent years require far less government funding, as employers bear most of the costs of training. Moreover, the foregone earnings of apprentices are modest, since they receive wages during their training as they contribute to production. These contributions allow firms to recover a significant share of their costs during the apprenticeship itself.

Some scholars worry that apprenticeship training, because of its specificity to one industry or occupation, may do less than general education to prepare workers to adapt to technological change. Yet recently released data from the 2016 National Household Education Survey found that former apprentices were very likely to apply the skills they learned during their apprenticeship to their current job. Among workers ages 40 and over, 67 percent of those completing apprenticeships of one year or more reported using the skills they learned in the program all or most of the time; another 24 percent reported doing so some of the time.2 European studies yield similar results.

The Government Role

A government role makes sense economically and socially. Like other public investments in career-focused education and training, apprenticeships lessen credit constraints for students, generate productivity gains not fully captured by students or firms, and lower the excess burdens and administrative costs of transfers. As a cost-effective method for subsidizing preparation for careers, apprenticeships lower political pressures to increase government funding for higher education and to impose market distortions (such as increasing the minimum wage). From a social perspective, apprenticeships are likely to increase mobility and reduce inequality by improving career prospects for those who learn best by doing. Why, then, has the United States failed to generate the kind of large-scale apprenticeship program seen in other developed countries?

One barrier is a failure to try. Until 2015, the Federal government had devoted less than $30 million per year to the Office of Apprenticeship (OA) to supervise, market, regulate, and publicize the system. Many states have only one employee working under their OA. Were the United States to spend what Britain spends annually on apprenticeship, adjusting for differences in the size and composition of the labor force, it would provide at least $9 billion per year for apprenticeship. In fact, the British budget for advertising its apprenticeship programs exceeds the entire U.S. budget for apprenticeship.
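
The scaling arithmetic behind that estimate is simple enough to make explicit. Below is a minimal sketch in Python; the British budget and labor-force figures are illustrative assumptions chosen to be consistent with the text’s conclusion, not official statistics.

# Scale Britain's annual apprenticeship spending to the size of the U.S.
# labor force. All input figures are illustrative assumptions.

uk_spending_usd = 2.0e9   # assumed annual British apprenticeship budget, in dollars
uk_labor_force = 34e6     # assumed British labor force
us_labor_force = 160e6    # assumed U.S. labor force

# Hold spending per worker constant and scale up to the U.S. labor force.
us_equivalent = uk_spending_usd * (us_labor_force / uk_labor_force)
print(f"U.S.-equivalent budget: ${us_equivalent / 1e9:.1f} billion per year")
# With these inputs, about $9.4 billion -- consistent with the
# "at least $9 billion per year" figure above.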

While government funding for apprenticeship remains very low, the annual cost of instruction and support services for each full-time equivalent student in two-year public colleges was approximately $16,000 in 2008-09; today’s annual costs are no doubt substantially higher. The Federal Pell Grant program for low- and lower-middle-income college students spends about $32 billion per year, with a substantial share going toward career-focused programs in community and career colleges. That’s well over three times what an apprenticeship program comparable to Britain’s would cost in the United States.

The second barrier is the complex administrative structure of the registered apprenticeship system. This includes separate state administrations in half the states and Federal governance in the other half; the requirement that each firm or set of firms have an approved set of occupational skill frameworks; the lack of national occupational frameworks; delays in the approval process; and the lack of an auditing system to assure quality.

A third barrier is the limited capabilities of OA staff and intermediary organizations to sell and organize apprenticeships. Because few employers outside commercial and industrial construction offer apprenticeships, most employers are unlikely to hear about the model from other employers or from workers in other firms. Compounding this problem are two factors: the difficulty of finding information about the content of existing programs, and the fact that developing apprenticeships is complicated for most employers and often requires technical assistance that is unavailable in most of the country.

A fourth issue is the asymmetric treatment of government funding for postsecondary education and training. Pell Grants, subsidized loans, and state college subsidies provide financial support to students taking for-credit courses. Yet the academic instruction linked to apprenticeships generally receives little comparable government aid.

A Strategy for Scaling Apprenticeships

A high-quality apprenticeship system requires several elements, including:



effective branding and broad marketing;
incentivizing direct marketing and organizing apprenticeships among private and public employers;
credible, recognized occupational standards with continuing research on changing requirements;
public funding for off-job quality instruction that includes teachers effective at helping apprentices prepare for and reflect on their work-based experiences;
a system of credible end-point assessments of apprentices and programs;
one or two certification bodies to audit programs and issue credentials;
simple systems enabling employers to create and track the progress of apprentices;
counseling and screening for prospective apprentices to ensure they have the aptitude for, and interest in, the field;
training for the trainers/mentors of apprentices; and
research, evaluation and dissemination.

We can begin moving toward this vision by focusing on four feasible initiatives.

Develop an Apprenticeship Brand

The Federal and/or state governments should create a distinctive, high-quality brand. South Carolina chose to link apprenticeship with local pride by using “Apprenticeship Carolina” as its brand name. Britain now protects the term “apprenticeship” in law, so that employers cannot claim to offer an apprenticeship without meeting the terms of the established program.

Once a brand name has been selected, political officials, business leaders, and the media should highlight apprenticeship as a high-quality career option in all types of occupational areas. Videos of successful employers and apprentices should be widely featured.

Establish a Public/Private Entity to Develop Occupational Frameworks

These occupational frameworks should reflect both employer needs and long-term skill requirements. Consensus frameworks are especially important if the public sector provides funding for the general skills component of apprenticeships (for example, for skills that have value outside the training firm). Employers rarely have the time to develop such frameworks, nor do all employers in the same industry always share a common vision. To ensure that American Apprenticeships remains a quality brand and to simplify the process of implementing apprenticeships, Congress should establish the American Apprenticeship Standards Institute (AASI), which would be tasked with researching, creating, and updating apprenticeship competency frameworks for a broad range of occupations.

Working with industry associations and individual public and private employers, the AASI would produce frameworks with potential job titles, occupational pathways, certification and licensure requirements, salary ranges, and employment opportunities. The frameworks should be limited to no more than 500-600 occupations, to avoid definitions too narrow to support mobility.

Each framework should describe the following:



cross-cutting competencies, including personal effectiveness (such as reliability, initiative, interpersonal skills, and adaptability);
academic competencies; and
workplace competencies (such as planning, teamwork, scheduling, problem-solving, and working with tools).

The entity can draw on examples developed by the Urban Institute and by the United Kingdom’s Institute for Apprenticeships.

Support the Direct Marketing and Organizing of Apprenticeships

Branding and broad marketing will not suffice without a well-developed system for selling and organizing apprenticeships. This key task is often overlooked, especially where most employers are unfamiliar with apprenticeships and their value. Marketing apprenticeship as a partial solution to the talent management efforts of individual employers is not easy and typically requires several face-to-face encounters. Employers whose interest is piqued by an advertisement must have a resource they can access quickly and easily for more information about developing and implementing an apprenticeship program. Working with a company to organize apprenticeships requires determining the most suitable occupations, developing a plan to combine work-based and academic instruction, and filling out the forms and other materials required for registering apprenticeships.

The U.S. government, again in partnership with industry, should establish incentives for intermediaries (private or public) to market directly to, and organize apprenticeships for, employers. The incentives should be structured to ensure apprentices receive the appropriate training and work-based learning experiences and achieve high completion rates. Funding should go only to those intermediaries that stimulate apprenticeships that follow the official occupational frameworks.

Britain managed to scale apprenticeships from about 150,000 to over 850,000 in about eight years, largely through the efforts of 850 employment and learning providers. Australia achieves high levels of apprenticeship partly through private, often nonprofit, Group Training Organizations (GTOs).

Evidence suggests that effective marketing and organizing of apprenticeships could be achieved at a cost of about $2,000 for each apprentice who completes the first 60 days of a program, along with an additional $2,000 for each apprentice who completes the program in full. The payments could vary with the long-term returns to occupations. One reason for expecting modest per-apprentice costs is that once employers establish an apprenticeship program, most are likely to continue the program over time, with less effort by intermediaries. Assuming intermediaries stimulate half a million new apprenticeships per year, the initial costs of the incentives would total about $1 billion. In equilibrium, if the intermediaries successfully generated 900,000 new participants and 675,000 completers per year, the costs of the incentives would reach about $3.15 billion per year. Along with intermediary incentives, the Federal government should establish an independent auditing system to assure program quality and to avoid fraud, thus increasing the credibility of the apprenticeship system.
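
These totals follow directly from the per-apprentice payments. Here is a minimal sketch of the arithmetic (the function name is mine, and the initial-phase figure assumes each new start reaches the 60-day mark):

# Annual cost of the proposed intermediary incentives: $2,000 per
# apprentice reaching 60 days, plus $2,000 per apprentice completing.

PAYMENT_AT_60_DAYS = 2_000
PAYMENT_AT_COMPLETION = 2_000

def incentive_cost(new_starts: int, completers: int) -> int:
    """Total annual incentive payments to intermediaries, in dollars."""
    return new_starts * PAYMENT_AT_60_DAYS + completers * PAYMENT_AT_COMPLETION

# Initial phase: half a million new apprenticeships, no completers yet.
print(f"Initial: ${incentive_cost(500_000, 0) / 1e9:.2f} billion")            # $1.00 billion

# Equilibrium: 900,000 new participants and 675,000 completers per year.
print(f"Equilibrium: ${incentive_cost(900_000, 675_000) / 1e9:.2f} billion")  # $3.15 billion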

A significant share of the long-term costs of apprenticeship programs will be borne by the employer in the form of apprentice wages and the costs of work-based training. The foregone earnings of apprentices will be modest since they will receive wages during their training. Firms, meanwhile, will recover a significant share of their costs during the apprenticeship itself. The costs to the government will come largely in the form of setup costs and contributions to off-job training.

At scale, the stock of apprentices in any given year would reach well over two million. Since about three-fourths or more of the occupational and employability training for these apprentices would take place at worksites (at no public cost), full public support for the off-job training could be about $8 billion, raising the overall costs to $11.15 billion. For the sake of comparison, were these apprentices to attend community college full-time instead, the costs for instruction and services would amount to at least $32 billion per year. Again, that’s about triple the cost of going the apprenticeship route. Moreover, some of the costs of off-job training would offset spending on community colleges and other training schemes. Over time, the costs of incentives to intermediaries could fall as employers adopted apprenticeships without intermediaries and intermediaries lowered their costs by gaining repeat business.
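
The comparison reduces to a few lines of arithmetic, using only figures already given above (a sketch, not a cost model):

# Public cost at scale: off-job training support plus intermediary
# incentives, versus sending the same population to community college
# full time. Figures are taken from the surrounding text.

apprentice_stock = 2_000_000      # apprentices in any given year
cc_cost_per_fte = 16_000          # community college cost per full-time student

off_job_support = 8.0e9           # public support for off-job training
intermediary_incentives = 3.15e9  # equilibrium incentives, computed earlier

apprenticeship_total = off_job_support + intermediary_incentives
community_college_total = apprentice_stock * cc_cost_per_fte

print(f"Apprenticeship route:    ${apprenticeship_total / 1e9:.2f} billion")     # $11.15 billion
print(f"Community college route: ${community_college_total / 1e9:.2f} billion")  # $32.00 billion
print(f"Ratio: {community_college_total / apprenticeship_total:.1f}x")           # ~2.9, i.e. about triple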

Federal, state, and local governments could show leadership and credibility by also creating apprenticeship positions in the public sector. Many state and local employees work in occupations that could be filled through apprenticeships, including positions in information technology, accounting, health care, administration of parks and courts, and security (including police and fire). Kentucky recently launched a program for social care apprenticeships. Such a step would be feasible and cost-effective. Britain now requires government agencies to fill 2.3 percent of their jobs with apprentices.

Use Existing Funding for Off-Job Training and Incentives

Theoretically, skills learned in the off-job courses related to apprenticeships can be applied not only to a current employer, but to many other employers as well. For this reason, the employer who provides the training will not necessarily recoup the benefits. But the worker will, and the government shares in these gains in the form of higher taxes and reduced transfers.

Federal, state, and local governments already spend tens of billions of dollars on an array of education and training programs. The effectiveness of government dollars would be far higher if at least some of these funds were made available for off-job apprenticeship training. Encouraging this shift in priorities, however, will require detailed analysis of each funding source.

In some cases, government funds could be substituted directly for employer funding, while in other cases existing government training funds could be made accessible for apprenticeship. Currently, for example, the Trade Adjustment Assistance (TAA) program provides about $740 million in funding to those who lose their jobs due to trade impacts. Participants receive support for training, often in a community college program, and cash income support while undergoing training in the form of extended unemployment insurance. The operations of TAA could be changed in ways that increase funding for the off-job training in apprenticeships and for organizations to sell and organize apprentice programs with employers. The U.S. Department of Labor’s Workforce Innovation and Opportunity Act (WIOA) programs are already required to work with apprenticeship programs, but WIOA staff are ill-equipped to help scale up apprenticeships. Some of WIOA’s more than $3 billion in annual funding could be directed toward the intermediary incentive program. Training WIOA business services staff to sell and organize apprenticeships could also defray some of the costs of the incentive program.

Some of the $1.8 billion now allocated to Job Corps and YouthBuild could also be redirected to apprenticeship initiatives, or made available to local program operators to market and organize apprenticeships. These two programs are expensive, cover only about 56,000 participants per year, and yield modest or no gains in earnings. Although apprenticeships have demonstrated far higher earnings gains than existing programs, including Job Corps and YouthBuild, any diversion of funds should be accompanied by a renewed effort to target disadvantaged youth for participation in apprenticeships.

Funding for the Carl D. Perkins Career and Technical Education Act of 2006 has supported career and technical education in high schools and colleges. Some of the $1.7 billion annual outlays on the program could also subsidize the cost of off-job training for apprentices.

Pell Grant funding is another potential source of funding for apprenticeships. Currently, over half of Pell recipients are in public two-year or for-profit colleges, often in career-focused education programs. Loan programs that are very costly to the Federal government also support students in these programs. Helping students use Pell Grants for apprenticeship would save significant sums and generate higher earnings gains. Although Pell Grants are currently not well-suited for apprenticeship, Pell eligibility criteria could easily be modified to allow apprentices to use pro-rated Pell Grants for the off-job component of their training.

State governments could encourage more apprenticeships with the use of their existing subsidies to community colleges. States commonly reimburse community colleges for some percentage of the cost of a full-time equivalent (FTE) student. Suppose the reimbursement rate were 60 percent of the costs of an FTE, but that much of the actual and accredited learning (say, 70 percent) for an occupational program took place at the work site in an apprenticeship. If the costs of the community college instruction fell to only 40 percent of the normal costs of an FTE, but the state continued the 60 percent subsidy, then colleges could provide the classroom component of apprenticeship at no cost to employers. They could use the remaining 20 percent to sell employers on, and help them organize, apprenticeships.
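
Expressed as shares of the cost of one FTE student, the arithmetic looks like this (a sketch of the example above, nothing more):

# Shares of the normal cost of one full-time equivalent (FTE) student.

state_subsidy = 0.60   # state reimburses 60 percent of normal FTE cost
classroom_cost = 0.40  # classroom costs fall to 40 percent when roughly
                       # 70 percent of learning moves to the work site

surplus = state_subsidy - classroom_cost
print(f"Surplus for selling and organizing apprenticeships: {surplus:.0%} of FTE cost")  # 20%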

The GI Bill already provides housing benefits and wage subsidies for veterans in apprenticeships, but funding levels for college and university expenses are far higher than for apprenticeship. Offering up to half of the GI Bill’s per-recipient college benefit to reimburse employers for the off-job education and training when hiring a veteran into an apprenticeship program could be accomplished by amending the law. However, unless the liberalized uses of Pell Grants and GI Bill benefits are linked with the intermediary incentive campaign to sell and organize apprenticeships, the take-up by employers is likely to be limited.

Still another way of financing the off-job education of apprentices is to link the intermediary incentive program with youth apprenticeships in high schools. Since high school CTE courses, and some college courses within high schools, are already an entitlement, the funds to complement work-based learning in apprenticeships would be readily available.

Policymakers should consider starting such a policy at “career academies”—schools within high schools that have an industry or occupational focus—and at regional career and technical education (CTE) centers. Over 7,000 career academies operate in the United States in fields ranging from health and finance to travel and construction. Career academies and CTE schools already include classroom-related instruction and sometimes work with employers to develop internships. Because a serious apprenticeship involves learning skills at the workplace, at the employer’s expense, these school-based apprenticeship programs could reduce instructional costs relative to a traditional, full-time student. If, for example, a student spent 2.5 days per week (or 50 percent of their time) in a paid apprenticeship, the school should be able to save at least 15-30 percent of the costs of educating a traditional, full-time student. Those are substantial savings, as anyone familiar with school budgets knows. Applying these funds to selling and organizing apprenticeships should allow the career academy or CTE program to stimulate employers to provide apprenticeship slots.
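
Why would a student who is away half the time save the school only 15-30 percent of costs? One plausible reading, and it is my assumption rather than anything stated here, is that much of a school’s per-student cost is fixed, so only the variable (instructional) share scales with seat time:

# Savings when a student spends part of the week at a work site.
# The fixed/variable cost split is an assumption for illustration.

def savings_share(time_away: float, variable_share: float) -> float:
    """Fraction of per-student cost saved: only variable costs scale
    with seat time; fixed costs (buildings, administration) do not."""
    return time_away * variable_share

# If 30-60 percent of costs are variable, a half-time apprentice
# saves the school 15-30 percent of the cost of a full-time student.
for variable_share in (0.30, 0.60):
    print(f"Variable share {variable_share:.0%}: savings {savings_share(0.5, variable_share):.0%}")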

Today, funding for the “academic only” approach to skill development in the United States dwarfs the very limited amounts available to market and support apprenticeship. Yet apprenticeship programs yield far higher and more immediate gains in earnings than do community or career college programs, and cost students and the government far less. Postsecondary education carries costs for students in the form of both tuition and foregone earnings, and sometimes fails to provide students with a useful degree or credential. By contrast, apprentices rarely lose earnings while they learn skills, nor are they forced to take out burdensome student loans. Apprentices, through their connection with employers, work on up-to-date equipment, learn modern business practices, and gain valuable work experience and employability skills.

Expanding access to apprenticeship programs could improve the lives of millions of Americans and help prevent further erosion of the middle class. Apprenticeships widen the pathways to rewarding careers by upgrading occupational skills, employability skills, and traditional academic skills. For hands-on and nontraditional learners, academic coursework completed in the context of an apprenticeship program can increase worker motivation and improve the efficacy of the delivery process. Furthermore, given the effects of these programs on worker productivity and innovation, firms will have an increased incentive to adopt “high road” strategies with respect to their apprenticeship programs. Especially in today’s tight labor market, apprenticeships represent one of the best ways firms can attract and retain skilled workers.

While structural barriers to apprenticeship exist in the United States, Federal investments in marketing and standards development, along with ongoing financial support for the off-job costs of apprenticeship, can overcome these barriers. And as more employers adopt apprenticeship strategies successfully, network effects could well take over, with employers learning from each other about the value of apprenticeship.


1 See Robert Halpern, The Means to Grow Up: Reinventing Apprenticeship as a Developmental Support in Adolescence (Routledge, 2009).

2 Tabulations by the author from the 2016 National Household Education Survey.





October 16, 2018

The Taming of the Few

Unelected Power: The Quest for Legitimacy in Central Banking and the Regulatory State

Paul Tucker

Princeton University Press, 2018, 656 pp., $35


Sir Paul Tucker is an unusual type. He is a consummate bureaucratic insider—a 30-year veteran of the Bank of England and now the chair of a group of ex-central bankers called the Systemic Risk Council—who is concerned as much with high principles of democracy and representative government as he is with policy outcomes. In his new book, Unelected Power: The Quest for Legitimacy in Central Banking and the Regulatory State, he takes up the difficult task of saying when and how it is appropriate for elected officials to delegate authority to agencies insulated from politics. In so doing, he aims to show that modern societies can get beyond a binary choice between technocracy or populism if they manage wisely.

The result is a dense, 600-page tome full of good sense, whose prescriptions holders of unelected power—especially but not only central bankers and others engaged in the management of fiscal and monetary policy—would do well to heed. Tucker’s synthesis of economics, law, political science, and political philosophy is prodigious, and his experience on the front lines of fighting the global financial crisis is imposing. And yet, what is most distinctive about the book is its humility. Tucker’s worry that he and his ilk have become “overmighty citizens” is palpable, and he takes seriously the idea that bureaucrats must ultimately be servants rather than masters. He makes a case for optimism, at least if legislators and unelected officials can properly understand their roles. But that is a big if.

By Tucker’s lights, the standard, expertise-based justification for delegating power misses the point. If specialization were all that is needed, it would be easy enough to create organizational structures in which technical specialists pursue goals supplied by non-specialist decision-makers. Indeed, many of our regulatory structures conform to this model, with political appointees supplying substantive goals that permanent bureaucrats implement. There will inevitably be slippage (agency problems), but all large bureaucratic organizations experience slippage without challenging the basic sinews of representative government.

Rather than expertise, democratic governments’ reliance on independent agencies—which Tucker defines as those purposefully shielded from regular political control—is justified instead by the fact that political actors have difficulties making and keeping credible commitments over the longer haul. Sometimes these difficulties come from the multiplicity of political principals, but not always; even if we think of political actors as having a fixed set of goals, they may find it impossible to pursue long-term priorities given the short-term political benefits of deviating from the agreed wiser path. Delegation of power to actors removed from such short-term pressures can thus be an optimal solution for elected officials, as political economists have lately worked out with great rigor.

Keeping this in mind, Tucker argues that delegation is desirable when: 1) there is a clearly specified and monitorable goal; 2) that goal has strong and consistent support across political parties and over time; 3) there is or could be a commitment problem; 4) an accepted set of policy instruments is confidently expected to be effective; and 5) pursuing the goal does not generally require making big choices about distributional trade-offs or shifting the balance of political power.

These criteria are demanding and perhaps rarely satisfied. Tucker admits that the requirement of a clearly specified goal, in particular, is almost always honored only in the breach—though he does think that the maintenance of a stable currency, when carefully specified, can be a paradigmatic example. Tucker’s point in formulating these criteria isn’t that delegation ought never proceed unless they are perfectly satisfied. He knows that delegation is going to proceed anyway, whatever normative concerns may be at issue. He is offering these considerations to inform prospective delegators’ sense of best practices. These should also be helpful for reformers seeking to attack the least justifiable instances of delegation in our system, and could well be helpful for those in less well-institutionalized polities (Russia or Ukraine, for example) who may one day seek to create architectures for sound structural reform.

As an example of an agency with a problematic delegation, Tucker points to the Federal Housing Finance Agency (FHFA), created in July 2008 to oversee the (soon-to-collapse) government-sponsored enterprises, Fannie Mae and Freddie Mac. That agency, which is led by a single head who can only be removed for cause, and which does not require congressional appropriations, is charged with extending home ownership, minimizing downside risks, and protecting the financial system’s stability. These goals are clearly in tension with each other, and choosing between them has profound distributional consequences. Congress may pretend that delegating power to this agency somehow solves the underlying issues, but it merely relocates them and, in the process, damages the legitimacy of delegations undertaken for more compelling reasons.

Having given a sense of when delegation is warranted, Tucker offers another set of principles to inform how it should be handled. The political authority should not only provide a clear and monitorable goal (or a clear hierarchy of goals), it should also set out a plan to do the monitoring. If and when it decides that the independent agency is failing to carry out its mandate, it should have to publicly and explicitly object. This means that demanding accountability from these normally unaccountable bodies must take the form of highly visible political confrontations. If the purported accountability is actually just a desire to force a deviation from settled commitments, voters will at least have a chance to punish this bad behavior.

In general, Tucker’s principles are meant to ensure that transparency and accountability have their due, even as most policy decisions are made by unelected officials. The independent agency must not hold itself immune from criticism brought by normal citizens, since such hubris is sure to erode trust and thereby destroy the independent agency’s ability to overcome the very commitment problem that justifies its existence in the first place. Some degree of transparency about operating procedures and techniques is therefore necessary—and, to succeed, it can’t just be a lawyerly outpouring of incomprehensible jargon. Central banks and other independent regulators must understand that it is in their own interests to clearly communicate with citizens directly (through their websites, for example), and with the elected representatives who will ultimately decide their fate. Central bankers ought to see testifying before congressional committees as a real opportunity to educate political actors rather than an ordeal to endure at the hands of ignoramuses.

Although legislators in a typical democracy will never operate at the level of experts, there is nevertheless hope that they will “invigilate” the insiders’ work, engendering a level of circumspection often lacking in closed expert networks. Denizens of independent agencies are usefully reminded that they must continually earn the trust of those political actors who have delegated power to them, rather than simply exercising it as a birthright. Tucker calls this “democracy as watchfulness.”

Without dwelling on it, Tucker offers an excellent point about an independent central bank’s place in a system of separated powers. Because the power to induce inflation is essentially a power of (indirect) taxation, allowing the Executive Branch to control the money supply essentially gives the President a unilateral power to tax. Once the struggle to define money is removed from the rough and tumble of congressional politics (where it resided in the late 19th century), it is crucial to separate the control of fiscal and monetary powers for basic Madisonian reasons.

One of Tucker’s most important prescriptions for central bank design seeks to operationalize this principle with a well-articulated “Fiscal Carve-Out,” which prohibits bankers from opting into the distributionally sensitive realm of fiscal policy. Drawing a perfectly clean line between monetary and fiscal activities is impossible, but Tucker insists that it’s still very much worth the effort to create rules covering when and why the bank can buy or lend against various assets, when and how the bank is supposed to seek the counsel of the elected executive or representatives, and how the bank’s losses will be reported and covered. By clearly addressing these questions in relatively calm times, the legislature can keep the bank from becoming a de facto picker of winners and losers between regions, sectors, or specific firms during crises.

If we want a healthy relationship between the national legislature and the independent agencies to which it devolves decision-making authority, Tucker’s book offers a remarkably tangible program for achieving it. From his system designer’s perspective, we can see the shortcomings in our current arrangements and formulate a reform program in response. The people’s representatives and the unelected officials they rely on can work in tandem to achieve what neither populism nor technocracy alone could manage. This is a happy and worthy ideal that too few people work to realize in our current age.

And why is that? What is it that regular central bankers and other denizens of independent agencies value apart from realizing the goals provided by democratic bodies?

A recent literature forcefully argues that their real interest is in building a business-friendly, international neoliberal order that can resist the imperatives of democratic sovereigns if and when they become too threatening to market order. This formulation can admittedly sound conspiratorial and unhinged; “neoliberal” is a word that has been thrown around in deeply careless ways during the past several decades. But much of the best recent work on the neoliberal order traces the history of specific institution-building efforts, and the portrait that emerges is one in which experts acting through international organizations are indeed meant to “transcend” democratic shortcomings.

Notable among these books is Quinn Slobodian’s Globalists (Harvard University Press, 2018), which offers a decades-long account of how leading neoliberal thinkers, including F.A. Hayek and other members of the Austrian School, directly participated in creating an institutional order meant to facilitate global prosperity through international commerce. Far from being market fundamentalists who believed that all that was necessary for freedom was the withdrawal of the state, Slobodian’s Vienna- and Geneva-centered neoliberals determined that the rule of law and the sanctity of contracts needed international protectors in the post-imperial world taking shape after the First World War and Versailles. In this respect, at least as regards practical intentions, they were no different from John Maynard Keynes after the Second World War, when roughly the same kind of challenge arose. They realized their vision first in the League of Nations and later in the various bodies leading up to the European Union and the World Trade Organization, among others.

Slobodian offers this history as someone clearly critical of what the neoliberal order has since become in the 21st century. He describes the post-1919 neoliberal goal as a world “kept safe from mass demands for social justice and redistributive equality by the guardians of the economic constitution.” When he shows how neoliberals’ nostalgia for the Hapsburg Empire and imperialism generally shaped their thinking, he does not mean to flatter. But unlike Nancy MacLean’s controversial and much-criticized Democracy in Chains (Penguin, 2017), Slobodian does not paint his neoliberal subjects as scheming ideologues. Indeed, he is often at pains to show that Hayek, Mises, and others were more practically oriented than their modern-day admirers and detractors alike assume, and that threats to basic global economic order in a populist era, famously termed by Ortega y Gasset “the revolt of the masses,” were not lurid fantasies.

Indeed, Hayek, Mises, Wilhelm Röpke, and others were so successful in framing the terms for the world order precisely because they were often operating as part of the respectable establishment. A four-day colloquium in Paris in 1938, organized by the philosopher Louis Rougier and attended by Hayek, centered on the work of Walter Lippmann, who was then busy denouncing “the fallacy of laissez faire” on his way to painting a portrait of what he called “the Great Society,” characterized by internationally shared, ever-evolving legal institutions—a vision Hayek found attractive. Although our current elites’ fondness for “governance” as something that transcends mere governments and their clumsily coercive techniques is often thought of as vaguely leftist, Slobodian shows how it grew out of the original neoliberals’ sense of cosmopolitan responsibility. Engaging in this style of thinking does not therefore require a conscious decision to oppose democracy, let alone to carry water for commercial interests. All that is required is a belief that the challenges of a globally interconnected world can only be met by global networks of experts pursuing sound diplomacy and a world governed by law.

It is instructive to consider an example of why such ideals, which are so appealing in the abstract, can nevertheless end up sounding hollow and self-serving—and hence why Tucker’s concerns are not idle.

One institution Slobodian does not cover is the Bank for International Settlements (BIS), based in Basel, Switzerland. In its own description, the BIS is “the central bankers’ central bank.” By its nature, then, it is an institution serving the needs of an international elite. For his part, Tucker testifies to the positive impact of his own “Basel experience,” which he says “makes for more open-minded central bankers, by exposing them to the different ideas, practices, problems, and contexts of their peers.” He concedes, though, that thanks to the BIS’s convening power, central bankers come “closer to fitting the description of a transnational elite” than just about any other group.

The BIS does indeed make for a rather easy target for those concerned about cosmopolitan technocrats who put the interests of global commerce ahead of those of their home nations. Adam LeBor’s Tower of Basel: The Shadowy History of the Secret Bank That Runs the World (Public Affairs, 2013) is an at-times breathless critique along these lines, and it must be said that the BIS has often played to type. For one, its initial design came from a collaboration in 1929 between the famously caped Montagu Norman, who was Governor of the Bank of England, and Walter Layton, editor of The Economist. Layton actually withdrew, saying he could find no way to reconcile the bank’s existence with democratic governments’ freedom of action, but Norman and others managed to devise a multilateral treaty, the Hague Agreement of 1930, that created the BIS so that it could operate as a creature of the national central banks themselves, with very little room for any government interference in its operations. In 1930, Pierre Mendès France, later a leader of France’s Fourth Republic, offered lofty encouragement to the BIS: “In the mists of the future, the mystical purpose of a union in financial order . . . under wise and prudent management may become a potent aid for the preservation of world peace.”

LeBor, who goes looking for such sins, finds plenty to hold against the BIS. It was crucial in keeping some commerce flowing through Nazi Germany before and after the onset of war, and it facilitated the Nazi expropriation of various conquered nations’ gold. After the war, it was a key forum for those remaking Europe on business-friendly terms, and later it was a key promoter of the European Monetary Union—both deeds that LeBor regards as plainly nefarious.

More recently, the BIS took up the burden of setting international banking supervisory standards for an increasingly globalized financial system. Its power in this arena is purely informal, as its decisions do not have legal force. But because member states commit to following its standards, it has nevertheless exerted enormous influence over financial regulatory structures throughout the developed world.

Doing so seems to have contributed significantly to the onset of the global financial crisis. In 1988 the BIS drafted its first set of standards (Basel I); a successor framework, first proposed in 2001 and completed in 2004, followed (Basel II). These standards set different capital requirements for loans than for securities, thereby effectively encouraging banks to pile securitized mortgages onto their books. Rules the U.S. Securities and Exchange Commission implemented in an attempt to incorporate the concepts of Basel II drove American investment banks to load up on similar securitized mortgages, creating an unsound environment extremely vulnerable to downturns in the mortgage market. Basel II also allowed banks to design their own capital requirements based on internal risk models (subject to some minimal requirements). This was supposed to make capital requirements more sensitive to particular institutions’ risks, but, not terribly surprisingly, it encouraged banks to underestimate those risks.

And yet, before the crisis, the refrain out of Basel was simply that cutting-edge expertise and consummate professionalism had dictated optimal standards. After the crisis hit, this language came to sound like self-serving misdirection. The experts were largely drawn from the ranks of the powerful (and high-paying) institutions being regulated, and—for all their talk of being farsighted leaders—they had built up a system that served the short-term interests of those institutions at the expense of the rest of society. Humble citizens who knew nothing of financial arcana before the crisis understandably did not rush to console themselves with the idea that the international technocrats had done their best to make globally enriching world commerce hum along. Instead, they felt scandalized, and they turned to their democratically elected representatives for a response.

Tucker’s faith in representative government rests on the belief that these elected officials can ultimately channel their constituents’ rage toward something constructive—but post-crisis experience gives us plenty of reasons to doubt that faith. Rather than “democracy as watchfulness,” they often seem to lurch toward “democracy as revenge.”

This was illustrated in the United States by the “Audit the Fed” groundswell in the years following the Federal Reserve’s ambitious crisis responses. Proponents of an audit generally made it sound as if the Fed had been entirely opaque. This mostly wasn’t true—the Fed has regularly submitted to GAO audits of its basic processes since 1978 and was required to make extensive disclosures about its crisis activities under the Emergency Economic Stabilization Act of 2008—but the charge tapped into heightened anxieties about the Fed’s unaccountability. And the Fed had insisted on withholding information about the identities of the firms that had received its aid. Representative Ron Paul (R-TX) and Senator Bernie Sanders (I-VT) became outspoken champions of an audit, with Paul at one point in 2009 signing up 320 co-sponsors for his Federal Reserve Transparency Act. Eventually, Sanders’s favored version of an audit was incorporated into Section 1102 of the Dodd-Frank Act of 2010, which, among other things, forced the release of loan recipients’ identities.

That all sounds constructive enough, but the “Audit the Fed” crowd was little sated by having gotten an audit of the Fed. From the start, critics of the movement alleged that it was merely a front for an attempt to dismantle the Fed entirely. Paul had been a savage critic of America’s central bank and its fiat currency for decades, and had published a best-selling treatise, End the Fed, in 2009. Maine’s Republican Party had adopted a platform plank supporting an audit “as the first step in Ending the Fed.” And Paul kept pushing an audit bill long after the audit ordered by Dodd-Frank had been published, with his son, Senator Rand Paul (R-KY), continuing the effort after the elder Paul retired from Congress. Given the nature of this reform coalition and its apparently implacable demands, defenders of the Fed understandably saw the movement as incapable of playing the role of good-faith reformer. Instead, they sought to draw attention to its loonier elements, who were out to smash the whole international financial order and were not shy about using the full range of anti-Semitic justifications for doing so. Both sides, then, were locked into a struggle to delegitimize the other—hardly a good environment for reform.

The same dynamic has been apparent in the debates about the role of the European Central Bank (ECB) since the crisis. By the ECB’s own telling, the institution has performed admirably over its twenty years of existence, stabilizing the value of the euro against inflation and deflation alike, including through sovereign bond purchases and the “outright monetary transactions” program announced in September 2012. Given that its mandate makes price stability a clear goal, the institution regards its record as excellent.

But this is both too modest and too forgiving. In fact, as Tucker notes, because of the absence of any fiscal authority in the European Union, the ECB’s “monetary technocrats [are] adrift in the constitutional order of things, precariously perched as existential guarantors. Hence the legal and political dilemmas posed by the ECB’s Sisyphus-like labors to preserve Europe’s monetary union and the wider project it represents.” Assessments of the ECB’s performance in this larger role have generally been rather harsh. A new account of the crisis emphasizes its hesitancy in addressing Europe’s woes and its intransigence toward any debt restructuring by Greece or other struggling member states. Suspicion of the ECB’s supposedly neutral motives is a recurrent theme for populists in Spain, Greece, Italy, and elsewhere. Italy’s current Minister for European Affairs, Paolo Savona, has called the eurozone as managed by the ECB a “German cage.”

Tucker serves up two trilemmas that suggest that the clashes over the Fed and ECB are generated by the very structure of our globalized political economy, rather than being anomalous results of a once-in-a-lifetime perfect storm. The first is what the economist Dani Rodrik calls the “globalization paradox.” Rodrik argues that a state can choose at most two of the following three characteristics: full integration into global markets, national sovereignty, and real democracy. If democratic majorities acting in nation-states insist on exercising their will, they will need to find ways to resist the influence of international capital. An extension of Rodrik’s thinking by Dirk Schoenmaker gives a sense of how the framework bites in finance in particular. As Tucker relates it,


. . . if the world opts for financial integration and financial stability, then democratic nations will not have autonomy over policies on the financial system. Since it is hard to imagine people opting to embrace recurrent financial instability, the apparent choices are (1) to give up financial globalization and thereby regain domestic control, (2) to retain financial integration and relocate democracy to the global plane (the dream of cosmopolitan democrats), and (3) to maintain international financial integration, set financial policy globally, and accept the dilution of democracy!

Framed in this way, democratic legislatures’ yen to rough up their own internationally oriented central banks seems like an almost necessary aspect of the system. So, too, does the choice for recurrent instability that Tucker says is unthinkable. Any decision-maker choosing consciously would indeed reject instability, especially when the memory of the last crisis is fresh. But for a legislature attempting to manage the tensions between the demands of citizens and those of mobile capital, and without any way to settle these questions definitively, regular implicit choices for instability seem unsurprising. Such choices, rather than a meek acceptance of the dictates of an international financial world they cannot really hope to control, may offer the most natural compromises. That hardly means the situation is sustainable. As Ross Douthat described the situation in a recent consideration of Europe’s insurgent populists: “The center is hated, but whether overtly or covertly it finds some ways to hold. The question is how long this situation can last.”

So, what can we realistically expect from central bankers and other technocratic actors empowered by democracies but operating in thoroughly international and sheltered policy realms? How should they conduct themselves so as to best preserve their ability to overcome commitment problems and realize society’s goals?

They must remember, always, whence their power comes. It is easy enough to acknowledge this outwardly. For his part, former Fed Chairman Ben Bernanke often intoned that “Congress is our boss” and insisted that the Fed would always respect the legal requirements put on it by the legislature. Much harder is to adopt an inward attitude of subordination, especially when legislatures seem oblivious to basic facts. That subordination does not require those who wield unelected power to abstain from trying to educate and improve elected officials. But it does mean that they must accept elected officials’ judgments as final and legitimate. Technocrats have no right to save the people from themselves. If there are mistakes, the people’s representatives are the ones, and the only ones, who are entitled to make them.

Accepting representatives’ superior claims to legitimacy ought to be aided by a recognition that, as Aristotle put it, “the guest will judge better of a feast than the cook.” That is so despite the cook’s superior knowledge of technique. Experts who have erred will always be tempted to hold themselves beyond mere mortals’ criticism, insisting that only those initiated into the mysteries of their craft have the context to properly understand the ultimate correctness of their actions. Sometimes this will be right, but not always, and so it can be no kind of trump card in a free society. Accountability must be to the non-expert people and their often-clumsy representatives, or it is not really meaningful.

The humility that makes Tucker’s book so appealing is not based in any kind of thoroughgoing self-doubt. He thinks the brainpower of the central bankers who come through Basel is immense, and that, by and large, they progress toward better policy. Instead—quite appropriately for someone given a knighthood for his service to central banking—his attitude comes from a kind of modern sense of noblesse oblige, heavy on the obligation. Let us hope Tucker can inspire others who wield unelected power to feel and act on this sense as much as he does, titled nobility or no.


Tucker relies especially on Alberto Alesina and Guido Tabellini, “Bureaucrats or Politicians, Part I: A Single Policy Task,” American Economic Review (March 2007), and “Bureaucrats or Politicians, Part II: Multiple Policy Tasks,” Journal of Public Economics (April 2008).

Clearly, there are some particular scenarios in which delegating choices with distributional consequences makes sense. The Base Realignment and Closure Commission, for example, allowed Congress to decide collectively to pare back military installations without members themselves bearing responsibility for particular cuts, thereby overcoming intense parochial interests in retaining local jobs. While reformers frequently propose to emulate the BRAC model in other contexts, it does not generally transfer well, as the top-down, unified structure of the military is seldom replicated.

Slobodian, Kindle loc. 375.

Slobodian, Kindle loc. 1508.

For a useful survey of this term, see Gerry Stoker, “Governance as theory: five propositions,” International Social Science Journal (March 1998), pp. 17-28.

Tucker, p. 398.

LeBor, p. 5.

LeBor, p. 31.

See Bank for International Settlements, “Basel Committee Charter,” updated June 5, 2018.

For a comprehensive account of these regulatory requirements and their shortcomings, see Jasmina Svilenova, “Regulatory Response to the Financial Crisis of 2007-2008,” master’s thesis, Norges Handelshøyskole, Bergen, Norway (Spring 2011), especially section 4.6. For a shorter account, see John Carney, “The SEC Rule That Broke Wall Street,” CNBC, March 21, 2012.

For references and further discussion, see Philip Wallach, To the Edge: Legality, Legitimacy, and the Responses to the 2008 Financial Crisis (Brookings Press, 2015), pp. 188-190.

Tucker, p. 563.

Adam Tooze, Crashed: How a Decade of Financial Crises Changed the World (Viking, 2018); and “The Bank That Nearly Broke Europe,” Prospect (September 2018).

Gideon Rachman, “Italy, democracy and the euro cage,” Financial Times, June 4, 2018.

Dani Rodrik, The Globalization Paradox: Democracy and the Future of the World Economy (W.W. Norton & Company, 2012).

Tucker, p. 401.

Aristotle, Politics, Book III, Part XI.



The post The Taming of the Few appeared first on The American Interest.

Published on October 16, 2018 08:07

The Revenge of Hard Power Politics

Only five years ago, the world still looked like a liberal internationalist’s dream. Although jihadist terror periodically tested the resilience of governments and societies across the globe, open borders and free trade were widely touted as the way forward. In the context of post-Cold War unipolarity, many in academia, think tanks, and the media wanted to believe that the “first universal nation” could shape world politics in its image, and that a Kantian democratic peace hovered just over the horizon. China’s modernization would pave the way for systemic change in Beijing, helping it take its rightful place as a prominent stakeholder in the emerging liberal global order. The opening of Western markets to Chinese exports, coupled with the export of U.S. technology and industrial know-how, was supposed to accomplish this transition, the assumption being that a modernized China with a newly empowered middle class would want to liberalize its domestic system and join, rather than contest, what would become the global status quo.

Europe was declared to be “whole, free, and at peace,” for in the 1990s Boris Yeltsin’s Russia had been all but written off as a geostrategic competitor. Even with the arrival of Vladimir Putin, the delusion that the new liberal world order would serve as an antidote to geostrategic competition persisted past the 2008 Russian-Georgian war. Even the aftermath of the Russian seizure of Crimea in 2014 and the follow-on war in eastern Ukraine failed to bring home to many in academia and the commentariat the fact that the largely unquestioned support for globalism and the belief in the coming of a liberal world order were but byproducts of the post-Cold War denouement and the attendant temporary attenuation of great-power competition.

For close to three decades now, reams of books and scholarly articles, conferences, and media panels have heralded a future in which institutions would ultimately triumph over old cultural constraints. The Realist notion of hard power as directly related to industrial economic strength, geography, natural resources, and population was discounted, if not rejected out of hand, as obsolete, and with it the idea of the strong nation-state as the core building block of a people’s national security and prosperity. It was assumed that such passé ideas as nations situated within defensible borders would over time give way to states that would willingly cede part of their sovereignty to transnational and supranational organizations.

In 1994 NAFTA symbolized the arrival of this new global economy in North America; critics concerned about the consequences for the middle class of the fusing of low- and high-wage labor markets were dismissed as economic nationalists ill-suited to the new liberal free trade era. In Europe, leaders yielded to the temptation to transform the European Community—at its core a treaty-based organization—into a proto-United States of Europe. After the 1992 Maastricht Treaty, Europe’s elites renamed their project the European Union, adopted the euro as its common currency in 1999, and, after the 2007 Lisbon Treaty, began to selectively claim the attributes of a federal state through Brussels’ ever-greater regulatory powers and the Union’s addition of a quasi-President and Foreign Minister. On occasion, scholars even argued that Europe offered a way forward for the United States, betokening a future where social market economies would eventually rule. Likewise, a generation of graduate students in political science was encouraged to focus on the now all-pervasive category of “soft power” as a key variable of global politics. The digital revolution accelerated the inward-looking focus across Western democracies, many of whose societies became increasingly preoccupied with rights over responsibilities.

When wars did erupt, as in the Balkans in the 1990s or in Georgia in 2008, they were often framed as the last gasps of a dying nationalist-imperialist era. Even the shock of 9/11 failed to drive home the reality that the world “out there” was still a perilous place, filled with dangerous actors. Few in the West truly believed that ragtag bands of jihadis could bring down Western states through terrorist violence. These episodes were treated as akin to the common cold—the sort of occasional flare-up one must endure if one is to preserve the interconnected liberal world order.

Most importantly, the West’s victory in the Cold War served as a soothing tonic of ideological certitude, seemingly reaffirming the notion that history was indeed on the side of the globalists. As a 2017 Pew report shows, notwithstanding recent concerns over democratic decline, six in ten countries are democracies—a postwar high, though few would hazard a guess as to the extent to which many nascent democracies are in fact consolidated or even stable. The domestic political upheavals that have rocked the West’s oldest democracies in Europe as well as the United States over the past decade have failed to awaken many leaders to the fact that continued mass immigration and the balkanization of Western nations have undermined national resilience by fragmenting and often paralyzing political processes. For almost three decades, theories about “nation-building” abroad abounded in the United States, while America was simultaneously being deconstructed from within by identity politics coupled with rapid deindustrialization. At the same time, two great powers, China and Russia—one rising, the other faltering—continued to define the world in terms of hard power balancing and zero-sum strategies. For decades, China has pursued aggressive mercantilism, manipulated its currency, and forced U.S. and European companies to yield intellectual property as a precondition for entering its market. Russia in turn has quickly recovered from the Yeltsin-era “time of troubles” by first de facto re-nationalizing its energy sector and then using its abundance of oil and gas as a strategic resource to be weaponized for political gain.

Today the world bears little resemblance to the sanguine picture of the liberal international order to which we grew so accustomed in the decades following the Cold War. A wealthier and more geostrategically assertive China is staking ever-bolder claims to a sphere of influence in Asia and is leveraging its growing wealth to gain influence in Australia, Africa, South America, and, increasingly of late, Europe. Russia’s annexation of Crimea effectively shattered the foundations on which the European Union’s rules-based security system was to be built. Erdoğan’s Turkey is running away from the Atatürk legacy. The Middle East is on fire, with Iran increasingly determined to pursue a course of regional hegemony. Europe is struggling to cope with mass immigration from the Middle East, Africa, and elsewhere, and to preserve what is left of a rapidly shrinking political middle ground, while the European Union talks of a “two-tiered” organization as the only possible path forward. In the Western Balkans, political instability is growing.

The world looks very different than it appeared to many only five years ago. Yet to believe that something abrupt and unexpected has happened is to drink the proverbial Kool-Aid of post-Cold War liberal internationalist certitude. In fact, hard power calculation, geostrategic competition, and mercantilism never went away; rather, they merely remained in the background as power distribution morphed amidst America’s “unipolar moment.” This moment has passed. Hard power considerations, including military imbalances, are again at the center of global politics. It is high time for democracies across the globe to take stock of their positions, and for their governments to speak frankly about what brought them there.

The first step is to stop substituting symptoms for causes. The present era is an inflection point not because of a surge of “illiberalism” in democratic politics, the re-nationalization of European politics, Brexit, or Donald Trump—all of which have been proffered as explanations for the seemingly sudden crumbling of the rules-based international system. Rather, what has driven the ongoing global systemic shift is the impending first genuine reordering of the global distribution of economic power since 1945, especially toward Asia, coupled with the attendant geostrategic assertiveness of China as well as a fundamental disconnect between what drives political discourse in Western democracies today and the power considerations that remain central to international relations. The Huntingtonian civilizational fault lines, in short, are being imbued with the accoutrements of hard power, and that in turn provides a recipe for the major global upheaval now looming over the horizon. The next two decades are likely to witness the first genuine challenge posed by a growing global power, China, to the dominant position of the United States worldwide. It remains an open question whether the United States will in fact be able to avoid the “Thucydides Trap,” whereby the displacement of one great power by another leads to war. State-on-state competition, driven by the shifting balance of economic and military power, has dramatically increased the likelihood of a major confrontation between the United States and China. Such a confrontation, if not contained, will likely draw in other major players, including Russia, and force key states in Europe to act at a time when the continent is unready to contemplate such hard choices.

It is time to admit that at the base of the current Western predicament lies a series of fundamentally misguided assumptions about what matters most in the international system. The so-called liberal international order was never the result of some inevitable process leading to enlightened statecraft; rather, the liberal democratic ascendency was a byproduct of the emergence of the United States as the most powerful nation on earth after the Second World War. America’s status as the world’s greatest democracy for the past 70 years enabled it to imbue the global rulebook with its values and institutions. Notwithstanding talk of “soft power” and rules-based systems, national security and hard power are no less vital today than they were at the moment of that system’s creation.

It is an old Realist paradigm that power, rather than rules and norms, is what nations most aspire to gain, and that the ability to influence the behavior of others rests on the foundations of economic and military strength. The notion that international norms have much staying power without a dominant enforcer willing and able to demand their implementation is a byproduct of decades of U.S. willingness to provide the political, economic, and military glue of the current international system. If the United States is displaced from the center of global power, then—not unlike in past eras of British, French, or Spanish domination—the values of the new hegemon will shape the world we all live in. And it will not be a world in which our liberal democratic assumptions will thrive.

Today’s shifting sands of world politics, especially the progressive fragmentation of the institutional framework that has bound the collective West for close to 70 years, are often portrayed by policy analysts as but a temporary glitch, after which the new normal of a rules-based international order will resume. The reality is quite different. The changing power distribution worldwide and the challenge posed to the dominant position of the United States by the rising economic and military power of China and the geostrategic assertiveness of a Russia intent on reclaiming its great power status are returning the world to the fundamentals of great power politics driven by state-on-state competition. The era of global liberalism is over. It was a great ride while it lasted, but it is time to wake up. Time will tell whether the United States and its allies can adapt quickly enough to this new reality for deterrence to hold.


The post The Revenge of Hard Power Politics appeared first on The American Interest.

Published on October 16, 2018 07:35

October 15, 2018

Trump Can Win Back Central Europe

Until the mid-2000s, Atlanticism was the unquestioned guiding light of foreign policy across post-communist “New Europe.” Not anymore. It is not that the region has become openly anti-American. Rather, its elites and publics have lost interest in the United States, just as interest in Central and Eastern Europe has dissipated in Washington.

With the “pivot to Asia,” the plans for a missile defense shield for Poland and the Czech Republic were shelved. Central Europe was treated largely as an afterthought during Obama’s tenure, even when aspiring authoritarians in Budapest and Warsaw started tightening the screws. At a critical time in Hungary, the ambassadorial post was handed over to Colleen Bell, the former producer of “The Bold and the Beautiful.”

Things have not improved under Donald Trump’s presidency. Criticism of corruption and rule-of-law violations, and the scolding over infringements of gay and minority rights, have been replaced by the opposite extreme: a complete obliviousness to Viktor Orbán’s and Jarosław Kaczyński’s agendas and their implications for U.S. interests. “Had I witnessed that the freedom of any individual or institution was put in danger, I’d be the first one to raise concerns,” says David Cornstein, Trump’s new Ambassador in Budapest and a former jewelry retailer.

Yes, there has been some pressure exerted to increase defense budgets, though the positive results have largely been homegrown and confined to Poland and the Baltic countries. Overall, however, the attempts to keep the transatlantic conversation meaningful feel more and more forced. And with so much else happening in the world, it is not clear why Central Europe, increasingly prosperous and integrated into the EU and NATO, should be at the forefront of American foreign policy.

But it should. The region has been a constant source of geopolitical headaches that repeatedly forced Americans to come to Europe’s rescue. Its relative calm and prosperity are deceptive: Central Europe is a playground for bad actors, from Vladimir Putin and China to budding local authoritarians and oligarchs siphoning away public funds for their own personal benefit and eroding trust in democratic capitalism.

Unlike Washington, Brussels holds significant leverage over the region by virtue of the enormous benefits that EU membership provides to Central European countries. Those benefits include the single market, which has made the region an integral part of German value chains, but also the free movement of people, which took the pressure off local labor markets when the 2008 crisis hit. Today, a million Poles and over 400,000 Romanians live in the UK alone. Finally, there are EU funds, which account for practically all public investment in the region—mostly roads, railways, and other infrastructure.

Not surprisingly, pluralities across Central and Eastern Europe support EU membership. In Poland, for example, 50 percent of people hold a positive view of the EU, while only 12 percent hold a negative one. Neither Fidesz in Hungary nor the Law and Justice party in Poland is seeking to leave the bloc.

Still, the EU has been terrible at using its leverage effectively, as the standoffs over the rule of law in Poland and Hungary illustrate. Instead of dissuading Orbán and Kaczyński from authoritarian practices, the bloc’s conflation of authoritarianism, immigration issues, and the region’s social conservatism—most recently illustrated by Judith Sargentini’s report on Hungary—has pitted Central European countries against Brussels in a bitter culture war.

Ties to the United States, in contrast, are few and esoteric. The NATO guarantees still matter, of course. Those largely explain why “New Europe” was so eager to assist America during the Iraq War. As Radek Sikorski, Poland’s former foreign minister, put it: “Poland did not send a brigade to fight in the ill-conceived war in Iraq out of fear of weapons of mass destruction there. We did not send another brigade to Afghanistan in response to the September 11 attacks because we feared that the Taliban will come to Warsaw and enslave our girls. […] We did all of that because successive Polish leaders are invested in the U.S. security guarantee.”

Yet NATO’s security guarantees carry much greater weight in countries that see themselves threatened by Russia—Poland and the Baltics—than elsewhere in the region. Most Hungarians, Slovaks, or Czechs simply do not see Russia as a threat. In a 2017 Pew poll, 56 percent of Bulgarians and 52 percent of Romanians said that a strong Russia was needed to “balance influence of the West.” China is present in the region as well, through its Belt and Road Initiative, innocuous-looking investment projects, and active political outreach.

Wars in Afghanistan and Iraq, as well as the stalled, strategy-less intervention in Libya, have led many to question the judgment of successive U.S. administrations. More damagingly, the 1999 bombing campaign against Serbia, together with the support extended to Kosovo’s independence, has become ingrained in the popular imagination as a treacherous attack on Central Europe’s Slavic “brothers” in the Balkans.

In socially conservative Central European societies, the support lent by the U.S. government to progressive causes has bought little goodwill. Even in the tolerant Czech Republic, the support extended in 2011 by U.S. Ambassador Norman Eisen to the Prague Pride march prompted unexpectedly strong pushback, including from then-foreign minister Karel Schwarzenberg, an Atlanticist with unimpeachable credentials.

The narrative of local extremists and the Kremlin’s propaganda machinery is that Central Europe has to choose between decadent progressive values pushed by the West—gay marriage, gender-neutral pronouns, and excessive political correctness—and the traditional, Christian way of life, associated increasingly with Russia. That is, of course, nonsense. The point of a liberal society is to allow a pluralism of values and lifestyles to co-exist within one society, not to stifle any of them. And Russia, a country with a shocking rate of 480 abortions per 1000 live births—higher than any other European country or the United States—is hardly the epitome of Christian values.

Yet, thanks to progressive overreach, Russian propaganda, and fears of immigration, that narrative has gained traction, distracting electorates from the hard challenges posed by the region’s weak institutions. Besides patronage, corruption, and the presence of oligarchs in politics (which are endemic throughout the post-Soviet space), authoritarianism has become a problem in Poland and Hungary. There, one-party governments have entrenched themselves, dismantled checks and balances, and trampled on civil society and independent media.

The United States cannot turn a blind eye to these developments. Former Assistant Secretary of State Victoria Nuland was fully justified in her warning four years ago against the “twin cancers of democratic backsliding and corruption” in the region, which create “wormholes that undermine their nations’ security.”

The Obama Administration had good reasons to put a number of Hungarian officials, including the head of the country’s tax administration, NAV, on a U.S. visa ban list in 2014 after revelations of large-scale tax fraud in the food industry (which also affected a U.S. food-processing company, Bunge). The case reverberated throughout the country and inflicted damage on Fidesz—more so than any of the subsequent ideological attacks on Orbán’s government have managed to do.

It would be a mistake for the U.S. government to pussyfoot around similar cases. The challenge is to draw a line between the core issues of rule of law, corruption, and authoritarianism, and the broader progressive agenda, or questions of social tolerance and openness. Unlike its counterparts in the EU, the Trump Administration holds the conservative bona fides needed to bring Central European countries back into the Western fold.

Endemic corruption aside, it is simply not acceptable for Orbán to chase the leading American university in the region, the Central European University (CEU), out of the country on bogus legalistic grounds. Whether or not one agrees with George Soros’ politics, the CEU has educated a generation of pro-Western and pro-American leaders and scholars who are making a difference across the entire post-Soviet space.

Civil society and independent journalism, which have come under pressure from the Polish and Hungarian governments, need support too. There, the risk is that any U.S. funding to independent media will be seen as explicitly favoring the political opposition, partly because government-friendly outlets and NGOs, sustained by the largesse of the government and government-friendly oligarchs, do not face the same financial constraints as independent ones. The solution is to shift the locus of funding toward apolitical themes: local reporting, healthcare, education, and social mobility in underdeveloped regions. If a homegrown push for political change in Poland and Hungary is going to come from somewhere, it is not going to be driven by appeals to abstract values but rather by the fact that the condition of large parts of the two countries, as well as the quality of government-provided services, does not correspond to rising levels of economic prosperity.

When it comes to security, whether the countries in question meet the 2-percent spending target (Poland and the Baltics already do or come close) is far less important than whether they remain well-governed, reliable, pluralistic democracies. NATO is a club of likeminded countries, and as such it should enforce basic standards of democracy and rule of law. The Article 5 guarantees could be withheld, for example, from countries that slide toward authoritarianism, like Turkey and, increasingly, Hungary and Poland.

But, unlike during Obama’s tenure, there will have to be carrots as well as sticks. Why not send Vice President Mike Pence on a tour to celebrate the 30th anniversary of the fall of communism? Central European audiences, particularly those outside cosmopolitan and liberal capital cities, are bound to react well. Politicians, including Central European ones, are vain creatures. In countries whose relations with the EU’s core are under strain, they have been eager for any kind of international engagement, whether it comes from the West, Turkey, or Russia.

True, President Trump himself might not be able to find these Central European countries on a map. Yet the U.S. government simply cannot afford to sit the next two (or six) years out without bidding farewell to the remnants of its influence in the region.


The post Trump Can Win Back Central Europe appeared first on The American Interest.

Published on October 15, 2018 12:11

Peter L. Berger's Blog

Peter L. Berger
Peter L. Berger isn't a Goodreads Author (yet), but they do have a blog, so here are some recent posts imported from their feed.
Follow Peter L. Berger's blog with rss.