Peter L. Berger's Blog, page 64

November 26, 2018

The Emerging R&D Landscape: A Tale of Scale and Saturation

Over the past 70 years, Americans from all walks of society have deeply valued scientific progress. Reflecting those values in their work, government and corporate managers have invested heavily in talent and infrastructure, demonstrating the nation’s commitment to science and technology (S&T) as an essential driver of a strong economy and a safe and healthy nation. Even as the United States benefitted from the brain drains of other societies and occasionally cooperated on selective technologies with close allies, advancing science and technology was understood as essentially a national undertaking. And from the beginning of World War II until the end of the Cold War, Americans prided themselves on U.S. superiority in both the nation’s S&T institutional architecture and the output of that architecture.

Through the years, however, the context in which we pursue scientific discovery and develop scientific capital has changed dramatically. Most observers are aware that U.S. government efforts in S&T appear to be less dynamic than in decades past, when innovation in military and space technology often trickled down into the general economy. Instead, private industry now leads the way in most cutting-edge sectors, leading many to worry that the public-private synergies developed to serve national policy could decay—including when it comes to controlling key military and intelligence technologies. 

This story is true as far as it goes, but it may be the wrong story. The story that arguably matters more but is far less often told begins with the observation that we now live in a world saturated with trained scientists and engineers who have essentially open access to the tools and infrastructure of science. Science and its technological applications, understood as a single linked process, have been globalized like so much else. As a consequence, we often find ourselves forced back onto our heels defending a global leadership position and trying to sustain or develop that leadership across every domain of technology, both established and emerging. 

We are playing defense because many nations are now aggressively copying our approach to R&D. And many can now afford to develop national strategies, build partnership and education pipelines at home and abroad, invest heavily, and establish technologically advanced military-industrial complexes. At least 43 nations across six continents currently have a strategy for increasing internal scientific innovation and international scientific engagement, and the African Science Academy Development Initiative (ASADI) lists an additional 11 national science academies with strategies across the African continent. Many Asian countries have increased their support of science and technology development and technical education, as seen in their rising shares of tertiary degrees awarded and original research published. Unsurprisingly, the U.S. share has decreased: The United States attracted 28 percent of all globally mobile students in 2001 but only 19 percent in 2012. Countries like China are also encouraging their U.S.-trained scientists to return home to build internal resources. Asia has led the global growth in R&D since 2003, with China expanding most rapidly and accounting for more than a third of R&D growth and more than 20 percent of global R&D expenditures. This growth across Asia is founded not only on strong public support for R&D but also on large private spending by domestic and foreign industry in the region. 

At the least, this means that the institutional edifice of U.S. superiority in S&T that emerged out of World War II and was codified, in a sense, by Dr. Vannevar Bush’s seminal and protean report, “Science, the Endless Frontier,” can no longer guarantee U.S. leadership in this critical domain. The genius of Bush’s vision rested in an architecture of institutional coherence that linked government, industry, and university-based science in a flexible but mutually reinforcing discovery and application system. Scholars have called it the “pipeline” system, among other things, and have shown how it adapted over time out of necessity, coming more to resemble a “connected” system. 

In any event, it worked well, driving decade after decade of discovery and feeding into real-world results that improved the lives of every citizen. During this 70+ year experiment, we saw basic research mature into the applications and engineering that allowed an American to walk on the moon, that extended the life expectancy of the average American by more than a decade, that eradicated polio, and that ultimately enabled the development of cell phones, the Global Positioning System (GPS), and the internet, helping us to communicate, navigate, and understand the world in new ways.

In a real sense, however, we have likely become victims of our own success. We have done so well with the model we built in the mid-20th century that we usually fail to acknowledge how much the foundation on which it was built has changed, and now we fail to see how much it needs to change—from a more or less exclusively national enterprise into one that weaves in and out of a larger S&T universe. Research and development (R&D) efforts have followed the same model for decades, and today we remain focused on adapting solutions to yesterday’s problems in a world that has fundamentally changed. As the dissemination and understanding of information continue to grow through open data and information initiatives, as more sensors and satellites blanket the environment, and as the internet expands its reach, it will become increasingly difficult to protect the U.S. technical advantage in applied research, or for any nation to develop new and truly unique advantages in emerging areas. We are no longer living in a postwar period in which the competition has been decimated and everyone else remains poor and poorly educated; rather, we exist within a thriving global science community, and we must learn how to leverage this reality to our benefit, not cower in fear of a changing future.

One way to describe what has changed is to focus on scale and saturation. Since 1945, the U.S. population has more than doubled and the global population has tripled. Global investment in R&D has gone from amounts counted in millions of dollars to more than $1.77 trillion, with well over 70 percent of that coming from outside the United States, and with 75 percent of U.S. funding coming from non-Federal sources. Trends in the United States show industrial and philanthropic funding of science on the rise and Federal funding of science flattening for the foreseeable future. Ironically enough, most state-supported universities have shifted focus from teaching to competing for federally funded research, and, with state resources declining and tuitions skyrocketing, Federal grants support so many students that these universities are now more dependent on Federal than on state funding. 

Meanwhile, the scientific knowledge being produced globally has by some counts doubled every nine years in the post-World War II era. Bornmann and Mutz put the number of cited publications documented in the Web of Science at 1,859,648 in 2012 alone. This is only a portion of global publications, and with this exponential growth of scientific knowledge comes a new set of challenges in effectively turning that knowledge into capital. The trajectory is stable and the growth rate is steady, which means the absolute volume of new knowledge grows exponentially. So our structures for understanding what our investments are producing, what new discoveries are being made, and what breakthroughs are ready to be harvested for security, health, and the public good have become inadequate. Communicating new discoveries to the full scientific and engineering community through publications and conferences designed for leveraging that knowledge cannot scale indefinitely. Transitioning the knowledge created out of basic science into technology that can affect human health, security, and wellbeing requires new structures and institutions that can work at the dramatically larger scale and speed of this new globalized science environment. 
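To put the nine-year doubling figure cited above in concrete terms, here is a rough back-of-the-envelope reading (the doubling time is the one reported in the text; the annual rate is simply what it implies under an assumed constant growth rate r):

\[(1+r)^{9} = 2 \quad\Rightarrow\quad r = 2^{1/9} - 1 \approx 0.08\]

That is, roughly 8 percent more publications each year, which compounds to about a hundredfold increase over 60 years.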

To give an idea of the scale of the challenge, note that scientific capital in the form of usable concepts has increased tremendously as measured by patents, with annual patents granted growing from about 50,000 in 1976 to more than 300,000 in 2014. A recent publication by Ahmadpoor and Jones shows a significant connection between patents and the scientific literature. When the researchers assessed 4.8 million U.S. patents and 32 million research articles for the minimum citation distance between the patents and prior scientific publications, they found that 80 percent of scientific publications link forward to a future patent and more than 60 percent of patents link back to prior research. One new question raised by this analysis is whether the efficacy of these processes continues to scale as the information and knowledge created grow exponentially. Likely, it does not.

Science Policy

To whom would U.S. leaders turn for an answer to such a question? Most Americans are unaware of it, but the U.S. government not only employs a huge number of scientists, mathematicians, and engineers, but it also has developed over the years a fairly ornate set of advisory bodies to assist leaders in both the Executive and Legislative branches of government. 

Whereas in 1945 there were extremely limited sources for trusted science advice to leadership, we now have more than 215 Federal Advisory Committee Act (FACA) boards alone, tagged as Scientific Technology Program Advisory Boards, across the Federal government. A coordination function envisioned already in 1945 by Vannevar Bush now resides in the Office of Science and Technology Policy (OSTP), created in its current form in 1976 to advise the President and the Executive Branch. This broad-based advisory process goes on regardless of the sitting President’s comprehension of, or attitude toward, science and expertise. But other problems exist all the same.

The policy focus of OSTP is largely separate from the budget and resource prioritization functions that occur in each Executive Branch agency policy office. Somewhere, somehow, a balance needs to be struck between the pressures on agencies applied by Congress and the very real budget constraints under which they labor, on the one hand, and the need to coordinate Federal S&T strategy in accordance with presidential intent and priorities, on the other. Without direct influence on either funding or the independent strategy offices in each Federal agency, OSTP has uneven influence in coordinating a national research agenda. 

Additionally, Congress has had no official source of scientific and technical advice since the Office of Technology Assessment was abolished in 1995, during the Clinton Administration. So Congress tends to rely instead on lobbyists and local connections for trusted advice—a reliance that may not actually provide the depth and context needed for our Legislative Branch to reach fully informed decisions. Finally, we have more than 100 national and defense laboratories, federally funded research and development centers (FFRDCs), and university-affiliated research centers (UARCs) representing their own organizational priorities and seeking funding to sustain their livelihoods. This produces hundreds of sources of “trusted” advice, often competing and sometimes in conflict. So the balance has shifted from too little advice available to too much, with a significant portion of it conflicted and some of it compromised by special interests. New concepts for how to create objective, trusted, and non-partisan input in this environment are scarce. 

The result is that the expansion of our advisory boards and sources of “trusted” advice for science has created a situation in which government leaders must determine who among the hundreds of competing sources of input is most trusted, a situation that in effect takes us back almost to where we started 70 years ago when we had no structures for providing sound advice. This challenge, too, calls for us to rethink our strategies for sustaining research, translating that research into solutions for our challenges and economic competitiveness, and fostering a pipeline of talent needed for future economic security. 

Inertia

Today we find ourselves engaged in many conversations about these challenges, but the solutions these conversations typically produce center on two things: protecting the institutions and successful approaches of the past, and stimulating an entrepreneurial culture. Both seem necessary but insufficient. 

The reason is that the entire landscape of research institutions and funding dependencies has shifted dramatically since 1945, and especially since about 1990. The central challenge of remaining competitive in a world saturated with technical knowledge, a broad diversity of funding sources, and widely accessible tools and infrastructure is not primarily a lack of sustained support for research in academic institutions; nor is it a lack of worthy small projects and ideas. These are, of course, important functions to sustain. It can be a good thing that researchers are expected to seek support, drafting hundreds of proposals for small amounts of funding. This drives a competition for ideas, and can result in solutions to smaller, more incremental, fundable, and manageable problems. It’s also important because the diversification of both performing institutions and funding sources gives us new opportunities and will of necessity often require diversification in our approach beyond the standard Federal funding model, which remains nation-focused. 

But the real challenge is how to perforate our old national model in such a way as to be able to monitor and use scientific-technical innovation resources on a global scale, yet do so without relinquishing control over technologies critical to national security. The old problem, back in 1945, was how to avoid having Federal money drive out or marginalize efforts from local government, foundations, and private donors. The problem now is how to make do with relatively less Federal money in a resource mix as a whole that spills across national borders like water spills across the squares and shapes of a parquet floor. 

What to Do?

So what should we do? In broad terms, the emphasis of policy going forward should rest in five basic themes: diversifying the focus on sources of funding; empowering the use of science to solve problems at every level; reconsidering what science advice should be and how it should be organized; leveraging modern tools to keep track of and to understand emerging knowledge so as to facilitate its application; and creating a sustainable commitment to Federal funding of basic science.

Diversify the focus on sources of funding. About 75 percent of U.S. R&D funding is non-Federal, yet the primary focus of most policy discussion is on the 25 percent that is Federal. On one level this doesn’t make sense, but on another level it does, because the government can influence its own institutions far more readily than it can those in private hands. There have been attempts to address this shift in the epicenter of funding, but progress has been difficult. The Department of Defense leaned forward in an effort to innovate its way into a better relationship with Silicon Valley as a path to a more innovative military—the Defense Innovation Unit Experimental (DIUx)—but the effort got off to a rocky start. The incentives across these communities will need to be better aligned to make this a functional new business model going forward.

But beyond focusing on Federal funding, we also actively privilege the work it produces. We stress the importance of independence and external review for sustaining quality, rigor, and reproducibility in Federally supported science, but as a nation we do not enable access to peer review for non-Federally funded work, or pioneer alternate assessment pathways that could lift trust in the results of the broader 75 percent. How do we elevate the results of the majority of funding, and make the rigor and independence of industrially funded work transparent while protecting intellectual property rights? How can we help philanthropic and peer-to-peer funders access approaches that allow all funders to leverage analytics and understand what already exists, so as to highlight unique new discoveries and to link existing knowledge to pathways for transition and harvesting? 

Empower the use of science to solve problems at every level. There is a vigorous discussion across America at the moment about the value of science, and the trend is clear: Increasingly, the average citizen does not trust science or expertise in general. Some of this has to do with a flattening of social authority in the culture in nearly every respect. But much of the lack of trust may be associated with a general disillusionment with the Federal government as the sole source of solutions. 

Additionally, with the adoption of test-focused curricula in the K-12 classroom, increased emphasis on reading and math has decreased students’ broader exposure to science and experiential learning. By making science the purview of the highly educated and elite, the nation is reducing the opportunities for the application of scientific solutions to local problems. Instead we should enable and facilitate effective, high-quality approaches by empowering people at every level—individual, local, state, and global—with all variations of education to leverage science to solve their own problems. 

We can work to develop approaches that reject the exclusivity of past processes that drive non-scientists into situations of blind trust or distrust, while ensuring that rigor and quality are maintained. And in the process we can improve our ability to harness the entire American R&D investment for true competitiveness in a new world saturated in knowledge, funding, and S&T infrastructure. Peer-to-peer funding sites are one interesting opportunity that might evolve into “match-making” services between problems and solutions, allowing communities to identify affordable approaches to solving their challenges.

Reconsider what trusted advice looks like. In a world where we have hundreds of “trusted” Federal scientific advisory boards, federally funded research and development centers (FFRDCs), university-affiliated research centers (UARCs), national laboratories, NIH laboratories, Defense Department laboratories, and more, all vying for a limited Federal funding pot, how do decision-makers know whom they can really trust? Combine this challenge of scale in our advisory system with the challenge facing OSTP in exerting specific influence across the expanse of Executive Branch agencies, and we have a problem.

Perhaps it is time to consider the formation of a team within OSTP that is focused on analysis, not budget—a group with deep access to data, both open-source material and government proposals and grants, whose members are trained in operations research, possess true analytic skills, and have no stake in the budget for any specific technical area. This would be a group that could look at the pros and cons of emerging scientific areas and articulate both the benefits and the challenges that will inevitably follow discovery.

Leverage modern tools to understand current knowledge and to translate that knowledge into solutions. In a world where millions of articles are published annually, can we believe that the full results of our support will ever be understood simply by scientists reading journals and attending conferences? Currently, government program managers have limited access to external data sources and accompanying analytics to ensure an objective view of the state of science within their fields. We stand up panels based on the expertise of individuals without arming them with the very tools scientific funding has developed over the past decade: tools that can objectively query and question the uniqueness of the projects funded. These resources are less accessible to philanthropic and peer-to-peer funders, and they lack external transparency and acceptance when used by industrial sources. 

There are commercial tools available, along with many bespoke analytic products produced internally by organizations across industry and government, that allow an analyst to digest, parse, search, and analyze relationships, similarities, and differences among millions of scientific journal articles, patents, and funding documents for public and private companies, helping to navigate the full array of “innovation” information and find the highest-value opportunities. An analyst might search across the full array of data and find imaging technology of value to Defense that appears only in colonoscopy research, a connection that would never be noticed if domain expertise were the sole foundation for decisions. But data is the key: Currently the analysis is only as good as the data you can access, and data is expensive and protected. Large publishing houses make it difficult to afford access to the full set of publications, and patent data is published without any regard to machine readability. The government could provide significant benefit to the public by finding ways to make data and basic analytic tools available to organizations that want to bring science to bear on our national and global challenges.

Sustain commitment to public funding of science. If the past has taught us anything, it is that Federal support for basic science has tremendous value. We have created generations of scientists and engineers who want to make the world a better place, and they have done so time and time again. We know, too, of times when we have believed a challenge to have been solved only for it to reemerge: Antibiotic resistance is an excellent example; we declared victory, but nature adapted, and now we are in a struggle mirroring that of the early 1900s. Funding for basic research must therefore persist at stable and hence predictable levels regardless of the success or failure of specific applications. 

However, when 75 percent of the funding for science comes from outside government, perhaps we need a broad shift in our concept of the government’s role. Currently, the U.S. government tries to cover the entire landscape of scientific fields, from basic math to cures for cancer. It makes more sense for the Federal agenda for science to focus on areas where industry and foundations are not supporting scientific development, areas where there is no dual-use potential, and areas where the nation cannot depend on industry and philanthropy—such as critical vaccine development or weapons systems. By better focusing the 25 percent of the scientific budget that the Federal government controls, we can advance these areas faster and leverage innovation from outside government better than we do today. 

Alas, we have more questions about the way forward than we have answers. But good questions are more important than, and prior to, good answers. We have ideas and we have suggestions, but we are concerned about the lack of robust questioning in science policy today. We must rise above a protectionist stance toward our own history and ensure that national leadership understands that science policy is a deadly serious and important domain within national policy, including our national security policy. There are few signs that either major party really gets that. That’s a problem that may spawn many questions. We had better find some good answers before it’s too late.



November 23, 2018

Estonia and the Kindness of Strangers

In international politics size and location matter above all else. On both counts Estonia, a northern European country on the Baltic Sea, suffers from severe disadvantages. It is tiny, with a population of just over 1.3 million; and it has a giant neighbor that is ill-disposed to it: Russia. Peoples such as the Estonians seldom manage to achieve, let alone maintain, sovereign independence. Their more powerful neighbors usually gobble them up. What explains the Estonian exception to this rule?

In his play A Streetcar Named Desire, the playwright Tennessee Williams has the character Blanche DuBois say that she has “always depended on the kindness of strangers.” In geopolitics the weak are at the mercy of the powerful, and kindness is rare. To secure and maintain independence, those in the position of the Estonians have had to depend, rather, on the mistakes, foibles, distractions—and occasionally on the good will—of stronger neighbors. Since the powerful of the world have usually acted out of self-interest, in order to avoid disappearing from the stage of history the weak must exploit these self-interested actions for their own particular purposes. Estonia: A Modern History, a clearly written and very useful overview of the last eight centuries by Neil Taylor, a British travel writer married to an Estonian who spends part of each year in Tallinn, the country’s capital, explains how a small Baltic people has managed to do precisely this.

In keeping with what has been normal in geopolitics, for most of those eight centuries no independent Estonia existed. The Estonian people were largely serfs or, later, peasants, living and working on land owned by Germans. While what is now independent Estonia belonged to Sweden in the seventeenth century, and to the Russian Empire from the eighteenth century through 1918, Germans exercised continuous local economic and political domination.

Stirrings of nationalist sentiment among Estonians began in the nineteenth century, as they did throughout Europe in the wake of the French Revolution. Still, unlike the Italians, the Germans, and the Poles, the Estonians did not seem plausible candidates to achieve political independence, and would not have been able to do so but for one of the peculiar consequences of World War I: both the major powers with an interest in and the capacity for controlling Estonia lost that war. The new Bolshevik regime in Russia submitted to German terms with the Treaty of Brest-Litovsk on March 3, 1918, and left the war. Eight months later, on November 11, 1918, Germany itself surrendered to the coalition of Great Britain, France, and the United States. Taking advantage of the unusual weakness of their giant neighbors, the Estonians declared independence in 1918 and managed, at the end of that year and the beginning of the next, to withstand Soviet efforts to reestablish control over them, with some timely help from the British Navy, whose undeclared mission was to oppose the Bolsheviks rather than to support the Estonians.

During the interwar period Estonia—like the other former imperial possessions in central and eastern Europe that became independent after 1918, including the other numerically small Baltic peoples to their south, the Latvians and the Lithuanians—governed itself, for part of that time in democratic fashion. On August 23, 1939, however, the Nazi and Soviet regimes signed a pact partitioning between themselves the territory stretching from Germany to the Soviet Union’s western border. In June 1940 Soviet troops entered Estonia, and in July it was annexed to the Soviet Union. Hitler attacked his erstwhile Soviet ally on June 22, 1941, and the Germans occupied Estonia until August 1944, when the Red Army drove them out and once again imposed Soviet control. It was to last for 46 long years.

The Soviet authorities not only extinguished Estonian independence, they sought to destroy all symbols and memories of and hopes for it. Moscow tried to impose the Russian language and moved several hundred thousand ethnic Russians into what became, officially, a union republic of the U.S.S.R. Western countries refused to recognize the incorporation of Estonia, Latvia, and Lithuania into the Soviet Union, and an American Congressional Resolution designated an annual “Captive Nations Week” to call attention to the plight of the Balts and other nations submerged in the Soviet empire. While it undoubtedly helped the morale of the Baltic peoples, however, this custom did nothing to loosen the Soviet grip on them. During the Soviet period, Estonia maintained its sense of national distinctiveness through cultural events such as song festivals. In 1965 limited ferry service across the Gulf of Finland began between Tallinn and Helsinki, providing a small opening to the Western world, and eventually Finnish television became available to many Estonians (the two languages are similar). It was, however, another great upheaval, which the Estonians did nothing to trigger but that they put to good use, that paved the way for the recovery of their independence.

No statue of Mikhail Gorbachev stands in Estonia but he deserves one, for he was the architect—albeit unintentionally—of that recovery. Beginning in 1987, his reforms loosened the restraints on civic and political life throughout the Soviet Union.  The Estonians were quick to take advantage of the expanded freedom that became available and worked assiduously and skillfully to expand the boundaries of what the authorities in Moscow, increasingly preoccupied with events in Russia itself, would permit. Estonia initiated, for example, market-based economic activity of the kind traditionally suppressed by the Communist Party. The collapse of communist rule in central and eastern Europe in 1989 made full independence seem feasible, and by 1991 Estonia was functioning as a sovereign state in all but name. In a freely-conducted referendum in March of that year, the Estonians voted overwhelmingly for independence.

The attempted coup against Gorbachev on August 19, 1991 was a dangerous moment for Estonia. Had it succeeded, Taylor writes, “firm Moscow control would have been reimposed” in the Baltic region. Thanks to the resistance led by Boris Yeltsin, however, the coup failed. Yeltsin pushed Gorbachev aside and dissolved the Soviet Union, and Estonia officially regained its independence. For the second time in seventy years the Estonians had slipped through a fortuitous crack created by a great European earthquake.

If achieving independence depended on the actions of others, the Estonians themselves seized the opportunity they were offered to build a solidly democratic political system and a flourishing economy. They have regularly conducted free and fair elections and established institutions and policies that have produced impressive economic growth. The country has put itself at the forefront of cyber-technology: Estonians created the software for Skype. To hedge against renewed threats to its independence by integrating itself into the global community, it has joined as many international organizations as possible, becoming a member of the Western military alliance, NATO, and of the European Union, in 2004. In 2011 it adopted the common European currency, the euro.

Twenty-seven years after the collapse of the Soviet Union, therefore, everything has changed for Estonia—except the two things that in geopolitics matter most: demography and geography. It is still tiny by the standards of sovereign states and still has a border with a much larger neighbor that maintains an unfriendly attitude toward it. Russian President Vladimir Putin has called the dissolution of the Soviet Union the greatest geopolitical tragedy of the twentieth century. He has invaded two other former Soviet republics-turned-independent-countries: Georgia in 2008 and Ukraine in 2014. His regime launched cyberattacks on Estonia in 2007 and he has boasted that his troops “could not only be in Kyiv (the Ukrainian capital) in two days, but in Riga, Vilnius (the capitals of Latvia and Lithuania respectively) and Tallinn too.” A 2016 report by the Rand Corporation lent credibility to that assertion by estimating that a Russian invasion force could reach Tallinn in 60 hours.

Estonia has, in theory, a guarantee against Russian aggression in the form of its membership in NATO. A NATO guarantee means in practice the promise of military protection by the United States; but few Americans could find Estonia on a map or know that their country has made a commitment to defend it against nuclear-armed Russia. When the Clinton Administration decided to expand the alliance eastward in the 1990s, it asserted that this would entail no additional expenditure or risk for the United States. Moreover, the current American President has expressed skepticism about the country’s NATO commitments.

None of this means that Estonia is destined to lose its independence again, as it did in 1940. It does mean, however, that the country cannot escape the fate of small, vulnerable peoples everywhere: no matter how brave, clever, and resourceful the Estonians are, their future depends ultimately on what not-necessarily-kind strangers decide to do—or refrain from doing.



The Missing Link Between Evangelicals and Trump

Everyone knows that 81 percent of self-identified white evangelicals voted for Donald Trump in the November 2016 presidential election. For almost two years, pundits, historians and political scientists have been working toward an explanation of that most surprising of electoral statistics. But we’re no closer to an answer, and, as we move out of the midterms and into the long presidential campaign season to come, the religious vote remains as important as ever. But what if that widely circulating statistic is actually misleading?

The problem is in the polling. Polling results during the 2016 election indicate a gross conflation between conservative evangelicals and fundamentalists, and another group that might best be described as “conservative folk Christians.” In a nation of high religiosity, the social desirability of fitting into at least one religious category can be an influential factor. Hence, when exit pollsters ask for religious affiliation, evangelicals get lumped in with all the miscellaneous varieties and flavors of American Christianity. On the one hand, this conflation explains the “white evangelicals voted overwhelmingly for Trump” narrative, which has made many rational people wonder how on earth so many self-identifying moral conservatives could support such an unlikely representative. On the other hand, this approach among pollsters stands in stark contrast to the experiences of many progressive evangelicals who have vigorously opposed Trump. Make no mistake, plenty of evangelicals voted for Trump—but their numbers are nowhere near the margins we’ve been led to believe.

There are striking differences between evangelicals and those who might best be described as “folk Christians.” While evangelicals are known for their “Biblicism,” according to their foremost historian David Bebbington, folk Christians are almost always biblically illiterate, and have very little structured religious background beyond understanding a religious ethos as a moral norm. Across and beyond their most familiar denominations, evangelicals have constructed a highly distinctive religious discourse. This discourse is easily parodied, which perhaps explains why Ned Flanders became America’s best-known evangelical. It is also easily appropriated, which is why folk Christians can know which evangelical behaviors to perform when they choose to do so. For these voters, self-identifying as “Christian” is about obtaining membership in a “tribe” that marks them out as “good people,” and they continue to participate in this tribe as long as it is socially advantageous to do so.

We have known for some time that Americans over-report their religious behavior. While Gallup polls have consistently found that around 40 percent claim to regularly attend religious services, studies have shown the actual rate to be nearer to 22 percent. This is precisely what we should expect to observe in a nation with high religiosity, or perhaps better put, a nation in which religiosity has a high social status in many places—a social desirability bias to be perceived as more religious than the average person. It logically follows that if Americans are over-reporting their religious behavior (such as church attendance and belonging to a religious community), they are just as likely to over-report their religious beliefs and, as a consequence for the aforementioned pollsters, their affiliation.

The problem is further compounded by the way that the term “evangelical” is largely used as a catch-all device. Identifiers such as “evangelical,” “born-again,” “Bible-believing,” and “nondenominational” all tend to be used interchangeably, or at least sloppily. Compare the definition of “evangelical” used by two major American religious polls:



The Pew Religious Landscape Study lists under its “Evangelical Protestant” category every Protestant denomination with an evangelical wing (which, to its credit, is far more nuanced than most polls), but also lists “Nondenominational,” “Other evangelical/fundamentalist,” and “Nonspecific Protestant.”
The American Religious Identification Survey, which makes distinctions between belief, belonging, and behavior (again, more nuanced), does not even include evangelicals as a major category. The designation falls under the banner of “Christian Generic,” which includes “Christian Unspecified,” “Non-Denominational Christian,” “Protestant Unspecified,” and “Evangelical/Born Again.”

Americans identify as “Christian” to the tune of 70 percent, based on the Pew study, with the largest group of “Christians” being evangelicals. What we would expect to see, then, from exit polls on any given election night are:



An over-reporting of religious affiliation
An over-reporting that would skew toward Christianity as the largest religious group
An over-reporting, as regards denominational affiliation, that would skew toward the largest denomination, “evangelical” (which we’ve already established as a problematic catch-all for miscellaneous forms of Christianity).

For those still unconvinced that American evangelicals were not the reliable monolithic base for Trump that they are credited with being, rest assured: The statistical evidence tells the same story.

In the 2016 election, there were an estimated 231 million eligible voters. If participation in voting reflected national religious averages, we should expect roughly 58.6 million of these voters to be classified as “evangelical” by exit pollsters, since they make up 25.4 percent of the population according to the Pew study.

However, if we assume from our earlier work that only about 22 percent of all eligible voters are active participants in a faith community, we arrive at 50.8 million as an estimate for religiously observant eligible voters of all faiths. From this number, we must still take our evangelical share (25.4 percent according to Pew), which leaves us with a pool of roughly 12.9 million. From this, we take the percentage of white “evangelicals” (76 percent according to the Pew study), and arrive at just over 9.8 million. Lastly, even if every eligible evangelical voter exercised their right to vote, and we grant the conservatively high estimate of that now-infamous 81 percent rate favoring Trump, we arrive at just over 7.9 million. Of the roughly 63 million who voted for Trump, white evangelicals account for just over 12.6 percent—and that marks the extreme upper end for the percentage of Trump votes stemming from evangelicals.
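Because this chain of multiplications is easy to mistranscribe, here is the same arithmetic as a minimal sketch in Python. All figures are the ones cited in the text (231 million eligible voters, 22 percent religiously observant, 25.4 percent evangelical, 76 percent white, 81 percent support, roughly 63 million total Trump votes); the variable names are my own.

```python
# Article's "observant evangelical" estimate, using only the figures cited above.
eligible_voters = 231_000_000   # estimated eligible voters in 2016
observant_rate = 0.22           # share actively participating in a faith community
evangelical_share = 0.254       # Pew share of the population identified as evangelical
white_share = 0.76              # Pew share of evangelicals who are white
trump_support = 0.81            # exit-poll Trump support among white evangelicals
trump_total_votes = 63_000_000  # approximate Trump popular vote

observant_voters = eligible_voters * observant_rate                    # ~50.8 million
observant_evangelicals = observant_voters * evangelical_share          # ~12.9 million
white_observant_evangelicals = observant_evangelicals * white_share    # ~9.8 million
evangelical_trump_votes = white_observant_evangelicals * trump_support # ~7.9 million

print(f"{evangelical_trump_votes / 1e6:.1f} million votes, "
      f"{evangelical_trump_votes / trump_total_votes:.1%} of Trump's total")
```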

If, however, we ignore everything that decades of study have taught us about evangelicals, we can take the pollsters’ numbers at face value, to show the problematic result of using the 81 percent rate of support:



First, we begin with our earlier pool of 58.6 million self-identified evangelical eligible voters;
Second, we multiply by the percentage of white self-identified evangelical voters (76 percent, according to the Pew study);
Third, we multiply by the percentage of white voters turnout in 2016 (65.3 percent);
Lastly, we multiply by the 81 percent Trump favorability rating among white self-identified evangelical voters, to arrive at just over 23.5 million.

This brings us much closer to the media’s alleged “white evangelical base,” at over 37 percent—about three times the size of our earlier estimate of Trump’s support from white evangelicals. With poll numbers indicating that white evangelical voters accounted for more than a third of Trump’s popular vote totals, it is easier to understand how the media has run with this narrative.
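For comparison, here is the face-value calculation described in the list above, sketched under the same assumptions (58.6 million self-identified evangelicals, 76 percent white, 65.3 percent white turnout, 81 percent support).

```python
# Face-value estimate: start from all self-identified evangelicals,
# skipping the religious-observance filter used in the earlier sketch.
self_identified_evangelicals = 58_600_000  # self-identified evangelical eligible voters
white_share = 0.76                         # white share of self-identified evangelicals (Pew)
white_turnout = 0.653                      # white voter turnout in 2016
trump_support = 0.81                       # exit-poll Trump support among white evangelicals
trump_total_votes = 63_000_000             # approximate Trump popular vote

face_value_votes = (self_identified_evangelicals * white_share
                    * white_turnout * trump_support)  # just over 23.5 million

print(f"{face_value_votes / 1e6:.1f} million votes, "
      f"{face_value_votes / trump_total_votes:.1%} of Trump's total")
# roughly three times the ~7.9 million figure from the earlier estimate
```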

To those who would object to using church attendance as a way to filter out evangelicals from “folk Christians,” it is paramount to remember the importance of faith communities in evangelicalism. “Lone-wolfing it” is not and has never been a feature of the evangelical tradition. This is not a commentary on the sincerity of belief of those here classed as “folk Christians.” It is simply a point of accuracy: If by “evangelical” we mean anybody who self-identifies with the term (or has the misfortune of unwittingly being so identified by exit polls), then we hardly mean anything by it at all.

It is tempting to put “folk Christians” in the same category as those who have, for almost a hundred years, eschewed mainline Protestantism in favor of their own convictions, decentralized from institutional mandates. But although sincerity of belief may very well mark someone as a Christian, this point of orthodoxy is wholly irrelevant regarding whether one is best classified as an evangelical. Consider the Boy Scouts. One could very well own the Boy Scout Handbook, adhere to its guidance, and even as a matter of sincere belief be convinced of one’s mastery over its contents. But without membership (or at least participation) in a scout troop, would other scouts recognize this individual as a fellow scout? Or just a like-minded enthusiast for the great outdoors? Likewise, evangelicals might very well be happy to recognize these outsiders as genuine Christians, and welcome them to their church services. But evangelicalism proper has a tradition of community that gets missed in exit polls. Even in non-denominational churches, membership requires participation.

White evangelicals may well have supported Trump to the tune of 81 percent, but they hardly have the numbers to warrant their recognition as his reliable base. The 81 percent figure only makes sense if we think of evangelicals as being at the core of an evangelical “market,” in which “folk Christians” (among others) participate in the consumption of evangelical ideas, and where these ideas compete for dominance in the marketplace.

An alliance with Trump will likely have profound effects on evangelicalism for decades to come, but the idea of Trump’s dependence upon evangelicals is probably unfounded. A recent study of the 41 percent of white millennials who voted for Trump is more revealing, finding that such voters were motivated primarily by sentiments of “white vulnerability” and racial resentment. This indicates that Trump’s true “base” lies not with white evangelicals but with white people, who happen to be well represented within the evangelical market. Black and Hispanic evangelicals certainly did not come out for Trump, and Hispanics are the fastest-growing segment within evangelicalism, according to the Pew study. If white evangelicals have indeed hitched themselves to Trump’s wagon, I wouldn’t want to be in their shoes when Trump discovers that he doesn’t really need them.



November 21, 2018

Evangelicalism and Its Electoral Discontents

A new interest in Evangelicals on the part of political scientists and media commentators has been one of the most striking consequences of the recent rise of populist politics in Europe and the Americas. The results of the Brexit referendum and the election of Donald Trump, as well as the more recent appointment of Scott Morrison as the Australian Prime Minister and the election of Brazil’s new President, Jair Bolsonaro, have prompted analysts and journalists to consider voting communities whose political preferences and electoral behavior may have contributed to what they represent as the sudden populist, and often distinctively rightward, turn.

And so, in a manner unseen since the late 1970s, when both Jimmy Carter and his critics in the Christian Right were identified with an emerging Evangelical movement, newspapers and other media outlets have been trying to explain to their audiences who Evangelicals are, and why they tend to vote as they do. But there are a number of problems with this approach.

The first problem is that “Evangelicalism” does not exist as a unitary movement. As the American historian D. G. Hart has demonstrated in Deconstructing Evangelicalism (2004), born-again Protestants are not united by any shared theological commitment. In fact, the most recent survey of theological and ethical opinion among Evangelicals, released only this past month, indicates that many of these believers openly question the most time-honored doctrines and behavioral expectations of the Christian religion. Lacking a common confession, organizational structures, or a clearly defined leadership, and often being acutely aware of doctrinal contradiction within what they are persuaded to believe is their movement, “Evangelicals” have been gathered together by pollsters who need to find some category to describe a series of often competing religious subcultures.

In most of those cases, as in this one, it has suited many leaders within these “Evangelical” subcultures to own this generic religious identity, for it provides this collection of anomalous believers with an identity and significance that is markedly bigger than the sum of its parts. This “movement” gains the illusion of unity through an infrastructure that combines news and lifestyle magazines, television and internet channels, and—perhaps most importantly of all—the manufacturers and distributors of the holy hardware that provides a self-identified community with its material culture. As American historian Benjamin Huskinson argues, “Evangelicalism” is less a movement than a marketplace. The visible props of the performance of Evangelical religion cannot conceal an immaterial culture. Held together by a common commitment to the necessity of some kind of religious conversion, and by very little else, Evangelicals belong to a movement that doesn’t actually exist.

The second problem attending the media interest in identifying Evangelicals and explaining their electoral behavior is that born-again Protestants do not share a common political platform, as they might if they were members of a movement. Evangelicals on both sides of the Atlantic express a wide variety of political opinions, and some reject any form of political participation at all. This variety has been illustrated in Lydia Bean’s study of The Politics of Evangelical Identity (2014), which reflected upon the year she spent in Evangelical communities on either side of the American-Canadian border. While Bean’s interviews with American Evangelicals confirmed their proclivity for hyper-patriotism, her discussions with Canadian congregants revealed their hesitation about involvement in political campaigning, and this despite the “covert Evangelicalism” of Prime Minister Stephen Harper.

There are signs that Evangelicals in the United States are coming to share this hesitation. Steven P. Miller’s The Age of Evangelicalism: America’s Born-again Years (2014) describes the rise and eventual crisis of Evangelical religious nationalism, while The New Evangelical Social Engagement (2014), edited by Brian Steensland and Philip Goff, shows how younger Evangelicals in particular are identifying with issues of social justice that have been more traditionally seen as the political property of the Left. One of the consequences of the religious diversity among Evangelicals is that born-again Protestants may and do support competing political positions. And, if Evangelical responses to President Trump demonstrate anything, it is that this trend toward political diversity is accelerating.

The third problem facing those who would explain the religious politics of “Evangelicalism” is that the politics of born-again Protestants are more obviously shaped by local and national, rather than international or even religious, agendas. Evangelicals do not speak with one political voice or advance a universal political agenda.

Bean’s ethnographic study of congregations on either side of the American-Canadian border illustrates a point that born-again Protestants in Europe would easily recognize—that the conservatism of American Evangelicals is more easily explained by their cultural geography than by the tenets of their faith. This is why journalists and commentators in the United Kingdom have not rushed to explain the result of the Brexit referendum with reference to the aspirations or behaviors of British believers. It is not simply that Evangelicals in the United Kingdom are smaller in number and less politically significant than are their American counterparts (though that is certainly the case). Nor is it simply that Evangelicals in the United Kingdom are more likely to identify with well-established denominations than with any of the multitude of para-church “ministries” that currently organize the majority of American believers, supply their media, shape their beliefs and activities, and claim to represent them. Nor is it merely explained by the fact that Christians in the United Kingdom have not been subject to the kind of polling that at once interrogates and constructs the American “movement.” It is simply that the views of Evangelical politicians are more obviously shaped by their ambient social and political environment than by their religion.

This observation explains the fourth problem facing those journalists and media commentators who attempt to explain the electoral character of Evangelicalism—and that is the striking contrast between the most successful Evangelical politicians on either side of the north Atlantic. While “Evangelicalism” is constructed to embrace a bewildering variety of religious subcultures and communities, even one of those subcultures can produce a bewildering variety of politics. Thus, the same kind of conservative Calvinism that produced a young and energetic conservative Republican, in the form of U.S. Senator Ben Sasse, also produced the former leader of the most progressive party in mainstream British politics, Tim Farron of the Liberal Democrats. For Evangelical politics is not, fundamentally, about religious beliefs, but about the social, cultural, and geographical contexts that produce them.

So those who would seek to understand the relationship between politics and an Evangelical movement are left to consider a paradox. As Hart’s work emphasizes, “Evangelicalism” is a creation of pollsters rather than denominations or religious communities. The pollsters who have created this category now seek to analyze its political preferences in terms of supposed religious convictions that do not in fact define it or hold it together. This is weird, and not good—and besides: There are far better ways to understand America’s current political problems than to blame a huge share of them on religious people. For American Evangelicalism is a political, not a religious, community, and there are better ways to understand its electoral discontents.



Tactical Illusions and Split Outcomes

Divining a policy mandate from a U.S. election can be problematic because there are so many contests at different levels of government. It is essentially a Rorschach test for pundits, especially when Congressional seat swings move in opposite directions. If House and Senate members behaved according to political science spatial vote models, then House Democrats should shift right and Senate Republicans should shift left. Why? Because the Democrats won back the House by taking over seats that were previously represented by Republicans, and the Republicans flipped Senate seats that were previously held by Democratic incumbents. Assuming both parties want to hold onto their respective majorities, this should open the door to House-Senate compromises on various issues of mutual benefit. That might happen, but I would not bet the family savings on it.

The split outcome in 2018 undermined some key assumptions that both parties made about this election. The Democrats got their strategy right, but were wrong about certain tactical assumptions. As in 2006 under Rahm Emanuel’s guidance at the DCCC, they mostly selected candidates with the right profiles for their districts and kept to a disciplined message about health care and jobs aimed at independent and moderate Republicans.

What they got wrong was the previous two years of wasted time and energy trying to convince the Supreme Court to adopt a new partisan gerrymandering standard. Led by former President Obama and Eric Holder, a substantial faction of Democrats had convinced themselves that House elections were so rigged against them that they would need to win by a large margin of votes even to gain a narrow margin of seats. Only redistricting reform, they thought, could fix this. In the end the Democrats got a historically large seat swing with 52 percent of the vote—well within the normal range of recent decades. 

To be sure, the Republicans did control more redistricting efforts after the 2010 census and skewed them to their advantage. But the alleged walls the Republicans built were not very high—certainly not high enough to stop the anti-Trump wave in the suburbs. The Democrats have so far secured at least 232 seats, a larger shift than in 2006. Moreover, by picking up seven governorships, they will enter the next redistricting round with more political protection against unfavorable partisan gerrymanders. Political disadvantages are more easily fixed by smart politics than by expensive legal pleadings.

Still, the Democrats’ general obsession with electoral unfairness is easy to explain. In addition to the partisan gerrymandering of House districts, the Senate’s constitutionally prescribed malapportionment favors the rural states where President Trump’s appeals resonated. Moreover, the Democrats have now lost two Presidential elections in this century despite winning the popular vote both times. Add in Republican efforts to institute various types of vote restrictions that all too often seem targeted at nonwhite voters, and you can appreciate why many Democrats feel that the rules are stacked against them. Democracies are supposed to empower the more numerous group over the less numerous one, not the other way around.

That said, the best path to political power for Democrats is not via tactical victories and voting reforms, even if those things are worth doing for other good reasons. Rather, the Democrats need to expand their policy reach to win back some of the rural voters who feel left behind and alienated. It is clear from victories in Arizona, Texas, New Mexico, Oklahoma and Colorado that urban growth and gentrification in the Interior West will help Democrats in the long run, but Democrats must also find ways to share the prosperity of the tech economy and address problems such as addiction, poor job prospects and obesity outside their urban bubbles if they want to lessen the current Republican advantage in rural states.

The Republicans, on the other hand, are in danger of following the path of Pete Wilson and the California Republicans in the 1990s. Wilson rode immigration fears and hostility to affirmative action to victory in 1994, but by doing so managed to put the Republican Party on a downward spiral from which it has yet to recover. Gaming voting rules to exclude eligible citizens for minor bureaucratic infractions, or relying on Trump’s unique appeal to rural and less educated voters for the next two years rather than developing policies and a party brand that could win back educated middle-class voters, seems like another downward spiral in the making. This will not only cost them more votes over time, but it may mean ceding the fund-raising edge to Democrats for the foreseeable future as well. Trying to hold power without winning the popular vote, as they have done for the past two elections, cannot succeed for long. Republicans need to win educated, middle-class suburbanites back into their coalition. 

No doubt Republican political operatives were as surprised as the rest of us that the tax cuts mattered so little to key voting blocs. Partly this was President Trump’s fault, as he focused his rallies on nativist appeals and the Kavanaugh confirmation rather than on messages that would have helped defend the suburban House districts. Tax cuts and regulatory relief aside, Republicans also need to address problems of school safety, climate change, decaying infrastructure, and health care more effectively and persuasively than they have to date if they want educated voters. They also need to change their message to women fast, because generational replacement will pretty much wipe out the appeal of “stand by your man” policies in the coming years.

Finally, I will pass on some advice for both parties about redistricting. Speaking as one who has drawn districts at various levels of government, I can report that a partisan plan works well only if the party can expand, or at least maintain, its coalition. It backfires otherwise. A partisan plan works by distributing the party’s voters across its seats more efficiently, shifting its excess votes into the seats it hopes to flip. However, if the tides shift against you, efficiently drawn seats are more vulnerable to hostile waves. A party anticipating a wave against it would be better off with a “bipartisan, make all incumbents safer” strategy. Promoting policies that expand the party’s coalition to new blocs of voters makes for a more effective partisan gerrymander. To borrow from Yoda: if gerrymander you must, fix the policies first, then reap your tactical bonus.
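To make that arithmetic concrete, here is a minimal, hypothetical sketch in Python; the district margins and swing sizes are invented for illustration and are not drawn from any real map. It compares an “efficient” partisan plan, which holds many seats by narrow margins, with a padded “make all incumbents safer” plan under a uniform swing against the mapmaking party.

```python
# Toy illustration with hypothetical numbers: an "efficient" partisan map
# spreads the party's voters thinly across many seats, while a "safe
# incumbents" map concentrates them into fewer, better-cushioned seats.

def seats_won(margins, swing):
    """Count seats the party still wins after a uniform swing against it.

    margins: the party's vote margin (percentage points) in each district.
    swing:   percentage points subtracted from every margin.
    """
    return sum(1 for m in margins if m - swing > 0)

# Efficient plan: excess votes shifted out of safe seats into winnable ones.
efficient_plan = [4, 5, 6, 6, 7, 8, 25]

# "Bipartisan, make all incumbents safer" plan: fewer seats, fatter cushions.
safe_plan = [15, 16, 18, 20, 22, 45]

for swing in (0, 5, 8):
    print(f"swing {swing} pts | efficient: {seats_won(efficient_plan, swing)} seats"
          f" | safe: {seats_won(safe_plan, swing)} seats")
```

With no wave, the efficient map yields the extra seats it was drawn to capture; under an eight-point wave it collapses to a single seat while the padded map holds every one of its six—precisely the trade-off described above.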


The post Tactical Illusions and Split Outcomes appeared first on The American Interest.

Published on November 21, 2018 08:35

November 20, 2018

The Suffocation of History

Christopher Browning’s recent article in the New York Review of Books, “The Suffocation of Democracy,” offers an historical analysis of parallels between the current situation in the West and that of a century ago, when the people he has studied as a professional historian went mad with paranoia and inaugurated a death cult that killed tens of millions of people the world over. People ask him anxiously: What parallels might there be to the rise of another Nazi party, a next Hitler? “The interwar period [1918-39] with all too many similarities to our current situation,” Browning asserts, “is the waning of the Weimar Republic.” The opening picture of Hindenburg and Hitler in that brief moment of transition from Weimar “democracy” to totalitarian fascism gives us the key: “If the U.S. has someone whom historians will look back on as the gravedigger of American democracy, it is Mitch McConnell.” And if McConnell is the new Hindenburg, then, clearly, the new Hitler is Donald Trump.

It might strike serious historians as a bit strange to consider U.S. democracy today—however troubled—as beset by a fragility comparable to that of the Weimar Republic. After all, the Weimar Republic’s “democracy” was a brief and fevered respite in the 30-years’ war of the European 20th century. It needed only a very shallow grave in which to expire. To think that the world’s oldest democracy is as helpless before the forces of internally generated fascism as Weimar’s seems bizarre, to say the least. And given how often people have cried “fascist” in decades past, the exercise does have something of a boy-who-cried-wolf quality, a moral panic that, as Augustine said to Orosius, a knowledge of history should remedy.

It also betrays an almost Platonic pessimism about democracies—always-already unstable, volatile, self-destructive, preambles to tyranny—as if centuries of democracy in the United States had not developed mechanisms for resisting these kinds of authoritarian attacks, as well as ways of influencing peer groups to resist the pull of totalitarianism and its ideologically driven crimes against humanity. In a sense, Browning here applies the same argument he made about the Germans as Ordinary Men carrying out the slaughter of Jewish communities in Poland: We’re all ordinary men, capable of burying a democracy and welcoming totalitarianism. The early 20th-century Germans did it, so why not early 21st-century Americans?

Of course, the historian’s job is not only to compare but to contrast. What about all the differences between Hitler and Trump, Weimar “democracy” and America’s? Browning dispatches them out of hand. It does not matter, for example, that Trump’s relationship to journalists differs wildly from Hitler’s. “Total control of the press and other media,” he notes, alluding to Hitler, “is likewise unnecessary, since a flood of managed and fake news so pollutes the flow of information, that facts and truth become irrelevant as shapers of public opinion.” Trump #fakenews press = Nazi totalitarian press. You’ve seen one Lügenpresse you’ve seen them all. QED.

Having mobilized his ample historical knowledge in the pursuit of this parallel, Browning then admits, sotto voce, that it’s somewhat absurd: “Nothing remotely so horrific is on the illiberal [Trump] agenda.” Indeed, Trump’s “illiberal democracy falls considerably short of totalitarian dictatorship as exemplified by Mussolini and Hitler.” Noted one critic, “[Browning] presents a wildly distorted account of the current state of affairs perched upon a caricature of the past.”

So if the two circumstances being compared are not even remotely similar, why compare them at all? Why engage in an elaborate analogy so poor that, just in order to maintain some intellectual integrity, you have to admit you don’t mean it?

The obvious if sad answer is that Browning’s article illustrates—embodies, really—Trump Derangement Syndrome, which is a secular version of Antichrist Derangement Syndrome. The only possible explanation for this historical meditation, whose argumentation would get an F (alas, a B- on today’s inflated curve) in any class on historiography, is that it’s there to stigmatize Trump and make pariahs—fascists in the making—of anyone who legitimizes his exercise of power. “Hitler,” notes Bill Burr, “is the gold standard of evil; you want to call somebody evil, you call him the next Hitler.” And that’s what Browning has done here with Trump, at once openly (with a wink) and, given any standards of proportion, preposterously.

Were anyone seriously looking for a candidate for the next “Nazi-like” war machine planning to engulf the world in megadeath, it is pretty clear who that candidate would be: global jihadis like those of the Islamic State, al-Qaeda, Hamas, Hezbollah, Boko Haram. The slightest open-minded glance at these organizations and their ideologies offers the full panorama of everything that has made Hitler and the Nazis “the gold standard of evil”—genocidal hatreds, death worship, paranoid megalomania, conspiracy theories projected onto apocalyptic enemies, millennial aspirations of world conquest, and apocalyptic dreams of exterminating the evil Jewish “other.”

The analogy is, like all analogies, imperfect. Nazi Germany, after all, was a state with a professional military and a modern industrial economy with which to support and arm it, while the jihadi world is fractured, stateless, incapable of holding territory for long, and without conventionally significant, military-relevant economic assets. On the other hand, jihadi hatreds have marinated longer: far fewer German priests and ministers (if any) preached the genocide of the Jews from the pulpit in the manner of a profusion of jihadi preachers, the world over, including in the West.

Either way, the analogy teaches us that we need to make certain the jihadi world stays this way long into the future. It is a sign of how little we know about the problem that these groups are not even on Browning’s radar when he thinks about the present, and that we Western infidels have no idea how many Caliphaters dream of and preach the coming of a global Caliphate in this generation.

The only trace of the Caliphater issue in Browning’s historical reconstruction is their (unmentioned if small) presence among immigrant communities in Europe and the United States. If Trump is the new Hitler and McConnell the new Hindenburg, then those who oppose open immigration are the new Nazi deplorables:


Xenophobic nationalism (and in many cases explicitly anti-immigrant white nationalism) as well as the prioritization of “law and order” over individual rights are also crucial to these regimes in mobilizing the popular support of their bases and stigmatizing their enemies.


When Browning speaks of “explicitly anti-immigrant white nationalism,” he is not the historian, reporting the actual voices of the nationalists he describes; rather, he is the polemicist, tagging them with the dog-whistle for “racist-fascist.” Actually, and especially in Europe, it is not generic anti-immigration attitudes but specifically anti-Muslim immigration attitudes that drive these allegedly “white nationalist” groups. Generic anti-immigration sentiment has larger and more varied causes, and its political currents run much broader than the “white nationalist” wing. Those who hold such views are not lost souls. For Browning, however, they are the budding fascists of Eastern Europe and, by extension, via Trump, the American politicians who play any role in legitimating that Hitler wannabe’s exercise of the presidency. (This is not to trivialize the problems of right-wing white nationalism, just to caution against assuming that everyone who fears immigrants is phobic and on the slippery slope to fascist rampage.)

Instead of Browning’s depiction of a fight between Weimar democrats (good) and surging fascists (bad), one might view it as a dispute between two legitimate groups within a civil society who disagree on how a nation, a culture, a civilization should deal with perceived enemies, some of them violent megalomaniacs, embracing a weaponized (“Nazi”) ideology, who want to take, and in a few cases have taken, power. No need here for “slippery slopes” down which the zealots will eventually slide; they long ago broke through any and every barrier politically correct concerns might throw up.

This dispute pits, on the one hand, those who want to calm the zealots down somehow (ignore them?), and, on the other, those who want to resist them. If there actually is a parallel between what’s happening now, globally, and what happened in Europe 100 years ago (and that is itself dubious), it’s that, alas, Trump and his deplorables have the role of the new Churchill and his camp of belligerent Germanophobes.

By misidentifying those who wish to confront a totalitarian enemy as the “real enemy,” the proto-fascist “right wing,” the “next Hitler,” Browning implicitly assigns the role of Chamberlain (“a man in every other regard different from Trump”) to the peace-seeking, Whiggish intellectual elite who know what behavior on our part “the arc of history” demands, that arc that “inevitably bends toward greater emancipation, equality, and freedom” [italics mine]. Of course, we know just how disastrously wrong Chamberlain was about Hitler, about the arc of history, about peace in his time. He is rightly a byword for folly. So why would Browning push a narrative that surreptitiously advances (mutatis mutandis) Chamberlain’s agenda? Because it should have worked for Chamberlain? And therefore, this time, it will work for us?

Trump, deplorables, Nazis, xenophobes, white nationalists—they’re all the same, all ordinary men, potentially on the path to fascism, totalitarianism, genocide. In this analysis, Browning repeats what he did in his study of the first German battalions (not even Nazis) told to carry out orders to exterminate entire Jewish populations as the Wehrmacht headed nach Osten. There he played down the worst details—the sadism, the approval of German women, the rejoicing in and volunteering to kill—in order to appeal to the postwar zeitgeist by saying, “They were just ordinary men, just like you and me, reluctantly pressed into genocide. And now, ‘we’ are just like ‘them’, on the verge of fascist dictatorship.”

There is an apocalyptic flavor to Browning’s analogy with the end of Weimar. Like Joachim of Fiore and Marx, he places his (our) present time at a hinge, a turning in meta-history, and in the process he makes over a clever, opportunistic clown into a genocidal maniac. And like so many apocalyptic believers, his fear of total, imminent collapse and catastrophe draws him towards dualism: the good “us” and the evil, apocalyptic, “other.”

In this sense, Browning’s pejorative slang—xenophobia, white nationalism—and agenda-driven comparisons actually serve to drive a greater wedge between Americans. Measured against a non-trivial “enemy of civilization” (in the worst traditions of Nazism), his framing pits those who prefer appeasement against those who prefer confrontation in the struggle against weaponized Caliphater hatreds. This enemy may not now constitute an existential threat, but it most earnestly wishes that it did. So if the 1930s should have taught us anything, it is to take these millennial zealots seriously, to read their material, and to firmly oppose, not ignore, them.

In this sense, Browning more resembles the propagandists of the 11th-century investiture conflict, the first ecclesiastics in the West to play with accusing Christian rulers—pope and emperor—of being the Antichrist. And in so doing, he internalizes the actual clash of (at least two) civilizations that his analogy completely ignores, to wit: “My enemies are my right-wing neighbors, the Republicans across the aisle, the unwoke (that is, those who do not understand the arc of history); my enemies are not the members of genocidal millenarian cults whose ability to inspire Muslims and even some infidels shows disturbing vibrancy, and who doubtless (were I to even bother thinking about it) rejoice at my self-lacerating folly.”


The post The Suffocation of History appeared first on The American Interest.

Published on November 20, 2018 12:17

Brexit: An Unsustainable Deal

After 18 months of negotiations, the UK government and the remaining 27 EU member states have finally reached a deal on Brexit. The draft Withdrawal Agreement (WA) was published on November 14. After all the debate, argument, counter-argument and occasional fury, a settled basis for the exit of the United Kingdom from the European Union should in theory be welcomed. Individuals, businesses, and government agencies need certainty in order to plan, invest and get on with their lives and operations.

Unfortunately, the WA does not provide that certainty. It instead prolongs the uncertainty by kicking the can down the road and failing to resolve any of the internal British disputes over the country’s future relationship with the European Union. In essence, the WA simultaneously reduces British negotiating power in any future trade negotiations with the EU and shifts the cliff-edge moment at which Britain leaves the single market to the end of the transition period, all while leaving Parliament and the public entirely unclear as to what Britain’s final relationship with the EU will look like. This has appropriately been dubbed the “Blind Brexit,” and it is a deal that Parliament will almost surely reject, despite all the government’s heavy lobbying.

Many of the Brexiteers who oppose the WA have not helped their own case. They have ploughed into the WA’s 500 pages of legal text and come up with a list of claims and grievances ranging from the bizarre (tax immunity for British ex-EU officials) to the misguided (concern over the continuing jurisdiction of the European Court of Justice), while comprehensively missing the point.

The key point is that the WA sets no course for the future EU-UK trade relationship and puts London in a much weaker position to negotiate those trade issues once it comes into force. Aside from dealing with citizens’ rights (of EU nationals in the United Kingdom, and UK nationals in the European Union), existing financial obligations, and Northern Ireland, the WA provides for a transition period. The transition period continues the application of EU law and keeps Britain in the single market and the customs union after March 29, 2019, until December 31, 2020 (extendable once to December 2022). However, during the transition period the United Kingdom will no longer have any representation in EU institutions, in the EU Council with other member states, or in the European Parliament. Essentially, London will be bound by EU rules but will have no say in their making.

The reason for seeking a transition period is clear enough: the British economy is so deeply tied into Europe’s that the economic damage inflicted by a sudden exit would be immense. Supply chains would be disrupted, Britain’s huge services export machine would be unable to properly function, and many of the smaller single-market-focused businesses would go to the wall. However, all the WA does to deal with these dangers is to move that market access cliff edge to (at most) the end of 2022.

But there is no way a trade agreement with the European Union can be negotiated by December 31, 2022. The Canada-EU Free Trade Agreement, for instance, took seven years to negotiate. Whatever Theresa May and her counterparts in Brussels are saying now, the practical reality is that the transition period will have to be extended.

Alternatively, the two sides will have to enter into a temporary free trade agreement, as close as possible to the transition arrangements, while the final free trade agreement is negotiated. Meanwhile, the United Kingdom will be subject to EU rules, but with no influence over them, for most of a decade. That is not the end of the difficulties London will face. The very reason for the transition period, the fact that the British economy is deeply integrated into the European economy, remains. In effect, the WA puts off the British debate on its future relationship with the European Union until after Britain has left, when it will be in a much weaker negotiating position vis-à-vis Brussels.

If the United Kingdom cannot resolve the debate on its relationship with the European Union now, the danger is that it will enter into Brexit limbo and remain stuck in some form of transition indefinitely. It is worth remembering that when Norway entered the European Economic Area (EEA) agreement in 1993, it was supposed to be a temporary arrangement. Twenty-five years later, Oslo is still subject to EU rules over which it has no substantive input (though the EEA does at least give Norway some institutional oversight rights over the legislative process, unlike the WA). Although the WA only provides for one extension of the transition period, Brussels has significant incentives, not least avoiding disruption to the rest of the single market, to provide further extensions, which a desperate British government will be eager to sign up for.

No. 10 is lobbying MPs of all parties on the grounds that the deal provides certainty. It does not. Its main achievement is to kick the can down the road. The underlying failure of the WA reflects the failure of the British government, riven by internal political divisions, to set a course for a future relationship with the EU that would provide that certainty.

Luckily for the United Kingdom, the British government is already facing grave difficulties in mustering a majority in the House of Commons to push the WA through. It has managed to range the entire opposition against the deal, including its erstwhile allies in the Democratic Unionist Party. In addition, both the pro-Remain and the ultra-Brexiteer wings of the parliamentary Conservative Party are opposed to the agreement. In such circumstances, despite furious lobbying by the whips and the government machine, it is very doubtful that the WA can pass.

If the agreement is rejected by the Commons, what then will happen? Parliament, it is clear, will not countenance a “no deal” scenario that would see the country crash out of the European Union. One of two scenarios is likely: Either the government goes back to EU negotiators and seeks a permanent EEA (single market) plus customs union solution, or it seeks to hold a new referendum. The WA can be seen as a waymark to a referendum, in that its contents illustrate the inability of Britain’s political class to arrive at a consensus on Brexit. In such circumstances, the only real solution is to hand the matter back to the people.


The post Brexit: An Unsustainable Deal appeared first on The American Interest.

Published on November 20, 2018 06:45

November 19, 2018

American K-12 Education in Thrall

As the Berkeley neuroanthropologist Terrence Deacon has written, “Knowing how something originated is often the best clue to how it works.” That is as true for diagnosing the health of America’s K-12 educational institutions as it is for most things—except that in this case many observers would rephrase the end of Deacon’s thought to read “how it doesn’t work.” That is another way of saying that for at least three and a half decades now, ever since international testing comparisons became widely conducted and the results known, American elites have considered the nation’s K-12 educational institutions, especially in science and math, to be in some kind of crisis. A host of Federal commissions has been empowered to diagnose the problems, and, partly as a result, over time large percentages of ordinary Americans have become persuaded that elementary education is broken.

Something, anyway, is broken. Ever since the Department of Education was established in 1979, Federal education policy has reflected a bewildering mashup of contending constituencies: academic experts in high-status university schools of education, teachers unions, textbook publishers, major corporate-linked philanthropies, organizations of state education departments, and, in recent years, for-profit testing “experts” and educational computer software salesmen. Tensions among these constituencies, and between Federal, state, and local government, have been ceaseless, but we can nevertheless describe a basic pattern.

A variety of incremental approaches, mainly tested out at local levels, gave way about 30 years ago to calls for “systemic reform,” and the then-newly created Department of Education suggested to many that there was now a means to push through nationwide reform. But thanks to union resistance, a lack of funds and coordination, and the absence of agreement on what systemic reform actually meant, not much happened. Later on, large foundations gradually began to play a larger role in the education debate. One result of these efforts was the George W. Bush-era No Child Left Behind (NCLB) legislation, which focused on student testing and teacher evaluation to hold schools accountable for outcomes. This made political but not pedagogical sense. Greater accountability there might have been, but overall scores did not rise, and achievement disparities among various class and ethno-racial categories did not shrink.

The Obama Administration converted “no child left behind” into a “race to the top,” but without altering the previous approach’s focus on testing and teacher accountability—in other words, a tinkering at the edges of what was already widely regarded as a futile reform. The new approach did yield some concrete changes, however. Through Education Secretary Arne Duncan, the major foundations—especially the Gates Foundation, with which Duncan had worked when he ran the Chicago school system—essentially gained a lock on Federal policy. Many observers thought that was fine, since the deep pockets of these foundations could make up for the lack of adequate Federal and other government funds for education. Others, however, believed that private money that was, notwithstanding its charitable tax status, tied to corporate ventures was usurping public decision-making. Worse still, many also believed that the reform solutions these foundations were pushing—such as the emphasis on testing, the charter school movement, and voucher/school choice proposals—were misguided or even harmful.

At the foundation of these endless arguments over education reform policy were several polite fictions that few dared to call out. One such lie was that, when a cohort of five-year-olds enters kindergarten, the students are all clean slates ready for educational imprinting—and that public schools could deliver equal opportunity despite unequal tax bases. Thus, according to the logic of this untruth, if students at one school did not in fact perform as well as their peers, the schools were to blame, not the funding inequalities.

No one really believed this fiction, but few were willing to call it out. For many decades we have understood that a child’s early rearing at home affects that child’s ability, willingness, and readiness to learn throughout his or her future. Quite aside from the relative “word poverty” of some homes compared to others, differences in nutrition, emotional stability, public educational funding, and other factors make for anything but clean and equal slates, and the differences tend to cluster spatially, such that schools in different places receive students with widely varying preschool preparatory experiences. The United States government, however, cannot intervene directly in local public school funding inequities or in private family life in order to prepare young children for school—and not very many want to change that. So policymakers, limited in their means of driving effective reform, have pretended that, for all practical purposes, these differences do not exist.

A second lie, having to do with the depressing data on the achievement gap, is less well recognized among education specialists but equally absent from public discussion of education policy. Over and above the differential early-childhood basis for learning readiness is the fact that over the past forty or so years the number of immigrant and first-generation students living below the poverty line in public schools has skyrocketed to levels not seen since the 1890s. Large numbers of Hispanic students, many of whose parents do not speak English as their first language or hold a college degree, are necessarily going to affect the data sets. Those numbers are going to make achievement gaps worse; so too will substantially increased numbers of emotionally fragile children in poor urban and rural populations, who often experience higher levels of stress and trauma. Schools will need to provide the wrap-around services that these students need to succeed and thrive. It is also worth noting, at the same time, that the data shows barely any achievement gap for the smaller numbers of immigrant and first-generation students from cultures that place a high value on education—mainly from East Asia and some communities in India, for example.

Of course, cultures do change, especially when they hit the road, and assimilation does happen; but it doesn’t happen very fast, and forty years is very fast in sociological terms. In some cases, cultural patterns, for better or for worse as regards educational achievement, can endure in minority communities for centuries and more, even in alien environments, owing to a wide range of complicating social factors. Yet review the dozens of articles on the achievement gap in the mainstream media over the past decade or so, and note how few of them mention immigration-related and other social factors affecting the demographic data. You will be fortunate to find a single one.

How can we begin to craft effective education policy when we cannot even bear to face reality? We also take for granted, for instance, the necessity of free, compulsory, and universal public education for all Americans. We know, too, that virtually the entire world followed suit during the 20th century, such that the international consensus on the fundamental right to free universal primary education is strong—even if many millions of children in poorer countries do not actually have access to education. Moreover, while local structures and curricular requirements vary, public school systems across the globe today also tend to share two common aspirations: to teach the core content and skills needed for participation in the workforce, and to serve as an agent of common socialization—what 19th-century Common School champion Horace Mann called the “balance-wheel of the social machinery.”

Mann also described public school education as the “great equalizer of the conditions of men,” illustrating the belief (in the West, anyway) that free, compulsory, and universal public education was intertwined with the fortunes of liberal democracy. Popular sovereignty required that the population apply reason to its decisions regarding the public realm, and education was the means to hone reason. It all goes back to a fourth-century monk named Pelagius, whose rather unorthodox views underlay the modern project that held education to be the spearhead of decisive social advance by filtering out bad behaviors and instilling good ones. But Pelagius’s view did not triumph in at least a part of the West until about 1,200 years later. What happened?

For most of human history, relatively widespread education was the rare exception, found only in such isolated instances as Sparta’s compulsory youth-training program, in first-century Talmud-Torah schools among Jews in Judea and Babylonia, in the Roman Republic and Empire before the fifth century, and perhaps in the 15th-century Aztec Triple Alliance. Not until the 16th century did a huge wave of compulsory education sweep across Europe, and it did so hand-in-hand with the Protestant Reformation and newly translated Bibles printed on the Gutenberg printing press. We can even date the beginning of broad compulsory education to the year 1524, when Martin Luther published his letter, “To the Councillors of all Cities on German Territory,” which called on governing authorities to make education compulsory for all boys and girls, regardless of class.

Luther’s twin theological-political movement found a powerful engine in the establishment of these compulsory schools. Wherever Protestant beliefs spread, state-mandated education followed soon thereafter, with each reinforcing the other. Early Protestant compulsory schooling functioned as an institutional fulcrum, continually linking the theological to the political and vice versa. And in this dynamic theological-political system, “conscience” was the hinge. That is, it was Luther’s religious belief in the duty of each person to read Scripture in the light of reason and conscience that initially led to his call for government-mandated schools to teach literacy, which in turn led to a social consensus on the importance of conscience in the political arena as well, and so on in cyclical fashion.

Not only did the governing authorities legally mandate Protestant schools, but local parishioners were required to fund them and Church bishops to oversee them. In other words, an institution formed. Note in this arrangement the logical priority of religious belief to mass literacy. Compulsory education did not lead to the individual’s faith in equality, reason, and conscience; rather, a prior theological belief in the individual’s equality, reason, and conscience provided the raison d’être for the establishment of compulsory schools and the subsequent development of mass literacy. The use of scripture and other Protestant religious materials in these schools’ curricula then reinforced those initial beliefs, which further promoted compulsory Protestant education, and so on.

In other words, public education owes its origin, at least in Europe and by extension to the New World, to its theological-political roots. One may therefore wonder if the de-Protestantization of American culture—through some combination of demographic and attitudinal change—might affect the underlying basis of support for public education. What if the American majority were no longer to believe in “the Laws of Nature and of Nature’s God” referenced in the Declaration of Independence? Would they also recognize that, by relinquishing that belief, they were simultaneously surrendering the grounds that have historically been invoked to justify “equality” and the “right to education,” as well as all other universal human “rights” they might claim or affirm?

To be sure, original purposes can be joined by and eventually displaced by other purposes as time passes. But the possibility cannot be entirely discounted that severing the bonds between Protestant scripturalism, on the one hand, and the egalitarian ethos and, ultimately, liberal democracy, on the other, may put the latter at risk—and may damage the status of public education as well. The possibility does not contradict the universalism inherent in Christianity generally and in Protestantism in particular, which extends a theological conviction about religious universality to a conviction about the potential, supposedly secular universality of a particular political culture—namely, liberal democracy. This is not a rarity: As Carl Schmitt understood in 1922, “All significant concepts of the modern theory of the state are secularized theological concepts.” But that latter universalism is, to be frank, more aspirational than it is grounded in social science.

The theological origin of public education provides the backdrop for only part of the American public school ethos. The road from 1524 in Germany to the 18th-century founding of the United States and beyond has its curves and detours. For example, the American Founders sharply disagreed about the role of government in funding education. All serious educational institutions at the time were denominational. Some of the Founders supported giving public funds for education to such schools. But others, led by Jefferson and Madison, both channeling Locke (and in a way Luther himself), opposed it as a violation of the wall of separation between church and state, and as an affront to the free exercise of conscience. In 1784 their view won out. They were for church schools and for a public role for religion generally, but they insisted that parishioners, not citizens, foot the bill.

Ever since, the courts have wrestled with the religion clauses in the Constitution and the Bill of Rights. And the reason the matter has never been finally settled is that it cannot be. On the one hand, everyone knows—or knew—that the basic institutions of American democratic government are rooted in the Reformation as a twinned theological-political phenomenon, it being understood that the Reformation and the Enlightenment were also twinned both in time and partial disposition. On the other hand, it was deemed unwise to “establish” any one denomination over the others, and the principle of religious liberty and tolerance so dear to America’s many “dissenter” religious communities put the idea beyond the pale. So the basis of free, compulsory, and universal public education was vividly Protestant but could not lawfully be embedded as such. Hence the eternally unsolvable problem.

As a result of the original impasse, the reach of public education actually suffered in the United States after the Revolution. It was not until the end of the 1840s that two overarching utilitarian considerations began to sway national opinion in favor of state funding for the universal, nonsectarian “Common Schools” championed by Horace Mann.

First, the industrial revolution created an economic need, and eventually also a military need, for skilled labor at various levels. Normal schools were also founded to train teachers in standardized, methodical teaching practices. And second, the large-scale arrival of mostly poor Catholic and Jewish immigrants during this period diluted the dominant Protestant homogeneity of American society. Against this backdrop, Mann argued that universal public education was the best way to transform the nation’s newer “unruly children” into effective workers and citizens.

So it was that, in 1852, Massachusetts became the first state to require basic education for all children, a law not unlike earlier ones it had passed in the 1640s, except that it now provided state funding for universal non-sectarian public education. The Reconstruction Act of 1867 required Southern states not only to ratify the 14th Amendment but also to implement state-funded public school systems. Then, in 1892, amid an even vaster influx of non-Protestant immigrants, the National Education Association formed the Committee of Ten in order to devise a standard national curriculum and educational philosophy. The committee developed what came to be known as the traditional school structure and curriculum, which has remained largely in place in American schools to this day—with a resistance to change that many reformers have found maddeningly “sticky.”

Ever since, the Committee of Ten system has functioned to socialize all Americans to as much of a common public creed as possible through the vehicle of civic education, and to teach both basic skills and preparatory discipline as key elements in preparing a labor force aligned with a modern industrial economy. We can debate how well the civic education piece has been going in the past half century. But what we really need to think hard about is whether America’s K-12 system, as currently designed, is well aligned with current, not to speak of future, labor force needs.

One urgent question for the future of public education is how schools can best prepare students to meet the demands of a globalized, increasingly internet-based economy. But an even more daunting one concerns the prospect that the combination of information technology and artificial intelligence will make vast numbers of jobs, perhaps even most traditional jobs, disappear, as well as change the nature of many of the jobs that are left or are yet to be created.

This raises a question that American public education as an institution has never had to face: not how to educate people, but why? If one day in the not-so-distant future an increasingly large percentage of traditional jobs can be performed, and performed better, by a small cohort of highly specialized human and robotic workers, could this not only lead to decreased spending on public education but even to disbanding schools as we know them? And pardon the afterthought, but what happens not just to work under such circumstances, but to the power of the Western narrative itself, stretching from Genesis and Ecclesiastes to the Lockean “labor value” creation tale and the Protestant work ethic? What will happen to the very stories that make us, as a culture, who we are?

It would be nice to think that we Americans can take the measure of this challenge, and plan for the massive dislocations that IT/AI innovation portends in a way that respects the dignity of each citizen. It would be nice to think that a new political basis for public education can be devised, one perhaps less instrumental than the older one. But with the theological belief in the inherent value of free, universal education in eclipse, and with the IT/AI revolution removing the instrumental necessity for a wide-scale national workforce, on what would our nice thoughts be based? Mega-corporate enlightened self-interest? Heaven help us. Yes, free, basic public education currently enjoys a high level of both national and international legal protection as a “universal human right,” but it is less clear that this foundation will remain “self-evident” in a post-IT/AI and post-Protestant future.

21st-Century American Schools

Even if it does remain self-evident, we still have some heavy lifting to do. Consider that with dramatic advances in cognitive neuroscience, education technology, and personalized learning, we should, in theory, have already made highly effective, individualized instruction available to all students with internet access, regardless of income or demographics (not to say that solipsistic student-computer interfacing lacks problems of its own). Beyond a rich abundance of free, online, anytime-anywhere resources, several new school models also offer innovative, alternative learning environments. Alongside traditional public, private, and religious schools, new public charter schools, micro-schools, and for-profit schools are experimenting with a wide range of evidence-based best practices in order to restructure and individualize instruction, supercharge engagement and relevance, and provide a nurturing, student-centered experience that best meets students’ learning needs.

So we should be optimistic, right? We might be if these advances were being systematically structured, brought to scale, and deployed in a consistent manner. But they’re not. The battling constituencies I mentioned in the opening of this essay are preventing any of that from happening. Besides, we have to remember that the challenge of the near future could well be qualitatively different from that of the past, and there is not much to be gained from bringing public K-12 education into the late 20th century only for it to be overwhelmed by the realities of the mid-21st.

But the choices are ours to make. As we make them, we would be well advised to keep in mind a few general principles, which we might also think of as three challenges with myriad subordinate policy choices: (1) to ensure public education equity; (2) to plan strategically for the future of education in a post-AI world; and (3) to finally replace the Committee of Ten’s 19th-century school model with evidence-based educational best practices that can allow us to better meet the first two challenges.

Challenge One: Ensure Public Education Equity

The first practical step to reviving our ailing public schools is to reaffirm our national faith in the self-evident truth that all human beings are created equal and endowed with certain unalienable rights. Belief always precedes action, and this belief provides both the catalyst and sine qua non for the institution of public education, as it does for all of America’s core institutions. If we truly believe that, as a corollary of this idea, all children deserve the basic education that will allow them to exercise their reason and participate in a democratic society, then we as a society will be more likely to make the policy decisions and engage in the school reform that is needed to provide each and every student with the evidence-based best practices in education that they need to succeed and thrive in the 21st century.

We face several obstacles on the road to public school equity, but none that are insurmountable, politically or otherwise. First are uneven local tax bases, and with them uneven facilities, resources, and teacher quality. Second is persisting racial and cultural prejudice. Third, and related, is the plague of belief in an array of social and biological determinisms. Fourth is low teacher salaries, which make it extremely hard to attract quality teachers in an age when women, who have traditionally dominated K-12 classrooms, have vastly more professional opportunities open to them than was the case four or five decades ago. And fifth is a widening achievement gap turned into an academic arms race, a dynamic ill-suited to producing successful solutions.

Challenge Two: Plan for the Future

If the number of traditional human jobs declines in an AI-suffused world (and, according to some experts, is not necessarily replaced by new job growth), then we must strategically redesign schools now to help students develop the skills and habits of mind they will need to navigate that brave new world. And even without the benefit of a crystal ball, we can safely assume that our human progeny will still need to acquire new knowledge, skills, and understanding in order to address the challenges that world will present. Thus, in addition to mastering an ever-evolving, modular array of skills and literacies, we must also train students to grow the power of their own minds, and to think critically, creatively, and collaboratively. This kind of educational preparation will help our children to actively choose—and design—a better future for all of us, rather than passively allowing someone (or something) else to choose it for us.

Challenge Three: Implement Evidence-Based Best Practices

In order to meet the urgent demands of these first two challenges, the entire K-12 educational system must be reoriented around the new sun of student-centered learning. Advances in cognitive neuroscience have demonstrated why this student-centric model is both necessary and effective for 21st-century learners. Yet, despite the fact that most educators have theoretically embraced this Copernican Revolution in teaching, and that $18 billion is spent annually on professional development to implement it, 50 million American students continue to return year after year to K-12 classrooms that still operate on the Committee of Ten’s 19th-century model.

By definition, personalized learning will look different to account for each student’s individual strengths, challenges, and other age-appropriate, cultural, and social-emotional needs. But there are some key elements of good learning design that we can look to in order to help create high-quality learning environments for all students. For example, since neuroscientists like Mary Helen Immordino-Yang have proven that there is no learning without emotion, the first job of any teacher must be to build trust and cultivate a positive mentoring relationship with students. The best teachers have always done this intuitively, but we now have empirical evidence that demonstrates why all teachers must lay this critical foundation in order for learning to take root and flourish. This is also why teachers must nurture students’ individual passions, voice, and choice, and help them to engage with and find relevance in new content knowledge, questions, and challenges. Teaching students to ask good questions is therefore more important than teaching them how to find true but often trivial answers. These key ingredients help to make learning meaningful, to go deep, and to become encoded in long-term memory.

Researchers have also demonstrated that intelligence is not a fixed number on an IQ bell curve but rather, as Scott Barry Kaufman describes it in Ungifted, “the dynamic interplay of our abilities and engagement to achieve our desired goals.” Teachers must therefore make students explicitly aware of the power of brain plasticity, and the ability of their minds to grow their own intelligence with sustained effort and focus. Here brain science also dovetails well with the classic American dream, since it finally offers empirical evidence that we can indeed become the kind of people we want to be and achieve the goals we desire, if we put in the time and effort needed to get there (a timeless phenomenon that educational gurus currently refer to as “growth mindset” and “grit”).

Teachers also need to be trained in the evidence-based, multisensory programs that we know work best for teaching math and literacy—such as the systematic Orton-Gillingham approach to reading instruction and Stanislas Dehaene’s games for growing number sense and math fluency. All young mammals learn best when their hands and bodies and brains are engaged simultaneously in exploring and mastering their environment. Schooling and education are not synonyms, and real education has to involve doing rather than just reading or listening. Sitting still listening to talking-head teachers has to be the worst possible way to teach young humans anything of value, yet that still defines what most K-12 students are subjected to most of the time.

New financial, digital, economic, environmental, and health and wellness literacies must be added to the curriculum as well. And students who have reached a certain age should be entrusted with a transparent scope and sequence of all the competencies they need to master, so that they can share in the ownership of their learning path and their progression along it.

Rather than teaching isolated subject content in the middle and high school years, teachers should instead creatively engage students in interdisciplinary STEM and Humanities projects. Arts integration helps students of all ages to access other modalities for learning. When creativity is embedded into an assignment, it can be leveraged as a powerful tool for differentiation, mastery, and assessment. Essential questions help to inspire wonder, critical thinking, deep literacy, synthesis, and creative problem solving. Flexible learning spaces allow freedom of movement for individual, small-group, and large-group work, as well as for presentations and public speaking. The campus should also be extended beyond the brick-and-mortar school walls to include local and global digital communities, with rich opportunities for further research, apprenticeship, and entrepreneurship in areas of interest.

Looming social challenges, on the one hand, and advances in education neuroscience and digital technology, on the other, provide both the urgent incentive and the means to overhaul K-12 education from the ground up. We know what works. What we need is the national will to finally redesign the K-12 educational system as a whole, so that students can better reach their 21st-century professional and personal goals and help to make a better tomorrow for all of us.


Mann, “Twelfth Annual Report to the Massachusetts Board of Education” (1848), in Mann, Lectures and Annual Reports on Education: 1796-1859.

It is possible, even likely, that Luther’s belief in the futility of Catholic ritual to reach the soul, the conscience, of the true inner person came about as a result of the interiority enabled by his own deep literacy.

Not all sectarian divisions within the Reformation were equally friendly to Enlightenment precepts: Anglicans were far more so than Calvinists, for example. Indeed, the encounter between Protestantism and liberalism has been, and remains, endlessly intricate and fascinating. But further analysis thereof would serve no good purpose here.



The post American K-12 Education in Thrall appeared first on The American Interest.

Published on November 19, 2018 10:39

Fighting the Geography of Discontent

The day after the 2016 presidential election, thousands of Americans took to the streets in cities across the country to express their indignation. Standing outside Trump Tower in Midtown Manhattan, protestors chanted “Not our President!” and “New York hates Trump!” But far away from the cities, many Americans were celebrating. Reflecting on Donald Trump’s victory, a truck driver from western Pennsylvania declared, “All the rurals are happy.” When asked why voters in Vigo County, Indiana (population 107,516) went for Trump, one long-time resident explained, “These are real people here, these are not New York City, Chicago, Los Angeles.”

Without a doubt, the 2016 election revealed a dramatic gap between two Americas—one based in large, diverse, thriving metropolitan regions; the other found in more homogeneous small towns and rural areas struggling under the weight of economic stagnation and social decline.

Two years later, the urban-rural divide has only deepened. In this most recent midterm election, rural America became redder, metro America became bluer, and suburban America emerged as the new political battleground. While Democrats failed to pick up a single rural district, Republicans lost their last urban district when incumbent Dan Donovan lost his Staten Island seat in New York.

While Trump’s rhetoric and policies have exacerbated the regional divide in American politics, the divide predates the Trump era and has its roots in the changing geography of economic growth. It has grown in size and importance to the point that it is now a core fact of American life, one that demands a systematic, concerted national response.

From Convergence to Divergence

As recently as 1980, the wage gap between regions was shrinking, and growth in rural areas and small towns led the country from recession to recovery in the 1990s. While the 20th-century economy largely facilitated regional convergence, as market forces closed employment, wage, and investment gaps between America’s communities, growth in today’s economy has produced stark regional disparities.

If the half century after the New Deal was one of regional convergence, future historians may well regard the current era as a time of divergence abetted by the dynamics of trade and technology.

The expansion of global trade—with its attendant import penetration and offshoring—decimated the industrial base of many of America’s mid-size cities and small towns and the regional supply chains that once connected the nation’s biggest, most prosperous metropolitan areas and non-metropolitan areas. While the rise of the information economy has boosted the returns to urban skills, it has diminished the importance of the resources and manual labor that non-metropolitan areas provided during the heyday of the manufacturing economy. And for that matter, high-tech manufacturers that still depend on supply chains to produce physical goods—and might once have sourced from the American “heartland”—have instead moved production and assembly functions overseas.

At the same time, technology exacerbated an even more pervasive (and polarizing) set of dynamics—those that scholars call “skill-biased technical change” and “agglomeration.”

In the case of skill-biased technical change, the initial spread of digital technology expanded the economic benefits awarded to highly educated and digitally savvy individuals while reducing those conferred on individuals without such skills. As a result, the places most plugged into the digital economy attracted highly skilled workers by offering the greatest economic return to their skillsets.

As to the agglomeration dynamic, this is the age-old tendency of economic actors to cluster together to partake in the benefits of proximity. In this regard, the concentration of highly skilled, often technical workers in certain locations triggers further concentration, as the presence of well-educated workers spawns new business establishments, which in turn attract more talented workers. A feedback effect between a highly skilled workforce on the one hand and companies operating at the economic frontier on the other has led to rising productivity in agglomeration hubs and substantial wage increases for the highly skilled workers clustered there.

While cost differentials between regions historically worked to attract workers and firms to less expensive places in need of economic revitalization, today workers and firms in the tech economy are drawn to expensive and already prosperous places. The concentration of highly skilled workers and frontier firms in the agglomeration economy is self-perpetuating: Human capital and economic activity flow to just a few “superstar” cities at the expense of the rest of the nation’s regions. Amazon’s recent decision to split its second headquarters between Long Island City in Queens, New York and Crystal City, Virginia, just outside Washington, D.C., offers a clear illustration of what urbanist Richard Florida calls “winner-take-all urbanism.”
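To see why such concentration tends to be self-perpetuating rather than self-correcting, consider a deliberately crude toy model; the growth rates and the “agglomeration pull” parameter below are invented for illustration, not estimates. Each region’s skilled workforce grows at a base rate plus a bonus that rises with its current share of the nation’s skilled workers, a rough stand-in for the spillovers and firm formation described above.

```python
# Toy model with hypothetical parameters: regions whose share of skilled
# workers is already larger grow slightly faster, so an initially small
# lead compounds into a widening gap.

def share_path(levels, base=0.02, pull=0.08, periods=100, report_every=25):
    """Track regional shares of skilled workers over time.

    levels: initial skilled-worker counts by region.
    base:   growth every region enjoys regardless of size.
    pull:   extra growth proportional to a region's current share
            (the agglomeration bonus).
    """
    snapshots = []
    for t in range(1, periods + 1):
        total = sum(levels)
        levels = [x * (1 + base + pull * (x / total)) for x in levels]
        if t % report_every == 0:
            total = sum(levels)
            snapshots.append([round(x / total, 3) for x in levels])
    return snapshots

# Two regions that start nearly even: the slightly larger hub pulls away,
# and the gap widens faster the larger it becomes.
for shares in share_path([52.0, 48.0]):
    print(shares)
```

The direction of the result, not the particular numbers, is the point: once proximity itself confers an advantage, the prosperous region’s lead feeds on itself, which is the “winner-take-all” pattern described above.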

The effects are clear: Cities like San Francisco, Boston, and New York with populations over 1 million have accounted for over 72 percent of the nation’s employment growth since the financial crisis, while many small towns and rural areas have yet to return to their pre-recession employment levels.

Meanwhile, public policy has done little to manage the effect these economic changes have had on mid-sized cities and rural communities. Indeed, taken as a whole, the policies of recent decades have almost certainly exacerbated regional inequality. The deregulation of transportation and finance, along with the failure to update and enforce antitrust policies, has worked against less densely populated areas, while ill-conceived zoning regulations in large cities have driven up housing costs, discouraging the movement of lower-skilled workers to rapidly growing areas. At the same time, no urgent digital-skills initiative or serious technology-oriented regional growth strategy has emerged to support tech employment and start-ups in places outside the coastal tech hubs. Our failure to craft effective, place-sensitive policies has allowed growth and opportunity to concentrate in fewer and fewer places while leaving others behind.

Neoliberal Neglect and the Political Consequences

The divergent outcomes of the modern economy challenge the neoliberal assumptions that for years have guided our understanding of regional development.

Historically, the overall trend of wage convergence has fed a belief that regions, just like individuals, are upwardly mobile—that the places lagging today may outperform prosperous regions tomorrow. Consequently, economists’ and policymakers’ optimistic faith in a level playing field across regional economies limited the demand for place-based policies that seek to ensure geographically balanced economic growth. While for years the facts stood on the side of economic theory, as regions converged economically and lagging places caught up to more prosperous communities, today’s troubling regional divergence warrants revising the spatially blind policymaking approach embraced in the United States.

Such a revision should reject the false assumption that adopting place-sensitive policies will necessarily come at the expense of economic efficiency. Economists have long argued that interventions to promote a more even distribution of economic activity might reduce the nation’s efficiency by diminishing the capacity of its most successful local agglomerations to drive national productivity. To a degree, concentration dynamics are good for the economy, and agglomeration has been associated with efficiency gains and aggregate welfare at the national level. But this fact has led to a misguided “agglomeration bias” in policymaking, which treats any intervention to reduce inequalities between regions as nationally inefficient. As economic geographer Ron Martin writes, “the new spatial economics leads too readily to the view that spatial agglomeration is the only or main game in town, and that is almost everywhere a nationally efficient, market-driven equilibrium outcome.”

In fact, there is evidence that our failure to think spatially has actually diminished aggregate economic output. The Organization for Economic Co-operation and Development argues that because lagging regions are not operating at their “production possibility frontier,” they constitute “unrealised growth potential.” Meanwhile, recent research from the Economic Innovation Group finds that “had distressed communities merely stagnated, the U.S. economy would have added one-third more jobs over the past 15 years than it actually did.” Such conclusions suggest that our failure to address the under-performance of lagging regions is working to depress national growth.

But the consequences of regional divergence are more than economic. Spatial divergence has helped spawn troubling political trends as well. As regions have pulled apart economically, they have also pulled apart politically.

The failure of both major political parties to respond to the voters most affected by economic change paved the way for a populist insurgency as voters in communities left behind by economic transformation embraced Donald Trump’s bid for the presidency.

While non-economic, cultural factors such as racial resentment and xenophobia may help account for the support Trump enjoyed among many Americans, regional inequality also played a role in his success. The economic divide fueling political discontent may not be between the poorest and richest members of society, but between prosperous and lagging regions. As economic geographer Andrés Rodríguez-Pose argues, “Populism took hold not among the poorest of the poor, but in a combination of poor regions and areas that had suffered long periods of decline . . . The challenge to the [political] system has come from a neglected source of inequality: territorial and not interpersonal.”

In this way, the populist politics produced by economic change, and the polarization that results, constitute an externality few economists anticipated but can no longer afford to ignore.

The Policy Imperative

Economic underutilization and the turn to a populist politics in the U.S. offer a rationale for serious federal action. Absent intervention, the disparities between places will only intensify to the detriment of our economy and democracy.

What kinds of efforts might begin to push back against divergence after years of neoliberal neglect? For too long economists and policymakers have espoused a false choice between spatially-blind policies that maximize efficiency and place-based policies that maximize equity. We believe it is possible to achieve both outcomes with a policy framework that respects the dynamism and efficiency of the agglomeration economy but seeks to extend it to more places.

We outline several proposals in this vein in our report “Countering the Geography of Discontent: Strategies for Left-Behind Places.” In order to renew local vitality—and, with it, convergence—it is essential to strengthen local communities’ access to the assets and conditions needed to cultivate the kind of economic activity that can lift up left-behind areas. This includes ensuring that places possess a skilled workforce prepared for the kinds of employment opportunities the digital economy has created, that residents have access to capital to start and grow businesses, and that they have access to reliable communication technologies.

Beyond helping places secure the basic assets and conditions needed to enable convergence, it will be essential to develop strategies for instigating new growth in the places left behind and creating opportunities for the people living there. While it may be inefficient to “save” every left-behind small city or rural community in the U.S., place-sensitive policies can target a few promising mid-size communities adjacent to other lagging towns and rural areas. Coordinated federal investment has the potential to promote more growth and hope across whole swaths of the country as ancillary business opportunities proliferate and small-town and rural residents begin to commute to the adjacent new growth centers. At the same time, restoring more geographic mobility to the labor market would help more people catch up to growth. The federal government should provide financial support for individuals who want to make long-distance moves to places that promise greater economic opportunity. Meanwhile, states and localities could encourage commuting to adjacent areas by offering a commuting subsidy that would support individuals who want to stay in their communities to live but not necessarily to work.

And there may well be a growing political appetite for such policies.

The turn away from cross-regional coalition building in recent years, as Republicans have solidified their support in more rural parts of the country and Democrats have done the same in more urban areas, has proven politically toxic.

A political strategy focused on mobilizing the geographic base has also proven to be a liability for Democrats, who are disadvantaged by an electoral map that favors less densely populated, rural parts of the country. While Democrats have largely campaigned on stemming rising inequality between individuals and economic classes, the party could benefit from paying greater attention to the spatial dimension of economic inequality. And while Democratic voters may be reluctant to support policies that benefit rural Americans, given urban resentment toward those communities, residents of superstar cities may soon find themselves advocating for more geographically balanced economic growth. Amazon’s decision to split its second headquarters between Long Island City and Crystal City will exacerbate the problems of traffic congestion and expensive housing that already plague those regions. Regional inequality, many are learning, exacerbates the inequality within superstar cities.

Republicans, unlike Democrats, have successfully tapped into the discontent fueled by increasing spatial divergence but have shown little interest in reckoning with the disruptive market forces that heighten such discontent. Even populists who at present benefit politically by harvesting rural anger should be aware that it will turn against them, too, if their performance does not measure up to their promises.

While implementing place-sensitive policies would be an undeniably heavy lift in today’s divided Congress, the failure of the two major political parties to capture a geographically diverse set of voters in recent years—let alone to craft an agenda that would knit regional economies together again—has exacerbated political polarization and eroded confidence in our democratic system.

Reflecting on her failed presidential bid, Hillary Clinton remarked, “I won the places that represent two-thirds of America’s gross domestic product. So I won the places that are optimistic, diverse, dynamic, moving forward.” While Clinton was not wrong to say she won the economy (the counties she won accounted for 64 percent of aggregate GDP in 2015), the comment reflects the disturbing political ramifications of regional divergence: the places left behind by economic change feel left behind by the political system too. The implication that inclusion in the new and changing economy is a prerequisite for democratic representation only works to embolden discontent and stoke populist resentment.

In an era of regional divergence, political campaigns will be tempted to limit their appeal to particular places, relying on the energy of their geographic base. But political expediency should not come before good governance. We cannot allow the divide between the winners and losers of economic change to become a permanent feature of American politics. Tackling this divide has the added benefit of boosting aggregate growth by tapping the underutilized economic potential of mid-size cities and rural areas. But even if there were no economic rationale for crafting place-sensitive policies to mitigate spatial divergence, ensuring that more Americans across regions can prosper in a changing economy is an obligation democratic governments have to their citizens.


The post Fighting the Geography of Discontent appeared first on The American Interest.

Published on November 19, 2018 08:00

November 16, 2018

A New Age of Reform

There are a lot of ways one can interpret the 2018 mid-term elections in the United States. It was a Blue Wave, as Democrats gained back the House with a net pickup of probably close to 40 seats, along with net gains of seven state governorships, five state legislatures and something like 250 state legislative seats. It was a successful defense for the Republicans, holding control of the Senate and even gaining a seat or two. It was a reinvigoration of democracy, as voter turnout rose from the anemic mid-term average (40 percent) to 49 percent.

It was a victory for women, who will comprise their largest share of the House of Representatives in history (23 percent)—but still much less than in Germany (31 percent) or France (39 percent). It was a sad day for Republican women, who now account for only a third of all women state legislators, and a smaller share of both the U.S. House and state legislatures than they did in the previous two years.

It furthered our geographic and political polarization, with Republican members of Congress plunging perilously close to extinction in the Northeast and the urban and suburban West, while still dominating the South and most rural and exurban districts. And yet in some ways, it showed signs of easing polarization, as many moderate Democrats won close races in swing Congressional districts. They included Abigail Spanberger (a former CIA operative) in Virginia’s Richmond suburbs, Elissa Slotkin (a former CIA analyst and Pentagon official) in southern Michigan, and Josh Harder (a young venture capitalist and college teacher) in California’s Central Valley.

Perhaps the most consequential election for the future, however, was in Maine’s Second Congressional District, where Democrat Jared Golden defeated Republican Representative Bruce Poliquin by less than one percentage point after trailing in the first round of voting by a similar margin. For the first time in American history, a U.S. congressional election was determined by “instant runoff,” using the system of ranked-choice voting that Maine had adopted in a 2016 voter initiative, and then reinstated (over the resistance of the state legislature and much of the state’s professional political class) in a June 2018 “people’s veto” initiative. Under ranked-choice voting, if no candidate gains a majority of first-preference votes, the weakest finishers are eliminated and their second (and if necessary, lower) preference votes are transferred to the remaining candidates until someone wins a majority. It was by that method that Golden, a 36-year-old Marine Corps veteran, came from behind to win yesterday.
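
For readers who want to see the mechanics, here is a minimal Python sketch of an instant-runoff count. It illustrates the general logic only, not Maine's certified tabulation rules, which also cover tie-breaking, batch elimination, and the reporting of exhausted ballots; the candidate names and ballots at the bottom are invented for the example.

    from collections import Counter

    def instant_runoff(ballots):
        """Return the winner of a ranked-choice (instant-runoff) count.

        Each ballot is a list of candidate names in order of preference.
        Simplified sketch: real tabulation rules also specify tie-breaking,
        batch elimination, and how exhausted ballots are reported.
        """
        remaining = {c for ballot in ballots for c in ballot}
        while True:
            # Count each ballot for its highest-ranked remaining candidate.
            tally = Counter()
            for ballot in ballots:
                for choice in ballot:
                    if choice in remaining:
                        tally[choice] += 1
                        break
            active = sum(tally.values())
            leader, leader_votes = tally.most_common(1)[0]
            # A majority of active (non-exhausted) ballots wins.
            if leader_votes * 2 > active or len(tally) == 1:
                return leader
            # Otherwise eliminate the weakest finisher and recount,
            # transferring those ballots to their next choices.
            remaining.discard(min(tally, key=tally.get))

    # Invented five-ballot example, not real election data:
    ballots = [["A"], ["A"], ["B"], ["B"], ["C", "B"]]
    print(instant_runoff(ballots))  # "C" is eliminated, its ballot transfers, and "B" wins 3 to 2
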

Established politicians (and especially Republicans) in Maine attempted everything imaginable in terms of legislative and judicial obstruction to try to keep ranked-choice voting (RCV) from being used to determine an election outcome. And they partially succeeded, getting the Maine Supreme Court to overturn RCV for use in general elections for state legislators and governor (while allowing it to stand for primary elections and general elections for the U.S. House and Senate). In a last stab against reform, the incumbent Poliquin raced to federal court to try to stop the instant-runoff count after the first-round tally of votes showed him leading. He claimed that Article 1, Section 2 of the U.S. Constitution sets plurality voting as the method of election for Congress, even though it does no such thing. U.S. District Judge Lance Walker declined Poliquin’s bid to stop the tabulation of the final preference votes, and Golden won.

With the first successful operation of ranked-choice voting in a nationally significant election—producing a majority winner out of a divided field—interest in ranked-choice voting is bound to grow. Already, it is employed in a number of American municipalities, and on November 6 the voters of Memphis turned back an initiative that would have repealed a ten-year-old voter decision to adopt ranked-choice voting for their elections. The Memphis City Council tried to get the city’s voters to reverse themselves, but by a vote of 62 percent, Memphis voters reaffirmed their decision. Next year, FairVote reports, the city will use RCV to ensure majority winners in their elections, “replacing the costly and low turnout runoffs that have long disenfranchised voters.” A rising tide of analytic, editorial and grassroots sentiment is gathering in favor of this electoral reform, foundation interest in it is growing, and new grassroots campaigns for it are gaining traction (particularly in Massachusetts).

Ranked-choice voting is a reform whose time is coming. Even a good many Americans with clear political and programmatic preferences are sick and tired of political polarization, gridlock, and maximalist bashing of one party by the other in Congress and the state legislatures. Everywhere I have spoken in the last two years about our mounting democratic difficulties and the possibilities for reform, I have encountered a similar reaction (which has been confirmed by a recent innovative opinion poll). Once people hear the logic behind ranked-choice voting—that it enables people to vote for a third party or independent candidate without fear of wasting their vote, that it ensures that every vote counts, and that the victor will be the candidate who best appeals to a majority of the electorate—people like the notion a lot. They particularly like the idea that it encourages moderation and bridging of our political divides, rather than simply appealing to a hardened partisan base. From Maine to Memphis, we see that if people are offered the choice, RCV is a reform they will embrace—and defend.

In fact, there is good reason to believe that we are entering a new age of political reform in America. As the remarkable grassroots reform group Represent.US has shown, almost every political reform measure that was on the ballot last week passed. Four states—Colorado, Michigan, Missouri, and Utah—passed voter initiatives to eliminate partisan gerrymandering of legislative districts (giving the power to draw district boundaries to independent, nonpartisan commissions). As USC’s Schwarzenegger Institute explains, given that Ohio passed a similar measure in its May primary election and California (and a few other states) had done so earlier, the 2022 redistricting process will now see roughly a third of all House seats drawn by independent commissions or nonpartisan experts. That’s a good start on ridding American democracy of one of its most disgraceful scourges.

But there were other victories as well—in fact, according to Represent.US, 22 of them. Baltimore, Denver, Phoenix, and New York City passed campaign finance reform measures that will promote transparency and establish or expand public funding of campaigns. Michigan and Nevada passed measures to make it easier for people to register and vote. New Mexico voters created an independent commission to investigate and adjudicate allegations of political corruption. Only in South Dakota did voters reject a reform measure, which would have tightened the state’s lobbying and campaign finance laws and also barred the state legislature from unilaterally modifying voter initiatives, as it did in 2016 when it repealed a similar initiative to strengthen ethics provisions.

As we enter the Age of Reform, creative and exciting new ideas are coming forward. In a groundbreaking, two-part editorial following last week’s election, the New York Times floated the idea of significantly enlarging the U.S. House of Representatives. The current size of 435 was set in 1911, when the average member represented about 200,000 people; today each member represents nearly four times that many. The Times proposes to add 158 new House seats—enough to make the House more competitive, representative, and hence democratic, while not making it so huge that it would be unworkable.

The Times also proposes electing Congress from multi-member districts, in which ranked-choice voting would be combined with proportional representation (in essence, the Fair Representation Act introduced by Rep. Donald Beyer and endorsed by political scientists like Lee Drutman). This system would be manifestly fairer—restoring significant Republican representation in Northeastern states and significant Democratic representation in predominantly white Southern areas. It would thus also make it much harder for cynical politicians to effectively gerrymander the opposition out of fair representation—as the Times observes, every district with three or more members would almost certainly have at least one member from each of the two major parties. The problem is that the two major parties would have to agree for Congress to enact it nationally (or even allow the states to do so individually). And there is some risk that over time it could fragment the party system, possibly making our polarization worse. In short, it’s an intriguing—but more daring—idea.

While we imagine more ambitious reforms, we can at least get moving on safer, more incremental measures that are highly likely to move our politics in a more accommodating direction. Nothing offers a better near-term prospect of that than ranked-choice voting.

But not every innovation to dilute the partisan poison in our politics has to come from legislation. In advance of the 2018 election cycle, another former Marine, Rye Barcott, got an idea. Maybe our younger veterans could help improve the tone of our politics by doing in politics what they have done in Iraq and Afghanistan: put country over party. Maybe one small contributing factor in our polarization is that the proportion of military veterans in Congress has plummeted from three-quarters in the late 1960s to about 20 percent today. Maybe it’s time for a bipartisan effort to support candidates from this new generation of veterans who will pledge to “work in a cross-partisan way to create a more effective and less polarized government.” From this kind of thinking was born the new bipartisan campaign group With Honor, which supported 40 military veterans running for the House last week. About half of them won, more or less evenly divided between Republicans and Democrats. The victors included Golden and Dan Crenshaw—a former Navy SEAL who lost an eye in Afghanistan. After being ridiculed for his eyepatch by Saturday Night Live star Pete Davidson, Crenshaw went on SNL last weekend with a Veterans Day message of national unity and a gracious willingness to accept Davidson’s apology. A Republican, he won over many liberals.

Look for Golden and Crenshaw, alongside other With Honor veterans, and former CIA officers Spanberger and Slotkin, to work to change the tone on Capitol Hill. They know from experience the price we pay when we put party over country.


The post A New Age of Reform appeared first on The American Interest.

Published on November 16, 2018 13:31
