Adam Thierer's Blog
February 18, 2020
My Submission for the DOJ Workshop on Section 230
Below is a link to my submission for tomorrow’s Department of Justice workshop, “Section 230 – Nurturing Innovation or Fostering Unaccountability?”. I will be on panel three, “Imagining the Alternative.” From my opening:
Section 230 of the Communications Decency Act is a crucial part of the U.S.’s regulatory environment. The principles of individual responsibility embodied in Section 230 freed U.S. entrepreneurs to become the world’s best at developing innovative user-to-user platforms. Some people, including some people in industries disrupted by this innovation, are now calling to change Section 230. But there is little evidence that changing Section 230 would improve competition or innovation to the benefit of consumers. And there are good reasons to believe that increasing liability would hinder future competition and innovation and could ultimately harm consumers on balance. Thus, any proposed changes to Section 230 must be evaluated against seven important principles to ensure that the U.S. maintains a regulatory environment best suited to generate widespread human prosperity.
Neil Chilson – Comments for DOJ 230 Workshop 2-18-2020
February 17, 2020
Building in Accountability for Algorithmic Bias
– Coauthored with Anna Parsons
“Algorithms are only as good as the data that gets packed into them,” said Democratic presidential hopeful Elizabeth Warren. “And if a lot of discriminatory data gets packed in, if that’s how the world works, and the algorithm is doing nothing but sucking out information about how the world works, then the discrimination is perpetuated.”
Warren’s critique of algorithmic bias reflects a growing concern surrounding our interaction with algorithms every day.
Algorithms leverage big data sets to make or influence decisions ranging from movie recommendations to creditworthiness. Before algorithms, humans made decisions in advertising, shopping, criminal sentencing, and hiring. Legislative concerns center on bias: the capacity for algorithms to perpetuate gender and racial stereotypes. Nevertheless, current approaches to regulating artificial intelligence (AI) and algorithms are misguided.
The European Union enacted stringent data protection rules requiring companies to explain publicly how their algorithms make decisions. Similarly, the US Congress has introduced the Algorithmic Accountability Act, which would regulate how companies build their algorithms. These actions reflect the two most common approaches to addressing algorithmic bias: transparency and disclosure. In effect, such regulations require companies to publicly disclose the source code of their algorithms and explain how they make decisions. Unfortunately, this strategy would fail to mitigate AI bias because it regulates the business model and inner workings of algorithms rather than holding companies accountable for outcomes.
Research shows that machines can treat similarly situated people and objects differently. Algorithms risk reproducing or even amplifying human biases in certain cases. For example, automated hiring systems make decisions at a faster pace and larger scale than their human counterparts, making any bias more pronounced.
However, research has also shown that AI can be a helpful tool for improving social outcomes and gender equality. For example, Disney uses AI to help identify and correct human biases by analyzing the output of its algorithms. Its machine learning tool allows the company to compare the number of male and female characters in its movie scripts, as well as other factors such as the number of speaking lines for characters based on their gender, race, or disability.
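Disney's actual tool is proprietary, but the underlying idea of auditing an algorithm's output is straightforward to illustrate. The toy sketch below tallies speaking lines by a character attribute such as gender; the script excerpt, character names, and attribute labels are all hypothetical, assumed only for illustration:

```python
from collections import Counter

def speaking_line_counts(script_lines, character_attrs):
    """Tally dialogue lines by a character attribute (e.g., gender).

    script_lines: list of (character_name, dialogue) tuples.
    character_attrs: dict mapping character name -> attribute value.
    """
    counts = Counter()
    for name, _dialogue in script_lines:
        counts[character_attrs.get(name, "unknown")] += 1
    return dict(counts)

# Hypothetical script excerpt and attribute labels:
script = [
    ("ANNA", "Do you want to build a snowman?"),
    ("ELSA", "Go away, Anna."),
    ("OLAF", "Hi, I'm Olaf!"),
    ("ANNA", "Okay, bye."),
]
attrs = {"ANNA": "female", "ELSA": "female", "OLAF": "male"}
print(speaking_line_counts(script, attrs))  # {'female': 3, 'male': 1}
```

The point is that the audit operates on the system's output, not its inner workings: the same tally could be run against any script-generation or casting process without inspecting how it works internally.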
AI and algorithms have the potential to increase social and economic progress. Therefore, policy makers should avoid broad regulatory requirements and focus on guidelines and policies that address harms in specific contexts. For example, algorithms making hiring decisions should be treated differently than algorithms that produce book recommendations.
Promoting algorithmic accountability is one targeted way to mitigate problems with bias. Best practices should include a review process to ensure the algorithm is performing its intended job.
Furthermore, laws applying to human decisions must also apply to algorithmic decisions. Employers must comply with anti-discrimination laws in hiring; the same principle should apply to the algorithms they use.
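To make this concrete, here is a minimal sketch of an outcome-based check a hiring-algorithm operator might run, using the EEOC's "four-fifths" rule of thumb for adverse impact (a group's selection rate below 80% of the highest group's rate is treated as evidence of disparate impact). The decision data and group labels are hypothetical:

```python
def selection_rates(decisions):
    """Selection rate per group. decisions: list of (group, hired) pairs."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    """Adverse-impact screen: the lowest group's selection rate should be
    at least 80% of the highest group's (the EEOC four-fifths rule)."""
    rates = selection_rates(decisions)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical outcomes from an automated screening tool:
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(selection_rates(decisions))     # {'group_a': 0.5, 'group_b': 0.25}
print(passes_four_fifths(decisions))  # False: 0.25 < 0.8 * 0.5
```

Notice that the check needs only the algorithm's decisions, not its source code, which is exactly the accountability-over-transparency distinction drawn above.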
In contrast, requiring organizations to explain how their algorithms work would prevent companies from using entire categories of algorithms. For example, machine learning algorithms construct their own decision-making systems from databases of characteristics without exposing the reasoning behind their decisions. By focusing on accountability for outcomes, operators remain free to choose the best methods for ensuring their algorithms do not further bias, improving the public’s confidence in their systems.
Transparency and explanations have other positive uses. For example, there is a strong public interest in requiring transparency in the criminal justice system. The government, unlike a private company, has constitutional obligations to be transparent. Thus, transparency requirements for the criminal justice system through risk assessments can help prevent abuses of civil rights.
The Trump administration recently released a new policy framework for artificial intelligence. It offers guidance for emerging technologies that both supports new innovations and addresses concerns about disruptive technological change. This is a positive step toward finding sensible and flexible solutions to the AI governance challenge. Concerns about algorithmic bias are legitimate. But the debate should center on a nuanced, targeted approach to regulation and avoid treating algorithmic disclosure as a cure. A regulatory approach centered on transparency requirements could do more harm than good. Instead, an approach that emphasizes accountability ensures organizations use AI and algorithms responsibly to further economic growth and social equality.
February 4, 2020
Congress as a Non-Actor in Tech Policy
Congress has become a less important player in the field of technology policy. Why did that happen, and what are the ramifications for technological governance efforts going forward?
I’ve spent almost 30 years covering technology policy. There was a time in my life when I spent almost all my time as a policy analyst preoccupied with developments in the federal legislative arena. I lived in the trenches of Capitol Hill and interacted with lawmakers and their staff morning, noon, and night.
In recent years, however, I have spent very little time focused on the Legislative Branch because it has effectively become a non-actor on technology policy. It is not that congressional lawmakers stopped caring about tech policy. Interest actually remains quite high—perhaps higher than ever before. Congress also continues to introduce lots of bills, host plenty of hearings, and issue mountains of press releases related to tech policy issues.
Nonetheless, all that interest and activity has not really translated into much important legislation. While it is hard to track tech-oriented legislative trends statistically because of the complication of defining “technology policy” over time, judged by substantive output, Congress has largely checked out of technological policymaking.
Think about digital privacy. How many years now have people been predicting a comprehensive “baseline” privacy bill would pass in each legislative session? It never happens. Perhaps it will this year, but if you would like to place a wager on it, I will take that bet.
Speaking of bets, for several years now, I have been wagering with friends that Congress will not pass federal legislation creating a national autonomous vehicles framework. Each session I win that bet. Keep in mind, a framework for driverless cars is far less controversial than privacy policy. Still, nothing substantive ever gets done in Congress.
The same goes for cybersecurity: lots of calls for big measures, but no final action. Folks are now also telling me to expect a big artificial intelligence bill one day soon. I sincerely doubt it. Again, I’ll bet on it if you’d like to lose some money!
Let me be clear: there may actually be some very good reasons why Congress should implement a national framework for privacy, driverless cars, and some AI policy issues. But all the wishful thinking in the world will not magically make it happen.
We need to entertain the possibility that Congress has largely checked out of the world of substantive tech policymaking and isn’t coming back. We may get a few big surprise measures here and there, as we did with clumsily-drafted FOSTA-SESTA. If anything, it is more likely that we instead see misguided legislative riders attached to non-germane measures during late night negotiations. But even haphazard efforts like those will be extremely rare. The days of Congress passing big bills like the Telecom Act of 1996 or the Cable Act of 1992 appear mostly over.
Why Congress Is No Longer the Major Player It Once Was
I think there are probably many obvious explanations for why Congress has checked out of tech policymaking, but let me try to boil it down to a couple of interrelated trends:
The “pacing problem” has intensified: The pacing problem refers to the inability of legal or regulatory regimes to adjust to the quickening pace of technological change. There are just more emerging technologies than ever, and they are evolving faster than ever, too. “New technologies that used to have two-year cycle times now can become obsolete in six months, and the pace of change is not slowing,” says consulting firm Deloitte.
A growing multiplicity of technologies means more tech policy issues to cover. And those issues grow more complicated each year. As soon as lawmakers wrap their heads around one technology (if they do at all), another innovation pops up that complicates things further or crowds out their attention.
Technological convergence and blurring governance boundaries: Technology policymaking increasingly involves metaphysical questions about the underlying nature of things. For example, what is a “phone,” a “medical device,” or an “aerial vehicle”? These things used to be relatively easy to define and had well-understood meanings in federal statutes and regulations. But those concepts have evolved rapidly in an age of widespread technological convergence and rapid-fire “combinatorial innovation,” with new technologies multiplying and building on top of one another in symbiotic fashion. Basically, almost as soon as new tech laws or regulations are enacted, they are confronted with new marketplace realities and technological changes that call into question legal classifications or regulatory distinctions.
For example, today’s smartphones combine dozens of different functions that were previously quite distinct, including health tracking capabilities, mobile payment systems, and video distribution, all of which remain heavily regulated by an assortment of federal laws and agencies. But the convergence of all these capabilities in a single device that we can carry in our pockets creates massive governance challenges, not only for archaic legislative frameworks, but even for newer semantic distinctions that may seem current one moment only to be obliterated the next. These factors also make it harder to figure out who in Congress should be driving policy because technological convergence blurs previously distinct governance categories among legislative committees and the laws they have crafted.
Legislative dysfunctionalism: Policymaking processes move slowly by design. Constitutional constraints and other legal requirements demand it. But things move even slower today because of what Jonathan Rauch calls “demosclerosis,” or the “government’s progressive loss of the ability to adapt.” “[A]s layer is dropped upon layer,” he argued, “the accumulated mass becomes gradually less rational and less flexible.”
Inadequate resources are also part of the problem: Congress faces a complex, rapidly evolving set of issues but devotes only limited resources to technical staff or studies to better understand these developments. This, combined with the factors cited above, has led to a never-ending “competency trap,” with lawmakers and their staffs seemingly always one step behind technological developments and societal demands or expectations.
Meanwhile, partisanship increases and the workload on many other fronts grows alongside it. There’s just a lot more on Congress’s plate than ever before. Plus, tech policy matters seemingly always take a back seat to tax, budget, entitlements, defense, and other issues.
Many people hope that boosting technology assessment efforts might help correct these problems. Perhaps better technical advice could help lawmakers ask less ignorant questions at tech-oriented congressional hearings, which have become showcases for the staggering lack of congressional understanding of modern technologies. But just adding new technology assessment capacity, such as in the form of a revived Office of Technology Assessment, won’t likely move the needle much in terms of actual legislative output. More serious structural reforms will be required.
Globalization: Many modern technologies “are truly global and call out for policy approaches that do not respect traditional national borders,” note former NTIA officials Lawrence E. Strickling and Jonah Force Hill. Congress only has so much control over technologies that defy national boundaries, further complicating tech governance questions.
Yet, one would think that when America’s global competitive advantage was on the line, Congress would have greater reason to assert itself and craft frameworks to ensure US firms are not disadvantaged by a lack of policy clarity. That has not proven to be the case, however. Congressional lawmakers do plenty of huffing and puffing about the tech governance choices made by Europe, China, and other governments, but they then leave the field wide open to them (as well as lower levels of government) to craft policies that govern national markets throughout the United States.
Endless delegation: Speaking of passing the buck, Congress has been doing it for decades on tech policy by delegating massive and quite amorphous authority to technocratic administrative agencies. Over the past half century, scholars from various disciplines—economics, law, political science, history, and others—have explored the growth of what has been alternatively called the “interest group society,” “receivership by regulation,” “iron triangles,” and “client politics.” This literature identifies the way Congress has increasingly abdicated its constitutional role as lawmaker by shifting hard policy questions to regulatory agencies and then hoping that bureaucrats could figure out all the answers.
Delegation is even more common for the most technical policy matters, and that trend has only accelerated in recent years as the complexity increases and overwhelms lawmakers and their staff.
Ramifications for Tech Governance Going Forward
If Congress remains largely incapable of ever getting the ball over the goal line on important tech policy matters, what are some of the ramifications? There are many, but I will identify just a few of the most obvious ones:
More tech-oriented legislative activity will shift to the states: In fact, it already has. For each of the tech policy issues I identified earlier (privacy, driverless cars, cybersecurity, and even some AI-related issues like facial recognition), states are—for better or worse—picking up the slack. We should expect that trend to accelerate. This will create an increasingly confusing patchwork of policies that will potentially raise serious barriers to entry and innovation. Nonetheless, I can’t see this trend reversing anytime soon. Perhaps Congress will finally act on privacy or driverless cars legislation if for no other reason than to preempt a crazy-quilt of contradictory policies. Of course, that’s what people have been predicting for years, and it never happens.
“Soft law” becomes the dominant governance force for tech: Again, it already has. Soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. Soft law can include things like multi-stakeholder processes, industry best practices and standards, agency workshops and guidance documents, and educational efforts. But that just scratches the surface of soft law mechanisms. For better or worse, soft law is becoming the dominant modus operandi for most modern technological governance. We can expect that trend to accelerate to fill the governance gap left by congressional inaction. For example, we don’t have any formal “rules of the road” for driverless cars, but we do now have four iterations of Department of Transportation guidance on driverless cars. Version 4.0 of the DOT guidance for automated vehicles was just released this month. Expect the “soft law-ization” of technological governance to expand considerably in coming years because it is really the only way for agencies to cope with the pacing problem and those metaphysical issues identified earlier. Because soft law is not boxed in by rigid preconceptions of what a particular technology or technological process is or entails, it is often better able to address new marketplace realities. Soft law can adapt as technologies do. With Congress out of the picture, it will have to.
The congressional tech policy death spiral accelerates. Some may think (or at least hope) that the situation described here can’t get any worse. To the contrary, it can get radically worse. With our politics increasingly infected with bitter partisanship and rancor, what are the chances that lawmakers can work together to craft comprehensive tech policy measures? I’d say the odds are approaching zero. The Cable Act, the Telecom Act (and Sec. 230), and the Internet Tax Freedom Act all enjoyed broad, bipartisan support when they passed in the 1990s. People reached across the aisle to get things done. It didn’t always work, and sometimes it resulted in misguided policies (like the Communications Decency Act’s provisions trying to censor internet “indecency”). But bipartisan lawmaking scenarios like those seem almost unthinkable now. To the extent many lawmakers even show up at tech-oriented congressional hearings anymore, it is mostly to score points in front of the cameras for Team Red or Team Blue back home. Serious legislative oversight and policymaking is dead; it’s mostly just show-trials and media circuses at this point.
Should I Care about Congress Anymore?
If you believe this miserable thesis is correct but continue to focus on the Legislative Branch for a living, you may be asking yourself: Am I wasting all my time here? Not necessarily. Congress is still actively interested in tech policy matters. For those who hope to limit the damage Congress might do by hastily passing ham-handed, crisis-driven policy measures, your efforts in the trenches will continue to be important in curbing the worst instincts of some lawmakers. In many instances, preserving a perpetual stalemate may go down as a tremendous victory.
For example, as the debate over Section 230 intensifies—with politicians of all stripes looking to gut the most important of all Internet freedom policies—it is vital that smart people work with lawmakers and their staff to beat back misguided and destructive measures. Hopefully this becomes another instance of legislative gridlock winning out! And I think it will.
More realistically, your role will not be to stop Congress from doing insanely destructive things; it will be to stop them from saying those things. In fact, that seems to be what a lot of people who work with Congress already do today. When I chat with various inside-the-Beltway policy advocates and industry reps today, they usually acknowledge that the prospects for actual legislation on any given issue are quite slim. They will, of course, continue to try to work with lawmakers, their committees, and their staff to either advance or stop legislative measures. Yet, they all seem to accept the utter futility of it all.
Why do they persist? Most obviously, they want to at least preserve the legislative stalemate and not cede the ground to their enemies who might succeed in getting lawmakers to do something if only one side was communicating with Congress.
But the other thing these policy advocates are hoping to achieve is better messaging. Regulatory advocates want lawmakers to use the power of the bully pulpit to put pressure on various people or groups to change behavior, even in the absence of any legislative action. By contrast, many in industry want to make sure that their technologies are understood and not endlessly demonized. Bad press isn’t good for business, even if all the congressional threats never result in final legislation. Also, those defending innovation more generally will want to make sure that even if lawmakers aren’t making any actual laws, they still better understand and appreciate the importance of new technological capabilities for improving human welfare.
Those are all good reasons not to give up your legislative advocacy. For some of us, however, the personal cost-benefit analysis just doesn’t add up. Our focus has shifted to where the real action is at: federal administrative agencies, statehouses and state administrative agencies, and the growing world of multi-stakeholder governance and other soft law efforts. Congress has checked out, but technological governance lives on in many other forms and venues.
Vocational Programs Won’t Hit the Mark in an Ever-changing Job Market
–Coauthored with Mercatus MA Fellow Jessie McBirney
Flat standardized test scores, low college completion rates, and rising student debt have led many to question the bachelor’s degree as the universal ticket to the middle class. Now, bureaucrats are turning to the job market for new ideas. The result is a renewed enthusiasm for Career and Technical Education (CTE), which aims to “prepare students for success in the workforce.” Every high school student stands to benefit from a fun, rigorous, skills-based class, but the latest reauthorization of the Carl D. Perkins Act, which governs CTE at the federal level, betrays a faulty economic theory behind the initiative.
Modern CTE is more than a rebranding of yesterday’s vocational programs, which earned a reputation as “dumping grounds” for struggling students and, unfortunately, minorities. Today, CTE classes aim to be academically rigorous and cover career pathways ranging from manufacturing to Information Technology and STEM (science, technology, engineering, and mathematics). Most high school CTE occurs at traditional public schools, where students take a few career-specific classes alongside their core requirements.
Alongside growing skepticism toward “college for everyone,” researchers have identified a “skills gap” between what employers want and the skills job-seekers offer. STEM training is a particularly trendy solution: Trump recently signed a presidential memo expanding the National Science Foundation’s STEM education initiatives, and Virginia established a STEM Education Commission last year. With its many pathways, local customizability, and promise of immediate income upon graduation, CTE feels like a practical answer for young people and the economy.
As recent changes to the Perkins Act suggest, “alignment” between CTE courses and labor markets is a growing concern. Now, programs applying for federal funds must conduct a “local needs assessment” to ensure their course offerings align with local labor markets. One recent study attempted an early measure of this alignment in several metropolitan areas. Findings are mixed, but the quest for alignment itself shows how hope in career training programs has exceeded good economic sense.
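Studies of this kind do not share a standard formula, but one simple way to operationalize "alignment" is to compare the distribution of CTE enrollments against the distribution of local job openings across career clusters. The sketch below is a toy illustration, not any study's actual methodology, and the cluster names and shares are hypothetical:

```python
def alignment_index(cte_shares, job_shares):
    """1 minus half the total absolute difference between CTE enrollment
    shares and local job-opening shares across career clusters.
    Returns 1.0 for identical distributions, 0.0 for disjoint ones."""
    clusters = set(cte_shares) | set(job_shares)
    dissimilarity = sum(
        abs(cte_shares.get(c, 0.0) - job_shares.get(c, 0.0)) for c in clusters
    ) / 2
    return 1.0 - dissimilarity

# Hypothetical shares for one metro area:
cte = {"health": 0.40, "IT": 0.10, "manufacturing": 0.50}
jobs = {"health": 0.30, "IT": 0.40, "manufacturing": 0.30}
print(round(alignment_index(cte, jobs), 3))  # 0.7
```

Even this toy metric exposes the deeper problem the next sections raise: it scores alignment against today's job-opening shares, a snapshot that a dynamic labor market can render obsolete before a cohort graduates.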
Consider some of the phrases found in states’ CTE mission statements:
“…to prepare students for in-demand, high-skilled, and high-waged jobs.” (MD)
“…relevant experiences leading to purposeful and economically viable careers.” (AZ)
“…meeting the commonwealth’s need for well-trained workers.” (VA)
The desire to parse out an economy and plan accordingly is not new, but there are limits to predicting in-demand skills and future jobs. Friedrich Hayek conceives of the market not as a math problem to deconstruct but as a “discovery procedure.” The market changes, rapidly and unexpectedly, based on information identified only along the way. It is the cumulative and dynamic result of thousands of individual plans coordinating through prices and wages. Thus, a central authority could never collect enough information to make accurate predictions about market outcomes. Aiming at a particular social or economic goal, such as fixing a list of gaps in the labor market, will likely come at the expense of outcomes we did not even consider.
For this reason, Hayek explains in his Constitution of Liberty, flourishing societies must be economically and politically free, and public education should be offered to the extent that it nurtures the independent citizens that a free society requires. Education oriented toward a particular vocational end shortchanges the student. Hayek explains:
“We are not educating people for a free society if we train technicians who expect to be ‘used,’ who are incapable of finding their proper niche themselves … All that a free society has to offer is an opportunity of searching for a suitable position, with all the attendant risk and uncertainty which such a search for a market for one’s gifts must involve.”
(Hayek 1960, 144-45).
Picking training goals for a student body is no guarantee of long-term success, and it may block even better outcomes. It is no accident that Hayek does not count increased earning potential or national economic strength among the reasons to publicly subsidize education. Instead, he favors general education and literacy for social cohesion and democratic participation. Rising wages for high-demand skills should entice students into undersupplied job markets without extra encouragement from school programs.
Hayek is not alone in his insistence that individuals are in the best position to choose and experiment with their professions. In The Wealth of Nations, Adam Smith recognizes,
“In a society where things were left to follow their natural course, where there was perfect liberty, and where every man was perfectly free both to choose what occupation he thought proper, and to change it as often as he thought proper […] every man’s interest would prompt him to seek the advantageous, and to shun the disadvantageous employment.”
(Smith 1776, 151)
Rather than encourage programs to narrowly direct CTE training towards local “needs,” the federal government should focus on clearing barriers to entry into those professions. It can preempt state occupational licensing laws for opticians and interior designers, among other professions. States can follow the lead of Arizona and recognize out-of-state occupational licenses.
It is worth noting that CTE advocates are not attempting to plan the American economy one web-design class at a time. High schoolers earn only 12 percent of their credits from CTE, and some of the most prominent proponents recognize the challenges a changing economy poses. But the language we use will shape our goals over time. Requiring districts to consider “labor market alignment” in their annual CTE budgets is exactly the kind of choosing among different kinds of education that Hayek cautions against. Today’s alignment can be tomorrow’s stagnation.
This is not to deny the academic and personal benefits of taking CTE classes. Teenagers who do are more likely to graduate high school, to get a job, and to earn higher wages right away. Other studies suggest non-academic benefits like increased attendance. It makes intuitive sense that students would welcome non-traditional learning opportunities to break up their daily studies, and that their high school experience would be better for it. But by insisting CTE programs be training for certain job categories, we may be selling students short.
January 21, 2020
Podcast on Driverless Cars, AI & “Soft Law” Governance
Here’s a new Federalist Society Regulatory Transparency “Tech Roundup” podcast about driverless cars, artificial intelligence and the growth of “soft law” governance for both. The 34-minute podcast features a conversation between Caleb Watney and me about new Trump Administration AI guidelines as well as the Department of Transportation’s new “Version 4.0” guidance for automated vehicles.
This podcast builds on my recent essay, “Trump’s AI Framework & the Future of Emerging Tech Governance” as well as an earlier law review article, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future.”
January 8, 2020
Trump’s AI Framework & the Future of Emerging Tech Governance
This week, the Trump Administration proposed a new policy framework for artificial intelligence (AI) technologies that attempts to balance the need for continued innovation with a set of principles to address concerns about new AI services and applications. This represents an important moment in the history of emerging technology governance as it creates a policy vision for AI that is generally consistent with earlier innovation governance frameworks established by previous administrations.
Generally speaking, the Trump governance vision for AI encourages regulatory humility and patience in the face of an uncertain technological future. However, the framework also endorses a combination of “hard” and “soft” law mechanisms to address policy concerns that have already been raised about developing or predicted AI innovations.
AI promises to revolutionize almost every sector of the economy and can potentially benefit our lives in numerous ways. But AI applications also raise a number of policy concerns, specifically regarding safety or fairness. On the safety front, for example, some are concerned about the AI systems that control drones, driverless cars, robots, and other autonomous systems. When it comes to fairness considerations, critics worry about “bias” in algorithmic systems that could deny people jobs, loans, or health care, among other things.
These concerns deserve serious consideration and some level of policy guidance or else the public may never come to trust AI systems, especially if the worst of those fears materialize as AI technologies spread. But how policy is formulated and imposed matters profoundly. A heavy-handed, top-down regulatory regime could undermine AI’s potential to improve lives and strengthen the economy. Accordingly, a flexible governance framework is needed and the administration’s new guidelines for AI regulation do a reasonably good job striking that balance.
Background
Last February, the White House issued Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence.” The Order announced the creation of the “American AI Initiative,” an effort to “focus the resources of the Federal government to develop AI.” It prioritized investments in AI-focused research and development (R&D), building a workforce ready for the AI era, international engagement on AI priorities, and the establishment of governance standards for AI systems to “help Federal regulatory agencies develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies.”
Regarding that last objective, Order 13859 required the Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) to develop a framework and set of principles for federal agencies to follow when considering the development of regulatory and non‑regulatory approaches for AI. Importantly, the Order also specified that the framework should seek to “advance American innovation” and “reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.”
That resulted in the memorandum sent to heads of federal departments and agencies this week entitled, “Guidance for Regulation of Artificial Intelligence Applications” (hereinafter AI Guidance). The draft version of the AI Guidance specifies that “federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” More specifically:
“Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits. Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace.”
But the AI Guidance is certainly not a call for comprehensive deregulation or the abandonment of all AI federal oversight. The memorandum’s very title reflects an understanding that existing laws and agency rules will continue to play a role in guiding the development of AI, machine-learning, and autonomous systems.
Accordingly, and consistent with past executive orders and OMB regulatory guidance documents for federal agencies, the AI Guidance establishes a set of ten principles that agencies must take into account when considering AI policy:
Public trust in AI: Requiring that “the government’s regulatory and non-regulatory approaches to AI promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI.”
Public participation: Agencies must provide “ample opportunities for the public to provide information and participate in all stages of the rulemaking process.”
Scientific integrity and information quality: Agencies should “leverage scientific and technical information and processes” to build trust and ensure data quality and transparency.
Risk assessment and management: Acknowledging that “all activities involve tradeoffs,” the AI Guidance requires that “a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.”
Benefits and costs: As part of those risk assessments, agencies must “carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, as well as comparison to the degree of risk tolerated in other existing ones.”
Flexibility: OMB encourages agencies to “pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.”
Fairness and non-discrimination: Acknowledging that AI applications can “in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI,” the AI Guidance requires agencies to consider “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue.”
Disclosure and transparency: Agencies are encouraged to consider how greater “transparency and disclosure can increase public trust and confidence in AI applications.”
Safety and security: Agencies are required to “promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.”
Interagency coordination: The guidance makes it clear that a “coherent and whole-of-government approach to AI oversight requires interagency coordination.”
Soft Law Ascends
Importantly, the AI Guidance also encourages agencies to be open to “non-regulatory approaches to AI” governance and specifies three particular models:
Sector-specific policy guidance or frameworks: OSTP writes that “agencies should consider using any existing statutory authority to issue non-regulatory policy statements, guidance, or testing and deployment frameworks, as a means of encouraging AI innovation in that sector.” The memorandum also notes that this can include “work done in collaboration with industry, such as development of playbooks and voluntary incentive frameworks.”
Pilot programs and experiments: The document encourages the use of “pilot programs that provide safe harbors for specific AI applications” which “could produce useful data to inform future rulemaking and non-regulatory approaches.”
Voluntary consensus standards: Before regulating, the AI Guidance encourages agencies to consider how voluntary consensus standards, assessment programs, and compliance programs might be used to address policy concerns.
These represent “soft law” approaches to technological governance, and they are becoming all the rage in technology policy discussions today. Soft law mechanisms are informal, collaborative, and constantly evolving governance efforts. While not formally binding like “hard law” rules and regulations, soft law efforts nonetheless create a set of expectations about sensible development and use of technologies. Soft law can include multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more.
Soft law has become the dominant governance approach for emerging technologies because it is often better able to address the “pacing problem,” which refers to the growing gap between the rate of technological innovation and policymakers’ ability to keep up with it. As I have previously noted, the pacing problem is “becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”
Not only do traditional legislative and regulatory hard law systems struggle to keep up with fast-paced technological changes, but oftentimes those older mechanisms are just too rigid and unsuited for new sectors and developments. That is definitely the case for AI, which is multi-dimensional in nature and even defies easy definition. Soft law offers a more flexible, adaptive approach to learning on the fly and cobbling together principles and policies that can address new policy concerns as they develop in specific contexts, without derailing potentially important innovations.
Building on Past Governance Frameworks
In this sense, the Trump administration’s AI Guidance borrows from past policy frameworks by marrying up a desire to promote an exciting new set of emerging technologies alongside the need for reasonable but flexible oversight and governance mechanisms. At a high level, the AI Guidance builds on many of the same principles that motivated the Clinton administration’s Framework for Global Electronic Commerce, a statement of principles and policy objectives for the then-emerging Internet. The document, which was issued in July 1997, said that “governments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”
The Framework was a clean break from the top-down regulatory paradigm that had previously governed traditional communications and media technologies. Clinton’s Framework insisted that, to the extent government intervention was needed at all, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” The use of soft law and multistakeholder models was a key component of this vision, and those more flexible governance approaches were tapped by the subsequent administrations to address emerging tech policy concerns.
For example, the Obama administration considerably expanded the use of multistakeholder mechanisms and other soft law tools in response to the need for oversight of fast-moving technologies. The Obama administration had many different policy governance efforts underway for specific AI technologies and concerns, including workshops and multistakeholder efforts focused on the safety, security, and privacy-related issues surrounding “big data” systems, online advertising, connected cars, drones, and more.
Whereas the Obama administration was deeper in the weeds of the policy issues associated with specific AI and machine-learning applications, the Trump administration has sought to build on those focused efforts while also stepping back to consider AI governance at the 30,000-foot level. In essence, the AI Guidance combines some of the aspirational elements found in the Clinton Framework with the Obama administration’s more targeted approach of considering specific policy concerns across many different sectors and technologies.
Trump’s AI Guidance adds an element of formality to this process regarding how federal agencies should address AI developments and formulate potential policy responses. It does so by counseling humility and even potential forbearance until all the facts are in. “Fostering innovation and growth through forbearing from new regulations may be appropriate,” the memorandum says, adding that agencies “should consider new regulation only after they have reached the decision, in light of the foregoing section and other considerations, that Federal regulation is necessary.” Again, this is very much consistent with more general regulatory guidance issued by every administration since President Reagan was in office.
Flexible, Adaptive Governance is Key
The AI Guidance foreshadows the future of not only AI governance but the governance of many other emerging technologies. Hard law will continue to provide a backstop and have a role in guiding technological developments. Toward that end, efforts like the new AI Guidance are important because they represent an effort to “regulate the regulators” by placing some ground rules on how agencies go about applying old law to new developments.
But soft law governance is where the real action is, both for AI and almost all emerging technologies today. The Trump AI Guidance reflects the extent to which soft law has become the dominant governance paradigm for modern tech sectors. As my colleagues Jennifer Huddleston and Trace Mitchell have noted, soft law is already effectively the law of the land for driverless cars, for example. After years of congressional wrangling over a federal autonomous vehicle regulatory framework—one that has widespread bipartisan support, no less—we still do not have a law on the books. Instead, the Department of Transportation has been cobbling together “rules of the road” through informal guidance documents that have been “versioned” as if they were computer software (i.e., Version 1.0, 2.0, 3.0). Version 4.0 of the DoT guidance for automated vehicles was just released this week.
That is the same approach that the National Institute of Standards and Technology (NIST) has taken with the privacy guidelines it developed. NIST’s Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management is also versioned like software. And many other federal agencies, especially the Federal Trade Commission, have tapped a wide variety of soft law tools—such as agency workshops and workshop reports that recommended privacy best practices for various technologies. Meanwhile, the National Telecommunications and Information Administration (NTIA) has used multistakeholder processes to address privacy concerns surrounding a wide range of technologies, including drones and facial recognition. NIST, FTC, and NTIA have undertaken these informal governance efforts because, despite over a decade of debate, Congress still has not advanced comprehensive federal privacy legislation. For better or worse, soft law has filled that governance gap.
Addressing Likely Objections from Left & Right
Many people of varying ideological dispositions will object to the growing role of soft law as the primary governance tool for emerging technology policy. Some conservatives will cringe at the sound of giving regulators greater leeway to address amorphous policy concerns, fearing that it will result in unconstrained exercises of unaccountable, extra-constitutional power.
Some of those concerns are valid, but they fail to account for the fact that the prospects for the agency downsizing or deregulation they prefer are extremely limited. Practically speaking, the administrative state isn’t going anywhere. In some cases, agencies can actually do some real good by encouraging innovators to think about how to “bake in” sensible best practices to preemptively address concerns about the privacy, safety, security, and fairness of various AI systems. Better those concerns be addressed in a more flexible, adaptive fashion than by a heavy-handed, overly-rigid regulatory approach. Soft law offers that possibility, even if legitimate concerns remain about agency accountability and transparency.
Many to the left of center will be critical of this governance approach as well, but on very different grounds. As Associated Press reporter Matt O’Brien notes, “the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.”
These concerns actually are addressed in several of the OSTP’s ten principles, including those which stress the need for fairness and non-discrimination, information quality, public participation, disclosure and transparency, and safety and security. Yet many on the left will claim these principles merely pay lip service to these values and that what is really needed is a full-blown regulatory regime and some sort of corresponding new federal AI agency, which would preemptively determine which AI technologies would be allowed into the wild.
Already, an Algorithmic Accountability Act was introduced in Congress last year that would ask the FTC to take a more active role in policing “inaccurate, unfair, biased, or discriminatory decisions impacting consumers” that may have resulted from “automated decision systems.” Meanwhile, some academics have called for the creation of a Federal Robotics Commission or a National Algorithmic Technology Safety Administration to preemptively oversee new AI developments.
The problem with overly precautionary regulation of that sort is that it could unduly limit AI innovation and the many benefits it entails. There may be some AI applications that pose serious and immediate risks to humanity and which require preemptive restraints on their development and use. Autonomous military and law enforcement applications are the most obvious examples. But most AI applications do not rise to that same level of regulatory concern, and other governance approaches are required to balance their beneficial uses against potential misuse. This is why a more open and flexible governance approach is needed. Moreover, the old regulatory system just cannot keep up anymore, and it is ill-suited to address most policy concerns in a timely or efficient fashion.
Cristie Ford, an advocate of greater regulatory oversight for fintech, notes in her latest book that the problem with “old-style Welfare State regulation” is that it is “a clumsy, blunt instrument for achieving regulatory objectives” due to its reliance upon “one-size-fits-all mandates, prohibitions, and penalties.” Ford acknowledges what many other regulatory advocates are reluctant to admit: public policies toward fast-paced technology sectors can no longer be governed effectively using the Analog Era’s top-down, command-and-control regulatory processes. Far too many federal agencies rely on a “build-and-freeze model” of regulation that sets rules in stone to deal with one set of issues one day, but then either fails to eliminate those rules later when they become obsolete or fails to reform them to bring them in line with new social, economic, and technical realities.
If we hope to encourage continued innovation in sectors that could produce profoundly important, life-enriching technologies, America’s regulatory approach for AI and emerging technology needs to move away from “build-and-freeze” and toward “build-and-adapt.” Regulation is still needed, but the old regulatory toolkit is badly broken. For better or worse, soft law is going to fill the resulting governance gap, regardless of objections from some on the left or the right. Pragmatic policymaking is going to carry the day for emerging technology governance.
Conclusion
The Trump Administration AI Guidance represents a continuation and extension of this trend toward more flexible, adaptive governance approaches for emerging technologies. It offers a pragmatic vision that builds on the policies and paradigms of the past, while also encouraging fresh thinking about how best to balance the need for continued innovation alongside the various concerns about disruptive technological change.
There are many challenging issues that lie ahead and the new AI Guidance cannot provide bright-line answers to all the hypothetical questions that people want answered today. No one possesses a crystal ball that will allow them to forecast the technological future. Only ongoing trial-and-error experimentation and policy improvisation will allow us to find sensible solutions. A policy approach rooted in humility, flexibility, and forbearance will help ensure that America’s regulatory policies continue to promote both innovation and the public good.
January 7, 2020
The Top 10 Most-Read Posts of 2019
Technopanics, Progress Studies, AI, spectrum, and privacy were hot topics at the Technology Liberation Front in the past year. Below are the most popular posts from 2019.
Glancing at our site metrics over the past 10 years, the biggest topics in the 2010s were technopanics, Bitcoin, net neutrality, the sharing economy, and broadband policy. Looking ahead to the 2020s, I’ll hazard some predictions about what will be significant debates at the TLF: technopanics and antitrust, AVs, drones, and the future of work. I suspect that technology and federalism will be long-running issues in the next decade, particularly for drones, privacy, AVs, antitrust, and healthcare tech.
Enjoy 2019’s top 10, and Happy New Year.
10. 50 Years of Video Games & Moral Panics by Adam Thierer
I have a confession: I’m 50 years old and still completely in love with video games.
As a child of the 1970s, I straddled the divide between the old and new worlds of gaming. I was (and remain) obsessed with board and card games, which my family played avidly. But then Atari’s home version of “Pong” landed in 1976. The console had rudimentary graphics and controls, and just one game to play, but it was a revelation. After my uncle bought Pong for my cousins, our families and neighbors would gather round his tiny 20-inch television to watch two electronic paddles and a little dot move around the screen.
9. The Limits of AI in Predicting Human Action by Anne Hobson and Walter Stover
Let’s assume for a second that AIs could possess not only all relevant information about an individual, but also that individual’s knowledge. Even if companies somehow could gather this knowledge, it would only be a snapshot at a moment in time. Infinite converging factors can affect one’s next decision to not purchase a soda, even if your past purchase history suggests you will. Maybe you went to the store that day with a stomach ache. Maybe your doctor just warned you about the perils of high fructose corn syrup, so you forgo your purchase. Maybe an AI-driven price increase causes you to react by finding an alternative seller.
In other words, when you interact with the market—for instance, going to the store to buy groceries—you are participating in a discovery process about your own preferences or willingness to pay.
8. Free-market spectrum policy and the C Band by Brent Skorup
A few years ago I would have definitely favored speed and the secondary market plan. I still lean towards that approach, but I’m a little more on the fence after reading Richard Epstein’s work and others’ about the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t the case here, and the FCC can’t undo what past FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.
One proposal that hasn’t been contemplated with the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues and the winning bidders are allowed to deploy services immediately in the “white spaces” unoccupied by the incumbents. The winning bidders are allowed to pay the incumbents to move out before the deadline.
7. STELAR Expiration Warranted by Hance Haney
The retransmission fees were purposely set low to help the emerging satellite carriers get established in the marketplace when innovation in satellite technology still had a long way to go. Today the carriers are thriving business enterprises, and there is no need for them to continue receiving subsidies. Broadcasters, on the other hand, face unprecedented competition for advertising revenue that historically covered the entire cost of content production.
Today a broadcaster receives 28 cents per subscriber per month when a satellite carrier retransmits its local television signal. But the fair market value of that signal is actually $2.50, according to one estimate.
6. What is Progress Studies? by Adam Thierer
How do we shift cultural and political attitudes about innovation and progress in a more positive direction? Collison and Cowen explicitly state that the goal of Progress Studies transcends “mere comprehension” in that it should also look to “identify effective progress-increasing interventions and the extent to which they are adopted by universities, funding agencies, philanthropists, entrepreneurs, policy makers, and other institutions.”
But fostering social and political attitudes conducive to innovation is really more art than science. Specifically, it is the art of persuasion. Science can help us amass the facts proving the importance of innovation and progress to human improvement. Communicating those facts and ensuring that they infuse culture, institutions, and public policy is more challenging.
5. How Do You Value Data? A Reply To Jaron Lanier’s Op-Ed In The NYT by Will Rinehart
All of this is to say that there is no one single way to estimate the value of data.
As for the Lanier piece, here are some other things to consider:
A market for data already exists. It just doesn’t include a set of participants that Jaron wants to include, which are platform users.
Will users want to be data entrepreneurs, looking for the best value for their data? Probably not. At best, they will hire an intermediary to do this, which is basically the job of the platforms already.
An underlying assumption is that the value of data is greater than the value advertisers are willing to pay for a slice of your attention. I’m not sure I agree with that.
Finally, how exactly do you write these kinds of laws?
4. Explaining the California Privacy Rights and Enforcement Act of 2020 by Ian Adams
As released, the initiative is equal parts privacy extremism and cynical politics. Substantively, some will find elements to applaud in the CPREA, between its prohibitions on the use of behavioral advertising and reputational risk assessment (all of which are deserving of their own critiques), but the operational structure of the CPREA is nothing short of disastrous. Here are some of the worst bits:
3. Best Practices for Public Policy Analysts by Adam Thierer
So, for whatever it’s worth, here are a few ideas about how to improve your content and your own brand as a public policy analyst. The first list is just some general tips I’ve learned from others after 25 years in the world of public policy. Following that, I have also included a separate set of notes I use for presentations focused specifically on how to prepare effective editorials and legislative testimony. There are many common recommendations on both lists, but I thought I would just post them both here together.
2. An Epic Moral Panic Over Social Media by Adam Thierer
Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this recurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.
1. How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality by Adam Thierer
If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.
Many conservatives are suddenly changing their tune, however.
January 2, 2020
The Case for Sanctuary Cities in Many Different Contexts
The spread of “sanctuary cities”—local governments that resist federal laws or regulations in some fashion, and typically for strongly-held moral reasons—is one of the most interesting and controversial governance developments of recent decades. Unfortunately, the concept receives only a selective defense from people when it fits their narrow political objectives, such as sanctuary movements for immigration and gun rights.
But there is a broader case to be made for sanctuaries in many different contexts as a way to encourage experiments in alternative governance models and just let people live lives of their choosing. The concept faces many challenges in practice, however, and I remain skeptical that sanctuary cities will ever scale up and become a widespread governance phenomenon. There’s just too much for federal officials to lose, and they likely will crush any particular sanctuary movement that gains serious steam.
Sanctuary Cities as Political Civil Disobedience
First, let’s think about what local officials are really doing when they declare themselves a sanctuary. (Because they can be formed by city, county, or state governments, I will just use “sanctuaries” as a shorthand throughout this essay.)
Academics use the term “rule departure” when referencing “deliberate failures, often for conscientious reasons, to discharge the duties of one’s office.” [Joel Feinberg, “Civil Disobedience in the Modern World,” Humanities in Society, Vol. 2, No. 1, 1979, p. 37.] In this sense, sanctuary cities could be viewed as a type of collective civil disobedience by public officials because these governance arrangements are typically defended on moral grounds and represent an active form of resistance to policies imposed by higher-ups.
Rule departure and political civil disobedience can be carried out by individual government officials or entire governing bodies. Back in the 1970s, for example, some judges refused to convict Vietnam-era “draft dodgers,” even though laws made it clear that they were supposed to be punished. And, although it is rare, juries have sometimes nullified laws that they find unconscionable.
When a legislature engages in rule departure, it is often in opposition to federal policies that local officials feel are unfair or unethical. They may even declare themselves in a sort of open rebellion against a very specific directive and steadfastly refuse to acknowledge the legitimacy of the policies being imposed from above. This is how modern sanctuaries developed. In my forthcoming book, Evasive Entrepreneurs & the Future of Governance, I discuss a couple of prominent recent examples.
When state lawmakers refuse to enforce federal marijuana restrictions because officials in those states favor decriminalization, that represents rule departure between levels of government. Similarly, in May 2018, Vermont became the first state to legalize the importation of prescription drugs from Canada in an attempt to gain access to lower-priced drugs for its citizens. That policy departed from federal law, which tightly controls the importation of drugs into the US.
Rule departures by city and county governments can be even more daring and far-reaching in effect. After the Trump Administration took office and announced more restrictive immigration policies, many mayors and local officials promptly announced that they would become sanctuary cities and not follow federal immigration reporting requirements. The number of immigration-related sanctuary cities, counties, and even entire states has grown steadily since then. [The Center for Immigration Studies keeps a running list.]
Even more controversial is the rise of the “Second Amendment sanctuary” movement that resists state or federal firearm restrictions. Virginia cities and counties have been particularly aggressive in declaring themselves gun sanctuaries, but the movement is nationwide and growing fast. Interestingly, the leaders of this movement include many local officials, including some sheriffs, who actively oppose immigration-related sanctuary cities. Conversely, most of the local officials who favor immigration sanctuaries oppose Second Amendment sanctuaries. The only thing unifying officials on either side is a commitment to engage in rule departure for moral reasons.
But here’s the question I want to explore: Why not give both these sanctuary movements (and many others) a chance, regardless of what motivates them?
A Sanctuary for Me, But Not for Thee
Of course, there are few issues that divide the Left and the Right more bitterly these days than immigration and guns, and neither side will accept the moral case for rule departure when the other side is promoting it. Stated differently, while each side will make strong moral claims in favor of rule departure for their pet issue, their defense will not extend to the underlying act of rule departure or political civil disobedience more generally.
And that’s a shame. There is a good case to be made not just for greater localized decision-making and policy experimentation, but also for letting people live lives of their own choosing in different governance arrangements.
The idea that we could ever have one single utopia has always been a silly notion for a simple reason: People are just very different. What would make more sense, the late philosopher Robert Nozick once argued, is a governance arrangement that was truly fit for a pluralistic society. In his 1974 book, Anarchy, State, and Utopia, Nozick made the case for a regime in which citizens could potentially take advantage of many different utopias to better fit their preferred governance arrangements. “Utopia is a framework for utopias, a place where people are at liberty to join together voluntarily to pursue and attempt to realize their own vision of the good life in the ideal community but where no one can impose his own utopian vision upon others,” he said.
I’ve always found this “utopia of utopias” vision enormously compelling in theory but somewhat unrealistic in practice. It is appealing precisely because it rejects any effort to define utopia in a monolithic fashion. A true utopia would reject one-size-fits-all governance schemes and instead promote a framework for optimizing an individual’s ability to choose their preferred governance arrangement (hopefully among many options). “There is no reason to think that there is one community which will serve as ideal for all people,” Nozick noted, “and much reason to think that there is not.”
Indeed, it is likely that my preferred utopia is not yours. What does my particular sanctuary look like? Adam Smith argued in 1755 that all that was needed for lifting civilization up “from the lowest barbarism” to “the highest degree of opulence” is “peace, easy taxes, and a tolerable administration of justice; all the rest being brought about by the natural course of things.” More recently, Emily Chamlee-Wright, president of the Institute for Humane Studies, elaborated on this vision when she identified the core elements of a good society as “a pluralistic and tolerant society in which intellectual and economic progress are the norm, and where individuals and communities flourish in a context of openness, peaceful and voluntary cooperation, and mutual respect.”
That pretty much sums up the utopia or sanctuary I’d like to live in. More concretely, my perfect sanctuary would combine elements of all the real-world sanctuary cities described above. It would give immigrants safe haven and allow everyone to carry firearms openly while also ignoring federal marijuana restrictions and drug importation rules! Moreover, drones would zip through the air delivering goods (regardless of what the FAA said), driverless cars would occupy the roads (regardless of what the DOT said), and citizens with serious illnesses would be more free to try alternative treatments (regardless of what the FDA said).
Of course, I also appreciate that many other people would prefer to live in sanctuaries where government plays a far more active role. Might it be possible for us all to agree to live peacefully in our separate utopias, yet also remain part of some loosely unified federation? What would help make that model work, Nozick argued, was some sort of minimal state above all the utopias that ensured peace and free movement of people, goods, and information among them. So, you pick your utopia and I’ll pick mine, but let us agree to be free to trade with each other and move to other utopias if we are not satisfied.
That remains a beautiful governance vision to me, and, if nothing else, I hope others would appreciate the potential benefits associated with experimentation in government administration. In his 1970 book, Exit, Voice, and Loyalty, the economist and political theorist Albert Hirschman discussed the interplay between “voice” and “exit”—for businesses, organizations, and even governments—and argued that “exit has an essential role to play in restoring quality performance of government, just as in any organization.”
Sanctuaries represent a form of localized collective voice (opposing specific policy choices made by higher-ups) combined with the implicit threat of some sort of exit. “The chances for voice to function effectively as a recuperation mechanism,” Hirschman argued, “are appreciably strengthened if voice is backed up by the threat of exit, whether it is made openly or whether the possibility of exit is merely well understood to be an element in the situation by all concerned.” I doubt any cities, counties, or states are going to try to completely exit the American republic over the issues that led them to form sanctuaries. Nonetheless, sanctuaries—and even the very threat to form one—can still act as a sort of relief valve that allows citizens to push back against over-zealous edicts from above, while also potentially giving citizens the chance to “shop around” for better jurisdictional governance arrangements.
Haven’t We Already Tried This?
Practically speaking, however, a utopia of utopias must have some limits or else it breaks down under the weight of endless splintering, border disputes, and even the threat of violence. As the Wall Street Journal editorial board argued in a recent essay about sanctuary cities, an atomistic patchwork of breakaway sub-governments could lead to discord and “lawlessness.” And that was in an editorial about Second Amendment sanctuary cities, which the Journal is more ideologically predisposed to favor!
And this is not a completely unfounded concern. Think about American history. Many people forget that America’s current constitution is not our nation’s first. The Articles of Confederation were formulated by the 13 original colonies as they fought for their independence from Great Britain. The Articles were a dismal failure, however, and did not even last a decade. America’s Founders abandoned the Articles because the sole governing agent—Congress—lacked any real power. It could do little to sustain itself or field an army to defend the new nation, which the Articles treated as little more than a collection of territories in “a firm league of friendship with each other.”
More importantly, because states retained all the real power under the Articles, trade skirmishes broke out among them and Congress was virtually powerless to do anything about it. The so-called “league of friendship” threatened to degenerate into endless commercial and political conflicts among loosely joined state sovereigns. The situation grew intolerable and by 1789 the Articles were discarded in favor of a new Constitution that opted for a more tightly integrated union, which would guarantee some basic rights and also help ensure that commerce and people could move freely across state borders.
The durability of this framework remains a remarkable achievement and, in some ways, could be viewed as a more workable “utopia of utopias” than what the Articles of Confederation proposed. Yet, while plenty of people still play up the benefits of devolution and local control, American federalism has been increasingly neutered over the past century. The federal government came to take on more and more authority over even the most trivial parochial matters. States and localities must now beg for freedom from federal restrictions, and they usually cave fairly quickly, falling in line with federal demands at the mere threat of federal lawmakers denying them a few grants. Political kickbacks, it turns out, are a remarkably simple way to get subordinate bodies to comply with top-down edicts.
Does a Broader Sanctuary Movement Have Any Hope?
Which is why it is remarkable that the sanctuary city movement is still alive at all. It might be because, as George Mason University law professor Ilya Somin has suggested, many Democrats fell back in love with federalism following the election of Donald Trump. Devolution and local control suddenly sound a lot more appealing to many Dems when they become a way to resist federal policies on immigration and marijuana, among other issues.
It could still be the case that these sanctuary movements will be brought to heel in coming years. Current sanctuary efforts provide a good litmus test for just how much real-world policy experimentation federal officials are willing to tolerate. To the extent any particular sanctuary effort gained meaningful momentum and posed a serious challenge to federal power in some fashion, I believe it would likely be crushed eventually. While plenty of politicians pay lip service to the idea of “reinventing government” and enhancing local decision-making, the reality is that if we ever had anything approximating actual entrepreneurial government administration in this country, the feds would likely move quickly to snuff it out.
If the Supreme Court took action to limit semi-rebellious efforts like these, it would also discourage future sanctuary city experiments. But it is more likely that, as suggested above, federal officials would just double-down on the “power of the purse” to intimidate state officials into complying—and then presumably force governors and state legislatures to do the dirty work of cracking down on cities and counties that won’t comply with federal demands. President Trump has already tapped this playbook to threaten immigration sanctuaries with Executive Order 13768 of January 25, 2017, which sought to “[e]nsure that jurisdictions that fail to comply with applicable Federal law do not receive Federal funds.” Lower courts have pushed back, however, and a bit of a stalemate has ensued.
If things got really ugly, one could imagine President Trump or a future Democratic president calling in the National Guard to deal with sanctuaries that really pushed the limits on immigration, guns, or anything else disfavored by the powers that be. God help us if we get to that point. Hopefully cooler heads will prevail.
A Dream Deferred
In the meantime, I will persist in making the case for sanctuaries and other forms of experimental government—including charter cities and special economic zones—more generally. I remain a bit of a dreamer and will continue to defend alternative governance visions based on the benefits associated with political decentralization, policy experimentation, and citizen choice. I continue to long for Nozick’s noble vision of “a society in which utopian experimentation can be tried, different styles of life can be lived, and alternative visions of the good can be individually or jointly pursued.”
Alas, I am also a political realist and I recognize it is highly quixotic to believe that this governance framework will carry the day in the short term. Selective morality will prevail instead. That is, most people will loudly proclaim the moral imperative of sanctuaries only when it fits their ideological priors, while equally vociferously decrying creative governance alternatives when they do not align with their political values. In the end, both sides will only succeed in crushing the broader dream of more decentralized communities of common interest, simply because a lot of people just cannot tolerate giving others a little zone of freedom in this world.
And so a “utopia of utopias” will likely remain a dream deferred.
December 17, 2019
Is Europe Leading the US in Telecom Competition? Notes on Philippon’s “Great Reversal”
After coming across some reviews of Thomas Philippon’s book, The Great Reversal: How America Gave Up on Free Markets, I decided to get my hands on a copy. Most of the reviews and coverage mention the increasing monopoly power of US telecom companies and rising prices relative to European companies. In fact, Philippon tells readers in the intro of the book that the question that spurred him to write Great Reversal is “Why on earth are US cell phone plans so expensive?”
As someone who follows the US mobile market closely, I was a little disappointed to discover that the analysis of the telecom sectors is rather slim. There’s only a handful of pages (out of 340) of Europe-US telecom comparison, featuring one story about French intervention and one chart. This is not a criticism of the book–Philippon doesn’t pitch it as a telecom policy book. However, the European telecom experience isn’t the clear policy success story the book makes it out to be.
The general narrative in the book is that US lawmakers are entranced by the laissez-faire Chicago school of antitrust and placated by dark money campaigns. The result, as Philippon puts it, is that “Creeping monopoly power has slowly but surely suffocated the [US] middle class” and today Europe has freer markets than the US. That may be, but the telecom sectors don’t provide much support for that idea.
Low Prices in European Telecom . . .
Philippon says that “The telecommunications industry provides another example of successful competition policy in Europe.”
He continues:
The case of France provides a striking example of competition. Free Mobile . . . obtained its 4G license [with regulator assistance] in 2011 and became a significant competitor for the three large incumbents. The impact was immediate. . . . In about six months after the entry of Free Mobile, the price paid by French consumers had dropped by about 40 percent. Wireless services in France had been more expensive than in the US, but now they are much cheaper.
It’s true, mobile prices are generally lower in Europe. Average revenue per user (ARPU) in the US, for instance, is about double the ARPU in the UK (~$42 v. ~$20 in 2016). And, as Philippon points out, cellular prices are lower in France as well.
One issue with this competition “success story”: the US also has four mobile carriers, and had four mobile carriers even prior to 2011. Since the number of competitors is the same in France and the US, competition doesn’t really explain why there’s a price difference between France and the US. (India, for instance, has fewer providers than the US and France and very low cellular prices, so number of competitors isn’t a great predictor of pricing.)
. . . and Low Investment
If “lower telecom prices than the US” is the standard, then yes, many European countries have succeeded. But if consumers and regulators prioritize other things, like industry investment, network quality (fast speeds), and rural coverage, the story is much more mixed. (Bret Swanson at AEI points to other issues with Philippon’s analysis.) Philippon’s singular focus on telecom prices and number of competitors distracts him from these other important factors.
According to OECD data, for instance, in 2015 the US exceeded the OECD average for spending on IT and communications equipment as a percent of GDP. France might have lower cell phone bills, but US telecom companies spend 2.75 times as much as French telecom companies on this measure (1.1% of GDP v. 0.4% of GDP).
Further, telecom investment per capita in the US was much higher than its European counterparts. US telecom companies spent about 55 percent more per capita than French telecoms spent ($272 v. $175), according to the same OECD reports. And France is one of the better European performers. Many European carriers spend, on a per capita basis, less than half what US carriers spend. US carriers spend 130% more than UK telecoms spend and 145% more than German telecoms.
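“Percent more” comparisons like the ones above are easy to garble (175% more is the same as 2.75 times as much), so here is a minimal sketch that recomputes the two France-US gaps from the figures cited in the post. The numbers are the post’s OECD-derived figures, not independent data:

```python
# Recompute the "percent more" comparisons from the figures cited above.
# Inputs are the post's numbers (OECD data as quoted), used only to check
# the arithmetic, not as fresh data.

def percent_more(a: float, b: float) -> float:
    """How much larger a is than b, expressed in percent.
    E.g. percent_more(2, 1) == 100.0 (twice as much = 100% more)."""
    return (a / b - 1) * 100

# ICT/communications equipment spending as a share of GDP:
# US 1.1% vs France 0.4% -> 2.75x, i.e. 175% more
gdp_share_gap = percent_more(1.1, 0.4)

# Telecom investment per capita: US $272 vs France $175
# -> ~55% more, matching the "about 55 percent more" in the text
per_capita_gap = percent_more(272, 175)

print(round(gdp_share_gap), round(per_capita_gap, 1))  # 175 55.4
```

The per-capita figure checks out ($272 is about 55% more than $175); the GDP-share gap works out to 2.75 times as much, which is 175% more rather than 275% more.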
This investment deficit in Europe has real-world effects on consumers. OpenSignal uses crowdsourced data and software to determine how frequently users’ phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway. In contrast, France and Germany ranked 60th and 61st, respectively, for this network quality measure, beat out by less wealthy nations like Kazakhstan, Cambodia, and Romania.
The European regulations and anti-merger policies created a fragmented market and financially strapped companies. As a result, investors are fleeing European telecom firms. According to the Financial Times and Bloomberg data, between 2012 and 2018, the value of Europe’s telecom companies fell almost 50%. In comparison, the value of the US sector rose by 70% and the Asian sector rose by 13% in that time period.
Price Wars or 5G Investment?
Philippon is right that Europe has chosen a different path than the US when it comes to telecom services. Whether it has chosen a pro-consumer path depends on where you sit (and live). Understandably, academics, journalists, and advocates living in places like Boston, New York, and DC look fondly at Berlin and Paris broadband prices. Network quality outside of the cities and suburbs rarely enters the picture in these policy discussions, and Philippon’s book is no exception. US lawmakers and telecom companies prioritize other things: network quality, investment in 5G, and rural coverage.
If anything, European regulators seem to be retreating somewhat from the current path of creating competitors and regulating prices. As the Financial Times wrote last year, the trend in European telecom is consolidation. The French regulator ARCEP reversed course last year and signaled a new openness to telecom consolidation.
Still, there are significant obstacles to consolidation in European markets, and it seems likely they’ll fall further behind the US and China in rural network coverage and 5G investment. European telecom companies are in a bit of a panic about this, which they expressed in a letter to the European Commission this month, urging reform.
To his credit, Philippon suggests humility in prognostications and understands the limits of experts’ knowledge:
I readily admit I don’t have all the answers. …I would suggest . . . that [economists’] prescriptions be taken with a (large) grain of salt. When you read an author or commentator who tells you something obvious, take your time and do the math. Almost every time, you’ll discover that it wasn’t really obvious at all. I have found that people who tell you that the answers to the big questions in economics are obvious are telling you only half of the story.
Couldn’t have put it better myself.
Credit to Connor Haaland for research assistance.
November 15, 2019
My testimony to the Pennsylvania Senate about rural broadband
A few weeks ago I was invited to provide testimony about rural broadband policy to the Communications and Technology Committee in the Pennsylvania Senate (video recording of the hearing). My co-panelists were Kathryn de Wit from Pew and Prof. Sasha Meinrath from Penn State University.
In preparing for the testimony I was surprised to learn how much money leaves Pennsylvania annually to fund the federal Universal Service Fund programs. In recent years, a net $200 million leaves the state annually and is disbursed via USAC to recipients in other states. That’s a lot of money considering Pennsylvania, like many geographically large states, has its own broadband deployment problems.
From the Intro:
The federal government has spent more than $100 billion on rural telecommunications in the past 20 years. Most of that total comes from the federal Universal Service Fund (USF), which disburses about $4.5 billion annually to rural providers across the country. In addition, the Pennsylvania Universal Service Fund redistributes about $32 million annually from Pennsylvania phone customers to Pennsylvania phone companies serving rural areas.
Are rural residents seeing commensurate benefits trickle down to them? That seems doubtful. These programs are complex and disburse subsidies in puzzling and uneven ways. Reform of rural telecommunications programs is urgently needed. FCC data suggest that the current USF structure disproportionately penalizes Pennsylvanians—a net $800 million left the state from 2013 to 2017.
I made a few recommendations, which mostly apply for state legislators in other states looking at rural broadband issues.
- Urge the FCC to transform the USF into broadband vouchers for rural households.
- Prevent unreasonable restrictions on small, outdoor antennas on private property.
- Instruct the state broadband advisory committee to recommend best practices for rural towns and counties.
- Create a “vertical assets inventory” for wireless providers to use in rural areas.
I also came across an interesting program in Pennsylvania spearheaded in 2018 by Gov. Wolf. It’s a $35 million grant program to rural providers. From the Governor’s website:
The program was a partnership between the Office of Broadband Initiatives and PennDOT. The $35 million of incentive funding was provided through PennDOT to fulfill its strategic goal of supporting intelligent transportation systems, connected vehicle infrastructure, and improving access to PennDOT’s facilities. In exchange for incentive funding, program participants were required to supply PennDOT with the use of current and future network facilities or services.
It’s too early to judge the results of that program but I’ve long thought state DOTs should collaborate more with state telecom officials. There’s a lot of federal and state transportation money that can do double duty in supporting broadband deployment efforts, a subject Prof. Korok Ray and I take up in our recently-released Mercatus Paper, “Smart Cities, Dumb Infrastructure.”
For more, you can find my full testimony at the Mercatus website.
The Ray-Skorup paper, “Smart Cities, Dumb Infrastructure,” about transportation funds and their use in telecom networks is on SSRN.
