Adam Thierer's Blog
April 16, 2020
Bringing broadband to rural areas quickly during the COVID-19 crisis
Building broadband takes time. There’s permitting, environmental reviews, engineering, negotiations with city officials and pole owners, and other considerations.
That said, temporary wireless broadband systems can be set up quickly, sometimes in days or weeks, not months or years like wireline networks. Setting up outdoor WiFi, as some schools have done (HT Billy Easley II), is a good step, but WiFi has its limits and more can be done.
The FCC has done a great job freeing up more spectrum on a temporary basis for the COVID-19 crisis, like allowing carriers to use Dish’s unused cellular spectrum. Wireless systems need more than spectrum, however. Operators need real estate, electricity, backhaul, and permission. This is where cities, counties, and states can help.
Waive or simplify permitting
States, counties, and cities should consider waiving or simplifying their permitting for temporary wireless systems, particularly in rural or low-income areas where adoption lags.
Cellular providers set up Distributed Antenna Systems (DAS) and Cells on Wheels (COWs) for events like football games, parades, festivals, and emergency response after hurricanes. These provide good coverage and capacity in a pinch.
There are other ad hoc wireless systems that can be set up quickly in local areas, like WISP transmitters, cellular or WISP backhaul, outdoor WiFi, and mesh networks.

Allow rent-free access to municipal property
Public agencies own real estate and buildings that would lend themselves to temporary wireless facilities. Not only do they have power; taller public buildings and water towers also give wireless systems greater coverage. Cities should consider leasing out temporary space rent free for the duration of the crisis.
Many cities and counties also have dark fiber and lit fiber networks that serve public facilities like police, fire, and hospitals. If there's available capacity, state and local public agencies should consider providing cheap or free access to the municipal fiber network.
Now, these temporary measures won’t work miracles. Operators are looking at months of cash constraints and probably don’t have many field technicians available. But the temporary waiver of permitting and the easy access to public property could provide quick, needed broadband capacity in rural and hard-to-reach areas.
5 Books that Shaped My Thinking on Innovation

To commemorate its 40th anniversary, the Mercatus Center asked its scholars to share the books that have been most influential or formative in the development of their analytical approach and worldview. Head over to the Mercatus website to check out my complete write-up of my Top 5 picks for books that influenced my thinking on innovation policy and progress studies. But here is a quick summary:
#1) Samuel C. Florman – “The Existential Pleasures of Engineering” (1976). His book surveys “antitechnologists” operating in several academic fields & then proceeds to utterly demolish their claims with remarkable rigor and wit.
#2) Aaron Wildavsky – “Searching for Safety” (1988). The most trenchant indictment of the “precautionary principle” ever penned. His book helped to reshape the way risk analysts would think about regulatory trade-offs going forward.
#3) Thomas Sowell – “A Conflict of Visions: Ideological Origins of Political Struggles” (1987). It’s like the Rosetta Stone of political theory; the key to deciphering why people think the way they do about human nature, economics, and politics.
#4) Virginia Postrel – “The Future and Its Enemies” (1998). Postrel reconceptualized the debate over progress as not Left vs. Right but rather dynamism— “a world of constant creation, discovery, and competition”—versus the stasis mentality. More true now than ever before.
#5) Calestous Juma – “Innovation and Its Enemies” (2016). A magisterial history of earlier battles over progress. Juma reminds us of the continued importance of “oiling the wheels of novelty” to constantly replenish the well of important ideas and innovations.
The future needs friends because the enemies of innovative dynamism are voluminous and vociferous. It is a lesson we must never forget. Thanks to these five authors and their books, we never will.
Finally, the influence of these scholars is evident on every page of my last book (“Permissionless Innovation”) and my new one (“Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments”). I thank them all!
March 23, 2020
Reforming Licensing Rules to Help Fight the Pandemic
In a new essay in The Dallas Morning News (“Licensing restrictions for health care workers need to be flexible to fight coronavirus“), Trace Mitchell and I discuss recent efforts to reform occupational licensing restrictions for health care workers to help fight the coronavirus. Trace and I have written extensively about the need for licensing flexibility over the past couple of years, but it is needed now more than ever. Luckily, some positive reforms are now underway.
We highlight efforts in states like Massachusetts and Texas to reform their occupational licensing rules in response to the crisis, as well as federal reforms aimed at allowing reciprocity across state lines. We conclude by noting that:
It should not take a crisis of this magnitude for policymakers to reconsider the way we prevent fully qualified medical professionals from going where they are most needed. But that moment is now upon us. More leaders would be wise to conduct a comprehensive review of regulatory burdens that hinder sensible, speedy responses to the coronavirus crisis.
If nothing else, the relaxation of these rules should give us a better feel for how necessary strict licensing requirements truly are. Chances are, we will learn just how costly the regulations have been all along.
Read the entire piece here.
March 20, 2020
GPS location data and COVID-19 response
I saw a Bloomberg News report that officials in Austria and Italy are seeking (aggregated, anonymized) users’ location data from cellphone companies to see if local and national lockdowns are effective.
It’s an interesting idea that raises some possibilities for US officials and tech companies to consider to combat the crisis in the US. Caveat: these are very preliminary thoughts.
Cellphone location data from a phone company is OK but imprecise about your movements. It can typically place you within a half-mile to a mile.
But smartphone app location data is much more precise, since it uses GPS rather than cell towers to show movements. Apps with location services can show people's movements within meters, not within a half-mile like cell towers. I suspect 90% of smartphone users have GPS location services on (Google Maps, Facebook, Yelp, etc.). App companies have rich datasets of the daily movements of people.
Step 1 – App companies isolate and share location trends with health officials
This would need to be aggregated and anonymized of course. Tech companies with health officials should, as Balaji Srinivasan says, identify red and green zones. The point is not to identify individuals but make generalizations about whether a neighborhood or town is practicing good distancing practices.
1) Virus is invisible
2) Testing makes it visible
3) Use gradation of tests: thermometer, CT, PCR
4) Identify red & green zones, with high & low virus %
5) Let people in green zones out of lockdown https://t.co/V4OB58bVFD
To reiterate, lockdown without scaled testing will not achieve desired ends.
— Balaji S. Srinivasan (@balajis) March 20, 2020
Step 2 – In green zones, where infection/hospitalization rates are low and app data shows people are strictly distancing, distribute COVID-19 tests.
If people are spending 22 hours a day not moving except for brief visits to the grocery store and parks, that's a good neighborhood. We need tests distributed daily in non-infected areas, perhaps at grocery stores and via USPS and Amazon deliveries. As soon as test production ramps up, tests need to flood into the areas that are healthy. This achieves two things:
Asymptomatic people who might spread the virus can stay home. Non-infected people can start returning to work and a life of semi-normal movement with confidence that others who are out are non-contagious.
Step 3 – In red zones, where infection/hospitalization rates are high and people aren't strictly distancing, apply public education and restrictions.
At least in Virginia, there is county-level data about where the hotspots are. I expect other states know the counties and neighborhoods that are hit hard. Where there’s overlap of these areas not distancing, step up distancing and restrictions.
That still leaves open what to do about yellow zones adjacent to red zones, but the main priority should be to identify the green and the red. The longer health officials and the public fly blind with no end in sight, the more people get frustrated, lose jobs, shutter businesses, and violate distancing rules.
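The three-step zone idea above can be sketched in code. This is a minimal illustration, not any official methodology: the zone names, the mobility metric (fraction of the day spent away from home), and the cutoff thresholds are all hypothetical assumptions.

```python
from collections import defaultdict

def classify_zones(pings, infection_rate, mobility_cutoff=0.3, infection_cutoff=0.05):
    """Classify areas as green/red/yellow from aggregated, anonymized data.

    pings: list of (zone_id, fraction_of_day_spent_away_from_home) tuples
    infection_rate: dict of zone_id -> confirmed cases per capita
    Thresholds are illustrative placeholders, not epidemiological guidance.
    """
    mobility = defaultdict(list)
    for zone, away_fraction in pings:
        mobility[zone].append(away_fraction)

    zones = {}
    for zone, fractions in mobility.items():
        avg_mobility = sum(fractions) / len(fractions)
        infected = infection_rate.get(zone, 0.0)
        if avg_mobility < mobility_cutoff and infected < infection_cutoff:
            zones[zone] = "green"   # distancing well, low infection
        elif avg_mobility >= mobility_cutoff and infected >= infection_cutoff:
            zones[zone] = "red"     # not distancing, high infection
        else:
            zones[zone] = "yellow"  # mixed signals
    return zones

# Hypothetical aggregated data for two areas:
pings = [("areaA", 0.1), ("areaA", 0.2), ("areaB", 0.5), ("areaB", 0.6)]
rates = {"areaA": 0.01, "areaB": 0.08}
print(classify_zones(pings, rates))  # areaA is green, areaB is red
```

The point, as in the post, is that only aggregates ever reach health officials; no individual's track is needed to label a neighborhood.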
March 12, 2020
Remote Work and the State of US Broadband
To help slow the spread of the coronavirus, the GMU campus is moving to remote instruction and Mercatus is moving to remote work for employees until the risk subsides. GMU and Mercatus join thousands of other universities and businesses making the shift this week. Millions of people will be working from home, and it will be a major test of American broadband and cellular networks.
There will likely be a loss of productivity nationwide–some things just can’t be done well remotely. But hopefully broadband access is not a major issue. What is the state of US networks? How many people lack the ability to do remote work and remote homework?
The FCC and Pew research keep pretty good track of broadband buildout and adoption. There are many bright spots but some areas of concern as well.
Who lacks service?
The top question: How many people want broadband but lack adequate service or have no service?
The good news is that around 94% of Americans have access to 25 Mbps landline broadband. (Millions more have access if you include broadband from cellular and WISP providers.) It’s not much consolation to rural customers and remote workers who have limited or no options, but these are good numbers.
According to Pew’s 2019 report, about 2% of Americans cite inadequate or no options as the main reason they don’t have broadband. What is concerning is that this 2% number hasn’t budged in years. In 2015, about the same number of Americans cited inadequate or no options as the main reason they didn’t have home broadband. This resembles what I’ve called “the 2% problem“–about 2% of the most rural American households are extremely costly to serve with landline broadband. Satellite, cellular, or WISP service will likely be the best option.
Mobile broadband trends
Mobile broadband is increasingly an option for home broadband. About 24% of Americans with home Internet are mobile only, according to Pew, up from ~16% in 2015.
The ubiquity of high-speed mobile broadband has been the big story in recent years. Per FCC data, from 2009 to 2017 (the most recent year for which we have data), mobile connections increased by about 30 million annually. In Dec. 2017, there were about 313 million mobile subscriptions.
Coverage is very good in the US. OpenSignal uses crowdsourced data and software to determine how frequently users' phones have a 4G LTE network available (a proxy for coverage and network quality) around the world. The US ranked fourth in the world (86%) in 2017, beating out every European country save Norway.
There was also a big improvement in mobile speeds. In 2009, a 3G world, almost all connections were below 3 Mbps. In 2017, a world of 4G LTE, almost all connections were above 3 Mbps.
Landline broadband trends
Landline broadband also increased significantly. From 2009 to 2017, providers added about 3.5 million new connections per year, reaching about 108 million connections in 2017. In Dec. 2009, about half of landline connections were below 3 Mbps.
There were some notable jumps in high-speed and rural broadband deployment. There was a big jump in fiber-to-the-premises (FTTP) connections, like FiOS and Google Fiber. From 2012 to 2017, the number of FTTP connections more than doubled, to 12.6 million. Relatedly, sub-25 Mbps connections have been falling rapidly while 100 Mbps+ connections have been shooting up. In 2017, there were more connections with 100 Mbps+ (39 million) than there were connections below 25 Mbps (29 million).
In the most recent 5 years for which we have data, the number of rural subscribers (not households) with 25 Mbps increased 18 million (from 29 million to 47 million).
More Work
We only have good data for the first year of the Trump FCC, so it’s hard to evaluate but signs are promising. One of Chairman Pai’s first actions was creating an advisory committee to advise the FCC on broadband deployment (I’m a member). Anecdotally, it’s been fruitful to regularly have industry, academics, advocates, and local officials in the same room to discuss consensus policies. The FCC has acted on many of those.
The rollback of common carrier regulations for the Internet, the pro-5G deployment initiatives, and limiting unreasonable local fees for cellular equipment have all helped increase deployment and service quality.
An effective communications regulator largely stays out of the way and removes hindrances to private sector investment. But the FCC does manage some broadband subsidy programs. The Trump FCC has made some improvements to the $4.5 billion annual rural broadband programs. The 17 or so rural broadband subprograms have metastasized over the years, making for a kludgey and expensive subsidy system.
The recent RDOF reforms are a big improvement since they fund a reverse auction program to shift money away from the wasteful legacy subsidy programs. Increasingly, rural households get broadband from WISP, satellite, and rural cable companies–the RDOF reforms recognize that reality.
Hopefully one day reforms will go even further and fund broadband vouchers. It’s been longstanding FCC policy to fund rural broadband providers (typically phone companies serving rural areas) rather than subsidizing rural households. The FCC should consider a voucher model for rural broadband, $5 or $10 or $40 per household per month, depending on the geography. Essentially the FCC should do for rural households what the FCC does for low-income households–provide a monthly subsidy to make broadband costs more affordable.
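A back-of-the-envelope sketch of how such a voucher schedule might be budgeted. The tier names and household counts are hypothetical; the dollar amounts follow the $5/$10/$40 range mentioned above.

```python
# Hypothetical geography-based voucher schedule (illustrative, not FCC policy).
VOUCHER_BY_TIER = {
    "suburban": 5,   # cheap to serve; small subsidy
    "rural": 10,
    "remote": 40,    # the costly "2% problem" households
}

def monthly_program_cost(households_by_tier):
    """Total monthly cost for a dict of tier -> household count."""
    return sum(VOUCHER_BY_TIER[tier] * n for tier, n in households_by_tier.items())

# e.g., 1M suburban, 500k rural, and 100k remote households:
print(monthly_program_cost({"suburban": 1_000_000, "rural": 500_000, "remote": 100_000}))
# -> 14000000, i.e., $14M/month
```

Even with made-up counts, the exercise shows why a voucher tied to geography targets money at the genuinely high-cost households instead of at incumbent providers.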
Many of these good deployment trends began in the Obama years, but the Trump FCC has made it a national priority to improve broadband deployment and services. It appears to be working. With the coronavirus and a huge increase in remote work, US networks will be put to a unique test.
March 6, 2020
The APA’s Welcome New Statement on Video Game Violence
I was pleased to see the American Psychological Association's new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. As Kyle Orland reports in Ars Technica, the APA has clarified its earlier statement on the relationship between playing violent video games and actual youth behavior. The APA's old statement said that evidence "confirms [the] link between playing violent video games and aggression." But the APA has come around and now says that "there is insufficient scientific evidence to support a causal link between violent video games and violent behavior." More specifically, the APA says:
The following resolution should not be misinterpreted or misused by attributing violence, such as mass shootings, to violent video game use. Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.
This is a welcome change of course because the APA’s earlier statements were being used by politicians and media activists who favored censorship of video games. Hopefully that will no longer happen.
"Monkey see, monkey do" theories of media exposure leading to acts of real-world violence have long been among the most outrageously flawed theories in the fields of psychology and media studies. All the evidence points the opposite way, as I documented a decade ago in a variety of studies. (For a summary, see my 2010 essay, "More on Monkey See-Monkey Do Theories about Media Violence & Real-World Crime.")
In fact, there might even be something to the "cathartic effect hypothesis," or the idea first articulated by Aristotle ("katharsis") that watching dramatic portrayals of violence could lead to "the proper purgation of these emotions." (See my 2010 essay on this, "Video Games, Media Violence & the Cathartic Effect Hypothesis.")
Of course, this doesn't mean that endless exposure to video game or TV and movie violence is a good thing. Prudence and good parenting are still essential. Some limits are smart. But the idea that a kid playing or watching a violent act will automatically become violent themselves was always nonsense. It's time we put that theory to rest. Thanks to the new APA statement, we are one step closer.
P.S. I recently penned an essay about my long love affair with video games that you might find entertaining: "Confessions of a 'Vidiot': 50 Years of Video Games & Moral Panics"
March 3, 2020
Comment on the FAA’s drone Remote ID proposal
Michael Kotrous and I submitted a comment to the FAA about their Remote ID proposals. While we agree with the need for a “digital license plate” for drones, we’re skeptical that requiring an Internet connection is necessary and that an interoperable, national drone traffic management system will work well.
The FAA deserves credit for rigorously estimating the costs of its requirements, which it puts at around $450 million to $600 million over 10 years. These costs fall largely on drone operators and manufacturers for network (say, LTE) subscriptions and equipment.
The FAA’s proposed requirements aren’t completely hashed out, but we raised two points of caution.
One, many drone flights won't stray from a pre-programmed route or leave private property. For instance, roof inspections, medical supply deliveries across a hospital campus, train track inspections, and crop spraying via drone all remain on private property. They all pose a de minimis safety concern to manned aircraft, and requiring networking equipment and subscriptions seems excessive.
Two, we're not keen on the FAA and NASA plans for an interoperable, national drone traffic management system. A simple wireless broadcast from a drone should be enough in most circumstances. The FAA proposal would require drone operators to contract with UAS Service Suppliers (USSs), who would be contractors of the FAA. Technical standards would come later. This convoluted system of making virtually all drone operations known to the FAA is likely to run aground on technical complexity, technological stagnation, an FAA-blessed oligopoly in USSs, or all of the above.
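To illustrate why a simple broadcast could suffice, here is a hypothetical sketch of a minimal broadcast-only "digital license plate" payload. The field names and JSON serialization are illustrative assumptions, not the FAA's or any standards body's actual message format.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BroadcastID:
    """Illustrative broadcast-only Remote ID message (hypothetical fields)."""
    drone_id: str      # the "digital license plate"
    lat: float
    lon: float
    altitude_m: float
    timestamp: float

    def to_frame(self) -> bytes:
        # Serialize for a short-range local broadcast (e.g., a Bluetooth
        # advertisement); no Internet connection or USS contract involved.
        return json.dumps(asdict(self)).encode()

msg = BroadcastID("N-12345", 38.83, -77.30, 120.0, time.time())
frame = msg.to_frame()
print(len(frame))  # a payload this small fits in a short-range broadcast frame
```

The design point: a nearby observer or law enforcement receiver can decode the frame locally, which is the safety benefit Remote ID aims for, without routing every flight through a national traffic management backend.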
The FAA instead should consider allowing states, cities, and landowners to make rules for drone operations when operations are solely on their property. States are ready to step in. The North Dakota legislature, for instance, authorized $28 million a few months ago for a statewide drone management system. Other states will follow suit and a federated, geographically-separated drone management system could develop, if the FAA allows. That would reduce the need for complex, interoperable USS and national drone traffic management systems.
Further reading:
Refine the FAA’s Remote ID Rules to Ensure Aviation Safety and Public Confidence, comment to the FAA (March 2020), https://www.mercatus.org/publications/technology-and-innovation/refine-faa%E2s-remote-id-rules-ensure-aviation-safety-and
Auctioning Airspace, North Carolina Journal of Law & Technology (October 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3284704
February 26, 2020
Impressions from the DOJ Workshop about Section 230
Last week I attended the Section 230 cage match workshop at the DOJ. It was a packed house, likely because AG Bill Barr gave opening remarks. It was fortuitous timing for me: my article with Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation, was published 24 hours before the workshop by the Oklahoma Law Review.
These were my impressions of the event:
I thought it was a pretty well-balanced event and surprisingly civil for such a contentious topic. There were strong Section 230 defenders and strong Section 230 critics, and several who fell in between. There were a couple of cheers after a few pointed statements from panelists, but the audience didn't seem to fall on one side or the other. I'll add that my friend and co-blogger Neil Chilson gave an impressive presentation about how Section 230 helped make the "long tail" of beneficial Internet-based communities possible.
AG Bill Barr gave the opening remarks, which are available online. A few things jumped out. He suggested that Section 230 had its place but Internet companies are not an infant industry anymore. In his view, the courts have expanded Section 230 beyond the drafters' intent, and the Reno decision "unbalanced" the protections, which were intended to protect minors. The gist of his statement was that the law needs to be "recalibrated."
Each of these points was disputed by one or more panelists, but the message to the Internet industry was clear: the USDOJ is scrutinizing industry concentration and its relationship to illegal and antisocial online content.
The workshop signals that there is now a large, bipartisan coalition that would like to see Section 230 “recalibrated.” The problem for this coalition is that they don’t agree on what types of content providers should be liable for and they are often at cross-purposes. The problematic content ranges from sex trafficking, to stalkers, to opiate trafficking, to revenge porn, to unfair political ads. For conservatives, social media companies take down too much content, intentionally helping progressives. For progressives, social media companies leave up too much content, unwittingly helping conservatives.
I’ve yet to hear a convincing way to modify Section 230 that (a) satisfies this shaky coalition, (b) would be practical to comply with, and (c) would be constitutional.
Now, Section 230 critics are right: the law blurs the line between publisher and conduit. But this is not unique to Internet companies. The fact is, courts (and federal agencies) blurred the publisher-conduit dichotomy for fifty years for mass media distributors and common carriers as technology and social norms changed. Some cases that illustrate the phenomenon:
In Auvil v. CBS 60 Minutes, a 1991 federal district court decision, Washington apple growers sued local CBS affiliates for airing allegedly defamatory programming. The federal district court dismissed the case on the grounds that the affiliates are conduits of CBS programming. Critically, the court recognized that the CBS affiliates "had the power to" exercise editorial control over the broadcast and "in fact occasionally [did] censor programming . . . for one reason or another." Still, case dismissed. The principle has been cited by other courts. Publishers can be conduits.
Conduits can also be publishers. In 1989, Congress passed a law requiring phone providers to restrict “dial-a-porn” services to minors. Dial-a-porn companies sued. In Information Providers Coalition v. FCC, the 9th Circuit Court of Appeals held that regulated common carriers are “free under the Constitution to terminate service” to providers of indecent content. The Court relied on its decision a few years earlier in Carlin Communications noting that when a common carrier phone company is connecting thousands of subscribers simultaneously to the same content, the “phone company resembles less a common carrier than it does a small radio station.”
Many Section 230 reformers believe Section 230 mangled the common law and would like to see the restoration of the publisher-conduit dichotomy. As our research shows, that dichotomy had already been blurred for decades. Until advocates and lawmakers acknowledge these legal trends and plan accordingly, reformers risk throwing out the baby with the bathwater.
Relevant research:
Brent Skorup & Jennifer Huddleston, The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation (Oklahoma Law Review).
Brent Skorup & Joe Kane, The FCC and Quasi–Common Carriage: A Case Study of Agency Survival (Minnesota Journal of Law, Science & Technology).
February 20, 2020
Podcast: Problems with the Precautionary Principle
On the latest Institute for Energy Research podcast, I joined Paige Lambermont to discuss:
the precautionary principle vs. permissionless innovation; risk analysis trade-offs; the future of nuclear power; the "pacing problem"; regulatory capture; evasive entrepreneurialism; "soft law"; … and why I'm still bitter about losing the 6th grade science fair!
Our discussion was inspired by my recent essay, “How Many Lives Are Lost Due to the Precautionary Principle?”
Europe’s New AI Industrial Policy
The race for artificial intelligence (AI) supremacy is on, with governments across the globe looking to take the lead in the next great technological revolution. As they did during the internet era, the US and Europe are once again squaring off with competing policy frameworks.
In early January, the Trump Administration announced a new light-touch regulatory framework and then followed up with a proposed doubling of federal R&D spending on AI and quantum computing. This week, the European Union Commission issued a major policy framework for AI technologies and billed it as “a European approach to excellence and trust.”
It seems the EU basically wants to have its cake and eat it too by marrying up an ambitious industrial policy with a precautionary regulatory regime. We’ve seen this show before. Europe is doubling down on the same policy regime it used for the internet and digital commerce. It did not work out well for the continent then, and there are reasons to think it will backfire on them again for AI technologies.
An Ambitious Industrial Policy Vision
The new EU framework includes a lot of catchphrases and proposals that are an industrial policy lover's dream. In an attempt to create "an ecosystem of excellence" and ensure the "human-centric development of AI," it identifies a variety of existing or new industrial planning efforts, including: Digital Innovation Hubs, Enterprise Resource Planning, the Digital Europe Programme, the Key Digital Technology Joint Undertaking, and broad-based public-private partnerships. This is all part of an official "Coordinated Plan" prepared together with the Member States "to foster the development and use of AI in Europe."
To accomplish that, the Commission says it will "facilitate the creation of excellence and testing centres" that will "concentrate in sectors where Europe has the potential to become a global champion." The Commission also wants to give special consideration to growing small and mid-size enterprises (SMEs) in establishing these plans.
Again, it's an ambitious industrial policy vision, and one that will be accompanied by a wide variety of (yet-to-be-determined) regulatory enactments to shape the development and use of AI. But if that approach really works, why aren't European digital companies global leaders today? Instead, firms based mostly in the US have risen to become household names across the globe. Regulation had an influence on that result because American firms enjoyed a policy regime that was rooted in "permissionless innovation," which generally allows experimentation by default and addresses concerns by using more flexible, ex post remedies. By contrast, Europe's internet policy approach was rooted in the precautionary principle, or the notion that innovation is essentially guilty until proven innocent. New technologies are to be subjected to prior constraints—or what the new European Commission white paper calls "prior conformity assessments"—before being allowed into the wild.
Precautionary Regulation Dominates
Despite losing that last round of the innovation wars, the new EU white paper makes it clear that Europe will keep using a precautionary approach. What does that mean for AI regulation? The problem here begins with defining what is a "high-risk" AI application requiring prior restraints. The white paper defines it in a somewhat circular fashion, saying that "an AI application should be considered high-risk where…(it) is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur" and is "used in such a manner that significant risks are likely to arise." Instead of providing legal certainty, this definition clarifies almost nothing and will require future regulatory inquiries to determine the full scope and nature of AI controls.
There's also a lot of talk in the proposal about preemptively addressing "risks for fundamental rights," which is understandable. AI innovations can raise various safety, security, and privacy concerns that deserve to be taken seriously. But what about the risk of not having access to important AI innovations at all? What about the risk of losing out on life-enriching—and in many cases life-saving—innovations because, instead of "building trust," the regulatory regime builds the exact opposite: fear of innovating?
Entrepreneurs and investors respond to incentives. Before building or investing in a new technology, they want to know how long it will take to get that good or service launched—assuming they can get approval at all. Every innovator and investor factors such political risk into their business plans. When the potential costs of product launch overwhelm the likely benefits, they will abandon innovative efforts or look to engage in them elsewhere.
The EU says “the race for global leadership is ongoing,” and claims that, “Europe offers significant potential, knowledge and expertise” through its efforts to make the continent an AI innovation hub. Indeed, some of the best AI researchers are in Europe, and there are plenty of brilliant people brimming with entrepreneurial enthusiasm about creating world-class AI applications. But all that knowledge and enthusiasm do not matter much if the regulatory deck is stacked against innovation from the start.
And Even More Expansive Regulation Down the Road
Beyond the precautionary approach in that document, the EU’s accompanying white paper on safety and liability implications of AI leaves open the possibility of an expansion in preemptive regulatory requirements. “Additional obligations may be needed for manufacturers to ensure that they provide features to prevent the upload of software having an impact on safety during the lifetime of the AI products,” the document notes. Moreover, if an ongoing AI software update “modifies substantially the product in which it is downloaded, the entire product might be considered as a new product and compliance with the relevant safety product legislation must be reassessed at the time the modification is performed.”
That sort of regulatory regime may sound quite sensible at first blush. In practice, however, it means that every conceivable tweak to an algorithm requires costly and complex regulatory approval. If traditional computer software had required regulatory approval before any new modifications could be made, most consumers would still be stuck with an aol.com email address and Windows 95 as an operating system.
What the European Commission proves with its new AI policy framework is that it is easy to talk a big game about planning for an innovative future, but it is an entirely different thing to actually bring one about. The European approach will have clear competitive effects, or more specifically, anti-competitive effects. As is already the case with the EU’s regulatory approach to the data economy, and GDPR in particular, regulatory compliance costs continue to skyrocket while small and mid-size enterprises struggle to cope. This means that only the firms operating the largest digital platforms are able to shoulder these burdens, leaving consumers without as many competitive, low-cost choices as they might otherwise enjoy. Not even generous government support for SMEs will be able to counterbalance the costly entry barriers associated with over-regulation.
Solidifying Market Power of Existing Giants?
This is why the EU’s worries about the market power of Google, Facebook, and other US-based tech giants are so ironic: the regulatory burden now helps those very firms maintain their market dominance. Over-regulation by the EU has undermined both home-grown and international investment and competition that might challenge those existing players. With each additional layer of AI regulation piled on top of Europe’s existing regulatory burden, the prospects for creative destruction decrease, as do the chances for life-enriching innovations to ever make it to consumers.
While the European Commission will, no doubt, insist that they are implementing this new AI regime with the very best of intentions in mind, there is no escaping the fact that regulation involves complex trade-offs and unforeseeable consequences. The consequences in this case are likely a bit easier to predict, however: By smothering new AI applications in layers of red tape, we can expect fewer innovations and less competition.
Despite all the talk of boosting SMEs, perhaps the EU will eventually become more like China and unabashedly support larger home-grown firms to make sure they are part of the global AI race. China has already made waves on this front with its 2017 “New Generation Artificial Intelligence Development Plan,” an audacious industrial policy plan which seeks “to build China’s first-mover advantage in the development of AI [and] to accelerate the construction of an innovative nation and global power in science and technology.” The document is as much a manifesto about geopolitical power as it is about technological governance. And it does not try to hide China’s authoritarian impulse to meticulously plan every facet of daily life under the auspices of promoting global technological leadership. China’s AI manifesto even concludes with a section on “public opinion guidance” that creepily insists the country will, “Fully use all kinds of traditional media and new media to quickly propagate new progress and new achievements in AI, to let the healthy development of AI become a consensus in all of society, and muster the vigor of all of society to participate in and support the development of AI.”
The new European AI industrial policy framework does not go as far as China’s, not only because the continent is obviously more open and democratic by nature, but also because the EU is a collection of many countries and cultures that will never be able to speak as coherently and forcefully with one voice on all technological governance matters. In fact, the EU’s new governance framework explicitly leaves room for more tailored AI regulation by individual member states.
Conclusion
This leaves Europe stuck between the polar opposites of China and the US when it comes to AI governance. China’s meticulously detailed, highly centralized, state-driven approach stands in stark contrast to the more bottom-up, adaptive American approach, which insists that regulators “must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”
The US approach also leans heavily on “soft law,” or informal governance mechanisms that are not as burdensome as precautionary regulatory controls. Soft law can include a wide variety of tools and methods for addressing policy concerns, including multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more. These are the governance tools that dominated for the internet and digital platforms over the past twenty years in the US, and they will likely continue to be the primary governance mechanisms for artificial intelligence, robotics, the internet of things, and other emerging tech sectors.
The EU probably thinks it has found the Goldilocks formula and gotten AI policy just right by falling between China and the US on the governance spectrum. It is more likely, however, that European policymakers will be unable to resist the urge to over-plan and micro-manage AI markets until they are once again left wondering how they got stuck trying to regulate market leaders that are headquartered oceans away from them. With the US once again adopting a more flexible approach, we could see a replay of the Web Wars, with innovators and investors putting their efforts behind AI launches in the US instead of Europe. Meanwhile, China will likely attract far more global venture capital for AI and robotics launches than it did for digital platforms. This could really put the squeeze on Europe.
Only time will tell. But, to paraphrase Yoda, when it comes to global artificial intelligence governance, one thing is clear: Begun the AI war has.