Adam Thierer’s Blog

April 11, 2017

Innovation Policy at the Mercatus Center: The Shape of Things to Come

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)


Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists, and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.


As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.


The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.


Indeed, it isn’t easy keeping on top of all of these issues and threats because the only constant in the world of innovation policy — the study of technological change and its impact on social, economic, and political systems — is constant change. You go to sleep one night thinking you’ve got the world figured out, only to awake the next morning to see that another tectonic shift has reshaped the landscape.


In the industrial era, it was hard enough mapping the contours of this field of academic study. This task has grown far more challenging. Computing and Internet-enabled innovations have fundamentally reshaped society and have also helped spawn other technological revolutions in diverse fields such as robotics, autonomous systems, artificial intelligence, big data, the sharing economy, 3D printing, virtual reality, aviation, advanced medical technology, blockchain and Bitcoin, and the so-called Internet of Things.


The short-term social and economic disruptions caused by these and other new technologies often lead to backlashes and even occasional “techno-panics.” When those panics bubble over into the political arena, the risk is that misguided regulatory policies will short-circuit opportunities for creators and entrepreneurs to pursue life-enriching innovations.


At the Mercatus Center, where we study these and other topics, our goal is to bring greater focus to these emerging technologies and the many different facets of innovation policy surrounding them. How we accomplish these goals is as challenging as it is exciting. As more and more industries and businesses are affected by these emerging technologies, the decisions that policymakers make about them will have profound effects on large parts of our economy and society.


Specifically, as we place ourselves at the forefront of these debates, our aim is to:



Explore how innovation policy affects economic growth and mobility, consumer welfare, and global competitive advantage;
Identify barriers to entrepreneurial endeavors and devise a roadmap for how to remove them;
Push back against techno-panics and overly broad theories of “technological harm” that could limit innovation opportunities and greater consumer choice; and
Confront the legal and ethical concerns surrounding emerging technologies and craft constructive solutions to those problems to avoid solutions of the top-down, “command-and-control” variety.

Overall, our vision is simple: Permissionless innovation must become the norm rather than the exception. This means innovation and innovators are protected against efforts to preemptively control ongoing trial-and-error experimentation. We should let creative minds and empowered entrepreneurs experiment with new and better ways of doing things. It also means that the future of public policy should be rooted in fact-based analysis and not shaped by outlandish fears of hypothetical worst-case scenarios.


Going forward, you will continue to see Mercatus producing research applying permissionless innovation across a host of areas. You can also expect us to begin pursuing big questions about the future.


What if we could reduce the number of deaths on US roadways from 96 people per day to zero? What if we could double life expectancy? Triple it? Wouldn’t it be nice if we could travel from New York to London in three hours? New York to Los Angeles in 2.5 hours? What if we welcomed automation instead of fearing its effects on the workforce? What if we could remove the technical and political barriers keeping us from going to Mars and then beyond it? And so on.


We pose these questions not merely because they are intellectually interesting and important, but also because we hope to make the case for embracing the future with a sense of wonder and optimism about how technological advancement can radically improve human well-being in both the short- and long-run.


It isn’t enough to simply point out where innovators and entrepreneurs are being hindered. It isn’t enough to simply tell people that the future will be bright. We must explain, in real terms, how hindering innovation opportunities undermines our collective ability to constantly improve the human condition.


And because there is a symbiotic relationship between freedom and progress, we must defend our collective ability as a society to achieve very concrete, widely shared advances in well-being through a general freedom to experiment with new technologies and better ways of doing things.


That is our vision for the Technology Policy Program at the Mercatus Center and we hope it is one that the public and public policymakers will embrace going forward.


April 5, 2017

FCC Chairman Pai Pledges Greater Use of Economics

Federal Communications Commission (FCC) Chairman Ajit Pai today announced plans to expand the role of economic analysis at the FCC in a speech at the Hudson Institute. This is an eminently sensible idea that other regulatory agencies (both independent and executive branch) could learn from.


Pai first made the case that when the FCC listened to its economists in the past, it unlocked billions of dollars of value for consumers. The most prominent example was the switch from hearings to auctions in order to allocate spectrum licenses. He perceptively noted that the biggest effect of auctions was the massive improvement in consumer welfare, not just the more than $100 billion raised for the Treasury. Other examples of the FCC using the best ideas of its economists include:



Use of reverse auctions to allocate universal service funds to reduce costs.
Incentive auctions that reward broadcasters for transferring licenses to other uses – an idea initially proposed in a 2002 working paper by Evan Kwerel and John Williams at the FCC.
The move from rate of return to price cap regulation for long distance carriers.

More recently, Pai argued, the FCC has failed to use economics effectively. He identified four key problems:



Economics is not systematically employed in policy decisions and is often employed late in the process. The FCC has no guiding principles for the conduct and use of economic analysis.
Economists work in silos. They are divided up among bureaus. Economists should be able to work together on a wide variety of issues, as they do in the Federal Trade Commission’s Bureau of Economics, the Department of Justice Antitrust Division’s economic analysis unit, and the Securities and Exchange Commission’s Division of Economic and Risk Analysis.
Benefit-cost analysis is not conducted well or often, and the FCC does not take Regulatory Flexibility Act analysis (which assesses effects of regulations on small entities) seriously. The FCC should use Office of Management and Budget guidance as its guide to doing good analysis, but OMB’s 2016 draft report on the benefits and costs of federal regulations shows that the FCC has estimated neither benefits nor costs of any of its major regulations issued in the past 10 years. Yet executive orders from multiple administrations demonstrate that “Serious cost-benefit analysis is a bipartisan tradition.”
Poor use of data. The FCC probably collects a lot of data that’s unnecessary, at a paperwork cost of $800 million per year, not including opportunity costs of the private sector. But even useful data are not utilized well. For example, a few years ago the FCC stopped trying to determine whether the wireless market is effectively competitive even though it collects lots of data on the wireless market.

To remedy these problems, Pai announced an initiative to establish an Office of Economics and Data that would house the FCC’s economists and data analysts. An internal working group will be established to collect input within the FCC and from the public. He hopes to have the new office up and running by the end of the year. The purpose of this change is to give economists early input into the rulemaking process, better manage the FCC’s data resources, and conduct strategic research to help find solutions to “the next set of difficult issues.”


Can this initiative significantly improve the quality and use of economic analysis at the FCC?


There’s evidence that independent regulatory agencies are capable of making some decent improvements in their economic analysis when they are sufficiently motivated to do so. For example, the Securities and Exchange Commission’s authorizing statute contains language that requires benefit-cost analysis of regulations when the commission seeks to determine whether they are in the public interest. Between 2005 and 2011, the SEC lost several major court cases due to inadequate economic analysis.


In 2012, the commission’s general counsel and chief economist issued new economic analysis guidance that pledged to assess regulations according to the principal criteria identified in executive orders, guidance from the Office of Management and Budget, and independent research. In a recent study, I found that the economic analysis accompanying a sample of major SEC regulations issued after this guidance was measurably better than the analysis accompanying regulations issued prior to the new guidance. The SEC improved on all four aspects of economic analysis it identified as critical: assessment of the need for the regulation, assessment of the baseline outcomes that will likely occur in the absence of new regulation, identification of alternatives, and assessment of the benefits and costs of alternatives.


Unlike the SEC, the FCC faces no statutory benefit-cost analysis requirement for its regulations. Unlike the executive branch agencies, the FCC is under no executive order requiring economic analysis of regulations. Unlike the Federal Trade Commission in the early 1980s, the FCC faces little congressional pressure for abolition.


But Congress is considering legislation that would require all regulatory agencies to conduct economic analysis of major regulations and subject that analysis to limited judicial review. Proponents of executive branch regulatory review have always contended that the president has legal authority to extend the executive orders on regulatory impact analysis to cover independent agencies, and perhaps President Trump is audacious enough to try this. Thus, it appears Chairman Pai is trying to get the FCC out ahead of the curve.


March 29, 2017

Some background on broadband privacy changes

Congress passed joint resolutions to rescind FCC online privacy regulations this week, which President Trump is expected to sign. Ignore the hyperbole. Lawmakers are simply attempting to maintain the state of Internet privacy law that’s existed for 20-plus years.


Since the Internet was commercialized in the 1990s, the Federal Trade Commission has used its authority to prevent “unfair or deceptive acts or practices” to prevent privacy abuses by Web companies and ISPs. In 2015, that changed. The Obama FCC classified “broadband Internet access service” as a common carrier service, thereby blocking the FTC’s authority to determine which ISP privacy policies and practices are acceptable.


This has contributed to a regulatory mess for consumers and tech companies. Technological convergence is here. Regulatory convergence is not.


Consider a plausible scenario. I start watching an NFL Thursday night game via Twitter on my tablet on Starbucks’ wifi. I head home at halftime and watch the game from my cable TV provider, Comcast. Then I climb into bed and watch overtime on my smartphone via NFL Mobile from Verizon.


One TV program, three privacy regimes. FTC guidelines cover me at Starbucks. Privacy rules from Title VI of the Communications Act cover my TV viewing. The brand-new FCC broadband privacy rules cover my NFL Mobile viewing and late-night browsing.


Other absurdities result from the FCC’s decision to regulate Internet privacy. For instance, if you bought your child a mobile plan with web filtering, she’s protected by FTC privacy standards, while your mobile plan is governed by FCC rules. Google Fiber customers are covered by FTC policies when they use Google Search but FCC policies when they use Yelp.


This Swiss-cheese approach to classifying services means that regulatory obligations fall haphazardly across services and technologies. It’s confusing to consumers and to companies, who need to write privacy policies based on artificial FCC distinctions that consumers disregard.


The House and Senate bills rescind the FCC “notice and choice” rules, which is the first step to restoring FTC authority. (In the meantime, the FCC will implement FTC-like policies.) 


Considering that these notice and choice rules have not even gone into effect, the rehearsed outrage from advocates demands explanation: The theatrics this week are not really about congressional repeal of the (inoperative) privacy rules. Two years ago the FCC decided to regulate the Internet in order to shape Internet services and content. Advocates are outraged because FCC control of the Internet is slipping away. Hopefully Congress and the FCC will eliminate the rest of the Title II baggage this year.


March 27, 2017

Who needs a telecom regulator? Denmark doesn’t.

US telecommunications laws are in need of updates. US law states that “the Internet and other interactive computer services” should be “unfettered by Federal or State regulation,” but regulators are increasingly imposing old laws and regulations onto new media and Internet services. Further, Federal Communications Commission actions often duplicate or displace general competition laws. Absent congressional action, old telecom laws will continue to delay and obstruct new services. A new Mercatus paper by Roslyn Layton and Joe Kane shows how governments can modernize telecom agencies and laws.


Legacy Laws


US telecom laws are codified in Title 47 of the US Code and enforced mostly by the FCC. That the first eight sections of US telecommunications law are devoted to the telegraph, the killer app of 1850, illustrates congressional inaction towards obsolete regulations.


In the last decade, therefore, several media, Internet, and telecom companies inadvertently stumbled into Communications Act quagmires. An Internet streaming company, for instance, was bankrupted for upending the TV status quo established by the FCC in the 1960s; FCC precedents mean broadcasters can be credibly threatened with license revocation for airing a documentary critical of a presidential candidate; and the thousands of Internet service providers across the US are subjected to laws designed to constrain the 1930s AT&T long-distance phone monopoly.


US telecom and tech laws, in other words, are a shining example of American “kludgeocracy”–a regime of prescriptive and dated laws whose complexity benefits special interests and harms innovators. These anti-consumer results led progressive Harvard professor Lawrence Lessig to conclude in 2008 that “it’s time to demolish the FCC.” While Lessig’s proposal goes too far, Congress should listen to the voices on the right and left urging them to sweep away the regulations of the past and rationalize telecom law for the 21st century.


Modern Telecom Policy in Denmark


An interesting new Mercatus working paper explains how Denmark took up that challenge. The paper, “Alternative Approaches to Broadband Policy: Lessons on Deregulation from Denmark,” is by Denmark-based scholar Roslyn Layton, who served on President Trump’s transition team for telecom policy, and Joe Kane, a master’s student in the GMU econ department.


The “Nordic model” is often caricatured by American conservatives (and progressives like Bernie Sanders) as socialist control of industry. But as AEI’s James Pethokoukis and others point out, it’s time both sides updated their 1970s talking points. “[W]hen it comes to regulatory efficiency and business freedom,” Tyler Cowen recently noted, “Denmark has a considerably higher [Heritage Foundation] score than does the U.S.”


Layton and Kane explore Denmark’s relatively free-market telecom policies. They explain how Denmark modernized its telecom laws over time as technology and competition evolved. Critically, the center-left government eliminated Denmark’s telecom regulator in 2011 in light of the “convergence” of services to the Internet. Scholars noted,


Nobody seemed to care much—except for the staff who needed to move to other authorities and a few people especially interested in IT and telecom regulation.


Even-handed, light telecom regulation performs pretty well. Denmark, along with South Korea, leads the world in terms of broadband access. The country also has a modest universal service program that depends primarily on the market. Further, similar to other Nordic countries, Denmark permitted a voluntary forum, including consumer groups, ISPs, and Google, to determine best practices and resolve “net neutrality” controversies.


Contrast Denmark’s tech-neutral, consumer-focused approach with recent proceedings in the United States. One of the Obama FCC’s major projects was attempting to regulate how TV streaming apps functioned–despite the fact that TV has never been more abundant and competitive. Countless hours of staff time and industry time were wasted (Trump’s election killed the effort) because advocates saw the opportunity to regulate the streaming market with a law intended to help Circuit City (RIP) sell a few more devices in 1996. The biggest waste of government resources has been the “net neutrality” fight, which stems from prior FCC attempts to apply 1930s telecom laws to 1960s computer systems. Old rules haphazardly imposed on new technologies create a compliance mindset in our tech and telecom industries. Worse, these unwinnable fights over legal minutiae prevent FCC staff from working on issues where they can help consumers.


Americans deserve better telecom laws, but the inscrutability of FCC actions means consumers don’t know what to ask for. Layton and Kane show that alternative frameworks are available. They highlight Denmark’s political and cultural differences from the US. Nevertheless, Denmark’s telecom reforms and pro-consumer policies deserve study and emulation. The Danes have shown that tech-neutral, consumer-focused policies can not only expand broadband access but also reduce government duplication and overreach.


March 9, 2017

Federal spectrum sales can help fund Trump’s infrastructure investments

The Wall Street Journal reported yesterday that the White House is crafting a plan for $1 trillion in infrastructure investment. I was intrigued to learn that President Trump “inquired about the possibility of auctioning the broadcast spectrum to wireless carriers” to help fund the programs. Spectrum sales are the rare win-win-win: they stimulate infrastructure investment (cell towers, fiber networks, devices), provide new wireless services and lower prices to consumers, and generate billions in revenue for the federal government.


Broadcast TV spectrum is a good place to look for revenue, but the White House should also look at federal agencies, which possess about ten times what broadcasters hold.


Large portions of spectrum are underused or misallocated because of decades of command-and-control policies. Auctioning spectrum for flexible uses, on the other hand, is a free-market policy that is often lucrative for the federal government. Since 1993, when Congress authorized spectrum auctions, wireless carriers and tech companies have spent somewhere around $120 billion for about 430 MHz of flexible-use spectrum, and the lion’s share of revenue was deposited in the US Treasury.


A few weeks ago, the FCC completed the $19 billion sale of broadcast TV spectrum, the so-called incentive auction. Despite underwhelming many telecom experts, this was the third largest US spectrum auction ever in terms of revenue and will transfer a respectable 70 MHz from restricted (broadcast TV) use to flexible use.


The remaining broadcast TV spectrum that President Trump is interested in totals about 210 MHz. But even more spectrum is under the President’s nose.


As Obama’s Council of Advisors on Science and Technology pointed out in 2012, federal agencies possess around 2,000 MHz of “beachfront” (sub-3.5 GHz) spectrum. I charted various spectrum uses in a December 2016 Mercatus policy brief.



This government spectrum is very valuable if portions can be cleared of federal users. Federal spectrum was part of the frequencies the FCC auctioned in 2006 and 2015, and the slivers of federal spectrum (around 70 MHz of the federal total) sold for around $27 billion combined.
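As a purely illustrative back-of-envelope exercise, the round figures cited above can be translated into rough per-megahertz values and a hypothetical federal clearing scenario. The sketch below uses only this post’s numbers plus one invented assumption (that 10 percent of federal beachfront holdings could be cleared), and it ignores the enormous real-world variation in value across bands, license geographies, and auction conditions:

```python
# Back-of-envelope spectrum valuation using the round figures cited above.
# Illustrative averages only: actual prices vary widely by band, license
# geography, encumbrances, and auction timing.

TOTAL_AUCTION_REVENUE = 120e9   # ~$120B raised since 1993
TOTAL_MHZ_AUCTIONED = 430       # ~430 MHz of flexible-use spectrum

FEDERAL_SLIVERS_REVENUE = 27e9  # ~$27B from the 2006 and 2015 auctions
FEDERAL_SLIVERS_MHZ = 70        # ~70 MHz of former federal spectrum

avg_per_mhz = TOTAL_AUCTION_REVENUE / TOTAL_MHZ_AUCTIONED
federal_per_mhz = FEDERAL_SLIVERS_REVENUE / FEDERAL_SLIVERS_MHZ

print(f"All auctions:    ${avg_per_mhz / 1e6:.0f}M per nationwide MHz")
print(f"Federal slivers: ${federal_per_mhz / 1e6:.0f}M per nationwide MHz")

# Invented scenario: clear 10% of the ~2,000 MHz of federal "beachfront"
# spectrum and sell it at the historical federal per-MHz average.
cleared_mhz = 0.10 * 2000
print(f"Clearing {cleared_mhz:.0f} MHz could raise roughly "
      f"${cleared_mhz * federal_per_mhz / 1e9:.0f}B")
```

On those crude assumptions, clearing even a tenth of federal beachfront holdings could plausibly raise tens of billions of dollars, which is why federal spectrum fits naturally into an infrastructure-funding plan.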


The Department of Commerce has been analyzing which federal spectrum bands could be used commercially and the Mobile Now Act, a pending bill in Congress, proposes more sales of federal spectrum. These policies have moved slowly (and the vague language about unlicensed spectrum in the Mobile Now bill has problems) but the Trump administration has a chance to expedite spectrum reallocation processes and sell more federal spectrum to commercial users.


February 8, 2017

Why Compromise and Allow the FCC to Regulate the Internet?

If Congress and the President wanted to prevent intrusive regulation of the Internet, how would they do it? They know that silence on the issue wouldn’t protect Internet services. As Congress learned in the 1960s and 1970s with cable TV, congressional silence, to the FCC, looks like permission to enact a far-reaching regulatory regime.


In the 1990s, Congress knew the FCC would be tempted to regulate the Internet and Internet services and that silence would be seen as an invitation to regulate the Internet. Congress and President Clinton therefore passed a 1996 law, Section 230 of the Communications Decency Act, which stated:


It is the policy of the United States…to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.


But this statement raised the possibility that the FCC would regulate Internet access providers and would claim (as FCC defenders do today) they were not regulating “the Internet,” only access providers. To preempt such sophistry, Congress added that “interactive computer services” shielded from regulation include:


specifically a service or system that provides access to the Internet….


Congress proved prescient. For over a decade, as the FCC’s traditional areas of regulation waned in importance, advocates and FCC officials have sought to regulate Internet access providers and the Internet. After two failed attempts to regulate providers and enforce net neutrality norms, the FCC decided to regulate Internet access providers with Title II, the same provisions regulating telephone and telegraph providers. Section 230 featured prominently in the dissents of commissioners Pai and O’Rielly, who both noted that the Open Internet Order was a simple rejection of the plain words of Congress. Nevertheless, two judges on the DC Circuit Court of Appeals blessed those regulations and the Open Internet Order in 2016.


If “unfettered from Federal regulation” means anything, doesn’t it mean that the FCC cannot use Title II, its most stringent regulatory regime, to regulate Internet access providers? Is there any combination of words Congress could draft that would protect Internet access providers and Internet services from Title II?


There is a pending appeal challenging the Open Internet Order before the DC Circuit and, after that, a possible appeal to the Supreme Court. The Supreme Court, in particular, might be receptive to a common-sense argument that “unfettered from Federal regulation” is hazy around the edges but that it cannot mean regulation of ISPs’ content, services, protocols, topology, and business models.


I understand the sentiment that a net neutrality compromise is urgently needed to save the Internet from Title II. But until the Open Internet Order appeals have concluded, I think it’s premature to compromise and grant the FCC permanent authority to regulate the Internet with vague standards (e.g., no one knows what “reasonable throttling” means). A successful appeal could mean a third and final court loss for net neutrality purists, thereby restoring Section 230’s free-market protections for the Internet. Until the Supreme Court denies cert or agrees with the FCC that up is down, black is white, and agencies can ignore clear statutes, I’m not persuaded that Congress should nullify its own deregulatory language of Section 230 with a net neutrality compromise.


January 23, 2017

Thoughts on “Demand” for Unlicensed Spectrum

The proposed Mobile Now Act signals that spectrum policy is being prioritized by Congress, and there are some useful reforms in the bill. However, the bill encourages unlicensed spectrum allocations in ways that I believe will create major problems down the road.


Congress and the FCC need to proceed much more carefully before allocating more unlicensed spectrum. The FCC’s 2008 decision, for instance, to allow unlicensed devices in the “TV white spaces” has been disappointing. As some economists recently noted, “[s]imply stated, the FCC’s TV white space policy to date has been a flop.” Unlicensed spectrum policy is also generating costly fights (see WiFi v. LTE-U, Bluetooth v. TLPS, LightSquared v. GPS) as device makers and carriers lobby about who gains regulatory protection and how to divide this valuable resource that the FCC parcels out for free.


The unlicensed spectrum provisions in the Mobile Now Act may force the FCC to referee innumerable fights over who has access to unlicensed spectrum. Section 18 of the Mobile Now bill encourages unlicensed spectrum. It says the FCC must


make available on an unlicensed basis radio frequency bands sufficient to meet demand for unlicensed wireless broadband operations if doing so is…reasonable…and…in the public interest.


Note that we have language about supply and demand here. But unlicensed spectrum is free to all users using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset (like spectrum or real estate) is handed out when price = 0. By removing a valuable asset from the price system, large allocation distortions are likely.
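To make that point concrete, here is a minimal stylized sketch. The linear demand curve and every number in it are invented purely for exposition, not estimates of actual spectrum demand:

```python
# Stylized excess-demand illustration. The demand curve and all numbers
# are invented for exposition, not estimates of real spectrum demand.

MAX_Q = 1000   # MHz demanded if spectrum were free (invented)
SLOPE = 2.0    # MHz of demand lost per dollar of per-MHz price (invented)
SUPPLY = 100   # fixed MHz of unlicensed spectrum made available (invented)

def quantity_demanded(price):
    """Linear demand curve: quantity falls as the per-MHz price rises."""
    return max(0.0, MAX_Q - SLOPE * price)

# Price at which quantity demanded would just equal the fixed supply.
clearing_price = (MAX_Q - SUPPLY) / SLOPE

print(f"Quantity demanded at p = 0: {quantity_demanded(0):.0f} MHz")
print(f"Fixed supply:               {SUPPLY} MHz")
print(f"Excess demand at p = 0:     {quantity_demanded(0) - SUPPLY:.0f} MHz")
print(f"Market-clearing price:      ${clearing_price:.0f} per MHz")
```

At a zero price, the gap between quantity demanded and quantity supplied must be rationed by something other than willingness to pay, which is precisely why the FCC ends up refereeing fights like those noted above.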


Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum. 


The problems from unlicensed spectrum allocation could be mitigated if the FCC decided, as part of a “public interest” conclusion, to estimate the opportunity cost of any unlicensed spectrum allocated. That way, the government will have a rough idea of the market value of unlicensed spectrum being given away. There have been several auctions and there is an active secondary market for spectrum so estimates are achievable, and the UK has required the calculation of the opportunity cost of spectrum for over a decade.


With these estimates, it will be more difficult but still possible for the FCC to defend giving away spectrum for free. Economist Coleman Bazelon, for instance, estimates that the incremental value of a nationwide megahertz of licensed spectrum is more than 10x the equivalent unlicensed spectrum allocation. Significantly, unlike licensed spectrum, allocations of unlicensed bands are largely irreversible.
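As a hedged sketch of how such an opportunity-cost screen might work in practice, consider the arithmetic below. The per-megahertz value and the size of the proposed allocation are placeholders I have invented for illustration, not figures from Bazelon’s study or from any FCC proceeding:

```python
# Illustrative opportunity-cost screen for a proposed unlicensed allocation.
# The licensed per-MHz value is a placeholder loosely based on historical
# auction averages; the 50 MHz allocation is hypothetical.

LICENSED_VALUE_PER_MHZ = 300e6   # assumed ~$300M per nationwide MHz
LICENSED_TO_UNLICENSED = 10      # Bazelon's ~10:1 incremental-value ratio

proposed_unlicensed_mhz = 50     # hypothetical allocation under review

opportunity_cost = proposed_unlicensed_mhz * LICENSED_VALUE_PER_MHZ
implied_unlicensed_value = opportunity_cost / LICENSED_TO_UNLICENSED

print(f"Forgone licensed value:   ${opportunity_cost / 1e9:.1f}B")
print(f"Implied unlicensed value: ${implied_unlicensed_value / 1e9:.1f}B")

# A "public interest" finding would then need to argue that unlicensed
# benefits exceed the gap between the two figures (about $13.5B here).
```

Even a rough screen like this would force the agency to state, on the record, why the unlicensed benefits justify the forgone auction value.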


People can quibble with the estimates but it is unclear that unlicensed use is the best use of additional spectrum. In any case, hopefully the FCC will attempt to bring some economic rigor to public interest determinations.


January 19, 2017

Did the Incentive Auction Fail?

Is the incentive auction a disappointment? Along with others, I suspect that the complexity of the auction depressed prices and might have reduced participation. But, for consumers, this auction is not a disappointment. At least–not yet.


Scott Wallsten at the Technology Policy Institute has a good rundown. My thoughts below:


By my count, this was the eighth major auction of commercial, liberalized spectrum since auctions were authorized in 1993. On the most important question–how much spectrum was repurposed from restricted uses to liberal, licensed uses?–this auction stacks up pretty well.


At 70 MHz, this was the third largest auction in terms of total spectrum repurposed, trailing the mid-1990s PCS auction (120 MHz) and 2006 AWS-1 auction (90 MHz).


On the next most important question–how quickly will new services be deployed?–the jury is still out. Broadcasters have over three years to clear out of the spectrum, but some believe it will take longer. If delays mount and it takes several years for broadcasters to be “repacked,” this auction doesn’t look quite so good.


That said, some people are disappointed with this auction, particularly some in the broadcasting industry and in the FCC or Congress, who expected higher auction revenues.


High revenue gets nice headlines but is far less important than the amount of spectrum repurposed. It’s an underreported story but close to 290 MHz of spectrum, nearly 45% of all liberalized, licensed spectrum, was de-zoned by the FCC, not auctioned. De-zoning spectrum generates zero auction revenue for the government but consumers see substantial benefits from this de-zoning, even if the government does not directly benefit. I recently wrote a policy brief about the benefits of de-zoning spectrum.


In any case, in terms of revenue, this auction was not a failure. At around $17 billion, it’s third in terms of auction revenue, trailing the 2008 700 MHz band auction (about $21 billion in 2015 dollars) and the massive haul from the 2015 AWS-3 auction (about $42 billion).


At close, broadcasters will receive $10 billion for the 70 MHz of available licensed spectrum. Some broadcasters consider it a failure, just as a home seller is disappointed when her home sells below list price. The broadcasters initially requested $86 billion for 100 MHz of available spectrum. When the carriers’ bids didn’t match that price, some broadcasters pulled out and the remaining broadcasters lowered their price.


In short, the auction will clear a respectable amount of spectrum and will generate respectable revenues. Were there better ways of repurposing broadcast spectrum? Broadcasters have a point that the complexity of the auction might have reduced participation. As Wallsten notes, an overlay auction (like AWS-1) or simply de-zoning the spectrum might have been better (faster) alternatives. But it goes too far to deem this auction a failure (at least until we know how long the broadcaster repack takes).


January 9, 2017

Remember What the Experts Said about the Apple iPhone 10 Years Ago?

Today marks the 10th anniversary of the launch of the Apple iPhone. With all the headlines being written today about how the device changed the world forever, it is easy to forget that before its launch, plenty of experts scoffed at the idea that Steve Jobs and Apple had any chance of successfully breaking into the seemingly mature mobile phone market.


After all, those were the days when BlackBerry, Palm, Motorola, and Microsoft were on everyone’s minds. Perhaps, then, it wasn’t so surprising to hear predictions like these leading up to and following the launch of the iPhone:



In December 2006, Palm CEO Ed Colligan summarily dismissed the idea that a traditional personal computing company could compete in the smartphone business. “We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.”
In January 2007, Microsoft CEO Steve Ballmer laughed off the prospect of an expensive smartphone without a keyboard having a chance in the marketplace as follows: “Five hundred dollars? Fully subsidized? With a plan? I said that’s the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good e-mail machine.”
In March 2007, computing industry pundit John C. Dvorak argued that “Apple should pull the plug on the iPhone” since “There is no likelihood that Apple can be successful in a business this competitive.” Dvorak believed the mobile handset business was already locked up by the era’s major players. “This is not an emerging business. In fact it’s gone so far that it’s in the process of consolidation with probably two players dominating everything, Nokia Corp. and Motorola Inc.”


A decade after these predictions were made, Motorola, Nokia, Palm, and Blackberry have been decimated by the rise of Apple as well as Google (which actually purchased Motorola in the midst of it all). And Microsoft still struggles with mobile even though it remains a player in the field. Rarely have Joseph Schumpeter’s “perennial gales of creative destruction” blown harder than they have in the mobile sector over this 10-year period.


The lesson here is pretty clear. As Yogi Berra once quipped: “It’s tough to make predictions, especially about the future.” But there’s more to it than just that. These mistaken predictions serve as a classic example of those with a static snapshot mentality disregarding the potential for new entry and technological disruption to shake things up. “In dealing with disruptive technologies leading to new markets,” says Clayton M. Christensen, author of The Innovator’s Dilemma, “researchers and business planners have consistently dismal records.”


This has implications not only for business forecasting but also for public policy, which is notoriously shortsighted when it comes to the potential for new technological innovations to shake up existing markets. Just because a particular firm or sector is the proverbial “King of the Hill” one day doesn’t mean it will be able to sit on that lofty perch forever. Likewise, policymakers cannot neatly “plan progress” by incessantly intervening in the hope of directing markets and technologies toward some supposedly better end. Picking winners and losers–or even just trying to stimulate more “winners”–will likely end very badly.


In his book, The Year 2000: A Framework for Speculation on the Next Thirty-three Years, the futurist Herman Kahn wisely noted that:


History is likely to write scenarios that most observers would find implausible not only prospectively but sometimes, even in retrospect. Many sequences of events seem plausible now only because they have actually occurred; a man who knew no history might not believe any. Future events may not be drawn from the restricted list of those we have learned are possible; we should expect to go on being surprised.


But we can only “expect to go on being surprised” by leaving plenty of breathing room for the evolution of markets and technology. While all social and economic experiments are accompanied by a great deal of unpredictability and disruption, history indicates that most of those experiments will result in greater progress and prosperity–just as the iPhone did. But developments such as these are almost impossible to predict or plan beforehand. We have to get the environment for innovation right and then let creative minds work their magic.

December 5, 2016

Innovation Arbitrage, Technological Civil Disobedience & Spontaneous Deregulation

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:



“Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
“Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
“Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)


In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.


Comma.ai Case Study, Part 1: The Innovation Arbitrage Threat

The company I want to highlight is Comma.ai, a start-up that had hoped to sell a $999 after-market kit for vehicles called the “Comma One,” which “would give average, everyday cars autonomous functionality.” Created by famed hacker George Hotz, who as a teenager gained notoriety for being the first person to unlock an iPhone in 2007, the Comma One represents an attempt to create autonomous vehicle tech “on the cheap” by using off-the-shelf cameras and GPS technology combined with a healthy dose of artificial intelligence technology.




But regulators at the National Highway Traffic Safety Administration (NHTSA), the federal agency responsible for road safety and automobile regulation, were none too happy to hear about Hotz’s plan to unleash his technology into the wild without first getting their blessing. On October 27, the agency fired off a nastygram to Hotz saying: “We are concerned that your product would put the safety of your customers and other road users at risk. We strongly encourage you to delay selling or deploying your product on the public roadways unless and until you can ensure it is safe.”


Hotz responded on Twitter promptly and angrily. After posting the full NHTSA letter, he said, “First time I hear from them and they open with threats. No attempt at a dialog.” In a follow-up tweet, he said, “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it.” And then he announced that, “The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.” A flood of news articles followed about Hotz’s threat to engage in this sort of global innovation arbitrage by bolting US shores.


Incidentally, what Hotz and Comma.ai were proposing to do with Comma One—i.e., deploy autonomous vehicle tech into the wild without prior regulatory approval—was recently done by Otto, a developer of autonomous trucking technology. As Mark Harris reported on Backchannel:


When Otto performed its test drive — the one shown in the May video — it did so despite a clear warning from Nevada’s Department of Motor Vehicles (DMV) that it would be violating the state’s autonomous vehicle regulations. When the DMV realized that Otto had gone ahead anyway, one official called the drive “illegal” and even threatened to shut down the agency’s autonomous vehicle program.


While Nevada regulators were busy firing off angry letters, Otto was busy doing even more testing in others states (like Ohio), which are eager to make their jurisdictions a testbed for autonomous vehicle innovation. In fact, just recently, Ohio Gov. John Kasich announced the creation of the “Smart Mobility Corridor,” which, according to the Dayton Daily News, will be “a 35-mile stretch of U.S. 33 in central Ohio that runs through Logan County. Officials say that section of U.S. 33 will become a corridor where technologies can be safely tested in real-life traffic, aided by a fiber-optic cable network and sensor systems slated for installation next year.”




This is an example of how innovation arbitrage will increasingly take root domestically as well as abroad, and some states (or countries) will use inducements in an effort to lure innovators to their jurisdictions.


Anyway, let’s get back to the Comma One case study. I don’t want to get too sidetracked regarding the merits of the concerns raised by NHTSA in its letter to Hotz and the implications of the agency’s threats for innovation in this space. But EFF board member Brad Templeton did a nice job addressing that issue in an essay about NHTSA’s letter that threatened Comma. As Templeton observed:


I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.


This gets to the very real trade-offs in play in the debate over driverless car technology and its regulation. In fact, my Mercatus Center colleague Caleb Watney and I recently filed comments with NHTSA addressing the agency’s recently proposed “Federal Automated Vehicles Policy.” We emphasized the potentially deleterious implications of prior regulatory restraints on autonomous vehicle innovation by stressing the horrific real-world baseline we live with today: over 35,000 people died on US roadways in 2015 (roughly 96 people per day), and 94 percent of those crashes were attributable to human error.


Caleb and I noted that, by imposing new preemptive constraints on the coding of superior autonomous driving technology, “NHTSA’s proposed policy for automated vehicles may inadvertently increase the number of total automobile fatalities by delaying the rapid development and diffusion of this life-saving technology.” Needless to say, if that comes to pass, it would be a disaster because “automation on the roads could be the great public-health achievement of the 21st century.”


In our filing, Caleb and I estimated that, “If NHTSA’s proposed premarket approval process slows the deployment of HAVs by 5 percent, we project an additional 15,500 fatalities over the course of the next 31 years. At 10 percent regulatory delay, we project an additional 34,600 fatalities over 33 years. And at 25 percent regulatory delay, we project an additional 112,400 fatalities over 40 years.”
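The filed comments should be consulted for the actual methodology behind those figures. Purely to illustrate the general shape of such a projection, here is a minimal sketch in which every parameter (the logistic adoption curve, the 90 percent effectiveness figure, the delay mechanics) is an assumption chosen for exposition; its outputs will not match the estimates quoted above:

```python
import math

# Illustrative model of how regulatory delay in highly automated vehicle
# (HAV) diffusion could translate into additional road deaths. All
# parameters are assumptions for exposition; this is NOT the model from
# the Mercatus filing.

BASELINE_DEATHS_PER_YEAR = 35_000  # ~2015 US roadway fatalities
HAV_EFFECTIVENESS = 0.9            # assumed share of fatalities HAVs prevent

def adoption(year, midpoint=15.0, steepness=0.3):
    """Assumed logistic HAV adoption share, `year` years from today."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

def extra_deaths(delay_fraction, horizon_years):
    """Cumulative extra deaths if adoption is slowed by delay_fraction."""
    total = 0.0
    for t in range(1, horizon_years + 1):
        # The delayed curve reaches any given adoption level later.
        gap = adoption(t) - adoption(t / (1.0 + delay_fraction))
        total += BASELINE_DEATHS_PER_YEAR * HAV_EFFECTIVENESS * gap
    return total

for delay, horizon in [(0.05, 31), (0.10, 33), (0.25, 40)]:
    print(f"{delay:.0%} slowdown over {horizon} years: "
          f"~{extra_deaths(delay, horizon):,.0f} additional deaths")
```

The point of the exercise is qualitative: under any model in which a life-saving technology diffuses over time, slowing that diffusion compounds into large cumulative losses.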


So, needless to say, this is a very big deal.


But let’s ignore all those potential foregone benefits for the moment and just stick with the question of whether Hotz’s threat to engage in a bit of global innovation arbitrage (by moving to China or somewhere else) could work, or at least affect policy in some fashion. I think it absolutely could be an effective threat both because (a) policymakers really do want to do everything they can to achieve greater road safety, and (b) the auto sector remains a hugely important industry for the United States, and one that policymakers will want to do everything in their power to retain on our shores.


Moreover, as Templeton observes, “Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions.” Even if NHTSA succeeds in bringing Comma to heel, there will be others who will follow in its footsteps. It might be a firm like Otto, but there are many other players in this space today, including big dogs like Tesla and Google. If ever there was a truly global technology industry, it is the automotive sector. Autonomous vehicle innovation could take root and blossom in almost any country in the world, and many countries will be waiting with open arms if America screws up its regulatory process.


As Templeton concludes:


The USA and California led the way in robocars in part because it was unregulated. In the USA, everything is permitted unless it was explicitly forbidden and nobody thought to write “no robots” in the laws. Progress in other countries where everything is forbidden unless it is permitted was much slower. The USA is moving in the wrong direction.


Comma.ai Case Study, Part 2: The Technological Civil Disobedience Threat

But an interesting thing happened on the way to Comma’s threatened exodus. On November 30, the firm announced that it would now be open sourcing the code for its autonomous vehicle technology. Reporters at The Verge noted that, during a press conference:


Hotz said that Comma.ai decided to go open source in an effort to sidestep NHTSA as well as the California DMV, the latter of which he said showed up to his house on three separate occasions. “NHTSA only regulates physical products that are sold,” Hotz said. “They do not regulate open source software, which is a whole lot more like speech.” He went on to say that “if the US government doesn’t like this [project], I’m sure there are plenty of countries that will.”


So here we see Hotz combining the threat of still potentially taking the project offshore (i.e., global innovation arbitrage) with the suggestion that by open-sourcing the code for Comma One he might be able to get around the law altogether. We might consider that an indirect form of technological civil disobedience.




Incidentally, Hotz may not be aware of the fact that NHTSA is in the process of making a power-play to become a driverless car code cop. While Hotz is technically correct that, under current law, NHTSA officials “do not regulate open source software, which is a whole lot more like speech,” NHTSA’s recent Federal Automated Vehicles Policy claimed that the agency “has authority to regulate the safety of software changes provided by manufacturers after a vehicle’s first sale to a consumer” while also suggesting that the agency “may need to develop additional regulatory tools and rules to regulate the certification and compliance verification of such post-sale software updates.”


Needless to say, this proposal has important ramifications for not only Comma, but all other firms in this sector. Consider the implications for Tesla’s “autopilot” mode, which is really little more than a string of constantly-evolving code it pushes out to offer greater and greater autonomous driving functionality.  How would that iterative process work if every time Tesla wanted to make a little tweak to its code it had to run to Washington and file paperwork with NHTSA petitioning for permission to experiment and improve their systems? And then think about all the smaller innovators out there who want to be the next Elon Musk or George Hotz but do not yet have the resources or political connections in Washington to even go through this complex and costly process.


In any event, I have no idea if Hotz or Comma.ai will follow through with any of these threats or be successful in doing so. It may be the case that he is just blowing smoke and that he and his firm will end up staying in the U.S. and perhaps even later reversing course on the decision to open source the Comma code. But to the extent that innovators like Hotz even hint that they might split the country or open source their code to avoid burdensome regulatory regimes, it can have an influence on future policy decisions. Or at least it should.


New Tech Realities & Their Policy Implications

Indeed, the increasing prevalence of global innovation arbitrage and technological civil disobedience raises some interesting issues for the governance of emerging technologies going forward. The traditional regulatory stance toward many existing sectors and technologies will be challenged by these realities. That’s because most of those traditional regulatory systems are highly precautionary, preemptive, and prophylactic in character. They generally opt for policy solutions that are top-down, overly rigid, and bureaucratic.


This results in a slow-moving and sometimes completely stagnant regulatory approval process that can stop innovation dead in its tracks, or at least delay it for many years. Such systems send innovators a clear message: You are guilty until proven innocent and must receive some bureaucrat’s blessing before you can move forward.


Of course, in the past, many innovators (especially smaller scale entrepreneurs) really couldn’t do much to avoid similar regulatory systems where they existed. You either fell into line, or else! It wasn’t always clear what “or else!” would entail, but it could range from being denied a permit/license to operate, waiting months or years for rules to emerge, dealing with fines or other penalties, or some combination of all those things. Or perhaps you would just give up on your innovative idea altogether and exit the market.


But the world has changed in some important ways in recent years. Many of the underlying drivers of the digital revolution—massive increases in processing power, exploding storage capacity, steady miniaturization of computing, ubiquitous communications and networking capabilities, the digitization of all data, and more—are beginning to have a profound impact beyond the confines of cyberspace. As venture capitalist Marc Andreessen explained in a widely read 2011 essay about how “software is eating the world”:


More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.


Why is this happening now? Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.


We can add to this list of new realities the more general problem of technology accelerating at an unprecedented pace. This is what philosophers of technology call the “pacing problem.” In his new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control, Wendell Wallach concisely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” Wallach correctly observed, but like other philosophers, he believes that modern technological innovation is accelerating much faster than it was in the past.


What are the ramifications of all this for policy? As technology lawyer and consultant Larry Downes has noted, lawmaking in the information age is now inexorably governed by the “law of disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.” This law is “a simple but unavoidable principle of modern life,” he said, and it will have profound implications for the way businesses, government, and culture evolve. “As the gap between the old world and the new gets wider,” he argues, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.”




The end result of the “law of disruption” and a world relentlessly governed by the ever-accelerating “pacing problem” is that it will be harder than ever to effectively control emerging technologies using traditional legal and regulatory systems and mechanisms. And this makes it even more likely that the related threats of global innovation arbitrage and various forms of technological civil disobedience will become more regular fixtures in debates about many emerging technologies.


New Governance Models

How one reacts to these new realities will depend upon one’s philosophical disposition toward innovative activities more generally.


Consider first those adhering to a more “precautionary principle” mindset, which I have defined in my recent book as those who believe “that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.”


Needless to say, the precautionary principle crowd will be dismayed by these new trends and will perhaps even decry them as “lawlessness.” Some of these folks seem to be in denial about these new realities and pretend that nothing much has changed. Yet, I have found that most precautionary principle-oriented advocates, and even many regulatory agencies themselves, tend to acknowledge these new realities. But they remain very uncertain about how best to respond to them, often just suggesting that we’ll all need to try harder to impose new and better regulations on a more expedited or streamlined basis.


Of course, those of us who generally embrace the alternative policy vision for technological governance—“permissionless innovation”—are going to be more accepting of the new technological realities I have described, and we will perhaps even work to defend and encourage them. But while I count myself among this crowd, we cannot ignore the fact that many serious challenges will arise when innovation outpaces law or can easily evade it.


There is some middle ground here, although it is very messy middle ground.


The era of technocratic, top-down, one-size-fits-all regulatory regimes is fading, or at least being severely strained. We will instead need to craft adaptive policies going forward that are bottom-up, flexible, and evolutionary in character.


What that means in practice is that a lot more “soft law” and informal governance mechanisms will become the new norm. I wrote about this new policy environment in my recent essay, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” as well as in a lengthy review of Wendell Wallach’s latest book about technology ethics. Along with Gary Marchant of the Arizona State University law school, Wallach recently published an excellent book chapter, “Governing the Governance of Emerging Technologies,” which discusses these soft law mechanisms, including “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certifications programs and private industry initiatives.”


Their chapter appears in an important collection of essays that Gary Marchant edited with Kenneth W. Abbott and Braden Allenby, entitled Innovative Governance Models for Emerging Technologies.


What is interesting about the chapters in that book is that a seemingly widespread consensus now exists among experts in this field that some combination of these soft law mechanisms is likely to become the primary mode of technological governance for the indefinite future. This is because, as Marc A. Saner points out in a different chapter of that book, “the control paradigm is too limited to address all the issues that arise in the context of emerging technologies.” By the control paradigm, he generally means traditional administrative regulatory agencies and processes. He and other contributors to the book all seem to agree that the control paradigm “has its limits when diffusion, pacing and ethical issues associated with emerging technologies become significant, as is often the case.”


And so the traditional command-and-control ways will gradually give way to a new paradigm for emerging technology governance. In fact, as I noted in my recent essay on driverless cars, we already see this happening quite a bit. “Multistakeholder processes” are all the rage in the world of emerging technologies and their governance. In recent years, we have seen the White House and various agencies (such as the FTC, NTIA, FDA, and others) craft multistakeholder agreements or best practice guidance documents for technologies as far-ranging as:



Drones & privacy
Sharing economy
Internet of Things
Driverless cars
Big data
Artificial intelligence
Cross-device tracking
Native advertising
Online data collection
Mobile app transparency and security
Mobile apps for kids
Mobile medical apps
Online health advertising
3D printing
Facial recognition

And that list is not comprehensive. I know I am missing other multistakeholder efforts, best practices, or industry guidance documents that have been crafted in recent years.


Of course, many challenging issues need to be sorted out here, most notably: How transparent and accountable will these soft law systems be in practice? How will they be enforced? And what will happen to all those existing laws, regulations, and agencies that will remain on the books? More generally, it is worth asking whether we can more closely study these various multistakeholder arrangements and soft law governance mechanisms and determine whether there are certain principles or strategies that could be applicable across a wide class of technologies and sectors. In other words, can we do a better job of “formalizing the informal” without falling right back into the trap of trying to impose rules in a rigid, top-down, one-size-fits-all fashion?


Conclusion

Those are just a few of the hard questions we will need to consider going forward. For now, however, I think it is safe to conclude that we will no longer see much “law” being made for emerging technologies, at least not in the traditional sense of the term. Thanks to the new technological realities I have described here—and the relentless reality of the “pacing problem” more generally—I believe we are witnessing a wide-ranging and quite profound transformation in how technology is governed in our modern world. And I believe this movement away from traditional “hard law” and toward “soft law” governance mechanisms is likely to accelerate due to the increasing prevalence of innovation arbitrage, technological civil disobedience, and spontaneous private deregulation.


The ramifications of this transformation will be studied by philosophers, legal theorists, and political scientists for many decades to come. But we are still in the early years of this momentous transformation in technological governance and we will continue to struggle to figure out how to make it all work, as messy as it all may be.


_______


[Note: This essay is condensed from a manuscript I have been working on about The Rise of Technological Civil Disobedience. I’m not sure I will ever get around to finishing it, however, so I thought I would at least post this piece for now. In a subsequent essay, which is also part of that draft manuscript, I hope to discuss how this process might play out for technologies that are “born free” versus those that are “born in captivity.” That is, how likely is it that the trends I discuss here will take hold for technologies that face no pre-existing laws or agencies, while technologies that are born into a regulatory environment remain pigeonholed into those old regulatory regimes? What are the chances that the latter technologies can escape captivity and gain the freedom the former already enjoy? How might technology-enabled “spontaneous private deregulation” be accelerated for those sectors? Is that always desirable? Again, I will leave these questions for another day. Scholars and students who are interested in these topics should feel free to contact me to discuss them, as well as potential paper ideas. Regardless of how you feel about these trends, these issues are ripe for intellectual exploration.]


Benjamin Edelman and Damien Geradin, “Spontaneous Deregulation,” Harvard Business Review, April 2016, https://hbr.org/2016/04/spontaneous-d....


Megan Geuss, “After mothballing Comma One, George Hotz releases free autonomous car software,” Ars Technica, November 30, 2016, http://arstechnica.com/cars/2016/11/a....


See: “NHTSA Scared This Self-Driving Entrepreneur Off the Road,” Bloomberg Technology, October 28, 2016, https://www.bloomberg.com/news/articles/2016-10-28/nhtsa-scared-this-self-driving-entrepreneur-off-the-road; Sean O’Kane, “George Hotz cancels his self-driving car project after NHTSA expresses concern,” The Verge, October 28, 2016, http://www.theverge.com/2016/10/28/13453344/comma-ai-self-driving-car-comma-one-kit-canceled; Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-comma-one-add-on-box-after-threats-from-nhtsa.


Mark Harris, “How Otto Defied Nevada and Scored a $680 Million Payout from Uber,” Backchannel, November 28, 2016, https://backchannel.com/how-otto-defi...


Larry E. Hall, “Otto Self-Driving Truck Tests in Ohio; Violated Nevada Regulations,” Hybrid Cars, November 29, 2016, http://www.hybridcars.com/otto-self-d....


Kara Driscoll, “Ohio to create ‘smart’ road for driverless trucks,” Dayton Daily News, November 30, 2016, http://www.daytondailynews.com/busine....


Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-c...


Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publicat....


National Highway Traffic Safety Administration (NHTSA), Federal Automated Vehicles Policy, September 2016.


Adrienne LaFrance, “Self-Driving Cars Could Save 300,000 Lives per Decade in America,” Atlantic, September 29, 2015.


Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publicat....


Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA.”


Sean O’Kane and Lauren Goode, “George Hotz is giving away the code behind his self-driving car project,” The Verge, November 30, 2016, http://www.theverge.com/2016/11/30/13779336/comma-ai-autopilot-canceled-autonomous-car-software-free.


NHTSA, Federal Automated Vehicles Policy, 76.


Adam Thierer, Jerry Brito, and Eli Dourado, “Technology Policy: A Look Ahead,” Technology Liberation Front, May 12, 2014, http://techliberation.com/2014/05/12/....


Marc Andreessen, “Why Software Is Eating the World,” Wall Street Journal, August 20, 2011, http://www.wsj.com/articles/SB1000142....


Wendell Wallach, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control (New York: Basic Books, 2015), 60.


Larry Downes, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age (New York: Basic Books, 2009), 2.


Ibid.


Thierer, Permissionless Innovation, 1.


Gary E. Marchant and Wendell Wallach, “Governing the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 136.


Marc A. Saner, “The Role of Adaptation in the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 106.


Ibid., 94.

Published on December 05, 2016 12:06
