Adam Thierer's Blog
October 29, 2014
Regarding the Use of Apocalyptic Rhetoric in Policy Debates
Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles, and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale, entitled, “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technology and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”
They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving influenza virus, but what they have to say here about how rhetoric is being misused in that field is equally applicable to many other fields of science and the policy debates surrounding various issues. The last two paragraphs of their essay are masterful and deserve everyone’s attention:
Who has the upper hand in the GOF debate? The answer to this question will be apparent only when the history of this time is written. However, it is possible that in the near future, arguments about risk will trump arguments about benefits, because the risk of a GOF experiment unleashing a devastating epidemic plays on a well-founded human fear, while the potential benefits of the research are considerably harder to articulate. In debates about benefits and risks, arguments based on positing extreme risks, however unlikely, are powerful rhetorical devices because they play into human fears. While we all agree that the risk of a GOF experiment unleashing a deadly epidemic is not zero, such an event would be at the extreme end of the likely outcomes from GOF experimentation. Arguing against GOF on the basis of pandemic dangers is a powerful rhetorical device because anyone can understand it. The problem with the use of apocalyptic scenarios in risk-benefit analysis is that they invoke the possibility for infinite suffering, irrespective of the probability of such an event, and the prospect of infinite suffering can potentially overwhelm any good obtained from knowledge gained from such experiments.
Repeatedly invoking the apocalypse can create a sophistry that we call the apocalyptic fallacy, which, when applied in a vacuum of evidence and theory, proposes consequences that are so dire, however low the probability, that this tactic can be employed to quash any new invention, technique, procedures, and/or policy. The apocalyptic fallacy is an effective rhetorical tool that is meaningless in the absence of objective numbers. We remind those who invoke the apocalypse that the DNA revolution went on to deliver a multitude of benefits without unleashing the fears of Asilomar and that the large hadron collider was turned on, the Higgs boson was discovered, the standard model in physics was validated, and we are still here. Hence, we caution individuals against overreliance on the apocalypse in the debates ahead, for rhetoric can win the day, but rhetoric never gave us a single medical advance.
This is spot-on and, again, has applicability in many other arenas. Indeed, it aligns quite nicely with what I had to say about the use and misuse of rhetoric in information technology debates in my recent law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” (Minnesota Journal of Law, Science and Technology, Vol. 14, No. 1, Winter 2013). In that piece, I began by noting that:
Fear is an extremely powerful motivational force. In public policy debates, appeals to fear are often used in an attempt to sway opinion or bolster the case for action. Such appeals are used to convince citizens that threats to individual or social well-being may be avoided only if specific steps are taken. Often these steps take the form of anticipatory regulation based on the precautionary principle. Such “fear appeal arguments” are frequently on display in the Internet policy arena and often take the form of a full-blown “moral panic” or “technopanic.” These panics are intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young. In the extreme, they result in regulation or censorship. While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.
I continued on to document the structure of “fear appeal” arguments, and then outlined how those arguments can be deconstructed and refuted using sound analysis and real-world evidence. The logic pattern behind fear appeal arguments looks something like this (as documented by Douglas Walton, in his outstanding textbook, Fundamentals of Critical Argumentation):
Fearful Situation Premise: Here is a situation that is fearful to you.
Conditional Premise: If you carry out A, then the negative consequences portrayed in this fearful situation will happen to you.
Conclusion: You should not carry out A.
In the field of rhetoric and argumentation, this logic pattern is referred to as argumentum in terrorem or argumentum ad metum. A closely related variant of this argumentation scheme is known as argumentum ad baculum, or an argument based on a threat. Argumentum ad baculum literally means “argument to the stick,” and the logic pattern in this case looks like this (again, according to Walton’s book on the subject):
Conditional Premise: If you do not bring about A, then consequence B will occur.
Commitment Premise: I commit myself to seeing to it that B will come about.
Conclusion: You should bring about A.
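One compact way to see both why these schemes persuade and where they break down is to render the fear appeal formally (my own schematic gloss, not notation from Walton's book):

\[
\begin{aligned}
&\textit{Premise 1: } D \text{ is fearful to you.}\\
&\textit{Premise 2: } \mathrm{Do}(A) \rightarrow D.\\
&\textit{Conclusion: } \therefore\ \neg\,\mathrm{Do}(A).
\end{aligned}
\]

The conclusion follows only if the conditional premise is close to certain and the harm D outweighs every benefit of carrying out A. The scheme quietly assigns the conditional a probability near one and assigns D an unbounded cost, and that hidden assignment is precisely where threat inflation does its work.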
The problem is that these logic patterns and rhetorical devices are logical fallacies or are based on outright myths. Once you start carefully unpacking arguments built on these patterns and applying reasoned, evidence-based analysis, you can quickly show why their premises do not hold.
Unfortunately, that doesn’t stop some people (including a great many policymakers) from utilizing such faulty logic or misguided rhetorical devices. Even worse, as I note in my paper,
fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question.
I then go on for many pages in my paper to document the use of fear appeals and threat inflation in a variety of information technology debates. I show that in every case where such tactics are common, they are unjustified once the evidence is evaluated dispassionately. Regrettably, those who employ fear tactics and use threat inflation often don’t care because they know exactly what they are doing: The use of apocalyptic rhetoric grabs attention and sometimes ends serious deliberation. It is often an intentional ploy to scare people into action (or perhaps just into silence), even if that result is not based on a reasoned, level-headed evaluation of all the facts on hand.
The lesson here is simple: The ends do not justify the means. No matter how passionately you feel about a particular policy issue — even one that you believe carries potential life-and-death ramifications — it is wise to avoid apocalyptic rhetoric. Generally speaking, the sky is not falling, and anyone who insists that it is should back up that assertion with a substantial body of evidence. Otherwise, they are just using fear appeal arguments and apocalyptic rhetoric to needlessly scare people and shut down serious debate over issues that are likely far more complex and nuanced than their rhetoric suggests.
_______________
[For all my essays on "technopanics," moral panics, and threat inflation, see this compendium I have assembled.]

October 21, 2014
Tax Consequences of Net Neutrality
Would the Federal Communications Commission expose broadband Internet access services to tax rates of at least 16.6% of every dollar spent on international and interstate data transfers—and averaging 11.23% on transfers within a particular state and locality—if it reclassifies broadband as a telecommunications service pursuant to Title II of the Communications Act of 1934?
As former FCC Commissioner Harold Furchtgott-Roth notes in a recent Forbes column, the Internet Tax Freedom Act only prohibits state and local taxes on Internet access. It says nothing about federal user fees. The House Energy & Commerce Committee report accompanying the “Permanent Internet Tax Freedom Act” (H.R. 3086) makes this distinction clear.
The law specifies that it does not prohibit the collection of the 911 access or Universal Service Fund (USF) fees. The USF is imposed on telephone service rather than Internet access anyway, although the FCC periodically contemplates broadening the base to include data services.
The USF fee applies to all interstate and international telecommunications revenues. If the FCC reclassifies broadband Internet access as a telecommunications service in the Open Internet Proceeding, the USF fee would automatically apply unless and until the commission concluded a separate rulemaking proceeding to exempt Internet access. The Universal Service Contribution Factor is not insignificant. Last month, the commission increased it to 16.1%. According to Furchtgott-Roth,
At the current 16.1% fee structure, it would be perhaps the largest, one-time tax increase on the Internet. The FCC would have many billions of dollars of expanded revenue base to fund new programs without, according to the FCC, any need for congressional authorization.
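To put the contribution factor in perspective, here is some illustrative back-of-the-envelope arithmetic (my own assumed numbers, not Furchtgott-Roth's): if a $50-per-month broadband bill were treated entirely as interstate telecommunications revenue, a 16.1% fee would add

\[
\$50 \times 12 \times 0.161 \approx \$96.60
\]

per subscriber per year. Multiplied across tens of millions of U.S. broadband connections, that is how the fee reaches the “many billions of dollars” Furchtgott-Roth describes.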
In another Forbes column, Steve Pociask discusses the possibility that reclassification could also trigger state and local taxes. The committee report notes that if Congress allows the Internet access tax moratorium (which expires on Dec. 11, 2014) to lapse, states and localities could impose a crippling burden on the Internet.
In 2007, the average tax rate on communications services was 13.5%, more than twice the rate of 6.6% on all other goods and services. Some rates even exceed sin tax rates. For example, in Jacksonville, Florida, households pay 33.24% wireless taxes, higher than beer (19%), liquor (23%) and tobacco (28%). Moreover, these tax burdens fall heavier on low income households. They pay ten times as much in communications taxes as high income households as a share of income. (citation omitted.)
For more information on state and local taxation of communications services, see, e.g., the report from the Tax Foundation on wireless taxation that came out this month.
The House committee report also notes that demand for broadband Internet access is highly price-elastic, which means that higher taxes would be economically inefficient.
former White House Chief economist Austan Goolsbee authored a paper finding the average elasticity for broadband to be 2.75. Elasticity is a measure of price sensitivity and here indicates that a $1.00 increase in Internet access taxes would reduce expenditures on those services by an average of $2.75. (citation omitted.)
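A note on reading that figure: price elasticity is a ratio of percentage changes, not of dollar amounts, so the committee's dollar-for-dollar phrasing is a loose gloss. The underlying arithmetic (with my own illustrative numbers) is:

\[
\varepsilon \;=\; \left|\frac{\%\Delta Q}{\%\Delta P}\right| \;=\; 2.75,
\qquad\text{so}\qquad
\%\Delta Q \;=\; 2.75 \times \%\Delta P .
\]

On a $50-per-month connection, a tax that raises the effective price by 5% (about $2.50) would thus be expected to reduce quantity demanded by roughly 2.75 × 5% ≈ 13.75%. That sensitivity is what makes taxing a highly elastic good so economically inefficient.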
Even if the Internet Tax Freedom Act is renewed by the lame duck Congress, the act isn’t exactly a model of clarity on this issue. The definition (see Sec. 1104) of “Internet access,” for example, specifically excludes telecommunications services.
INTERNET ACCESS.—The term “Internet access” means a service that enables users to access content, information, electronic mail, or other services offered over the Internet, and may also include access to proprietary content, information, and other services as part of a package of services offered to users. Such term does not include telecommunications services.
And the definition of “telecommunications service” (also in Sec. 1104) is the same one (by cross-reference) that the FCC may try to interpret as including broadband.
TELECOMMUNICATIONS SERVICE.—The term “telecommunications service” has the meaning given such term in section 3(46) of the Communications Act of 1934 (47 U.S.C. 153(46)) and includes communications services (as defined in section 4251 of the Internal Revenue Code of 1986).
When the Internet Tax Freedom Act was enacted in 1998, Congress took great pains not to jeopardize the pre-existing authority of state and local governments to levy substantial taxes on telecommunications carriers. Even so, state and local tax collectors have been fighting the moratorium ever since. Their point of view was summarized by Michael Mazerov in The Hill as follows:
Beyond costing states the $7 billion a year in potential revenue to support education, healthcare, roads, and other services, the bill would violate an understanding between Congress and the states dating back to the 1998 Internet Tax Freedom Act (ITFA): that any ban on applying sales taxes to Internet access charges would be temporary and not apply to existing access taxes.
The House passed H.R. 3086 in July. The “Internet Tax Freedom Forever Act” (S. 1431) is pending in the Senate Finance Committee. Even if Congress renews the Internet Tax Freedom Act but fails to clarify the definitions in current law, there is a distinct possibility that state and local tax collectors will test the limits of the law if and when the FCC rules that broadband is no different than a Title II telecommunications service.

October 20, 2014
Driverless Cars, Privacy & Security: Event Video & Talking Points
Last week, it was my pleasure to speak at a Cato Institute event on “The End of Transit and the Beginning of the New Mobility: Policy Implications of Self-Driving Cars.” I followed Cato Institute Senior Fellow Randal O’Toole and Marc Scribner, a Research Fellow at the Competitive Enterprise Institute. They provided a broad and quite excellent overview of all the major issues at play in the debate over driverless cars. I highly recommend you read the excellent papers that Randal and Marc have published on these issues.
My role on the panel was to do a deeper dive into the privacy and security implications of not just the autonomous vehicles of our future, but also the intelligent vehicle technologies of the present. I discussed these issues in greater detail in my recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” which was co-authored with Ryan Hagemann. (That article will appear in a forthcoming edition of the Wake Forest Journal of Law & Policy.) I’ve embedded the video of the event down below (my remarks begin at the 38:15 mark) as well as my speaking notes. Again, please consult the longer paper for details.
______________
The privacy & security implications of self-driving cars are already driving public policy concerns because of the amount of data they collect. Here are a few things we should keep in mind as we consider new regulations for these technologies:
1) Security & privacy are relative concepts with amorphous boundaries
Not everyone places the same value on security & privacy; these judgments are highly subjective
Some people are hyper-cautious about security or hyper-sensitive about their privacy; others are risk-takers or are just somewhat indifferent (or pragmatic) about these things
2) Security & privacy norms can and often do evolve very rapidly over time
With highly disruptive technologies, we tend to panic first but then move to a new plateau with new ethical and legal baselines
[I’ve written about this in my recent law review articles about privacy and security]
The familiar cycle at work: initial resistance, gradual adaptation, eventual assimilation
This was true for the first cars a century ago; true today as well
3) For almost every perceived privacy or security harm, there is a corresponding consumer benefit that may outweigh the feared harm
We see this reality at work with the broader Internet & we will see it at work with intelligent vehicles
Ex: Compare vehicle telematics to locational tracking technologies for smartphones
In both contexts, locational tracking raises rather obvious privacy considerations
But locational tracking also has many benefits, and some services (real-time traffic data, for example) could not exist without it
“tracking” concerns may dissipate for cars as they did for smartphones (but not evaporate!)
4) As it pertains to intelligent vehicle technologies, today’s security & privacy concerns are not the same as yesterday’s, and they will not be the same as tomorrow’s either.
Today’s “intelligent vehicle” privacy issues may be more concerning than tomorrow’s issues for fully autonomous vehicles
today’s on-board EDRs & telematics may cause more privacy concerns for us as drivers than tomorrow’s technologies will
ex: concerns about tailored insurance & automated law enforcement
That may lead to some privacy concerns in the short-term (or fears of “discrimination”)
BUT… What happens when cars are no longer a final good but merely a service for hire? (i.e., What happens when we combine Sharing Economy w/ self-driving cars?)
Car of future = robotic chauffeur (like Uber + Zipcar)
Old privacy concerns will evolve rapidly; security likely to become bigger concern
5) Any security & privacy solutions must take these realities into account in order to be successful and those solutions must also accommodate the need to balance many different values and interests simultaneously.
There are no silver bullet solutions to privacy & security problems
+ it will be difficult for law to keep up with pace of innovations
Therefore, we need a flexible, “layered approach” with many different solutions
we need “simple rules for a complex world” (Richard Epstein)
Contracts / enforce Terms of Service
Common law / torts / products liability
see excellent new Brookings paper by John Villasenor: “when confronted with new, often complex, questions involving products liability, courts have generally gotten things right. . . . Products liability law has been highly adaptive to the many new technologies that have emerged in recent decades, and it will be quite capable of adapting to emerging autonomous vehicle technologies as the need arises.”
liability norms & insurance standards will evolve rapidly as cars move from final good to service
“least-cost avoider” implications (the more you know, the more responsible you become)
Privacy & Security “by design” (“Baking-in” best practices; see the code sketch after this list)
Data collection minimization
Limit sharing w 3rd parties
Transparency about all data collection and use practices
Clear consent for new uses
see Future of Privacy Forum best practices for intelligent vehicle tech providers
this is already happening (GAO report noted 10 smart car tech makers already doing so)
Hopefully some firms compete on privacy & exceed these standards for those who want it
And hopefully privacy & security advocates develop tools to better safeguard these values, again for those who want more protection
Query: But shouldn’t there be some minimal standards? Federal or state regulation?
Things are moving too quickly; hard for law to keep pace w/o limiting innovation opportunities
The flexible approach and methods I just listed are better suited to evolve with the cases and controversies that pop up along the way
it is better to utilize a “wait and see” strategy & see if serious & persistent problems develop that require regulatory remedies; but don’t lead with preemptive, precautionary controls
“permissionless innovation” should remain our default policy position
Ongoing experimentation should be permitted not just with technology in general, but also with privacy and security solutions and standards
In sum… avoid One Size Fits All solutions
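To make the “by design” bullet above concrete, here is a minimal Python sketch of what baking data minimization and retention limits into a telematics data store might look like. Every name here is a hypothetical illustration of the principle, not any carmaker's or regulator's actual API:

from dataclasses import dataclass
import time

@dataclass
class TelematicsPolicy:
    retain_location_seconds: int = 3600    # keep raw GPS fixes one hour, then purge
    share_with_third_parties: bool = False # off unless the driver opts in
    coarsen_to_meters: int = 500           # round coordinates before storing

def record_location(policy, lat, lon, store):
    """Store a location fix only in the coarsened form the policy allows."""
    if policy.coarsen_to_meters:
        grid = policy.coarsen_to_meters / 111_000  # ~111 km per degree of latitude
        lat = round(lat / grid) * grid
        lon = round(lon / grid) * grid
    store.append({"lat": lat, "lon": lon,
                  "expires": time.time() + policy.retain_location_seconds})

def purge_expired(store):
    """Enforce the retention limit: drop fixes past their expiry time."""
    now = time.time()
    store[:] = [fix for fix in store if fix["expires"] > now]

The point of the sketch is simply that minimization, retention, and sharing limits can be defaults enforced in code rather than promises buried in a privacy policy.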
6) Special consideration should be paid to government actions that affect user privacy
Whereas many of the privacy and security concerns involving private data collection can be handled using the methods discussed previously, governmental data collection raises different issues
Private entities cannot fine, tax, or imprison us since they lack the coercive powers governments possess.
Moreover, although it is possible to ignore or refuse to be a part of various private services, the same is not true for governments, whose grasp cannot be evaded.
Thus, special protections are needed against overreach by law enforcement agencies and officials as it pertains to these technologies.
When government seeks access to privately-held data collected from these technologies, strong constitutional and statutory protections should apply.
We need stronger 4th Amendment constraints
Courts should revisit the “third-party doctrine,” which holds that individuals sacrifice their Fourth Amendment interest in their personal information when they divulge it to a third party, even if that party has promised to safeguard that data.

ITU agrees to open access for Plenipot contributions
Good news! As the ITU’s Plenipotentiary Conference gets underway in Busan, Korea, the heads of delegation have met and decided to open up access to some of the documents associated with the meeting. At this time, it is only the documents that are classified as “contributions”—other documents such as meeting agendas, background information, and terms of reference remain password protected. It’s not clear yet whether that is an oversight or an intentional distinction. While I would prefer all documents to be publicly available, this is a very welcome development. It is gratifying to see the ITU membership taking transparency seriously.
Special thanks are due to ITU Secretary-General Hamadoun Touré. When Jerry Brito and I launched WCITLeaks in 2012, at first, the ITU took a very defensive posture. But after the WCIT, the Secretary-General demonstrated tremendous leadership by becoming a real advocate for transparency and reform. I am told that he was instrumental in convincing the heads of delegation to open up access to Plenipot documents. For that, Dr. Touré has my sincere thanks—I would be happy to buy him a congratulatory drink when I arrive in Busan, although I doubt his schedule would permit it.
It’s worth noting that this decision applies only to the Plenipotentiary conference. The US has submitted a proposal, to be considered at the conference, that would make something like this arrangement permanent by instructing the incoming SG to develop a policy of open access to all ITU meeting documents. That is a development that I will continue to watch closely.

October 17, 2014
More evidence that ‘SOPA for Search Engines’ is a bad idea
Although SOPA was ignominiously defeated in 2012, the content industry never really gave up on the basic idea of breaking the Internet in order to combat content piracy. The industry now claims that a major cause of piracy is search engines returning results that direct users to pirated content. To combat this, they would like to regulate search engine results to prevent them from linking to sites that contain pirated music and movies.
This idea is problematic on many levels. First, there is very little evidence that content piracy is a serious concern in objective economic terms. But for the availability of pirated content, most pirates would not empty their wallets to incentivize the creation of more movies and music. As Ian Robinson and I explain in our recent paper, industry estimates of the jobs created by intellectual property are absurd. Second, there are serious free speech implications associated with regulating search engine results. Search engines perform an information distribution role similar to that of newspapers, and they have an editorial voice. They deserve protection from censorship as long as they are not hosting the pirated material themselves. Third, as anyone who knows anything about the Internet knows, nobody uses the major search engines to look for pirated content. The serious pirates go straight to sites that specialize in piracy. Fourth, this is all part of a desperate attempt by the content industry to avoid modernizing and offering more of their content online through convenient packages such as Netflix.
As if these were not sufficient reason to reject the idea of “SOPA for Search Engines,” Google has now announced that they will be directing users to legitimate digital content if it is available on Netflix, Amazon, Google Play, Spotify, or other online services. The content industry now has no excuse—if they make their music and movies available in convenient form, users will see links to legitimate content even if they search for pirated versions.
Google also says they will be using DMCA takedown notices as an input into search rankings and autocomplete suggestions, demoting sites and terms that are associated with piracy. This is above and beyond what Google needs to do, and in fact raises some concerns about fraudulent DMCA takedown notices that could chill free expression—such as when CBS issued a takedown of John McCain’s campaign ad on YouTube even though it was likely legal under fair use. Google will have to carefully monitor the DMCA takedown process for abuse. But in any case, these moves by Google should once and for all put the nail in the coffin of the idea that we should compromise the integrity of search results through government regulation for the sake of fighting a piracy problem that is not that serious in the first place.

October 6, 2014
How to Destroy American Innovation: The FAA & Commercial Drones
If you want a devastating portrait of how well-intentioned regulation sometimes has profoundly deleterious unintended consequences, look no further than the Federal Aviation Administration’s (FAA) current ban on commercial drones in domestic airspace. As Jack Nicas reports in a story in today’s Wall Street Journal (“Regulation Clips Wings of U.S. Drone Makers”), the FAA’s heavy-handed regulatory regime is stifling America’s ability to innovate in this space and remain competitive internationally. As Nicas notes:
as unmanned aircraft enter private industry—for purposes as varied as filming movies, inspecting wind farms and herding cattle—many U.S. drone entrepreneurs are finding it hard to get off the ground, even as rivals in Europe, Canada, Australia and China are taking off.
The reason, according to interviews with two-dozen drone makers, sellers and users across the world: regulation. The FAA has banned all but a handful of private-sector drones in the U.S. while it completes rules for them, expected in the next several years. That policy has stifled the U.S. drone market and driven operators underground, where it is difficult to find funding, insurance and customers.
Outside the U.S., relatively accommodating policies have fueled a commercial-drone boom. Foreign drone makers have fed those markets, while U.S. export rules have generally kept many American manufacturers from serving them.
Of course, the FAA simply responds that they are looking out for the safety of the skies and that we shouldn’t blame them. Again, there’s no doubt that the agency’s hyper-cautious approach to commercial drone integration is based on the best of intentions. But as we’ve noted here again and again, all the best of intentions don’t count for much–or at least shouldn’t count for much–when stacked against real-world evidence and results. And the results in this case are quite troubling.
An article last week from Alan McQuinn of the Information Technology and Innovation Foundation (“Commercial Drone Companies Fly Away from FAA Regulations, Go Abroad”) documented how problematic this situation has become:
With no certainty surrounding a timeline, limited access to exemptions, and a dithering pace for setting its rules, the FAA is slowing innovation. . . . These overbearing rules have pushed U.S. companies to move their drone research and development projects to more permissive nations, such as Australia, where Google chose to test its drones. Australia’s Civil Aviation Safety Authority, the agency in charge of commercial drones, offers a great example of unrestrictive regulations. While it has not yet finalized its drone laws, it still allows companies and citizens to test and use these technologies under certain rules. Instead of forcing companies to reveal their technologies at government test sites, it allows them to test outdoors if they receive an operator’s certificate and submit their test area for approval. Australia’s more permissive nature shows how a country can allow innovation to thrive while simultaneously examining it for potential safety concerns.
The Wall Street Journal’s Nicas similarly observes that foreign innovators are already taking advantage of America’s regulatory mistakes to leapfrog us in drone innovation. He reports that Germany, Canada, Australia and China are starting to move ahead of us. Nicas quotes Steve Klindworth, head of a DJI drone retailer in Liberty Hill, Texas, who says that if the United States doesn’t move soon to adopt a more sensible policy position for drones that, “It’ll reach a point of no return where American companies won’t ever be able to catch up.”
In essence, the United States is adopting the exact opposite of the approach we took a generation ago for the Internet and digital technology. I’ve written recently about how “permissionless innovation” powered the Information Revolution and helped American companies become the envy of the globe. (See my essay, “Why Permissionless Innovation Matters,” for more details and data.) That happened because America got policy right, whereas other countries either tried to micromanage the Information Revolution into existence or adopted policies that actively stifled it. (See my recent book on this subject for more discussion.)
Now we are watching that story play out in reverse with commercial drones. The FAA has adopted a hyper-precautionary position that holds back innovation based on worst-case scenarios. Certainly the safety of the national airspace is a vital matter. But shutting down all other aerial innovation in the meantime is completely unreasonable. As I wrote in a filing to the FAA with my Mercatus Center colleagues Eli Dourado and Jerry Brito last year:
Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.
Countless life-enriching innovations are being sacrificed because of the FAA’s draconian policy. (Below I have embedded a video of me discussing those innovations with John Stossel, which was taped earlier this year.) New industry sectors and many jobs are also being forgone. It’s time for the FAA to get moving and open up the skies to drone innovation. Congress should be pushing the agency harder on this front, since the agency seems determined to ignore the law that requires it to integrate commercial drones into the nation’s airspace.
[Embedded video: Fox Business interview with John Stossel]
Additional Reading
Filing to FAA on Drones & “Model Aircraft”, Sept. 23, 2014
Private Drones & the First Amendment, Sept. 19, 2014
[TV interview] The Beneficial Uses of Private Drones, March 28, 2014
Comments of the Mercatus Center to the FAA on integration of drones into the nation’s airspace, April 23, 2013
Eli Dourado, Deregulate the Skies: Why We Can’t Afford to Fear Drones, Wired, April 23, 2013
Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom (2014)
[Video] Cap Hill Briefing on Emerging Tech Policy Issues (June 2014)

September 28, 2014
Trust (but verify) the engineers – comments on Transatlantic digital trade
Last week, I participated in a program co-sponsored by the Progressive Policy Institute, the Lisbon Council, and the Georgetown Center for Business and Public Policy on “Growing the Transatlantic Digital Economy.”
The complete program, including keynote remarks from EU VP Neelie Kroes and U.S. Under Secretary of State Catherine A. Novelli, is available below.
My remarks reviewed worrying signs of old-style interventionist trade practices creeping into the digital economy in new guises, and urged traditional governments to stay the course (or correct it) on leaving the Internet ecosystem largely to its own organic forms of regulation and market correctives:
Vice President Kroes’s comments underscore an important reality about innovation and regulation. Innovation, thanks to exponential technological trends including Moore’s Law and Metcalfe’s Law, gets faster and more disruptive all the time, a phenomenon my co-author and I call “Big Bang Disruption.”
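For reference, the standard shorthand formulations of those two exponential trends (my gloss, not language from the talk) are:

\[
\text{Moore's Law: } N(t) \approx N_0 \cdot 2^{\,t/2}, \qquad
\text{Metcalfe's Law: } V(n) \propto n^{2},
\]

where N is the transistor count, t is time in years, and V is the value of a network of n connected users. Computing power per dollar doubles roughly every two years while network value grows with the square of the user base, which is why each new wave of devices and users makes the next disruption arrive faster.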
Regulation, on the other hand, proceeds at the pace it always has (at best). Even the most well-intentioned regulators, and I certainly include Vice President Kroes in that list, find in retrospect that interventions aimed at heading off possible competitive problems and potential consumer harms rarely achieve their objectives and, indeed, often generate more harmful unintended consequences.
This is not a failure of government. The clock speeds of innovation and regulation are simply different, and diverging faster all the time. The Internet economy has been governed from its inception by the engineering-driven multistakeholder process embodied in the task forces and standards groups that operate under the umbrella of the Internet Society. Innovation, for better or for worse, is regulated more by Moore’s Law than traditional law.
I happen to think the answer is “for better,” but I am not one of those who take that view to the extreme of arguing that there is no place for traditional governments in the digital economy. Governments have played and continue to play an essential part in laying the legal foundations for the remarkable growth of that economy and in providing incentives, if not funding, for basic research that might not otherwise find investors.
And when genuine market failures appear, traditional regulators can and should step in to correct them as efficiently and narrowly as they can. Sometimes this has happened. Sometimes it has not.
Where in particular I think regulatory intervention is least effective and most dangerous is in regulating ahead of problems—in enacting what the FCC calls “prophylactic rules.” The effort to create legally sound Open Internet regulations in the U.S. has faltered repeatedly, yet in the interim investment in both infrastructure and applications continues at a rapid pace—far outstripping the rest of the world.
The results speak for themselves. U.S. companies dominate the digital economy, and, as Prof. Christopher Yoo has definitively demonstrated, U.S. consumers overall enjoy the best wired and mobile infrastructure in the world at competitive prices.
At the same time, those who continue to pursue interventionist regulation in this area often have hidden agendas. Let me give three examples:
1. As we saw earlier this month at the Internet Governance Forum, which I attended along with Vice President Kroes and 2,500 other delegates, representatives of the developing world were told by so-called consumer advocates from the U.S. and the EU that they must reject so-called “zero rated” services, in which mobile network operators partner with service providers including Facebook, Twitter and Wikimedia to offer their popular services to new Internet users without that use counting against data charges.
Zero rating is an extremely popular tool for helping the two-thirds of the world’s population not currently on the Internet get connected and then, likely, graduate from these services to many others. But such services violate the “principle” of neutrality that has mutated from an engineering concept into a nearly religious conviction. And so zero rating must be sacrificed, along with users who are too poor to otherwise join the digital economy.
2. Closer to home, we see the wildly successful Netflix service making a play to hijack the Open Internet debate, turning it into one about back-end interconnection, peering, and transit—engineering features that work so well that, according to the OECD, 99% of the interconnection agreements between networks aren’t even written down.
3. And in Europe, there are other efforts to turn the neutrality principle on its head, using it as a hammer not to regulate ISPs but to slow the progress of leading content and service providers, including Apple, Amazon and Google, who have what the French Digital Council and others refer to as non-neutral “platform monopolies” that must be broken up.
To me, these are in fact new faces on very old strategies—colonialism, rent-seeking, and protectionist trade warfare respectively. My hope is that Internet users—an increasingly powerful and independent source of regulatory discipline in the Internet economy—will see these efforts for what they truly are…and reject them resoundingly.
The more we trust (but also verify) the engineers, the faster the Internet economy will grow, both in the U.S. and Europe, and the more our trade in digital goods and services will strengthen the ties between our traditional economies. It’s worked brilliantly for almost two decades.
The alternatives, not so much.

September 26, 2014
WCITLeaks is Ready for Plenipot
The ITU is holding its quadrennial Plenipotentiary Conference in Busan, South Korea from October 20 to November 7, 2014. The Plenipot, as it is called, is the ITU’s “supreme organ” (a funny term that I did not make up). It represents the highest level of decision making at the ITU. As it has for the last several ITU conferences, WCITLeaks will host leaked documents related to the Plenipot.
For those interested in transparency at the ITU, two interesting developments are worth reporting. On the first day of the conference, the heads of delegation will meet to decide whether documents related to the conference should be available to the public directly through the TIES system without a password. All of the documents associated with the Plenipot are already available in English on WCITLeaks, but direct public access would have the virtue of including those in the world who do not speak English but do speak one of the other official UN languages. Considering this additional benefit of inclusion, I hope that the heads of delegation will seriously consider the advantages of adopting a more open model for document access during this Plenipot. If you would like to contact the head of delegation for your country, you can find their names in this document. A polite email asking them to support open access to ITU documents might not hurt.
In addition, at the meeting, the ITU membership will consider a proposal from the United States to, as a rule, provide open access to all meeting documents.
This is what WCITLeaks has always supported—putting ourselves out of business. As the US proposal notes, the ITU Secretariat has conducted a study finding that other UN agencies are much more forthcoming in terms of public access to their documents. A more transparent ITU is in everyone’s interest—including the ITU’s. This Plenipot has the potential to remedy a serious deficiency with the institution; I’m cheering for them and hoping they get it right.

The Debate over the Sharing Economy: Talking Points & Recommended Reading
The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)
1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.
Cronyism/rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used for their own benefit. (+ Lobbying activity creates deadweight losses for society.)
Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurism.
Unintended consequences: Instead of resulting in lower prices & better service, the opposite often happens: higher prices & lower-quality service. (Example: requiring all cabs to be painted the same color destroys branding & the ability to differentiate.)
2) The Internet and information technology alleviate the need for top-down regulation & actually do a better job of serving consumers.
Ease of entry/innovation in online world means that new entrants can come in to provide better options and solve problems previously thought to be unsolvable in the absence of regulation.
Informational empowerment: The Internet and information technology solve the old problem of consumers lacking access to information about products and services. This gives consumers monitoring tools to find more and better choices. (i.e., it lowers both search costs & transaction costs). (“To the extent that consumer protection regulation is based on the claim that consumers lack adequate information, the case for government intervention is weakened by the Internet’s powerful and unprecedented ability to provide timely and pointed consumer information.” – John C. Moorhouse)
Feedback mechanisms (product & service rating / review systems) create powerful reputational incentives for all parties involved in transactions to perform better (see the short sketch of a typical rating formula after this list)
Self-regulating markets: The combination of these three factors results in a powerful check on market power or abusive behavior. The result is reasonably well-functioning and self-regulating markets. Bad actors get weeded out.
Law should evolve: When circumstances change dramatically, regulation should as well. If traditional rationales for regulation evaporate, or new technology or competition alleviates need for it, then the law should adapt.
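As promised above, here is a minimal Python sketch of one common way platforms aggregate reviews so that reputations stay informative: a standard Bayesian (smoothed) average. This is a textbook technique, not any particular platform's actual formula:

def bayesian_average(ratings, prior_mean=3.5, prior_weight=10):
    """Pull a provider's average toward the site-wide prior so that a
    handful of early reviews can neither make nor break a reputation;
    as genuine reviews accumulate, they come to dominate the prior."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

# A new driver with two 5-star reviews scores ~3.75, not a perfect 5.0;
# fifty 5-star reviews push the score to ~4.75.
print(round(bayesian_average([5, 5]), 2))    # 3.75
print(round(bayesian_average([5] * 50), 2))  # 4.75

This mechanism is part of why bad actors get weeded out: sustained poor service drags the posted score down in a way no marketing can offset.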
3) The sharing economy has demonstrably improved consumer welfare. It provides:
more choices / competition
more service innovation / differentiation
better prices
higher quality services (safety & cleanliness /convenience / peace of mind)
Better options & conditions for workers
4) If we need to “level the (regulatory) playing field,” best way to do so is by “deregulating down” to put everyone on equal footing; not by “regulating up” to achieve parity.
Regulatory asymmetry is real: Incumbents are right that they are at disadvantage relative to new sharing economy start-ups.
Don’t punish new innovations for it: But solution is not to just roll the old regulatory regime onto the new innovators.
Parity through liberalization: Instead, policymakers should “deregulate down” to achieve regulatory parity. Loosen old rules on incumbents as new entrants challenge status quo.
“Permissionless innovation” should trump “precautionary principle” regulation: Preemptive, precautionary regulation does not improve consumer welfare. Competition and choice do better. Thus, our default position toward the sharing economy should be “innovation allowed,” or permissionless innovation.
Alternative remedies exist: Accidents will always happen, of course. But insurance, contracts, product liability, and other legal remedies exist when things go wrong. The difference is that ex post remedies don’t discourage innovation and competition like ex ante regulation does. By trying to head off every hypothetical worst-case scenario, preemptive regulations actually discourage many best-case scenarios from ever coming about.
5) Bottom line = Good intentions only get you so far in this world.
Just because a law was put on the books for noble purposes, it does not mean it really accomplished those goals, or still does so today.
Markets, competition, and ongoing innovation typically solve problems better than law when we give them a chance to do so.
[P.S. On 9/30, my Mercatus Center colleague Matt Mitchell posted this excellent follow-up essay building on my outline and improving it greatly.]

_________________________
Additional Reading
Randolph J. May & Michael J. Horney, “The Sharing Economy: A Positive Shared Vision for the Future,” Free State Foundation, Perspectives from FSF Scholars, Vol. 9, No. 26, July 30, 2014.
R.J. Lehmann, “The Sharing Economy Will Thrive Only If Government Doesn’t Strangle It,” Reason, August 2, 2014.
Andrew Moylan and R.J. Lehmann, “Five Principles for Regulating the Sharing Economy,” R Street, Policy Study No. 26, July 2014.
Eli Lehrer & Andrew Moylan, “Embracing the Peer-Production Economy,” National Affairs, 2014, pp. 51-63.
Arun Sundararajan, “Why the Government Doesn’t Need to Regulate the Sharing Economy,” Wired, October 22, 2012.
Matthew Mitchell and Michael Farren, “If You Like Uber, You Would’ve Loved the Jitney,” LA Times, July 12, 2014.
Daniel M. Rothschild, “How Uber and Airbnb Resurrect ‘Dead Capital,’” The Umlaut, April 9, 2014.
Daniel M. Rothschild, “Renters and Rent-Seeking in San Francisco,” Technology Liberation Front, April 15, 2014.
[podcast] Michael Munger on the Sharing Economy, EconTalk, July 7, 2014.
video of Michael Munger talk on the Sharing Economy, September 25, 2014.
John C. Moorhouse, “Consumer Protection Regulation and Information on the Internet,” in Fred E. Foldvary & Daniel B. Klein (eds), The Half-Life of Policy Rationales: How New Technology Affects Old Policy Issues (Cato Institute, 2003).
Rachel Botsman, What’s Mine Is Yours: The Rise of Collaborative Consumption (2010).
Jeremiah Owyang, “A Glossary of Emerging Terms in the Collaborative Economy,” August 29, 2014.

August 14, 2014
Comments to the New York Department of Financial Services on the Proposed Virtual Currency Regulatory Framework
Today my colleague Eli Dourado and I have filed a public interest comment with the New York Department of Financial Services on their proposed “BitLicense” regulatory framework for digital currencies. You can read it here. As we say in the comment, NYDFS is on the right track, but ultimately misses the mark:
State financial regulators around the country have been working to apply their existing money transmission licensing statutes and regulations to new virtual currency businesses. In many cases, existing rules do not take into account the unique properties of recent innovations like cryptocurrencies. With this in mind, the department sought to develop rules that were “tailored specifically to the unique characteristics of virtual currencies.”
As Superintendent Benjamin Lawsky has stated, the aim of this project is “to strike an appropriate balance that helps protect consumers and root out illegal activity—without stifling beneficial innovation.” This is the right goal and one we applaud. It is a very difficult balance to strike, however, and we believe that the BitLicense regulatory framework as presently proposed misses the mark, for two main reasons.
First, while doing much to take into account the unique properties of virtual currencies and virtual currency businesses, the proposal nevertheless fails to accommodate some of the most important attributes of software-based innovation. To the extent that one of its chief goals is to preserve and encourage innovation, the BitLicense proposal should be modified with these considerations in mind—and this can be done without sacrificing the protections that the rules will afford consumers. Taking into account the “unique characteristics” of virtual currencies is the key consideration that will foster innovation, and it is the reason why the department is creating a new BitLicense. The department should, therefore, make sure that it is indeed taking these features into account.
Second, the purpose of a BitLicense should be to take the place of a money transmission license for virtual currency businesses. That is to say, but for the creation of a new BitLicense, virtual currency businesses would be subject to money transmission licensing. Therefore, to the extent that the goal behind the new BitLicense is to protect consumers while fostering innovation, the obligations faced by BitLicensees should not be any more burdensome than those faced by traditional money transmitters. Otherwise, the new regulatory framework will have the opposite effect of the one intended. If it is more costly and difficult to acquire a BitLicense than a money transmission license, we should expect less innovation. Additional regulatory burdens would put BitLicensees at a relative disadvantage, and in several instances the proposed regulatory framework is more onerous than traditional money transmitter licensing.
As Superintendent Lawsky has rightly stated, New York should avoid virtual currency rules that are “so burdensome or unwieldy that the technology can’t develop.” The proposed BitLicense framework, while close, does not strike the right balance between consumer protection and innovation. To be sure, its approach to consumer protection through disclosures rather than prescriptive precautionary regulation is the right approach for giving entrepreneurs flexibility to innovate while ensuring that consumers have the information they need to make informed choices. Yet there is much that can be improved in the framework to reach the goal of balancing innovation and protection. Below we outline where the framework is missing the mark and recommend some modifications that will take into account the unique properties of virtual currencies and virtual currency businesses.
We hope this comment will be helpful to the department as it further develops its proposed framework, and we hope that it will publish a revised draft of the framework and solicit a second round of comments so that we can make sure we all get it right. And it’s important that we get it right.
Other jurisdictions, such as London, are looking to become the “global centre of financial innovation,” as Chancellor George Osborne put it in a recent speech about Bitcoin. If New York drops the ball, London may just pick it up. As Garrick Hileman, economic historian at the London School of Economics, told CNet last week:
The chancellor is no doubt aware that very little of the $250 million of venture capital which has been invested in Bitcoin startups to date has gone to British-based companies. Many people believe Bitcoin will be as big as the Internet. Today’s announcement from the chancellor has the potential to be a big win for the UK economy. The bottom line on today’s announcement is that Osborne thinks he’s spotted an opportunity for the City and Silicon Roundabout to siphon investment and jobs away from the US and other markets which are taking a more aggressive Bitcoin regulatory posture.
Let’s get it right.

