Adam Thierer's Blog
July 1, 2013
Robert Samuelson Engages in a Bit of Argumentum in Cyber-Terrorem
Washington Post columnist Robert J. Samuelson published an astonishing essay today entitled, “Beware the Internet and the Danger of Cyberattacks.” In the print edition of today’s Post, the essay actually carries a different title: “Is the Internet Worth It?” Samuelson’s answer is clear: It isn’t. He begins his breathless attack on the Internet by proclaiming:
If I could, I would repeal the Internet. It is the technological marvel of the age, but it is not — as most people imagine — a symbol of progress. Just the opposite. We would be better off without it. I grant its astonishing capabilities: the instant access to vast amounts of information, the pleasures of YouTube and iTunes, the convenience of GPS and much more. But the Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar.
And then, after walking through a couple of worst-case hypothetical scenarios, he concludes the piece by saying:
the Internet’s social impact is shallow. Imagine life without it. Would the loss of e-mail, Facebook or Wikipedia inflict fundamental change? Now imagine life without some earlier breakthroughs: electricity, cars, antibiotics. Life would be radically different. The Internet’s virtues are overstated, its vices understated. It’s a mixed blessing — and the mix may be moving against us.
What I found most troubling about this is that Samuelson has serious intellectual chops and usually sweats the details in his analysis of other issues. He understands economic and social trade-offs and usually does a nice job weighing the facts on the ground instead of engaging in the sort of shallow navel-gazing and anecdotal reasoning that many other weekly newspaper columnists indulge in on a regular basis.
But that’s not what he does here. His essay comes across as a poorly researched, angry-old-man-shouting-at-the-sky sort of rant. There’s no serious cost-benefit analysis at work; just the banal assertion that a new technology has created new vulnerabilities. Really, that’s the extent of the logic here. Samuelson could have just as well substituted the automobile, airplanes, or any other modern technology for the Internet and drawn the same conclusion: It opens the door to new vulnerabilities (especially national security vulnerabilities) and, therefore, we would be better off without it in our lives.
Samuelson does admit that “Life would be radically different… without some earlier breakthroughs: electricity, cars, antibiotics,” so it is obvious he thinks their benefits outweigh their costs. But I could just as well say that new technologies such as cars and planes bring death and destruction, both in the theater of war and in everyday life. So, one might conclude of modern transportation technology that the “virtues are overstated, its vices understated. It’s a mixed blessing — and the mix may be moving against us,” just as Samuelson concludes of the Net. Of course, such an assertion would be absurd without reference to the many benefits that accrue to us from these technologies. I don’t think I need to cite them all here. But Samuelson is certainly a sharp enough guy that he would engage in such a cost-benefit analysis if someone made such an assertion about other technologies.
When it comes to the Internet, however, the only benefits he can muster are “the instant access to vast amounts of information, the pleasures of YouTube and iTunes, the convenience of GPS and much more.” (GPS? Really? Strictly speaking, that’s not an Internet technology, Bob. But perhaps you have something against satellite technology, too! Looking forward to your column, “Is Satellite Communication Worth It?”)
Of course the first benefit of the Internet that Samuelson cites — “instant access to vast amounts of information” — is nothing to sneeze at! The fact that he so casually dismisses that benefit is rather troubling. For the vast majority of civilization, humans have lived in what we might think of as a state of extreme information poverty. Today, by contrast, we are blessed to live in amazing times. An entire planet of ubiquitous, instantly accessible media and information is now at our fingertips. We are able to share culture and engage with others — both socially and commercially — in ways that were unthinkable and impossible even just a few decades ago.
It’s hard to quantify the benefits associated with these facts, but I would think most of us would agree they are enormous. But it’s hardly the only sort of benefit that comes from the Internet and modern digital communications technologies. The fact that Samuelson can’t think of anything more is either a serious failure of imagination or, more troubling, an intentional effort to minimize and ignore those benefits in order to prey on people’s worst fears.
I’ve spent a lot of time thinking about “technopanics” and the role that journalists sometimes play in hyping them. See, for example, my essay last summer, “Journalists, Technopanics & the Risk Response Continuum,” which is based on my Minnesota Journal of Law, Science & Technology law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” As I explain in that article, the model for what Samuelson has done in his essay is actually a very old logical fallacy: a so-called “appeal to fear.” Here’s how I explain it in my law review article:
Rhetoricians employ several closely related types of “appeals to fear.” Douglas Walton, author of Fundamentals of Critical Argumentation, outlines the argumentation scheme for “fear appeal arguments” as follows:
Fearful Situational Premise: Here is a situation that is fearful to you.
Conditional Premise: If you carry out A, then the negative consequences portrayed in the fearful situation will happen to you.
Conclusion: You should not carry out A.
This logic pattern here is referred to as argumentum in terrorem or argumentum ad metum. A closely related variant of this argumentation scheme is known as argumentum ad baculum, or an argument based on a threat. Argumentum ad baculum literally means “argument to the stick,” an appeal to force. Walton outlines the argumentum ad baculum argumentation scheme as follows:
Conditional Premise: If you do not bring about A, then consequence B will occur.
Commitment Premise: I commit myself to seeing to it that B comes about.
Conclusion: You should bring about A.
As will be shown, these argumentation devices are at work in many information technology policy debates today even though they are logical fallacies or based on outright myths. They tend to lead to unnecessary calls for anticipatory regulation of information or information technology.
I continue on in that article to provide several examples of how “argumentum in cyber-terrorem” logic is at work in several digital policy arenas today, especially as it pertains to cybersecurity and cyberwar fears. My Mercatus Center colleagues Jerry Brito and Tate Watkins have warned of the dangers of “threat inflation” in cybersecurity policy in their important paper, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy.” The rhetoric of cybersecurity debates illustrates how threat inflation is a crucial part of “argumentum in cyber-terrorem” logic. Frequent allusions are made in cybersecurity debates to the potential for a “Digital Pearl Harbor,” a “cyber cold war,” a “cyber Katrina,” or even a “cyber 9/11.” These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” even though no one can be “bombed” with binary code. A rush to judgment often follows inflated threats.
And that’s exactly what Samuelson has done in his essay. He’s rushed to an illogical, sweeping conclusion — namely, that we would be better off just bottling up the Net, or “repealing” it (whatever that means) — and he hasn’t even bothered considering the costs of such action. Worse yet, even though he admits that, “I don’t know the odds of this technological Armageddon. I doubt anyone does. The fears may be wildly exaggerated,” that doesn’t stop him from suggesting that we should live in fear of worst case hypothetical scenarios and take radical steps based upon them.
Again, it is certainly true that the Internet creates new vulnerabilities, including national security vulnerabilities, but that simply cannot be the end of the story. Those vulnerabilities need to be carefully evaluated and measured and, before we rush to panicked conclusions and advocate sweeping policy solutions, the corresponding benefits of the Internet must be taken into consideration.
Instead, Samuelson has engaged in the worst sort of fear-based, factually-challenged reasoning in his essay. It’s a model for how not to think or write about Internet policy. A more thoughtful analysis would acknowledge that the Internet is more than just “a symbol of progress”; it constitutes real progress and an improvement of the human condition. And while it’s all too easy for newspaper columnists to suggest “we would be better off without it” and that it should be “repealed,” there are all too many government goons out there who would like to do just that, since the Net has empowered the masses and given them a voice like no other technology in history.
Shame on Robert Samuelson for dismissing these realities — and the Internet’s many benefits — so lightly.
[Note: See all my essays on "technopanics" here.]







June 27, 2013
Book Review: Brown & Marsden’s “Regulating Code”
Ian Brown and Christopher T. Marsden’s new book, Regulating Code: Good Governance and Better Regulation in the Information Age, will go down as one of the most important Internet policy books of 2013 for two reasons. First, their book offers an excellent overview of how Internet regulation has unfolded on five different fronts: privacy and data protection; copyright; content censorship; social networks and user-generated content issues; and net neutrality regulation. They craft detailed case studies that incorporate important insights about how countries across the globe are dealing with these issues. Second, the authors endorse a specific normative approach to Net governance that they argue is taking hold across these policy arenas. They call their preferred policy paradigm “prosumer law” and it envisions an active role for governments, which they think should pursue “smarter regulation” of code.
In terms of organization, Brown and Marsden’s book follows the same format found in Milton Mueller’s important 2010 book Networks and States: The Global Politics of Internet Governance; both books feature meaty case studies in the middle bookended by chapters that endorse a specific approach to Internet policymaking. (Incidentally, both books were published by MIT Press.) And, also like Mueller’s book, Brown and Marsden’s Regulating Code does a somewhat better job using case studies to explore the forces shaping Internet policy across the globe than it does making the normative case for their preferred approach to these issues.
Thus, for most readers, the primary benefit of reading either book will be to see how the respective authors develop rich portraits of the institutional political economy surrounding various Internet policy issues over the past 10 to 15 years. In fact, of all the books I have read and reviewed in recent years, I cannot think of two titles that have done a better job developing detailed case studies for such a diverse set of issues. For that reason alone, both texts are important resources for those studying ongoing Internet policy developments.
That’s not to say that both books don’t also make a solid case for their preferred policy paradigms; it’s just that the normative elements of the texts are overshadowed by the excellent case studies. As a result, readers are left wanting more detail about what the respective policy paradigms would (or should) mean in practice. Regardless, in the remainder of this review, I’ll discuss Brown and Marsden’s normative approach to digital policy and contrast it with Mueller’s, since the two stand in stark opposition and help frame the policy battles to come on this front.
Governing Cyberspace: Mueller vs. Brown & Marsden
Mueller’s normative goal in Networks and States was to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but which has lost favor in recent years. He made the case for a “cyberliberty” movement rooted in what he described as a “denationalized liberalism” vision of Net governance. He argued that “we need to find ways to translate classical liberal rights and freedoms into a governance framework suitable for the global Internet. There can be no cyberliberty without a political movement to define, defend, and institutionalize individual rights and freedoms on a transnational scale.”
I wholeheartedly endorsed that vision in my review of Mueller’s book, even if he was a bit short on the details of how to bring it about. But it is useful to keep Mueller’s paradigm in mind because it provides a nice contrast with the approach Brown and Marsden advocate, which is quite different.
Generally speaking, Brown and Marsden reject most forms of “Internet exceptionalism” and certainly reject the sort of “cyberliberty” ethos that Mueller and I embrace. They instead endorse a fairly broad role for governments in ordering the affairs of cyberspace. In their self-described “prosumer” paradigm, the State is generally viewed as a benevolent actor, well-positioned to guide the course of code development toward supposedly more enlightened ends.
Consistent with the strong focus on European policymaking found throughout the book, the authors are quite enamored with the “co-regulatory” models that have become increasingly prevalent across the continent. Like many other scholars and policy advocates today, they occasionally call for “multi-stakeholderism” as a solution, but they do not necessarily mean the sort of truly voluntary, bottom-up multi-stakeholderism of the Net’s early days. Rather, they are usually thinking of multi-stakeholderism as what is essentially pluralistic politics: the government setting the table, inviting the stakeholders to it, and then guiding (or at least “nudging”) policy along the way. “We are convinced that fudging with nudges needs to be reinforced with the reality of regulation and coregulation, in order to enable prosumers to maximize their potential on the broadband Internet,” they say. (p. 187)
Meet the New Boss, Same as the Old Boss?
Thus, despite the new gloss, their “prosumer law” paradigm ends up sounding quite a bit like a rehash of traditional “public interest” law and common carrier regulation, albeit with a new appreciation of just how dynamic markets built on code can be. Indeed, Brown and Marsden repeatedly acknowledge how often law and regulation fails to keep pace with the rapid evolution of digital technology. “Code changes quickly, user adoption more slowly, legal contracting and judicial adaptation to new technologies slower yet, and regulation through legislation slowest of all,” they correctly note (p. xv). This reflects what Larry Downes refers to as the most fundamental “law of disruption” of the digital age: “technology changes exponentially, but social, economic, and legal systems change incrementally.”
At the end of the day, however, that insight doesn’t seem to inform Brown and Marsden’s policy prescriptions all that much. Theirs is a world in which policy tinkering errors will apparently be corrected promptly and efficiently by still more policy tinkering, or “smarter regulation.” Moreover, like many other Internet policy scholars today, they don’t mind regulatory interventions that come early and often since they believe that will help regulators get out ahead of the technological curve and steer markets in preferred directions. “If regulators fail to address regulatory objects at first, then the regulatory object can grow until its technique overwhelms the regulator,” they say (p. 31).
This is the same mentality that is often on display in Tim Wu’s work, which I have been quite critical of here and elsewhere. For example, Wu has advocated informal “agency threats” and the use of “threat regimes” to accomplish policy goals that prove difficult to steer through the formal democratic rulemaking process. As part of his “defense of regulatory threats in particular contexts,” Wu stresses the importance of regulators taking control of fast-moving tech markets early in their life cycles. “Threat regimes,” Wu argues, “are best justified when the industry is undergoing rapid change — under conditions of ‘high uncertainty.’ Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known,” Wu concludes.
This is essentially where most of the “co-regulation” schemes that Brown and Marsden favor would take us: Code regulators would take an active role in shaping the evolution of digital technologies and markets early in their life cycles. What are the preferred regulatory mechanisms? Like Wu and many other cyberlaw professors today, Brown and Marsden favor robust interconnection and interoperability mandates bolstered by antitrust actions. And, again, they aren’t willing to wait around and let the courts adjudicate these issues in an ex post fashion. “Essential facilities law is a very poor substitute for the active role of prosumer law that we advocate, especially in its Chicago school minimalist phase” (p. 185). In other words, we shouldn’t wait for someone to bring a case and litigate it through the courts when preemptive, proactive regulatory interventions can sagaciously steer us to a superior end.
More specifically, they propose that “competition authorities should impose ex ante interoperability requirements upon dominant social utilities… to minimize network barriers” (p. 190) and they model this on traditional regulatory schemes such as must-carry obligations, API interface disclosure requirements, and other interconnection mandates (such as those imposed on AOL/Time Warner a decade ago to alleviate fears about instant messaging dominance). They also note that “Effective, scalable state regulation often depends on the recruitment of intermediaries as enforcers” to help achieve various policy objectives (p. 170).
The Problem with Interoperability Über Alles
So, in essence, the Brown-Marsden Internet policy paradigm might be thought of as interoperability über alles. Interoperability and interconnection in pursuit of more “open” and “neutral” systems is generally considered an unalloyed good and most everything else is subservient to this objective.
This is a serious policy error and one that I address in great detail in my absurdly long review of John Palfrey and Urs Gasser’s Interop: The Promise and Perils of Highly Interconnected Systems. I’m not going to repeat all 6,500 words of that critique here when you can just click back and read it, but here’s the high-level summary: There is no such thing as “optimal interoperability” that can be determined in an a priori fashion. Ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.
More importantly, when interoperability is treated as sacrosanct and forcibly imposed through top-down regulatory schemes, it will often have many unintended consequences and costs. It can even lock in existing market power and market structures by encouraging users and companies to flock to a single platform instead of trying to innovate around it. (Go back and take a look at how the “Kingsbury Commitment” — the interconnection deal from the early days of the U.S. telecom system — actually allowed AT&T to gain greater control over the industry instead of assisting independent operators.)
Citing Palfrey and Gasser, Brown and Marsden do note that “mandated interoperability is neither necessary in all cases nor necessarily desirable” (p. 32), but they don’t spend as much time as Palfrey and Gasser itemizing these trade-offs and the potential downsides of some interoperability mandates. What frustrates me about both books is the quasi-religious reverence accorded to interoperability and open standards when such faith is simply not warranted once historical experience is taken into consideration.
Plenty of the best forms of digital innovation today are due to a lack of interoperability and openness. Proprietary systems have produced some of the most exciting devices (iPhone) and content (video games) of modern times. Then again, voluntary interoperable and “open” services and devices thrive, too. The key point here — and one that I develop in far greater detail in my book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters” — is that the market for digital services is working marvelously and providing us with choices of many different flavors. Innovation continues to unfold rapidly in both directions along the “open” vs. “closed” continuum. (Here are 30 more essays I have written on this topic if you need more proof.)
Generally speaking, we should avoid mandatory interop and openness solutions. We should instead push those approaches and solutions in a truly voluntary, bottom-up fashion. And, more importantly, we should be pushing for outside-the-box solutions of the Schumpeterian (creative destruction / disruptive innovation) variety instead of surrendering so quickly on competition through forced sharing mandates.
The Case for Patience & Policy Restraint
But Brown and Marsden clearly do not subscribe to that sort of Schumpeterian thinking. They think most code markets tip and lock into monopoly in fairly short order and that only wise interventions can rectify that. For example, they claim that Facebook’s “monopoly is now durable,” which will certainly come as a big surprise to the millions of us who do not use it at all. And the story of MySpace’s rapid rise and equally precipitous fall has little bearing on this story, they argue.
But, no matter how you define the “social networking market,” here are two facts about it: First, it is still very, very young. It’s only about a decade old. Second, in that short period of time, we have already witnessed the entire first generation of players fall by the wayside. While the second generation is currently dominated by Facebook, it is by no means alone. Again, millions like me don’t use it at all and get along just fine with other “social networking” technologies, including Twitter, LinkedIn, Google+, and even older tech like email, SMS, and yes, phone calls! Accusations of “monopoly” in this space strain credulity in the extreme. I invite you to read my Mercatus working paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” for a more thorough debunking of this logic. (Note: The final version of that paper will be published in the CommLaw Conspectus shortly.)
Such facts should have a bearing on the debate about regulatory interventions. We continue to witness the power of Schumpeterian rivalry as new and existing players battle in a race for the prize of market power. Brown and Marsden fear that the race is already over in many sectors and that it is time to throw in the towel and get busy regulating. But when I look around at the information technology marketplace today, I am astonished just how radically different it looks from even just a few years ago, and not just in the social media market. I have written extensively about the smartphone marketplace, where innovation continues at a frantic pace. As I noted in my essay here on “Smartphones & Schumpeter,” it’s hard to remember now, but just 6 short years ago:
The iPhone and Android had not yet landed.
Most of the best-selling phones of 2007 were made by Nokia and Motorola.
Feature phones still dominated the market; smartphones were still a luxury (and a clunky luxury at that).
There were no app stores and what “apps” did exist were mostly proprietary and device or carrier-specific; and,
There was no 4G service.
It’s also easy to forget just how many market analysts and policy wonks were making absurd predictions at the time about how the telecom operators had so much market power that they would crush new innovation without regulation. Instead, in very short order, the market was completely upended in a way that mobile providers never saw coming. There was a huge shift in relative market power flowing from the core of these markets to the fringes, especially to Apple, which wasn’t even a player in that space before the launch of the iPhone.
As I noted in concluding that piece last year, these facts should lead us to believe that this is a healthy, dynamic marketplace in action. Not even Schumpeter could have imagined creative destruction on this scale. (Just look at BlackBerry.) But much the same could be said of many other sectors of the information economy. While it is certainly true that many large players exist, we continue to see a healthy amount of churn in these markets and an astonishing amount of technological innovation.
Public Choice Insights: What History Tells Us
One would hope these realities would have a greater bearing on the policy prescriptions suggested by analysts like Brown and Marsden, but they don’t seem to. Instead, the attitude on display here is that governments can, generally speaking, act wisely and nudge efficiently to correct short-term market hiccups and set us on a better course. But there are strong reasons to question that presumption.
Specifically, what I found most regrettable about Brown and Marsden’s book was the way — like all too many books in this field these days — the authors briefly introduce “public choice” insights and concerns only to summarily dismiss them as unfounded or overblown. (See my review of Brett Frischmann’s book, Infrastructure: The Social Value of Shared Resources, for a more extended discussion of this problem as it pertains to discussions about not just infrastructure regulation but the regulation of all complex industries and technologies.)
Brown and Marsden make it clear that their intentions are pure and that their methods would incorporate the lessons of the past, but they aren’t very interested in dwelling on the long, lamentable history of regulatory failures and capture in the communications and media policy sectors. They do note the dangers of a growing “security-industrial complex” and argue that “commercial actors dominate technical actors in policy debates.” They also say that the “potential for capture by regulated interests, especially large corporate lobbies, is an essential insight” that informs their approach. The problem is that it really doesn’t. They largely ignore those insights and instead imply that, to the extent this is a problem at all, we can build a better breed of bureaucrats going forward who will craft “smarter regulation” that is immune from such pressures. Or, they claim that “multi-stakeholderism” — again, the new, more activist and government-influenced conception of it — can overcome these public choice problems.
A better understanding of power politics that is informed by the wisdom of the ages would instead counsel that minimizing the scope of politicization of technology markets is the better remedy. Capture and cronyism in communications and media markets have always grown in direct proportion to the overall scope of law governing those sectors. (I invite you to read all the troubling examples of this that Brent Skorup and I have documented in our new 72-page working paper, “A History of Cronyism and Capture in the Information Technology Sector.” Warning: It makes for miserable reading but proves beyond any doubt that there is something to public choice concerns.)
To be clear, it’s not that I believe that “market failures” or “code failures” never occur, rather, as I noted in this debate with Larry Lessig, it’s that such problems are typically “better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).” It’s not just that traditional regulatory remedies cannot keep pace with code markets, it’s that those attempting to craft the remedies do not possess the requisite knowledge needed to know how to steer us down a superior path. (See my essay, “Antitrust & Innovation in the New Economy: The Problem with the Static Equilibrium Mindset,” for more on that point.)
Regardless, at a minimum, I expect scholars to take seriously the very real public choice problems at work in this arena. You cannot talk about the history of these sectors without acknowledging the horrifically anti-consumer policies that were often put in place at the request of one industry or another to shield themselves from disruptive innovation. No amount of wishful thinking about “prosumer” policies will change these grim political realities. Only by minimizing chances to politicize technology markets and decisions can we overcome these problems.
Conclusion
For those of us who prefer to focus on freeing code, Brown and Marsden’s Regulating Code is another reminder that liberty is increasingly a loser in Internet policy circles these days. Milton Mueller’s dream of decentralized, denationalized liberalism seems more and more unlikely as armies of policymakers, regulators, special interests, regulatory advocates, academics, and others all line up and plead for their pet interest or cause to be satisfied through pure power politics. No matter what you call it — fudging, nudging, coregulation, smart regulation, multistakeholderism, prosumer law, or whatever else — there is no escaping the fact that we are witnessing the complete politicization of almost every facet of code creation and digital decisionmaking today.
Despite my deep reservations about a more politicized cyberspace, Brown and Marsden’s book is an important text because it is one of the most sophisticated articulations and defenses of it to date. Their book also helps us better understand the rapidly developing institutional political economy of Internet regulation in both broad and narrow policy contexts. Thus, it is worth your time and attention even if, like me, you are disheartened to be reading yet another Net policy book that ultimately endorses mandates over markets as the primary modus operandi of the information age.
Additional Resources about the book:
the official MIT Press website for Regulating Code
shorter Brown & Marsden paper on “Prosumer Law” (via SSRN)
Little Atoms podcast featuring Brown & Marsden discussing Regulating Code
Ian Brown edited this beefy Research Handbook on Governance of the Internet, which makes a nice complement to Regulating Code. It offers even more detailed case studies on the major issues featured in the book. A terrific resource (if you can afford it!)
a video of Brown & Marsden discussing the book at the Oxford Internet Institute
Other books you should read alongside “Regulating Code” (links are for my reviews of each):
Milton Mueller’s Networks and States: The Global Politics of Internet Governance
John Palfrey & Urs Gasser’s Interop: The Promise and Perils of Highly Interconnected Systems
Rebecca MacKinnon’s Consent of the Networked: The Worldwide Struggle for Internet Freedom
David G. Post’s In Search of Jefferson’s Moose: Notes on the State of Cyberspace
Christopher Yoo’s The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network
Larry Lessig’s Code (my 2-part exchange with Lessig upon Code’s 10th anniversary: Part 1, Part 2)







How Can Congress Accommodate Both Federal and Commercial Spectrum Demand?
“Permitting voluntary spectrum transactions between federal and commercial users would harness the power of market forces to put both commercial and federal spectrum to its highest and best uses.”
The House Energy and Commerce Committee’s Subcommittee on Communications and Technology is holding a hearing today to ask, “How can Congress meet the needs of Federal agencies while addressing carriers’ spiraling demand for spectrum in the age of the data-intensive smartphone?” In my view, the answer requires a flexible framework that permits experimentation among multiple approaches.
There are challenges and opportunities for both (1) clearing and reallocating federal spectrum for commercial use and (2) sharing spectrum among federal and commercial users. Economic and technical issues may require different strategies for different spectrum bands and different uses. Experience indicates that voluntary negotiations among interested parties – not bureaucratic fiat – are likely to produce the most efficient strategy in any particular instance. Unfortunately, current law does not provide market incentives or mechanisms for the relevant parties (federal and commercial spectrum users and spectrum regulators) to achieve efficient outcomes.
Congressional action creating markets for spectrum transactions between federal and commercial users would provide the relevant parties with an opportunity to maximize their spectrum use through voluntary negotiation. A market-oriented approach would permit experimentation, encourage innovation, and promote investment while increasing the efficiency of spectrum use. The result would benefit consumers, federal agencies, and the economy.
Federal users lack incentives to relinquish or share spectrum with commercial users
The law requires the NTIA and FCC to jointly plan spectrum allocations to accommodate all users and promote the efficient use of the spectrum. Although the agencies have agreed to share spectrum when the potential for harmful interference is low, the NTIA typically does not voluntarily agree to repurpose federal spectrum for exclusive commercial use. That typically requires a Presidential memorandum, Congressional legislation, or both.
The reason: NTIA and its constituent federal spectrum users have no incentive to voluntarily relinquish federal spectrum rights.
First, government agencies generally cannot profit from relinquishing their spectrum (i.e., they are not subject to the opportunity costs applicable in commercial markets). They are entitled to reimbursement for the costs of relocating their wireless systems after a commercial spectrum auction, but the majority of auction proceeds are remitted to the general Treasury.
Second, government agencies face an uncertain funding environment (i.e., they cannot raise capital in commercial markets). Agencies often reserve federal spectrum allocations for planned wireless systems that are unfunded, which can result in federal spectrum lying fallow for years. An agency that reserves spectrum for a planned system can remain optimistic that it will receive funding in the next budget cycle. But, if the agency relinquishes its spectrum, it cannot build the planned system even if it does receive funding.
The lack of potential benefits and the funding uncertainty inherent in the government budgeting process combine to create an environment in which federal agencies have low opportunity costs for reserving spectrum and high opportunity costs for relinquishing it. Creating market mechanisms that reverse these opportunity costs would provide government agencies with incentives to voluntarily relinquish or share their spectrum in ways that promote overall spectrum efficiency.
Federal users lack incentives to share spectrum with other federal users
The lack of incentives for efficient use of federal spectrum extends to interagency sharing as well.
There are approximately eighty different federal entities that are authorized to use federal spectrum. It would be more efficient for multiple agencies to share spectrum and systems in certain bands, but the lack of market incentives combined with jurisdictional issues makes it difficult for them to work together. For example, DOJ, DHS, and DOT tried to build a shared wireless network for voice communications, but, “despite costing over $356 million over 10 years,” the project failed to achieve the results intended.
Market mechanisms that permit federal agencies to profit from their spectrum could eliminate the funding issues and alleviate the “turf wars” that plague interagency projects.
Potential mechanisms for repurposing federal spectrum
The current proposals for repurposing federal spectrum fall into three general categories.
One option is to create a GSA-like agency for federal spectrum users. This would provide an incentive for efficient use of federal spectrum by imposing an opportunity cost for inefficiency (in the form of rents paid by federal spectrum users to the new agency), but it would not improve funding mechanisms for federal wireless systems.
Another option is the sharing-only approach proposed by the President’s Council of Advisors on Science and Technology (PCAST). This approach could provide commercial users with additional access to federal spectrum, but it would not alter federal incentives or funding and lacks the degree of certainty that is typically necessary for substantial commercial investment.
The third option would permit federal spectrum users to sell or lease their spectrum rights and use the funds to build new systems or secure usage rights on commercial systems. This could be accomplished through the use of incentive auctions in some circumstances, though individually negotiated transactions between federal and commercial users would provide significantly more flexibility. This alternative would tend to reverse (by merging) the incentives discussed above: Federal users would face higher opportunity costs for reserving spectrum and lower opportunity costs for relinquishing it.
The third option also has the advantage of permitting multiple approaches to the issue of apportioning spectrum for federal and commercial uses. I expect that, even if government agencies were permitted to engage in secondary market transactions with commercial spectrum users, they would still prefer sharing on a non-interference basis in bands with unique requirements, which would accommodate additional spectrum for unlicensed uses. If it appeared that federal users still lacked sufficient incentives to improve the efficiency of their spectrum use, Congress would retain the option of creating a GSA-like agency to charge rents to federal spectrum users.
Permitting voluntary spectrum transactions between federal and commercial users would harness the power of market forces to put both commercial and federal spectrum to its highest and best uses. As FCC Commissioner Rosenworcel noted recently, “our federal spectrum policy needs to be built on carrots, not sticks.” Giving federal spectrum users an opportunity to negotiate a share in the benefits of repurposing federal spectrum would be a carrot worth pursuing.







June 25, 2013
Richard Brandt on Jeff Bezos and amazon.com
Richard Brandt, technology journalist and author, discusses his new book, One Click: Jeff Bezos and the Rise of Amazon.com. Brandt discusses Bezos’ entrepreneurial drive, his business philosophy, and how he’s grown Amazon to become the biggest retailer in the world. This episode also covers the biggest mistake Bezos ever made, how Amazon uses patent laws to its advantage, whether Amazon will soon become a publishing house, Bezos’ idea for privately-funded space exploration and his plan to revolutionize technology with quantum computing.
Related Links
One Click: Jeff Bezos and the Rise of Amazon.com, Brandt
History of Google/About Google, Brandt
Five minutes with Richard Brandt, Author of One Click, Ennyman







June 24, 2013
The Constructive Way to Combat Online Hate Speech: Thoughts on “Viral Hate” by Foxman & Wolf
The Internet’s greatest blessing — its general openness to all speech and speakers — is also sometimes its biggest curse. That is, you cannot expect to have the most widely accessible, unrestricted communications platform the world has ever known and not also have some imbeciles who use it to spew insulting, vile, and hateful comments.
It is important to put things in perspective, however. Hate speech is not the norm online. The louts who spew hatred represent a small minority of all online speakers. The vast majority of online speech is of a socially acceptable — even beneficial — nature.
Still, the problem of hate speech remains very real and a diverse array of strategies are needed to deal with it. The sensible path forward in this regard is charted by Abraham H. Foxman and Christopher Wolf in their new book, Viral Hate: Containing Its Spread on the Internet. Their book explains why the best approach to online hate is a combination of education, digital literacy, user empowerment, industry best practices and self-regulation, increased watchdog / press oversight, social pressure and, most importantly, counter-speech. Foxman and Wolf also explain why — no matter how well-intentioned — legal solutions aimed at eradicating online hate will not work and would raise serious unintended consequences if imposed.
In striking this sensible balance, Foxman and Wolf have penned the definitive book on how to constructively combat viral hate in an age of ubiquitous information flows.
Definitional Challenges & Free Speech Concerns
Defining “hate speech” is a classic eye-of-the-beholder problem: At what point does heated speech become hate speech and who should be in charge of drawing the line between the two? “The notion of a single definition of hate speech that everyone can agree on is probably illusory,” Foxman and Wolf note, especially because of “the continually evolving and morphing nature of online hate.” (p. 52, 103) “Like every other form of human communication, bigoted or hateful speech is always evolving, changing its vocabulary and style, adjusting to social and demographic trends, and reaching out in new ways to potentially receptive new audiences.” (p. 92)
Many free speech advocates (including me) argue that the government should not be in the business of ensuring that people never have their feelings hurt. Censorial solutions are particularly problematic here in the United States since they would likely run afoul of the protections secured by the First Amendment of the U.S. Constitution.
The clear trajectory of the Supreme Court’s free speech jurisprudence over the past half-century has been in the direction of constantly expanding protection for freedom of expression, even of the most repugnant, hateful varieties. Most recently, in Snyder v. Phelps, for example, the Court ruled that the Westboro Baptist Church could engage in hateful protests near the funerals of soldiers. “[T]his Nation has chosen to protect even hurtful speech on public issues to ensure that public debate is not stifled,” ruled Chief Justice John Roberts for the Court’s 8-1 majority. The Court has also recently held that the First Amendment protects lying about military honors (United States v. Alvarez, 2012), animal cruelty videos (United States v. Stevens, 2010), computer-generated depictions of child pornography (Ashcroft v. Free Speech Coalition, 2002), and the sale of violent video games to minors (Brown v. EMA, 2011). This comes on top of over 15 years of Internet-related jurisprudence in which courts have struck down every effort to regulate online expression.
Some will celebrate this jurisprudential revolution; others will lament it. Regardless, it is likely to remain the constitutional standard here in the U.S. As a result, there is almost no chance that courts here would allow restrictions on hate speech to stand. That means alternative approaches will continue to be relied upon to address it.
Foxman and Wolf acknowledge these constitutional hurdles but also point out that there are other reasons why “laws attempting to prohibit hate speech are probably one of the weakest tools we can use against bigotry.” (p. 171) Most notably, there is the scope and volume problem: “the sheer vastness of the challenge” (p. 103) which means “it’s simply impossible to monitor and police the vast proliferation of bigoted content being distributed through Web 2.0 technologies.” (p. 81) “The borderless nature of the Internet means that, like chasing cockroaches, squashing one offending website, page, or service provider does not solve the problem; there are many more waiting behind the walls — or across the border.” (p. 82) That’s exactly right and it also explains why solutions of a more technical nature aren’t likely to work very well either.
Foxman and Wolf also point out how hate speech laws could backfire and have profound unintended consequences. Beyond targeted laws that address true threats, harassment, and direct incitements to violence, Foxman and Wolf argue that “broader regulation of hate speech may send an ‘educational message’ that actually weakens rather than strengthens our system of democratic values.” (p. 171) That’s because such censorial laws and regulations undermine the very essence of deliberative democracy — the robust exchange of potentially controversial views — and lead to potentially untrammeled majoritarianism. Worse yet, legalistic attempts to shut down hate speech can end up creating martyrs for fringe movements and, paradoxically, end up fueling conspiracy theories. (p. 80)
The Essential Role of Counter-speech & Education
Yet, “the challenge of defining hate speech shouldn’t lead us to give up on solving the problem,” argue Foxman and Wolf. (p. 53) We must, they argue, refocus our efforts around “education as a bulwark of freedom.” (p. 170) Digital literacy — teaching citizens respectful online behavior — is the key to those education efforts.
A vital part of digital literacy efforts is the encouragement of counter-speech solutions to online hate. “[T]he best antidote to hate speech is counter-speech – exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of respect and diversity,” note Foxman and Wolf. (p. 129) Or, as the old saying goes, the best response to bad speech is better speech. This principle has infused countless Supreme Court free speech decisions over the past century and it continues to make good sense. But we could do more through education and digital literacy efforts to encourage more and better forms of counter-speech going forward.
“Counter-speech isn’t only or even primarily about debating hate-mongers,” they note. “It’s about helping to create a climate of tolerance and openness for people of all kinds, not just on the Internet but in every aspect of local, community, and national life.” (p. 146) This is how digital literacy becomes digital citizenship. It’s about forming smart norms and personal best practices regarding beneficial online interactions.
Intermediary Policing
What more can be done beyond education and counter-speech efforts? Foxman and Wolf envision a broad and growing role for intermediaries to help to police viral hate. “We are convinced that if much of the time and energy spent advocating legal action against hate speech was used in collaborating and uniting with the online industry to fight the scourge of online hate, we would be making more gains in this fight,” they say. (p. 121) Among the steps they would like to see online operators take:
Establishing clear hate speech policies in their Terms of Service and mechanisms for enforcing them;
Making it easier for users to flag hate speech and to speak out against it;
Facilitating industry-wide education and best practices via multi-stakeholder approaches; and
Limiting anonymity and moving to “real-name” policies to identify speakers.
De-anonymization / Real-name policies
Most of these are eminently sensible solutions that should serve as best practices for online service providers and social media platform operators. But their last suggestion, that sites consider limiting anonymous speech, will be controversial, especially at a time when many feel that privacy is already at serious risk online and when some critics argue that intermediaries already “censor” too much content as it is. (See, for example, this Jeff Rosen essay on “The Delete Squad: Google, Twitter, Facebook and the New Global Battle over the Future of Free Speech” and this Evgeny Morozov editorial, “You Can’t Say That on the Internet”).
Anonymous online speech certainly facilitates plenty of nasty online comments. There’s plenty of evidence — both scholarly and anecdotal — that “deindividuation” occurs when people can post anonymously. As Foxman and Wolf explain it: “People who are able to post anonymously (or pseudonymously) are far more likely to say awful things, sometimes with awful effects. Speaking from behind a blank wall that shields a person from responsibility encourages recklessness – it’s far easier to hit the ‘send’ button without a second thought under those circumstances.” (p. 114)
On the other hand, there needs to be a sense of balance here. We protect anonymous speech for the same reason we protect all other forms of speech, no matter how odious: With the bad comes a lot of good. Forcing all users to identify themselves to get at a handful of troublemakers is overkill, and it would result in the chilling of a huge amount of legitimate speech.
Nonetheless, many governments across the globe are pushing for restrictions on anonymous speech. As Cole Stryker noted in his recent book, Hacking the Future: Privacy, Identity, and Anonymity on the Web, “[what] we are seeing is an all-out war on anonymity, and thus free speech, waged by a variety of armies with widely diverse motivations, often for compelling reasons.” (p. 229). Stryker is right. In fact, less than two weeks ago, Twitter was compelled by a French court order to produce the names of the people behind anti-Semitic tweets that appeared on the site last year. Meanwhile, plenty of academics, including many here in the U.S., have stepped up their efforts to ban or limit online anonymity. If you don’t believe me, I suggest you read a few of the chapters of The Offensive Internet: Speech, Privacy, and Reputation (Saul Levmore & Martha C. Nussbaum, eds.). It’s a veritable fusillade against anonymity as well as Section 230, the U.S. law that limits liability for intermediaries that host material posted by others.
In Viral Hate, Foxman and Wolf stop short of suggesting legal restrictions on anonymity, preferring to stick with experimentation among private intermediaries. One of the book’s authors (Wolf) penned an essay in The New York Times last November (“Anonymity and Incivility on the Internet”) suggesting that “this is not a matter for government… But it is time for Internet intermediaries voluntarily to consider requiring either the use of real names (or registration with the online service) in circumstances, such as the comments section for news articles, where the benefits of anonymous posting are outweighed by the need for greater online civility.” Specifically, Wolf wants the rest of the Net to follow Facebook’s lead: “It is time to consider Facebook’s real-name policy as an Internet norm because online identification demonstrably leads to accountability and promotes civility.”
These proposals prompted strong responses from some academics and average readers who decried the implications of such a move for both privacy and free speech. But, again, it is worth reiterating that Foxman and Wolf do not call for government mandates to achieve this. “[T]his notion of promulgating a new standard of accountability online is not a matter for government intervention, given the strictures of the First Amendment,” they argue. (p. 117)
However, Foxman and Wolf do suggest one innovative alternative that merits attention: premium placement for registered commenters. The New York Times and some other major content providers have experimented with premium placement, whereby those registered on the site have their comments pushed up in the queue while other comments appear below them. On the other hand, I don’t like the idea of having to register for every news or content site I visit, so I would hope such approaches are used selectively. Another useful approach involves letting users of various social media sites and content services determine whether they wish to allow comments on their user-generated content at all. Of course, many sites and services (such as YouTube, Facebook, and most blogging services) already allow that.
Conclusion
There are times in the book when Foxman and Wolf push their cause with a bit too much rhetorical flair, as when they claim that “Hitler and the Nazis could never have dreamed of such an engine of hate” as the Internet. (p. 10) Perhaps there is something to that, but it is also true that Hitler and the Nazis could never have dreamed of a platform for individual empowerment, transparency, and counter-speech such as the Internet. It was precisely because they were able to control the very limited media and communications platforms of their age that the Nazis were able to exert total control over the information systems and create a propaganda hate machine that faced no serious challenge from the public or other nations. Just ask Arab dictators which age they’d prefer to rule in! It is certainly much harder for today’s totalitarian thugs to keep secrets bottled up and it is equally hard for them to spread lies and hateful propaganda without being met with a forceful response from the general citizenry as well as those in other nations. So the “Hitler-would-have-loved-the-Net” talk is unwarranted.
I’m also a bit skeptical of some of the metrics used to measure this problem. While there is clearly plenty of online hate to be found across the Net today, efforts to quantify it inevitably run right back into the same subjective definition problems that Foxman and Wolf do such a nice job explaining throughout the text. So, if we have such a profound ‘eye-of-the-beholder’ problem at work here, how can we be sure that quantitative counts are accurate? That doesn’t mean I’m opposed to efforts to quantify online hate; rather, we just need to take such measures with a grain of salt.
Finally, I wish the authors would have developed more detailed case studies of how companies outside the mainstream are dealing with these issues today. Foxman and Wolf focus on big players like Google, Facebook, and Twitter for obvious reasons, but plenty of other online providers and social media operators have policies and procedures in place today to deal with online hate speech. A more thorough survey of those differing approaches might have helped us gain a better understanding of which policies make the most sense going forward.
Despite those small nitpicks, Foxman and Wolf have done a great service here by offering us a penetrating examination of the problem of online hate speech while simultaneously explaining the practical solutions necessary to combat it. Some will be dissatisfied with their pragmatic approach to the issue, feeling on one hand that the authors have not gone far enough in bringing in the law to solve these problems, while others will desire a more forceful call for freedom of speech and just growing a thicker skin in response to viral hate. But I believe Foxman and Wolf have struck exactly the right balance here and given us a constructive blueprint for addressing these vexing issues going forward.







June 20, 2013
National Review gets Bitcoin very wrong
National Review today runs a pretty unfortunate article about Bitcoin in which the reporter, Betsy Woodruff, tries to live for a week using only bitcoins—a fun stunt already done by Kashmir Hill about two months ago. Aside from misrepresenting libertarianism, what's unfortunate about the article is how Bitcoin is presented to NR's readers, many of whom may be hearing about the virtual currency for the first time. Woodruff, who admits she doesn't completely understand how Bitcoin works, nevertheless writes,
From what I can tell, the main reason Bitcoin has any practical value is the existence of Silk Road, a website that lets users buy drugs and other illegal material online. …
A lot of Bitcoin aficionados will probably take issue with my next point here, but I’m pretty sure history will eventually be on my side. My theory is that Silk Road is the Fort Knox of Bitcoin. Bitcoin, from what I can tell, isn’t valuable because of idealistic Ron Paul supporters who feel it’s in their rational self-interest to invest in a monetary future unfettered by Washington; Bitcoin is valuable because you can use it to do something that you can’t use other forms of currency to do: buy drugs online. As long as Bitcoin is the best way to buy drugs online, and as long as there is a demand for Internet-acquired drugs, there will be a demand for Bitcoin.
Woodruff is right that folks who understand Bitcoin will take issue with her, because she's demonstrably wrong. While it's true that illicit transactions probably did help bootstrap the Bitcoin economy early on, we are way past the point where such transactions account for any sizable portion of the economy. It's easy to put her "theory" to the test: Nicolas Christin of Carnegie Mellon has estimated that Silk Road generates about $2 million in sales a month. The estimated total transaction volume for the whole bitcoin economy over the last 30 days is just over $770 million. So, Silk Road accounts for about 0.25% of bitcoin transaction volume—far from being the "Fort Knox of Bitcoin," as Woodruff says. (A quick back-of-the-envelope calculation appears below.) And to put that in perspective, the UN estimates that the illicit drug trade accounts for 0.9% of world GDP.
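For readers who want to check the arithmetic, here is a minimal sketch using the two figures cited above. Both inputs are rough estimates and the variable names are my own, so treat the output as an order-of-magnitude check rather than a precise measurement.

# Back-of-the-envelope share of Bitcoin transaction volume attributable to Silk Road,
# using the rough figures cited in the text above.
silk_road_monthly_sales_usd = 2_000_000      # Christin's estimate: roughly $2 million in sales per month
bitcoin_30_day_volume_usd = 770_000_000      # network-wide transaction volume over the last 30 days

share = silk_road_monthly_sales_usd / bitcoin_30_day_volume_usd
print(f"Silk Road share of Bitcoin transaction volume: {share:.2%}")
# Prints roughly 0.26% (about a quarter of one percent), consistent with the
# "about 0.25%" figure in the text.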
The fact is that Bitcoin is not only a revolutionary new payments system that potentially disrupts traditional providers and can help serve the billions of unbanked around the world, but it also has the potential to be a distributed futures or securities market, or a distributed notary service. This is why Peter Thiel's Founders Fund and Fred Wilson's Union Square Ventures are investing millions of dollars in Bitcoin startups. Should we really think that these investors have overlooked what Woodruff posits—that the only value of bitcoins is to buy drugs? No, and I hope NR updates its story.







June 19, 2013
Bad news from Obama’s memo on federal spectrum
A few days ago, the big news in the telecom world was that President Obama again ordered federal agencies to share and sell their spectrum to expand commercial mobile broadband use. This effort is premised on the fact that agencies use their gifted airwaves poorly while demand for mobile broadband is surging. While the presidential memorandum half-heartedly supports clearing agencies out of some bands and selling them off, the focus of the memo is shared access, whereby federal agencies agree to allow non-federal users to use the same spectrum bands with non-interfering technologies.
The good news is that there is no mention of PCAST’s 2012 recommendation to the president to create a 1000 MHz “superhighway” of unlicensed federal spectrum accessed by sensing devices. This radical proposal would replace the conventional clearing-and-auction process with a spectrum commons framework reliant on unproven sensing technologies. Instead of consumers relying on carriers’ licensed spectrum for mobile broadband, this plan would, in theory, amount to a crude “wifi on steroids,” with devices searching out access across a huge swath of valuable spectrum while avoiding federal users. Its omission from the recent memo likely means the unlicensed superhighway won’t be pursued.
Still, this doubling down on other forms of dynamic spectrum sharing is unfortunate for several reasons. First, it mostly entrenches the disastrous status quo by acceding to federal agencies’ claims that they can’t be safely moved. Giving federal agencies free spectrum decades ago was a costly mistake that needs to be corrected through pricing and through clearing. By throwing up its hands and declaring that clearing and auctioning federal spectrum is too difficult and that sharing is the best alternative, the administration condemns us to suffer for the sins of our fathers.
Second, sharing, as envisioned in the memo, will not be accomplished quickly or extensively. Whatever technologies come out of this–and there are several candidate approaches, which only adds research delay–will be constrained by whatever interference risks the agencies will accept. Engineering tests and simulations cannot answer this question; it is an economic and political question, and the economics is badly distorted as it is. Federal agencies, and particularly the military, are very jealous of their spectrum. And who can blame them, since their wireless systems are often used for communications and training exercises that, if not directly protecting the lives of civilians, employees, and soldiers, are an important component of preparation for combat. But this jealousy means agencies are not good at sharing wireless bandwidth.
For “sharing skeptics,” UWB’s experience illuminates our concerns. Ultrawideband (UWB) is a low-power wireless technology used for radar and data services; beginning in 1989, its proponents sought regulatory approval to share federal spectrum for UWB commercial applications. UWB uses huge portions of spectrum but at very low power–transmissions from a cellphone are millions of times more powerful than UWB transmissions (a rough illustration of that gap appears below). Even so, UWB applicants were subjected to a process that can only be described as Kafkaesque, going from agency to agency for 13 years, submitting filings and completing interference tests to show that the technology would not threaten federal operations, before finally winning approval. Indicative of agency foot-dragging, a UWB manufacturer noted,
It took NTIA nearly a year to obtain internal sign off by government users of spectrum to approve with conditions the requests for waivers submitted by [UWB] companies. This despite the fact that the devices . . . were lifesaving instruments for public safety and law enforcement personnel, and all 2500 devices requested, if operating together in a single room, would emit less than one quarter the power of a cell phone.
That same UWB applicant made over 100 trips to DC in 6 years and spent millions of dollars to push his technology. Another large UWB company, backed by Intel, went out of business in the meantime. To be clear, the technologies contemplated in the memo are different from UWB, but UWB’s story is not unique, and the institutional resistance will be the same for future sharing technologies. There will be extensive tests, frequent denials, long delays, and billions of dollars in continued waste from underused federal spectrum.
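To give a feel for the “millions of times” power gap mentioned above, here is a rough, order-of-magnitude sketch. The figures are my own illustrative assumptions, not from the memo or the UWB filings: the FCC’s average UWB emission limit of -41.3 dBm/MHz and a 2 W GSM handset transmitting in a 200 kHz channel. The comparison is of power spectral density, not total radiated power.

# Order-of-magnitude comparison of cellphone vs. UWB emissions (illustrative assumptions only).
uwb_limit_dbm_per_mhz = -41.3                        # FCC average EIRP limit for UWB devices
uwb_mw_per_mhz = 10 ** (uwb_limit_dbm_per_mhz / 10)  # about 7.4e-5 mW per MHz

handset_power_mw = 2_000     # 2 W peak, the maximum for a GSM 900 handset
handset_channel_mhz = 0.2    # GSM channel width of 200 kHz
handset_mw_per_mhz = handset_power_mw / handset_channel_mhz

ratio = handset_mw_per_mhz / uwb_mw_per_mhz
print(f"Cellphone power spectral density is ~{ratio:,.0f} times the UWB limit")
# On the order of 100 million, consistent with "millions of times more powerful."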
I have no doubt the heads of NTIA and DoD favor making mobile broadband more available to consumers. But it is also their duty to ensure that military and federal systems work well all the time. Given these two priorities (faster mobile downloads of cat videos versus public safety and military training), guess which one NTIA and the agencies will favor? What probability of service disruption will federal agencies tolerate? The answer–as we’ve seen in previous sharing attempts–is vanishingly small. That means that if any technologies are approved for sharing on federal bands–a process that will take years–they will likely be constrained by very conservative technical criteria and low-power operations.
The memo’s best recommendation is exploring “incentives” (that is, pricing) for federal agencies to relinquish spectrum. Blair Levin–who worked on the FCC’s 2010 National Broadband Plan–voiced support for creating a “GSA for spectrum” at a Washington Post forum this week, and hopefully this sentiment will become a priority. Until agencies are paying market prices for this valuable resource, attempts to force agencies to share are bound to run into these problems since there is no way to analyze the economic tradeoffs.
But a GSA for spectrum is a long way off, and I suspect the regulatory risks and delays in the interim, combined with the poor economics of the permitted technologies, will scare away most investment. Whatever does emerge will be a poor substitute for the robust wireless networks we see every day on our smartphones using exclusively licensed commercial spectrum, which is why the memo’s focus on sharing–not clearing and auctioning–is sorry news.
For more on proposals for reclaiming federal spectrum through clearing and auctioning, please see my hot-off-the-presses Mercatus working paper.







June 18, 2013
Declan McCullagh on the NSA leaks
Declan McCullagh, chief political correspondent for CNET and former Washington bureau chief for Wired News, discusses the recent leaks about NSA surveillance programs. What do we know so far, and what more might be unveiled in the coming weeks? McCullagh covers legal challenges to the programs, the Patriot Act, the Fourth Amendment, email encryption, the media and public response, and broader implications for privacy and reform.
Related Links
Snowden: NSA snoops on U.S. phone calls without warrants, McCullagh
Snowden: Feds can’t plug leaks by ‘murdering me’, McCullagh
Feds: Power grid vulnerable to cyber threats, McCullagh







June 13, 2013
Mr. Bitcoin goes to Washington
Today I had the great pleasure of moderating a panel discussion at a conference on the “Virtual Economy” hosted by Thomson Reuters and the International Center for Missing and Exploited Children. On my panel were representatives from the Bitcoin Foundation, the Tor Project, and the DOJ, and we had a lively discussion about how these technologies can potentially be used by criminals and what these open source communities might be able to do to mitigate that risk.
The bottom line message that came out of the panel (and indeed every panel) is that the Tor and Bitcoin communities do not like to see the technologies they develop put to evil uses, and that they are more than willing to work with policymakers and law enforcement to the extent that they can. On the flip side, the message to regulators was that they need to be more open, inclusive, and transparent in their decision making if they expect cooperation from these communities.
I was therefore interested in the keynote remarks delivered by Jennifer Shasky Calvery, the Director of the Treasury Department’s Financial Crimes Enforcement Network (FinCEN). In particular, she addressed the fact that, after several recent enforcement actions against virtual currency exchangers and providers, the traditional banking sector has been wary of doing business with companies in the virtual currency space. She said:
I do want to address the issue of virtual currency administrators and exchangers maintaining access to the banking system in light of the recent action against Liberty Reserve. Again, keep in mind the combined actions by the Department of Justice and FinCEN took down a $6 billion money laundering operation, the biggest in U.S. history.
We can understand the concerns that these actions may create a broad-brush reaction from banks. Banks need to assess their risk tolerance and the risks any particular client might pose. That’s their obligation and that’s what we expect them to do.
And this goes back to my earlier points about corporate responsibility and why it is in the best interest of virtual currency administrators and exchangers to comply with their regulatory responsibilities. Banks are more likely to associate themselves with registered, compliant, transparent businesses. And our guidance should help virtual currency administrators and providers become compliant, well-established businesses that banks will regard as desirable and profitable customers.
While it’s true that FinCEN’s March guidance provides clarity for many actors in the Bitcoin space, it is nevertheless very ambiguous about other actors. For example, is a Bitcoin miner who sells for dollars the bitcoins he mines subject to regulation? If I buy those bitcoins, hold them for a time as an investment, and then resell them for dollars, am I subject to regulation? In neither case are bitcoins acquired to purchase goods or services (the only use-case clearly not regulated according to the guidance). And even if one is clearly subject to the regulations, say as an exchanger, it takes millions of dollars and potentially years of work to comply with state licensing and other requirements. My concern is that banks will not do business with Bitcoin start-ups not because they pose any real criminal risk, but because there is too much regulatory uncertainty.
My sincere hope is that banks do not interpret Ms. Shasky Calvery’s comments as validation of their risk-aversion. Banks and other financial institutions should be careful about who they do business with, and they certainly should not do business with criminals, but it would be a shame if they felt they couldn’t do business with an innovative new kind of start-up simply because that start-up has not been (and may never be) adequately defined by a regulator. Unfortunately, I fear banks may take the comments to suggest just that, putting start-ups in limbo.
Entrepreneurs may want to comply with regulation in order to get banking services, and they may do everything they think they have to in order to comply, but the banks may nevertheless not want to take the risk given that the FinCEN guidance is so ambiguous. I asked Ms. Shasky Calvery if there was a way entrepreneurs could seek clarification on the guidance, and she said they could call FinCEN’s toll-free regulatory helpline at (800) 949–2732. That may not be very satisfying to some, but it’s a start. And I hope that any clarification that emerges from conversations with FinCEN is made public by the agency so that others can learn from it.
All in all, I think today we saw the first tentative steps toward a deeper conversation between Bitcoin entrepreneurs and users on the one hand, and regulators and law enforcement on the other. That’s a good thing. But I hope regulators understand that it’s not just the regulations they promulgate that have consequences for regulated entities; it’s also the uncertainty they can create through inaction.
Ms. Shasky Calvery also said:
Some in the press speculated that our guidance was an attempt to clamp down on virtual currency providers. I will not deny that there are some troublesome providers out there. But, that is balanced by a recognition of the innovation these virtual currencies provide, and the financial inclusion that they might offer society. A whole host of emerging technologies in the financial sector have proven their capacity to empower customers, encourage the development of innovative financial products, and expand access to financial services. And we want these advances to continue.
That is a welcome sentiment, but those advances can only continue if there are clear rules made in consultation with regulated parties and the general public. Hopefully FinCEN will revisit its guidance now that the conversation has begun, and as other regulators consider new rules, they will engage the Bitcoin community early in order to avoid ambiguity and uncertainty.







June 12, 2013
My take on Prism
Over at The Umlaut, I try to articulate why even people who have “nothing to hide” should be concerned about NSA surveillance:
I have no doubt that Prism is a helpful tool in combatting terrorism and enforcing the law, as the Obama administration claims. But ubiquitous surveillance doesn’t just help enforce the law; it changes the kinds of laws that can be enforced. It has Constitutional implications, not just because it violates the Fourth Amendment, which it does, but because it repeals a practical barrier to ever greater tyranny.
Read the whole thing, and pass it on.







