Adam Thierer's Blog, page 57

September 11, 2013

Privacy Identity Innovation Conference Next Week

I’m excited to be attending the big annual Privacy Identity Innovation (pii2013) conference next week in Seattle, Washington, from September 16-18. Organized by the amazing Natalie Fonseca, who also created the widely attended Tech Policy Summit, the Privacy Identity Innovation conference brings together some of the best and brightest minds involved in the digital economy and information technology policy.



Natalie and her team have put together another terrific agenda and group of all-star speakers to debate the “challenges associated with managing and securing the vast amounts of personal data being generated in our increasingly connected world” as well as the “huge opportunities for innovation if done properly.” There will be panels debating the implications of wearable technologies, Google Glass, government surveillance practices, digital advertising, transparency efforts, privacy by design, identification technologies and issues, and privacy developments in Europe and other countries, among other issues. The event also features workshops, demos, and other networking opportunities.



I’m looking forward to my panel on “Emerging Technologies and the Fine Line between Cool and Creepy.” That’s an issue I’ve had a lot to say about in blog posts here as well as recent law review articles. Occasional TLF contributor Larry Downes will also be on that panel with me.



Anyway, if you’ll be out there in Seattle for the big show, please make sure to find me and introduce yourself. I’ll be doing plenty of live-tweeting from the event, which you can follow on Twitter (@AdamThierer).

Published on September 11, 2013 19:04

Net Neutrality Returns – As Farce

Over on Forbes today, I have a very long post inspired by Monday’s oral arguments in Verizon’s challenge of the FCC’s Open Internet rules, passed in 2010.



I say “inspired” because the post has nothing to say about the oral arguments, which, in any case, I did not attend.  Mainstream journalists can’t resist the temptation to try to read into the questions asked or the mood of the judges some indication of how the decision will come out.



But as anyone who has ever worked in a court or followed appellate practice well knows, the tone of oral arguments signals nothing about a judge’s point of view.  Often, the harshest questioning is reserved for the side a judge is leaning towards supporting, perhaps because the briefs filed were inadequate.  Bad briefs create more work for the judge and her clerks.



I use the occasion of the hearing to take a fresh look at the net neutrality “debate,” which has been on-going since at least 2005, when I first started paying attention to it.  In particular, I try to disentangle the political term “net neutrality” (undefined and, indeed, not even used in the 2010 Open Internet order) from the engineering principles of packet routing.



According to advocates for government regulation of broadband access, the political argument for net neutrality regulation is simply a codification of the Internet’s design.  But regardless of whether it would even make sense to transform the FCC into the governing body of engineering protocols for the network (the Internet Society and its engineering task forces are and always have been doing a fine job, thanks very much), the reality is that the political argument has almost nothing to do with the underlying engineering.



Indeed, those most strongly advocating for more government regulation either don’t understand the engineering or intentionally mischaracterize it, or both.  That’s clear from the wide range of supposed competitive problems that have been lumped together under the banner of “net neutrality” issues over the years–almost none of which have anything to do with packet routing.



Fortunately, very little of the larger political agenda of the loose coalition of net neutrality advocates is reflected in the rules ultimately passed by a bare majority of the FCC in 2010.  Even so, those rules, limited as they were, face many challenges.



For one thing, the FCC, despite over a year of dedicated attention to the problem, could identify only four incidents that suggested any kind of market failure, only one of which (the Comcast-BitTorrent incident) was ever actually considered in detail by the Commission.  (Two of the others never even rose to the level of a complaint.)  The agency was left to regulate on the basis of “preserving” the Open Internet through what it called (nearly a dozen times) “prophylactic” rules.



Second, and of particular interest in the D.C. Circuit proceeding, Congress has never authorized the FCC to issue rules dealing with broadband Internet access.  Though many authorizing bills have circulated over the years, none have ever made it out of committee.  With no legal basis to regulate, the agency was left pointing to irrelevant provisions of the existing Communications Act–most of which were already rejected by the same court in the Comcast case.  Nothing in the law has changed since Comcast, and on that basis, regardless of the merits of Internet regulation, the FCC is very likely to lose.  Which the Commission surely knew in passing the rules in 2010.



The piece ends by describing, as I did in my testimony before the House Judiciary Committee in early 2011, how the Report and Order betrays the technical reality that, from an engineering standpoint, even the supposed neutrality of packet routing is largely a sentimental myth.  The FCC identified and exempted a dozen network management technologies, practices, and protocols that it acknowledged do not follow the neutrality principle, but which are essential to effective and efficient management of the network.  There is no “neutral” Internet to preserve, and never was.



The agency was right to exempt these practices.  But the problem with the rules as written is that they could not and did not extend to future innovations that new applications and new users will certainly make as essential as today’s management techniques.



If the rules stand, network engineers, application developers, device makers and others in the vibrant, dynamic Internet ecosystem will be forced to seek permission to innovate from the FCC, which will both slow the high-speed world of Internet design to a crawl and introduce a decision maker with no technical expertise and lots of political baggage.



That of course was the kind of counter-productive and unnecessary regulatory intrusion that Internet users successfully rose up against last year, when the UN’s International Telecommunication Union threatened to assert itself in basic Internet governance, or the year before that, when Congress, without technical understanding of the most basic variety, tried to re-architect the Internet on behalf of media companies in the failed SOPA and PIPA legislation.



If the FCC gains a foothold in broadband access with the Open Internet rules or other efforts to gain oversight where Congress has delegated none, expect a similar reaction.  Or, in any case, hope for one.

Published on September 11, 2013 10:20

September 3, 2013

Thomas Rid on cyber war


Thomas Rid, author of the new book Cyber War Will Not Take Place, discusses whether so-called “cyber war” is a legitimate threat. Since the early 1990s, talk of cyber war has caused undue panic and worry, and, despite major differences, the military treats the protection of cyberspace much the same way as the protection of land or sea. Rid also covers whether a cyber attack should be considered an act of war; whether it’s correct to classify a cyber attack as “war” when no violence takes place; how sabotage, espionage, and subversion come into play; and offers a positive way to view cyber attacks — have such attacks actually saved millions of lives?






Related Links


Cyber War Will Not Take Place, Rid
About Thomas Rid, Rid
The Great Cyberscare, Rid
Cyberwar: don’t believe the hype, Stilgherrian
Published on September 03, 2013 15:59

CBS & Time Warner Cable Make a Deal to End Blackout

Yesterday, Time Warner Cable and CBS reached a deal to end the weeks-long impasse that had resulted in CBS being blacked out in over 3 million U.S. households.



I predicted the two companies would resolve their differences before the start of the NFL season in a RealClearPolicy op-ed published last week:



From Los Angeles to New York, 3 million Americans in eight U.S. cities haven’t been able to watch CBS on cable for weeks, because of a business dispute between the network and Time Warner Cable (TWC). The two companies can’t agree on how much TWC should pay to carry CBS, so the network has blacked out TWC subscribers since August 1. With the NFL season kicking off on September 5, the timing couldn’t be worse for football fans.

Regulators at the Federal Communications Commission (FCC) face growing pressure to force the feuding companies to reach an agreement. But despite viewers’ frustrations with this standoff, government intervention isn’t the answer. If bureaucrats begin “overseeing” disputes between network owners and video providers, television viewers will face higher prices or lower-quality shows.

TWC and CBS are playing hardball over serious cash. CBS reportedly seeks to double its fee to $2 per subscriber each month, which TWC claims is an outrageous price increase. But CBS argues it costs more and more to develop hit new shows like Under the Dome, so it’s only fair viewers pay a bit more.

Both sides have a point. TWC is looking out for its millions of subscribers—and its bottom line—by keeping programming costs down. CBS, on the other hand, needs cash to develop creative new content, and hopes it can make some money doing so.





What’s the “fair” price for CBS? The answer lies in the marketplace, which empowers firms to “discover” prices through negotiation. Both CBS and TWC have strong incentives to end this dispute—their shareholders won’t put up with this dispute forever. CBS can’t be happy about losing a reported $400,000 each day TWC subscribers can’t tune in, while TWC is surely losing customers to competing providers—including Verizon, which is aggressively wooing New Yorkers with its CBS-equipped FiOS service.

If the FCC intervenes, it must decide how much TWC should pay CBS. Regulators may be able to read charts, but they can’t read minds. How, then, can the FCC divine how much value the two companies and their customers place on these competing priorities? Given how Washington works, the feds will probably bend to whichever side hires the best-connected lobbyists and influence peddlers.

As long as networks are free to bargain with video providers, television blackouts will happen—but not often. According to economist Jeffrey Eisenach, blackouts disrupt less than 1 in 10,000 viewing hours for consumers. Disputes over fees rarely interrupt programming because they infuriate viewers, ultimately harming networks and cable companies alike. But this hardly means blackouts should be illegal. Imagine a labor union that couldn’t strike—no rational private-sector employer would take its wage demands seriously.

Why should cable companies pay for broadcast networks in the first place, given that broadcasters transmit their networks over the air for free? Because consumers prefer the convenience and reliability of network channels distributed by cable, and in order to satisfy this preference cable companies need access to material that does not belong to them. This access requires negotiating with the networks.

The alternative isn’t forcing networks to give their content away for free, but rather for cable subscribers to pay slightly lower bills but lose broadcast channels which they could still watch via antenna, just like everyone else. If customers preferred this, cable companies would happily stop paying broadcasters. As it is, cable consumers are paying more for convenience by choice, not unlike someone who goes to a weekend movie for $12 when they could go on a Tuesday night for $7.

If the government truly wants to help television viewers, Congress should nix the unfair and anti-competitive legal perks that broadcast affiliates currently enjoy, such as the federal regulation requiring cable companies to buy broadcast content only from the local affiliates in each city. In other words, a cable company can’t let New Yorkers watch primetime shows provided by any CBS station in the nation other than the New York CBS affiliate.

CBS and Time Warner Cable will reach a deal soon enough—they can’t afford not to. Meanwhile, if you’re sticking with TWC, you can still catch your CBS favorites over the air or on Internet platforms like Netflix, Amazon, and Hulu. So long as government stays out of the vibrant entertainment market, there will always be alternatives.



For more on this subject, see the recent essays by fellow Liberators Adam Thierer and Jerry Brito.

Published on September 03, 2013 13:07

August 29, 2013

Edith Ramirez’s ‘Big Data’ Speech: Privacy Concerns Prompt Precautionary Principle Thinking

Much of my recent research and writing has been focused on the contrast between “permissionless innovation” (the notion that innovation should generally be allowed by default) versus its antithesis, the “precautionary principle” (the idea that new innovations should be discouraged or even disallowed until their developers can prove that they won’t cause any harms).  I have discussed this dichotomy in three recent law review articles, a couple of major agency filings, and several blog posts. Those essays are listed at the end of this post.



In this essay, I want to discuss a recent speech by Federal Trade Commission (FTC) Chairwoman Edith Ramirez and show how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, prompted by the various privacy concerns surrounding “big data” and the “Internet of Things” among other information innovations and digital developments.



First, let me recap the core argument I make in my recent articles and filings. It can be summarized as follows:




If public policy is guided at every turn by the precautionary mindset, then innovation becomes impossible because of fear of the unknown. Hypothetical worst-case scenarios trump all other considerations under this mentality. Social learning and economic opportunities become far less likely under such a policy regime. In practical terms, it means fewer services, lower-quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living. (See this essay and this one.)
Wisdom is born of experience, including experiences involving risk and the possibility of mistakes and accidents. Patience and a general openness to permissionless innovation represent the wise disposition toward new technologies, not only because they provide breathing space for future entrepreneurialism, but also because they provide an opportunity to observe both the evolution of societal attitudes toward new technologies and how citizens adapt to them. (See this essay.)
Not every wise ethical principle, social norm, or industry best practice automatically makes for wise public policy. If we hope to preserve a free and open society, we simply cannot convert every ethical directive or norm — no matter how sensible — into a legal directive, or else the scope of human freedom and innovation will shrink precipitously. (See this essay.)
The best solutions to complex social problems are organic and “bottom-up” in nature. User education and empowerment, informal household media rules, social pressure, societal norms, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to “top-down,” command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” nature. (See this essay.)
For the preceding reasons, when it comes to information technology policy, “permissionless innovation” should, as a general rule, trump “precautionary principle” thinking. To the maximum extent possible, the default position toward new forms of technological innovation should be “innovation allowed,” or what Paul Ohm has appropriately labeled the “anti-Precautionary Principle.” (See this essay.)


Again, we are today witnessing the clash of these conflicting worldviews in a fairly vivid way in many current debates about online commercial data collection, “big data,” and the so-called “Internet of Things.” For example, FTC Chairwoman Ramirez recently delivered a speech at the annual Technology Policy Institute Aspen Forum on the topic of “The Privacy Challenges of Big Data: A View from the Lifeguard’s Chair.” Ramirez made several provocative assertions and demands in the speech, but here’s the one “commandment” I really want to focus on. Claiming that “One risk is that the lure of ‘big data’ leads to the indiscriminate collection of personal information,” Chairwoman Ramirez went on to argue:



The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the offchance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value. (emphasis added)


And later in the speech she goes on to argue that “Information that is not collected in the first place can’t be misused” and then suggests a parade of horribles that will befall us if such data collection is allowed at all.



The Problem with “Mother, May I”?

So here we have a rather succinct articulation of precautionary principle thinking as applied to modern data collection practices. Chairwoman Ramirez is essentially claiming that — because there are various privacy risks associated with data collection and aggregation — we must consider preemptive and potentially highly restrictive approaches to the initial collection and aggregation of data.



The problem with that logic should be fairly obvious and it was perfectly identified by the great political scientist Aaron Wildavsky in his seminal 1988 book Searching for Safety. Wildavsky warned of the dangers of the “trial without error” mentality — otherwise known as the precautionary principle approach — and he contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that:



The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. (emphasis added)


Let’s apply that lesson to Chairwoman Ramirez’s speech. When she argues that “Information that is not collected in the first place can’t be misused,” there is absolutely no doubt that her statement is true. But it is equally true that information that is not collected at all is information that might have been used to provide us with the next “killer app” or the great gadget or digital service that we cannot currently contemplate but that some innovative entrepreneur out there might be looking to develop.



Likewise, claiming that “old data is of little value” and issuing the commandment that “Thou shall not collect and hold onto personal information unnecessary to an identified purpose” reveals a rather stunning arrogance about the possibility of serendipitous data discovery: Either Chairwoman Ramirez doesn’t think it can happen or she doesn’t care if it does. But the reality is that the cornucopia of innovative information options and opportunities we have at our disposal today was driven in large part by data collection, including personal data collection. And often those innovations were not part of some initial grand design; instead, they came about through the discovery of new and interesting things that could be done with data after the fact.



For example, many of the information services and digital technologies that we enjoy and take for granted today — language translation tools, mobile traffic services, digital mapping technologies, spam and fraud detection tools, instant spell-checkers, and so on — came about not necessarily because of some initial grand design but rather through innovative thinking after-the-fact about how preexisting data sets might be used in interesting new ways. As Viktor Mayer-Schonberger and Kenneth Cukier point out in their recent book, Big Data: A Revolution That Will Transform How We Live, Work, and Think, “data’s value needs to be considered in terms of all the possible ways it can be employed in the future, not simply how it is used in the present.” “In the big-data age,” they note, “data is like a magical diamond mine that keeps on giving long after its principle value has been tapped.” (p. 103-4)



In any event, if the new policy in the United States is to follow Chairwoman Ramirez’s pronouncement that “Keeping data on the offchance that it might prove useful is not consistent with privacy best practices,” then much of the information economy as we know it today will need to be shut down. At a minimum, entrepreneurs will need to start hiring a lot more lobbyists who can sit in Washington and petition the FTC or other policymakers for permission to innovate whenever they have an interesting new idea for offering us a new service built on data that was not initially collected for that purpose. Again, it’s “Mother, May I” regulation, and we had better get used to a lot more of it if we go down the path that Chairwoman Ramirez is charting.



Alternative, Less-Restrictive Remedies

But here’s the biggest flaw in Chairwoman Ramirez’s reasoning: There is no need for preemptive, prophylactic, precautionary approaches when less-restrictive and potentially equally effective remedies exist.



Ramirez’s speech was subtitled “A View from the Lifeguard’s Chair,” implying that her role is to oversee online practices and ensure consumers are safe. That’s a noble intention, but based on some of her remarks, one is left wondering if her true intention is just to drain the information oceans instead.



But there are better ways to deal with dangerous digital waters. In my work on both online child safety and commercial data privacy, I have argued that the best answer to these complex social problems is a mix of technological controls, social pressure, informal rules and norms, and, most importantly, education and digital literacy efforts.  And government can play an important role by helping to educate and empower citizens to prepare them for our new media environment.



That was the central finding of a blue-ribbon panel of experts convened in 2002 by the National Research Council of the National Academy of Sciences to study how best to protect children in the new, interactive, “always-on” multimedia world. Under the leadership of former U.S. Attorney General Richard Thornburgh, the group produced an amazing report entitled Youth, Pornography, and the Internet, which outlined a sweeping array of methods and technological controls for dealing with potentially objectionable media content or online dangers. Ultimately, however, the experts used a compelling metaphor to explain why education was the most important tool on which parents and policymakers should rely:



Technology—in the form of fences around pools, pool alarms, and locks—can help protect children from drowning in swimming pools. However, teaching a child to swim—and when to avoid pools—is a far safer approach than relying on locks, fences, and alarms to prevent him or her from drowning. Does this mean that parents should not buy fences, alarms, or locks? Of course not—because they do provide some benefit. But parents cannot rely exclusively on those devices to keep their children safe from drowning, and most parents recognize that a child who knows how to swim is less likely to be harmed than one who does not. Furthermore, teaching a child to swim and to exercise good judgment about bodies of water to avoid has applicability and relevance far beyond swimming pools—as any parent who takes a child to the beach can testify. (p. 224)


Regrettably, as I noted in my old book on online safety, we often fail to teach our children how to swim in the new media waters. Indeed, to extend the metaphor, it is as if we are generally adopting an approach that is more akin to just throwing kids in the deep water and waiting to see what happens. The same is true for digital privacy. We sometimes expect both kids and adults to figure out how to swim in these information currents without a little training first.



To rectify this situation, a serious media literacy and digital citizenship agenda is needed in America. Media literacy programs teach children and adults alike to think critically about media, and to better analyze and understand the messages that media providers are communicating.  I went on to argue in my old book that government should push media literacy efforts at every level of the education process. And those efforts should be accompanied by widespread public awareness campaigns to better inform parents about the parental control tools, rating systems, online safety tips, and other media control methods at their disposal.



In the three recent law review articles listed below, I extended this model to privacy and showed how this bottom-up, education- and empowerment-based approach is equally applicable to all the debates we are having today about commercial data collection. And I also stressed the vital importance of personal responsibility and corporate responsibility as part of these digital citizenship efforts.



Conclusion

So, in sum, the key question going forward is: Are we going to teach people how to swim, or are we going to drain the information oceans based on the fear that people could be harmed by the very existence of some deep data waters?



Chairwoman Ramirez concluded her speech by noting that, “Like the lifeguard at the beach, though, the FTC will remain vigilant to ensure that while innovation pushes forward, consumer privacy is not engulfed by that wave.” As well-intentioned as that sounds, the thrust of her remarks suggests that fear of the water is prompting this particular lifeguard to consider drastic precautionary steps to save us from the potential dangers of those waves. Needless to say, such a mentality and corresponding policy framework would have profound ramifications.



Indeed, let’s be clear about what’s at stake here. This is not about “protecting corporate profits” or Silicon Valley companies. This is about ensuring that individuals as both citizens and consumers continue to enjoy the myriad benefits that accompany an open, innovative information ecosystem. We can find better ways to address the dangers of deep data waters without draining the info-oceans. Let’s teach people how to swim in those waters and how to be responsible data stewards so that we can all continue to enjoy the many benefits of our modern data-driven economy.





Additional Reading:

Law Review Articles:




“The Pursuit of Privacy in a World Where Information Control is Failing” – Harvard Journal of Law & Public Policy
“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” – Minnesota Journal of Law, Science & Technology
“A Framework for Benefit-Cost Analysis in Digital Privacy Debates” – George Mason University Law Review


Blog posts:




“Who Really Believes in ‘Permissionless Innovation’?” – March 4, 2013
“What Does It Mean to ‘Have a Conversation’ about a New Technology?”
“Planning for Hypothetical Horribles in Tech Policy Debates” – August 6, 2013
“On the Line between Technology Ethics vs. Technology Policy” – August 1, 2013
“Can We Adapt to the Internet of Things?” – June 19, 2013 (IAPP Privacy Perspectives blog)


Testimony / Filings:




Senate Testimony on Privacy, Data Collection & Do Not Track – April 24, 2013
Comments of the Mercatus Center to the FTC in Privacy & Security Implications of the Internet of Things
Mercatus filing to FAA on commercial domestic drones
Published on August 29, 2013 11:39

August 28, 2013

Debate over Motion Picture Tax Incentives Intensifies

A new article by Peter Caranicas and Rachel Abrams in Variety entitled, “Runaway Production: The United States of Tax Incentives,” notes how “[Motion picture] Producers looking for a location weigh many factors — screenplay, crew base, availability of stages, travel and lodging — but these days, first and foremost, they consider the local incentives and tax breaks that can reduce a production’s budget.” In other words, when every state and local government dreams of being “the next Hollywood,” they are willing to shower the entertainment industry with some pretty nice inducements at taxpayers’ expense.



But these programs are growing more controversial and some state and local governments are reconsidering the wisdom of these efforts. The article cites my Mercatus Center colleague Eileen Norcross, who points out the most serious problem with these programs:



Other arguments against incentives hold that they don’t help the states that offer them. In March, the Massachusetts revenue commission issued a scathing report on the state’s tax credit program, which stated that two-thirds of the total $175 million awarded in 2011 went to out-of-state spending. “The critique is that while they appear to bring in short-term temporary activity to a state or community, a lot of those benefits flow to the production companies,” says Eileen Norcross, a senior research fellow at George Mason U. “The people who are hired locally tend to be (in) more low-wage service industry jobs. It provides a temporary economic blip on the radar, and then it’s sort of fleeting.”


Eileen is exactly right.  I have previously covered this issue here in an essay entitled, “State Film Industry Incentives: A Growing Cronyism Fiasco,” which was later expanded and included as a section in my forthcoming 73-page law review article with Brent Skorup, “A History of Cronyism and Capture in the Information Technology Sector.” In those articles, I noted that all the serious economic reviews of these programs find no evidence that these tax incentives help state or local economies. And there are many other problems with these tax inducements, including the fact that they open the door to more meddling in content decisions by government officials and to serious abuse by fly-by-night scam artists looking to take advantage of state-sponsored cronyism schemes.



As I noted in concluding my earlier blog post on this issue,



In sum, film tax credit cronyism puts taxpayers at risk without any corresponding benefits to them or the state.  Glamor-seeking and state pride seem to be the primary motivational factors driving state legislators to engage in such economically illogical behavior. It’s like “smokestack-chasing” for the Information Age, except in this case you don’t even have a factory left in town after your economic development efforts go bust. This cronyist activity benefits no one other than film studios. States should end their film incentive programs immediately.
Published on August 28, 2013 07:49

August 26, 2013

Bitcoin: A Primer for Policymakers

Last week, the Mercatus Center released “Bitcoin: A Primer for Policymakers” by yours truly and Andrea Castillo. In it we describe how the digital currency works and address many of the common misconceptions about it. We also analyze current laws and regulations that may already cover digital currencies and warn against preemptively placing regulatory restrictions on Bitcoin that could stifle the new technology before it has a chance to evolve. In addition, we give several recommendations about how to treat Bitcoin in the future.








As I say in the video that accompanies the paper, Bitcoin is still very experimental and it might yet fail for any number of reasons. But one of those reasons should not be that policymakers failed to understand it. Unfortunately, signs of misunderstanding abound, and that is why we wrote the primer.





For example, New York Superintendent of Financial Services Benjamin Lawsky recently made headlines after his office subpoenaed two dozen Bitcoin-related businesses and investors. He went on CNBC’s Closing Bell to discuss his action, and here is how he answered the host’s question, “Have you seen specific evidence, or do you sense that there is illegal activity going on using Bitcoin?”




We know it from the Liberty Reserve case, a major indictment that came down recently. Six billion dollars in transactions, clearly finding narco-terrorism. It’s something we’re deeply concerned about.




Huh? What does Liberty Reserve have to do with Bitcoin? In short, nothing. Liberty Reserve was a centralized currency almost assuredly designed to evade government control and facilitate illegal activity. Brian Krebs has had great coverage of that currency. For reasons we explain in the primer, Bitcoin makes a terrible currency for criminals relative to centralized currencies like Liberty Reserve and now WebMoney and Perfect Money. Yet journalists, members of the public, and policymakers very often lump them all together under the rubric of “virtual currencies,” and don’t appreciate the differences that have important ramifications for policy.



In short, before they consider making policy, we hope regulators will better understand Bitcoin and similar decentralized crypto-currencies. We hope our primer helps educate folks about the basics, and we know that we and many others in the Bitcoin community stand ready to answer any questions we can about this revolutionary new technology and its social and political implications.

Published on August 26, 2013 05:57

August 24, 2013

Book Review: Anupam Chander’s “Electronic Silk Road”

As I’ve noted before, I didn’t start my professional life in the early 1990s as a tech policy wonk. My real passion 20 years ago was free trade policy. Unfortunately for me, as my boss rudely informed me at the time, the world was already brimming with aspiring trade analysts and probably didn’t need another. This was the time of NAFTA and WTO negotiations and seemingly everybody was lining up to get into the world of trade policy during that period.



And so, while I was finishing a master’s degree with trade theory applications and patiently hoping for opportunities to open up, I decided to take what I thought was going to be a brief detour into the strange new world of the Internet and information technology policy. Of course, I never looked back. I was hooked on Net policy from Day 1.  But I never stopped caring about trade theory, and I have always remained passionate about the essential role that free trade plays in expanding commerce, improving human welfare, and facilitating more peaceful interactions among the diverse cultures and countries of this planet.



I only tell you this part of my own backstory so that you understand why I was so excited to receive a copy of Anupam Chander’s new book, The Electronic Silk Road: How the Web Binds the World Together in Commerce. Chander’s book weaves together trade theory and modern information technology policy issues. His over-arching goal is to sketch out and defend “a middle ground between isolation and unregulated trade, embracing free trade and also its regulation.” (p. 209)



In a writing style that is clear and direct, Chander explores the competing forces that facilitate and threaten what he refers to as “Trade 2.0.”  He identifies four distinctive legal challenges for “net-work,” which is his generic descriptor for “information services delivered remotely through electronic communications systems.” (p. 2):




“Legal roadblocks to the free flow of net-work;
The lack of adequate legal infrastructure, as compared to trade in traditional goods;
The threat to law itself posed by the footloose nature of net-work and the uncertainty of whose law should govern net-work transactions; and
The danger that local control of net-work might lead to either Balkanization – the disintegration of the World Wide Web into local arenas – or Stalinization – the repression of political dissidents, identified through their online activity by compliant net-work service providers.” (p. 143).


At the heart of the book is an old tension that has long haunted trade policy: How do you achieve the benefits of free trade through greater liberalization without completely undermining the sovereign authority of nation-states to continue enforcing their preferred socio-political legal and cultural norms? After all, as Chander notes, “States will be loathe to abandon their law in the face of the offerings mediated by the Internet.” (p. 34)  “If crossborder flows of information grossly undermine our privacy, security, or the standards of locally delivered services, they will not long be tolerated,” he notes. (p. 173)  These are just a few of the reasons that barriers to trade remain and why, as Chander explains, “the flat world of global business and the self-regulating world of cyberspace remain distant ideals.” (p. 173).



Striking the Balance

Chander wants to counter that impulse and expand the horizons of Trade 2.0, but he argues that, to some extent, nation-states will always need to be appeased along the way. Consequently, he argues that “we must dismantle the logistical and regulatory barriers to net-work trade while at the same time ensuring that public policy objectives cannot easily be evaded through simple jurisdictional sleight of hand or keystroke.” (p. 34) Again, this reflects his desire for both greater liberalization of markets as well as the preservation of a residual role for states in shaping online commerce and activities.



He says we can achieve this Goldilocks-like balance through the application of three key principles.



The first is harmonization of laws and policies, preferably through multinational accords. “Efforts to harmonize laws across nations and standards among professional associations will prove essential to preserve a global cyberspace in the face of national regulation,” Chander insists. (p. 187)



The second principle is “glocalization,” or “the creation or distribution of products or services intended for a global market but customized to conform to local laws — within the bounds of international law.” (p. 169)



The final key principle is more self-regulatory in character. It is the operational norm of “do no evil” as it pertains to requests from repressive states to have Internet intermediaries crack down on free speech or privacy.  “[W]e must seek to nurture a corporate consciousness among information providers of their role in liberation or oppression,” Chander argues. (p. 205)



In a sense, what Chander is recommending here is largely the way global information markets already work. Thus, instead of being aspirational, Chander’s book is actually just more descriptive of the reality we see on the ground today.



For example, the harmonization efforts he recommends to facilitate Trade 2.0 have been underway in various fora and trade accords for several years now. Chander does a nice job describing many of those efforts in the book.



Likewise, his “glocalization” recommendation is to some extent already today’s norm. After a series of high-profile legal skirmishes over the past dozen years, Internet giants such as Yahoo, Google, Facebook, Cisco, Microsoft and others have all eventually folded under legal and regulatory pressure from various governments across the globe and sought to accommodate parochial regulatory requests, even as they expand their efforts internationally. Again, Chander discusses several of the more well-known case studies in the text.



Finally, however, there have been moments when — especially as it pertains to certain free speech matters — some of these corporate players have stood up for a “do no evil” approach when repressive governments come calling.  In this regard, Chander only briefly mentions the work of the Global Network Initiative, which is somewhat surprising since it has been focused on this mission since its inception in 2008. Nonetheless, such “do no evil” moments have happened (for example, Google bowing out of China), although the track record of success here has been spotty to say the least.



Technological Neutrality

Chander also wants to make sure that online markets are not somehow advantaged relative to traditional markets and technologies. “Trade law should not allow countries to insist on a regulatory nirvana in cyberspace unmatched in real space,” he insists. (p. 155)



Fair enough, but how we achieve neutrality and level the proverbial playing field is, of course, important. The problem is that most nation-states seek to harmonize in the direction of greater control. The rise of electronic networks and online commerce presents us with the opportunity to reconsider the wisdom of long-standing statutes and regulations that are either no longer needed or perhaps never should have been on the books in the first place.



This is why I have repeatedly proposed here and elsewhere that, when it comes to domestic information policy spats that involve old and new players and technologies, we should consider borrowing a page from trade law by adopting the equivalent of a “Most Favored Nation” (MFN) clause for communications and media policy. In a nutshell, this policy would state that: “Any operator seeking to offer a new service or entering a new line of business, should be regulated no more stringently than its least regulated competitor.” Such a MFN for communications and media policy would ensure that regulatory parity exists within this arena as the lines between existing technologies and industry sectors continue to blur.



Although it will often be difficult to achieve in practice, the aspirational goal of placing all players and technologies on the same liberalized level playing field should be at the heart of information technology policy to ensure non-discriminatory regulatory treatment of competing providers and technologies.



But let’s be clear about what this means: To level the proverbial playing field properly, I believe we should be “deregulating down” instead of regulating up to place everyone on equal footing. This would achieve technological neutrality through greater technological freedom and marketplace liberalization.



Of course, others (possibly including Chander) would likely claim that this could lead to a “race to the bottom” in certain instances by disallowing state action and the application of local laws and norms. But one person’s “race to the bottom” is another person’s race to the top!  It all depends on the perspective you adopt toward liberalization efforts. For me, the more liberalization the better. Deregulation has been shown in one market after another to improve consumer welfare by expanding choice, increasing innovation, and generally pushing prices lower.



Policies of Freedom

What other specific policies can help us strike the right balance going forward?



I was extremely pleased to see Chander discuss the Clinton Administration’s July 1997 Framework for Global Electronic Commerce. It was instrumental in setting the right tone for e-commerce policy before the turn of the century. The Framework stressed the importance of taking a general “hands off” approach to these markets and treating the Internet as a global free-trade zone. It set forth five key principles for Net governance, including: “the private sector should lead;” “governments should avoid undue restrictions on electronic commerce;” “where governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce,” and other light-touch policy recommendations.



As I noted in the title of my 2012 Forbes essay on the Framework, “15 Years On, President Clinton’s 5 Principles for Internet Policy Remain the Perfect Paradigm.” Chander generally embraces these principles, too, even though some of his “glocalization” recommendations cut against the grain of this vision.



Importantly, Chander also highlights four specific U.S. policies that have fostered the growth of electronic trade.




“The First Amendment guarantee of freedom of speech;
The Communications Decency Act’s Section 230, granting immunity to web hosts for user-generated information; [see my old Forbes essay, “The Greatest of All Internet Laws Turns 15” for an explanation of why Sec. 230 has been so important.]
Title II of the Digital Millennium Copyright Act (DMCA), granting immunity to web hosts for copyright infringement; and
Weak consumer privacy regulations [which have] created breathing room for the rise of Web 2.0.”


“This permissive legal framework offers the United States as a sort of export-processing zone in which Internet entrepreneurs can experiment and establish services.” (p. 57)  Chander gets it exactly right here. Legally speaking, this is the secret sauce that continues to power the Net.



But Chander doesn’t fully confront the inherent contradiction between his earlier call for “technological neutrality” between cyberspace and the traditional economy and his praise for all these legal policies, which generally treated the Internet in an “exceptionalist” fashion. I would argue that some of that asymmetry was essential, however, not only to allow the Net to get out of its cradle and grow, but also because it taught us how light-touch regulation was generally superior to traditional heavy-handed regulatory paradigms and mechanisms. Now we just need to keep harmonizing in the direction of the greater freedom that the Internet and online markets enjoy.



Multi-stakeholderism?

One surprising thing about Chander’s book is the general absence of the term “multi-stakeholderism.”  It is getting hard to pick up any Internet policy tract these days and not find reference to multi-stakeholder processes of one sort or another. In particular, I expected to see more linkages to broader Net freedom fights involving the U.N. and the WCIT process.



In this sense, it would have been interesting to see Chander bridge the gap between his work here on free trade in information services and the proposals of various Internet governance scholars and advocacy groups. In particular, I would have liked to have heard what Chander thinks about the conflicting Internet policy paradigms set forth in important recent books from Rebecca MacKinnon (“Consent of the Networked”) and Ian Brown and Christopher Marsden (“Regulating Code”) on one hand, versus those of Milton Mueller (“Networks and States”) and David Post (“Jefferson’s Moose”) on the other. I think Chander would generally be more comfortable with the policy paradigms and proposals sketched out by MacKinnon and Brown & Marsden (whereas I am definitely more in league with Mueller and Post), but I’m not entirely sure where he stands.



Regardless, I would have liked to have seen some discussion of these issues in Chander’s otherwise excellent book.



Semantic Choices

I suppose my only other complaint with the book comes down to some semantic issues, beginning with its title.  In some ways, calling it The Electronic Silk Road makes perfect sense since Chander wants us to think of the parallels to the Silk Road of ancient times, of course. Alas, these days it is hard to utter the term “Silk Road” and not think of people buying and selling illegal drugs or other shady stuff in the online black market of the same name. So that will be confusing to some.



I’m also not a big fan of some of the other catch-phrases Chander uses throughout the book. Using the term “net-work,” for example, is a bit too cute for my taste, and there are times it gets confusing. And the term “glocalization” is the sort of thing that you’d expect to see on the Fake Jeff Jarvis parody account on Twitter (actually, I think he has used it before), and once critic Evgeny Morozov catches wind of it he will, no doubt, eventually use it to linguistically lynch Chander.



Finally, should trade in information and e-commerce be “Trade 2.0” or is it really “Trade 3.0”? To me, Trade 1.0 = agricultural & industrial trade; Trade 2.0 = trade in services; and Trade 3.0 = trade in information and electronic commerce. Doesn’t that make more sense? In any event, the whole 1.0, 2.0, 3.0 thing has gotten a bit clichéd in its own right.



Conclusion

I enjoyed Anupam Chander’s Electronic Silk Road and can recommend it to anyone who is looking to connect the dots between international trade theory and Internet policy / ecommerce developments. The reader will find a little bit of everything in the book, such as classical trade theory from Smith and Ricardo alongside a discussion of Coasean theories of the firm and Benkler-esque theories of commons-based peer production.



Best of all, it is an extremely accessible text, such that either a trade policy guru or a Net policy wonk could pick it up and learn a lot about the opposing issues they may not have heard of before. I could also imagine several of the chapters becoming assigned reading in both trade policy courses and cyberlaw programs alike. It’s a supremely balanced treatment of the issues.

Published on August 24, 2013 14:53

New Law Review Article on “A Framework for Benefit-Cost Analysis in Digital Privacy Debates”

I’m pleased to announce the release of my latest law review article, “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.” It appears in the new edition of the George Mason University Law Review. (Vol. 20, No. 4, Summer 2013)



This is the second of two complementary law review articles I am releasing this year dealing with privacy policy. The first, “The Pursuit of Privacy in a World Where Information Control is Failing,” was published in Vol. 36 of the Harvard Journal of Law & Public Policy this spring. (FYI: Both articles focus on privacy claims made against private actors — namely, efforts to limit private data collection — and not on privacy rights against governments.)



My new article on benefit-cost analysis in privacy debates makes a seemingly contradictory argument: benefit-cost analysis (“BCA”) is extremely challenging in online child safety and digital privacy debates, yet it remains essential that analysts and policymakers attempt to conduct such reviews. While we will never be able to perfectly determine either the benefits or costs of online safety or privacy controls, the very act of conducting a regulatory impact analysis (“RIA”) will help us to better understand the trade-offs associated with various regulatory proposals.



However, precisely because those benefits and costs remain so remarkably subjective and contentious, I argue that we should look to employ less-restrictive solutions — education and awareness efforts, empowerment tools, alternative enforcement mechanisms, etc. — before resorting to potentially costly and cumbersome legal and regulatory regimes that could disrupt the digital economy and the efficient provision of services that consumers desire. This model has worked fairly effectively in the online safety context and can be applied to digital privacy concerns as well.



The article is organized as follows. Part I examines the use of BCA by federal agencies to assess the utility of government regulations. Part II considers how BCA can be applied to online privacy regulation and the challenges federal officials face when determining the potential benefits of regulation. Part III then elaborates on the cost considerations and other trade-offs that regulators face when evaluating the impact of privacy-related regulations. Part IV discusses alternative measures that can be taken by government regulators when attempting to address online safety and privacy concerns. The article concludes that policymakers must consider BCA when proposing new rules but must also recognize the utility of alternative remedies, such as education and awareness campaigns, to address consumer concerns about online safety and privacy.



I’ve embedded the full article down below in a Scribd reader, but you can also download it from my SSRN page and my Mercatus author page.



A Framework for Benefit-Cost Analysis in Digital Privacy Debates by Adam Thierer

Published on August 24, 2013 14:34

August 20, 2013

Timothy B. Lee on the future of tech journalism


Timothy B. Lee, founder of The Washington Post’s blog The Switch, discusses his approach to reporting at the intersection of technology and policy. He covers how to make tech concepts more accessible; the difference between blogs and the news; the importance of investigative journalism in the tech space; whether paywalls are here to stay; Jeff Bezos’ recent purchase of The Washington Post; and the future of print news.






Related Links


Here are eight ways investing in Bitcoins could go horribly wrong, Lee


Blind advocates: Hollywood lobbying threatens deal for accessible books, Lee

New Job, Lee
Published on August 20, 2013 06:42
