Adam Thierer's Blog, page 85

September 9, 2012

The Precautionary Principle & Information Technology: Airlines & Gadgets Edition

Psychologists Daniel Simons and Christopher Chabris had an interesting editorial in The Wall Street Journal this weekend asking, “Do Our Gadgets Really Threaten Planes?” They conducted an online survey of 492 American adults who have flown in the past year and found that “40% said they did not turn their phones off completely during takeoff and landing on their most recent flight; more than 7% left their phones on, with the Wi-Fi and cellular communications functions active. And 2% pulled a full Baldwin, actively using their phones when they weren’t supposed to.”



Despite the prevalence of such law-breaking activity, planes aren’t falling from the sky, and yet the Federal Aviation Administration continues to enforce the rule prohibiting the use of digital gadgets at certain times during flight. “Why has the regulation remained in force for so long despite the lack of solid evidence to support it?” Simons and Chabris ask. They note:



Human minds are notoriously overzealous “cause detectors.” When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did. There is no reason to doubt the anecdotes told by airline personnel about glitches that have occurred on flights when they also have discovered someone illicitly using a device. But when thinking about these anecdotes, we don’t consider that glitches also occur in the absence of illicit gadget use. More important, we don’t consider how often gadgets have been in use when flights have been completed without a hitch. Our survey strongly suggests that there are multiple gadget violators on almost every flight.
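
Their point about base rates is easy to check with a quick back-of-the-envelope calculation. The sketch below is my own rough illustration, not part of the Simons and Chabris study: it assumes a 150-seat flight, treats passengers as independent, and plugs in the survey figures of 40% for phones not fully powered down and 7% for phones left on with their radios active.

```python
# Rough back-of-the-envelope check of the "violators on almost every flight" claim.
# Assumptions (mine, not from the WSJ piece): 150 passengers per flight,
# independence between passengers, and the survey percentages used directly
# as per-passenger probabilities.

n_passengers = 150
p_not_off = 0.40      # phone not turned off completely (survey figure)
p_radios_on = 0.07    # phone left on with Wi-Fi/cellular active (survey figure)

# Probability that at least one passenger on a flight falls into each category.
p_any_not_off = 1 - (1 - p_not_off) ** n_passengers
p_any_radios_on = 1 - (1 - p_radios_on) ** n_passengers

# Expected number of passengers in each category on a single flight.
expected_not_off = n_passengers * p_not_off
expected_radios_on = n_passengers * p_radios_on

print(f"P(at least one phone not fully off):  {p_any_not_off:.6f}")    # ~1.000000
print(f"P(at least one phone with radios on): {p_any_radios_on:.6f}")  # ~0.999981
print(f"Expected phones not fully off per flight: {expected_not_off:.1f}")    # 60.0
print(f"Expected phones with radios on per flight: {expected_radios_on:.1f}") # 10.5
```

If those survey numbers are even roughly right, essentially every commercial flight has been carrying dozens of phones that were never powered down, which is exactly the “experiment” the authors describe.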


That’s all certainly true, but what actually motivated this ban — and has ensured its continuation despite a lack of evidence that it is needed to diminish technological risk — is the precautionary principle. As the authors correctly note:



Fear is a powerful motivator, and precaution is a natural response. Regulators are loath to make policies less restrictive, out of a justifiable concern for passenger safety. It is easy to visualize the horrific consequences should a phone cause a plane to crash, so the FAA imposes this inconvenience as a precaution.

Once a restriction is in place, though, removing it becomes a challenge because every day without a gadget-induced accident cements our belief that the status quo is right and justified. Unfortunately, this logic is little better than that of Homer Simpson, who organized an elaborate Bear Patrol in the city of Springfield and exulted in the absence of bear sightings that ensued.


This is a prime example of the precautionary principle in action. In my recent 80-page paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” I noted how we might be witnessing the rise of a “precautionary principle” for some information technology policy matters. The adoption of an information precautionary principle would restrict progress in this arena until technology creators or proponents can demonstrate that new tools are perfectly safe. That’s essentially what the FAA has done with its ban on digital gadgets during certain periods of air travel.



Of course, it is easier to sympathize with the precautionary perspective in this case than in others because the risks of digital gadgetry and wireless communications during flight really were unknown early on, and few wanted to conduct a real-time experiment when the potential downsides were so catastrophic. And yet, as Simons and Chabris observe, we’ve conducted that experiment anyway! Air travelers decided to ignore the ban and continue to use digital gadgets. And, luckily, the sky didn’t fall; or, in this case, planes didn’t fall out of it, at least.



What’s amazing about this case, however, is that the FAA has continued to enforce its precautionary-minded regulation long after it’s been shown to be unnecessary and even though it has been so widely ignored anyway. I suppose that, like Homer Simpson, some of these officials believe that their precautionary steps have led to greater safety, or that those steps don’t have any costs or trade-offs and, therefore, that there’s nothing wrong with their “better to be safe than sorry” thinking. Of course, that’s the fatal flaw in all precautionary principle thinking, as I note in my paper. There most certainly are many costs and trade-offs associated with banning technology or its use. They may not be as profound in this case as in others, but that doesn’t mean that they do not exist.



Regardless, now that the FAA has finally decided to take a second look at this policy, perhaps it will be willing to admit that there never really was much sense to this particular application of the precautionary principle and that the time has come to end this ban and let individual airlines experiment with different approaches.




Published on September 09, 2012 11:23

September 6, 2012

Let’s Not Exaggerate Privacy Risks: Re-Identification Isn’t So Easy After All, says New Barth-Jones Paper

The privacy debate has been increasingly shaped by an apparent consensus that de-identifying sets of personally identifying information doesn’t work. In particular, this has led the FTC to abandon the PII/non-PII distinction on the assumption that re-identification is too easy. But a new paper shatters this supposed consensus by rebutting the methodology of Latanya Sweeney’s seminal 1997 study of re-identification risks, which, in turn, shaped HIPAA’s rules for de-identification of health data and the larger privacy debate ever since.



This new critical paper, “The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now,” was published by Daniel Barth-Jones, an epidemiologist and statistician at Columbia University. After carefully re-examining the methodology of Sweeney’s 1997 study, he concludes that re-identification attempts will face “far-reaching systemic challenges” that are inherent in the statistical methods used to re-identify. In short, re-identification turns out to be harder than it seemed—so our identities can more easily be obscured in large data sets. This more nuanced story must be understood by privacy law scholars and public policymakers if they want to realistically assess the current privacy risks posed by de-identified data—not just for health data, but for all data.
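
For readers unfamiliar with how such risk estimates work, here is a toy sketch of the population-uniqueness idea that underlies both Sweeney’s demonstration and Barth-Jones’s critique: a record is easy to re-identify only if its combination of quasi-identifiers (such as ZIP code, birth date, and sex) is unique, or nearly so, among the people it could plausibly belong to. This is my own illustrative example with invented values, not the data or methodology of either paper.

```python
from collections import Counter

# Toy "de-identified" dataset: each record keeps only quasi-identifiers.
# (Invented values for illustration; not real health data.)
records = [
    {"zip": "02138", "birthdate": "1945-07-31", "sex": "M"},
    {"zip": "02138", "birthdate": "1962-03-14", "sex": "F"},
    {"zip": "02138", "birthdate": "1962-03-14", "sex": "F"},
    {"zip": "02139", "birthdate": "1980-11-02", "sex": "M"},
    {"zip": "02139", "birthdate": "1980-11-02", "sex": "M"},
    {"zip": "02139", "birthdate": "1980-11-02", "sex": "M"},
]

def equivalence_class_sizes(rows, quasi_identifiers):
    """Count how many records share each combination of quasi-identifier values."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in rows]
    return Counter(keys)

sizes = equivalence_class_sizes(records, ["zip", "birthdate", "sex"])

# k-anonymity: the size of the smallest equivalence class in the release.
k = min(sizes.values())
unique_records = sum(1 for count in sizes.values() if count == 1)

print(f"k-anonymity of this release: k = {k}")
print(f"Records with a unique quasi-identifier combination: {unique_records}")

# Even a unique combination is only re-identifiable in practice if the attacker
# also has a complete, accurate population register (e.g., voter rolls) in which
# that combination is unique; the gap between uniqueness in a sample and
# uniqueness in the population is one of the issues Barth-Jones examines.
```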



The importance of Barth-Jones’s paper is underscored by the example of Vioxx, which stayed on the market years longer than it should have because of HIPAA’s privacy rules, resulting in an estimated 88,000 to 139,000 unnecessary heart attacks and 27,000 to 55,000 avoidable deaths—as University of Arizona Law Professor Jane Yakowitz Bambauer explained in a recent Huffington Post piece.



Ultimately, overstating the risk of re-identification causes policymakers to strike the wrong balance in the trade-off of privacy with other competing values.  As Barth-Jones and Yakowitz have suggested, policymakers should instead focus on setting standards for proper de-identification of data that are grounded in a rigorous statistical analysis of re-identification risks.  A safe harbor for proper de-identification, combined with legal limitations on re-identification, could protect consumers against real privacy harms while still allowing the free flow of data that drives research and innovation throughout the economy.



Unfortunately, the Barth-Jones paper has not received the attention it deserves. So I encourage you to consider writing about this, or just take a moment to share this with your friends on Twitter or Facebook.




Published on September 06, 2012 08:22

The 12 Best Papers on Antitrust & the Digital Economy

In my last post, I discussed an outstanding new paper from Ronald Cass on “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk.” As I noted, it’s one of the best things I’ve ever read about the relationship between antitrust regulation and the modern information economy. That got me thinking about what other papers on this topic I might recommend to others. So, for what it’s worth, here are the 12 papers that have most influenced my own thinking on the issue. (If you have other suggestions for what belongs on the list, let me know. No reason to keep it limited to just 12.)




J. Gregory Sidak & David J. Teece, “Dynamic Competition in Antitrust Law,” 5 Journal of Competition Law & Economics (2009).
Geoffrey A. Manne & Joshua D. Wright, “Innovation and the Limits of Antitrust,” 6 Journal of Competition Law & Economics (2010): 153.
Joshua D. Wright, “Antitrust, Multi-Dimensional Competition, and Innovation: Do We Have an Antitrust-Relevant Theory of Competition Now?” (August 2009).
Daniel F. Spulber, “Unlocking Technology: Antitrust and Innovation,” 4(4) Journal of Competition Law & Economics (2008): 915.
Ronald Cass, “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” 9(2) Journal of Law, Economics and Policy (forthcoming, Spring 2012).
Richard Posner, “Antitrust in the New Economy,” 68 Antitrust Law Journal (2001).
Stan J. Liebowitz & Stephen E. Margolis, “Path Dependence, Lock-in, and History,” 11(1) Journal of Law, Economics and Organization (April 1995): 205-26.
Robert Crandall & Charles Jackson, “Antitrust in High-Tech Industries,” Technology Policy Institute (December 2010).
Bruce Owen, “Antitrust and Vertical Integration in ‘New Economy’ Industries,” Technology Policy Institute (November 2010).
Douglas H. Ginsburg & Joshua D. Wright, “Dynamic Analysis and the Limits of Antitrust Institutions,” 78(1) Antitrust Law Journal (2012): 1-21.
Thomas Hazlett, David Teece & Leonard Waverman, “Walled Garden Rivalry: The Creation of Mobile Network Ecosystems,” George Mason University Law and Economics Research Paper No. 11-50 (November 21, 2011).
David S. Evans, “The Antitrust Economics of Two-Sided Markets.”



Published on September 06, 2012 07:50

The Best Paper on Antitrust that You Will Read This Year

Ronald Cass, Dean Emeritus of Boston University School of Law, has penned the best paper on antitrust regulation that you will read this year, especially if you’re interested in the relationship between antitrust and  information technology sectors.  His paper is entitled, “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” and it makes two straightforward points:




Antitrust enforcement has characteristics and risks similar to other forms of regulation.
Antitrust authorities need to exercise special care in making enforcement decisions respecting conduct of individual dominant firms in high-technology industries.


Here are some highlights from the paper that build on those two points.



Antitrust Is Economic Regulation & Carries Many of the Same Risks



As I noted in my 2009 review of Gary Reback’s antitrust screed “Free the Market,” there are few things that frustrate me more than the myth that antitrust is somehow not a form of economic regulation.  I hear this tired old argument trotted out time and time again, even by many conservatives. It’s utter bunk. Cass makes that abundantly clear in his paper.  “Application of antitrust laws by government officials… has the same risks and problems associated with other forms of regulation, including other “fair play” regulations,” notes Cass. “It requires considerable information on how particular firms and particular markets work, on the effect of particular business practices, and on the costs and benefits of intervening to stop a particular practice as opposed to allowing market forces to limit its effects,” he says (p. 6-7).



Cass isn’t the only one who has made this point. As James Miller notes in this Federalist Society video (starting around the 18-minute mark), antitrust is not just a form of regulation; it often takes the form of an industrial policy scheme, complete with all its failings. Rick Rule agrees, noting how antitrust is a specialized form of regulation. Cass also appeared at that event and, starting around the 36-minute mark, makes his case for antitrust as just another form of regulation. If you want to watch the entire panel discussion, I’ve embedded the video down below.



Information Technology Markets are Highly Dynamic; Antitrust Can Hurt High-Tech Innovation



The more important takeaway from Cass’s excellent paper is that, precisely because antitrust regulation is haunted by many of the same problems as traditional economic regulatory controls, it is particularly ill-suited for fast-paced, rapidly-evolving information technology markets. “The problem arises in part because, while the concerns over network effects are dynamic, the principal tools for antitrust analysis – especially respecting definition of the relevant market – are static,” Cass observes. “These tools almost inevitably orient enforcers’ decisions toward excessive concern with one part of what, rightly understood, is a much larger competitive picture, even though the composition of the larger picture is difficult to predict.” (p. 3) “Rather than demonstrating special caution in venturing into this set of cases, however, antitrust enforcers seem anxious to engage the leading high-technology firms while markets are evolving at a rapid pace,” he notes. (p. 2) Such intervention is particularly unwise, Cass argues, because:



These are markets where it is particularly difficult to maintain dominance, where sustained leadership over some time frame most likely indicates strong efficiencies (strong consumer value), and where innovations that are not yet recognized as significant can offer the strongest constraints on dominant firm behavior and the most important challenges to crafting a meaningful remedy that does more than disadvantage an individual contestant in a changing world. (p. 35)


The real danger of excessive antitrust is how it can force innovators to take their eye off the ball and spend more time trying to please policymakers than the general public. Cass notes:



If successful firms trying to stay on top in industries that can change rapidly and unpredictably often become targets for antitrust scrutiny, rational calculations of innovation costs (investments that help firms succeed) will necessarily include the (discounted) cost of contesting antitrust challenges as well as the costs of directly pursuing innovation. Antitrust inquiries can exact extraordinarily high costs from target firms, both in direct expenditures and in distraction from core business operations. That is true even for inquiries that do not result in suits, as enterprises facing the possibility of a long, expensive lawsuit (and, if the suit is lost, a potentially expensive and disruptive remedy) obviously will respond by trying both to persuade enforcement authorities that their conduct has been lawful and to avoid conduct that will increase the prospect of an action being filed. (p. 10)


Cass identifies IBM’s 13-year-long antitrust ordeal as “the paradigmatic case for ill-conceived antitrust enforcement” where all these problems were on display. During the 13-year case, the government collected more than 750 million documents and, at one point, required IBM to retain 200 attorneys. (Read CNet staff writer Rachel Konrad’s summary of the fiasco from back in 2000.) The DOJ finally abandoned the case in 1982 after it became clear how markets had evolved around whatever earlier “dominance” IBM had in mainframe markets. Namely, the desktop PC and software revolution had passed IBM (and clueless antitrust regulators) right by. “In the end,” notes Cass, “the case stands for the proposition that government officials, even with the benefit of extensive investigation and expertise, are unlikely to appreciate the most important sources of competition to enterprises that dominate a particular market and are especially prone to ill-advised interventions based on theoretical objections to market structure.” (p. 16) Worse yet, he notes, was the impact on IBM’s ability to innovate:



More significant than the draw on IBM’s funds were two other byproducts of the antitrust litigation: the distraction of its executives from planning and executing functions necessary to IBM’s long-term business interests, and the active discouragement of decisions that would have benefited the business but might have triggered further antitrust action. (p. 15)


As Peter Pitsch noted in his 1996 PFF book The Innovation Age, “In 1981 the Department of Justice was still pressing their case against IBM while market forces were about to lay waste to the company.” Pitsch noted that IBM’s manufacturing capacity was slashed in the years that followed and that, astonishingly, “in the space of five years after 1987, IBM lost two thirds of its market value — more than $70 billion.” IBM has recovered and is a very different company today, of course. Yet it seems clear that the DOJ’s antitrust industrial policy scheming decimated the firm’s chances of keeping pace with other digital technology leaders during the 1980s and even the 1990s.



Cass notes that the same thing played out for Microsoft following its antitrust ordeal, as the firm was forced to become extra cautious about how it innovated with regulators always staring over its shoulder. Yet, “it is plain that the real competitive threat to the company came from innovations that lay outside the market as government officials saw it,” Cass notes, since few were talking about search and social networking in the late 1990s as a serious threat to Microsoft’s hegemony.



Lessons: Appreciate Dynamism and Be Careful about Market Definition



Cass leaves us with several lessons from the history he recounts. I’ll just cite a few passages here, but generally his lessons can be boiled down to: (1) before intervening, appreciate just how dynamic these information technology markets can be; and, relatedly, (2) be very careful about how you define markets for purposes of antitrust analysis. He notes, for example:




With this in mind, the overarching caution to antitrust enforcers that emerges from the cases reviewed above is against presuming that the obvious, common-sense boundaries around a market… appropriately set the field of vision for antitrust enforcement (much less the artificially circumscribed market definitions that enforcers will urge when a case has been initiated). The market boundaries that so often are taken for granted frequently fail to capture the most important sources of competition. That is true even in markets as “old-line” and seemingly simple as the auto market, but it is even more likely to be true in high-technology industries where, almost by definition, new innovations will revise established assumptions about how things are done. The market definition problem reflects more than the fact that officials so frequently cannot see changes coming that will dramatically alter competitive conditions in an industry. Almost no one, even those most intimately engaged in the industry itself, is apt to make good predictions about which technologies will succeed or what the ultimate scope of a new technology will be. (p. 28-9)
The more trenchant flaw in antitrust enforcement is not officials’ failure to identify specific market changes or specific companies that will dramatically rise or fall in value. Rather, the larger problem is that it is exceedingly difficult for government officials to discern the critical factors that explain what actually makes a particular firm dominant, the factors that affect the durability of dominance, or the kinds of change in the market (either on the demand side or the supply side) that could dramatically erode that dominance. (p. 28)
Despite the networks they have established, each of these businesses also is notable for the relative ease with which consumers can switch from one provider (or one technology) to another – allowing consumers to substitute one product or service for another or, in many cases, to add additional products or services from multiple providers at minimal or zero cost. (p. 31)


These lessons and themes have motivated all my thinking about how information technology policy should be formulated and the (very limited) role that antitrust regulation should play. Just about every other installment of my weekly Forbes column has dealt with such issues, including most notably these essays:




“Tech Titans & Schumpeter’s Vision” (8/22/11)
“No One Owns A Techno Crystal Ball” (10/2/11)
“The Rule Of Three: The Nature of Competition In The Digital Economy” (6/29/12)
“Bye Bye BlackBerry. How Long Will Apple Last?” (4/1/12)
“Regulatory, Antitrust and Disruptive Risks Threaten Apple’s Empire” (4/8/12)
“Searching In Vain For An Antitrust Case Against Google” (6/30/12)
“Sunsetting Technology Regulation: Applying Moore’s Law to Washington” (3/25/12)


Anyway, please make sure to read the entire Cass paper. It’s a keeper. I know I will be citing it in virtually everything I write on the topic in coming months and years. In a follow-up post, I will offer a list of other important papers on antitrust and high-tech markets that you want to have on your reading list.



 






Published on September 06, 2012 07:41

The New WCITLeaks

Today, Jerry and I are pleased to announce a major update to WCITLeaks.org, our project to bring transparency to the ITU’s World Conference on International Telecommunications (WCIT, pronounced wicket).



If you haven’t been following along, WCIT is an upcoming treaty conference to update the International Telecommunication Regulations (ITRs), which currently govern some parts of the international telephone system, as well as other antiquated communication methods, like telegraphs. There has been a push from some ITU member states to bring some aspects of Internet policy into the ITRs for the first time.



We started WCITLeaks.org to provide a public hosting platform for people with access to secret ITU documents. We think that if ITU member states want to discuss the future of the Internet, they need to do so on an open and transparent basis, not behind closed doors.



Today, we’re taking our critique one step further. Input into the WCIT process has been dominated by member states and private industry. We believe it is important that civil society have its say as well. That is why we are launching a new section of the site devoted to policy analysis and advocacy resources. We want the public to have the very best information from a broad spectrum of civil society, not just whatever information most serves the interests of the ITU, member states, and trade associations.





At the same time, we’re not backing off from our original position. We think the ITU’s policy of keeping WCIT-related documents secret is becoming increasingly untenable. We received an email from the ITU’s press office yesterday announcing a global press briefing. Here is what it said:



As the conference approaches, there is quite a lot of misinformation being circulated concerning the agenda and process of the conference. Join this global discussion to find out what’s REALLY going to be discussed, and how the process of proposals and debates operates to ensure a global consensus among all countries.


Misinformation, they claim—about documents the ITU keeps secret. If the ITU and its client states have nothing to hide, why are they keeping information from the public? The best way to fight misinformation is with transparency. We call on the ITU and its member states to make all documents associated with global telecommunications available to the public.



We could also use your help. Please help us spread the word about WCITLeaks to anyone who may be interested. In addition, we ask our users around the world to apply pressure to their governments to make their documents publicly available. Finally, please make good use of our new resources section; it is vital for the future of the Internet that the global citizenry be well-informed about potential threats to the free flow of information.



This post originally appeared on elidourado.com.




Published on September 06, 2012 07:33

September 4, 2012

Adam Thierer on nationalizing Facebook


Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalizing the network is neither the only nor the most effective means of solving the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.



Listen to the Podcast



Download MP3



Related Links


“The Perils of Classifying Social Media Platforms as Public Utilities”, by Thierer
“Let’s Nationalize Facebook”, Slate
“10 Reasons Why Nationalizing Facebook Would Be Ridiculous”, by Thierer
“Stupid Idea Of The Day: Let’s Nationalize Facebook!”, Forbes
“Nationalize Facebook? Really?”, Reason



Published on September 04, 2012 13:24

August 30, 2012

The ACLU vs. Itself on User Empowerment for Online Safety & Privacy

I have always found it strange that the ACLU speaks with two voices when it comes to user empowerment as a response to government regulation of the Internet. That is, when responding to government efforts to regulate the Internet for online safety or speech purposes, the ACLU stresses personal responsibility and user empowerment as the first-order response. But as soon as the conversation switches to online advertising and data collection, the ACLU suggests that people are basically sheep who can’t possibly look out for themselves and, therefore, increased Internet regulation is essential. They’re not the only ones adopting this paradoxical position. In previous essays I’ve highlighted how both EFF and CDT do the same thing. But let me focus here on ACLU.



Writing today on the ACLU “Free Future” blog, ACLU senior policy analyst Jay Stanley cites a new paper that he says proves “the absurdity of the position that individuals who desire privacy must attempt to win a technological arms race with the multi-billion dollar internet-advertising industry.” The new study Stanley cites says that “advertisers are making it impossible to avoid online tracking” and that it isn’t paternalistic for government to intervene and regulate if the goal is to enhance user privacy choices. Stanley wholeheartedly agrees. In this and other posts, he and other ACLU analysts have endorsed greater government action to address this perceived threat on the grounds that, in essence, user empowerment cannot work when it comes to online privacy.



Again, this represents a very different position from the one that ACLU has staked out and brilliantly defended over the past 15 years when it comes to user empowerment as the proper and practical response to government regulation of objectionable online speech and pornography. For those not familiar, beginning in the mid-1990s, lawmakers started pursuing a number of new forms of Internet regulation — direct censorship and mandatory age verification were the primary methods of control — aimed at curbing objectionable online speech. In case after case, the ACLU rose up to rightly defend our online liberties against such government encroachment. (I was proud to have worked closely with many former ACLU officials in these battles.) Most notably, the ACLU pushed back against the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA) and they won landmark decisions for us in the process.



In those and other cases, the ACLU playbook wasn’t solely focused on a pure First Amendment defense. In other words, they didn’t just say, “Well, First Amendment values are at stake here, and so all you parents, prudes, and policymakers should just get over your obsession with eradicating online porn.” No, what really won the day for us in these cases was the user empowerment angle. The ACLU rightly noted (and proved in court) that many “less-restrictive means” — filters, monitoring tools, ratings, labels, user education, media literacy, etc. — were available to the public and that those tools and strategies provided compelling alternatives to government regulation. Thus, paternalistic government regulation should yield to those alternatives, and the public (namely, parents) should be expected to take responsibility and use those less-restrictive means to protect themselves and their kids. That is the proper approach for a society that cherishes free speech, personal responsibility, and a citizenry with diverse tastes and values.



Not only did the ACLU get courts to agree with this, but the logic of user empowerment as a trump to speech controls became so compelling to justices that in some cases they actually went beyond what free speech advocates had asked or expected, even in non-Internet related decisions. For example, in United States v. Playboy Entertainment Group (2000), the Court struck down a law that required cable companies to “fully scramble” video signals transmitted over their networks if those signals included any sexually explicit content. Echoing its earlier holding in Reno v. ACLU, the Court found that less restrictive means were available to parents looking to block those potentially objectionable signals in the home. Specifically, the Court argued that:



[T]argeted blocking [by parents] enables the government to support parental authority without affecting the First Amendment interests of speakers and willing listeners—listeners for whom, if the speech is unpopular or indecent, the privacy of their own homes may be the optimal place of receipt. Simply put, targeted blocking is less restrictive than banning, and the Government cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests.


More importantly, the Court held that:



It is no response that voluntary blocking requires a consumer to take action, or may be inconvenient, or may not go perfectly every time. A court should not assume a plausible, less restrictive alternative would be ineffective; and a court should not presume parents, given full information, will fail to act.


Importantly, the Court endorsed that same logic for video games in the landmark 2011 decision in Brown v. EMA, which struck down a California law that prohibited the sale or rental of “violent video games” to minors.



As I noted in my old book on Parental Controls & Online Child Protection, this is an extraordinarily high bar that the Supreme Court has set for policymakers wishing to regulate modern media content or online expression. Not only is it clear that the Court is increasingly unlikely to allow the extension of analog-era content regulations to new media outlets and technologies, but it appears likely that judges will apply much stricter constitutional scrutiny to all efforts to regulate speech and media providers in the future. And we really have to thank the ACLU for getting this user empowerment revolution started because, make no mistake about it, it was that hook that ushered in this amazing jurisprudential revolution — for the Internet, for video games, for new media, for everything.



Sadly, however, the ACLU is now abandoning the user empowerment approach, at least as it pertains to digital privacy regulation.



In Stanley’s latest piece as well as many other ACLU statements on privacy issues, we hear almost nothing about the importance of keeping the Net free of unnecessary regulation or that government regulation should yield to user empowerment. Instead, we are told that citizens cannot be expected to look out for themselves in this way, or that they can’t possibly hope to “win the arms race” against online advertisers. I think that is utter nonsense. The fact of the matter is that it is far, far harder to win “the arms race” against online porn and objectionable speech using user empowerment tools than it is to defeat online advertising or “tracking.”  There exists a very broad array of privacy-enhancing user empowerment tools and strategies today that can help privacy-sensitive individuals attain greater protection. Here’s a big filing I submitted to the Federal Trade Commission documenting just some of what is on the market today. (See Sec. VI). But here’s just a short list of things users can do or install to better enhance their online privacy:




adjust your browser’s privacy settings to clear out and block the cookies set by most online ad networks, and use private browsing or “incognito” modes to surf the Web more privately;
download tools to help you manage cookies, block web scripts, and so on (a rough sketch of the blocking logic these tools apply appears after this list). Some of the more notable ones include: Ghostery, NoScript, Cookie Monster, Better Privacy, Track Me Not, and the Targeted Advertising Cookie Opt-Out or “TACO” (all for Firefox); No More Cookies (for Internet Explorer); Disconnect (for Chrome); AdSweep (for Chrome and Opera); CCleaner (for PCs); and Flush (for Mac);
download AdBlockPlus and block almost all online advertising on most websites, and thus the data collection performed by online cookies. (It remains the most-downloaded add-on for both the Firefox and Chrome web browsers.)
use “ad preference managers” from major search companies. Google, Microsoft, and Yahoo! all offer easy-to-use opt-out tools and educational webpages that clearly explain to consumers how digital advertising works. Meanwhile, DuckDuckGo offers an alternative search experience that blocks data collection altogether.
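
To make the first two items a bit more concrete, here is a rough sketch of the decision a tracker-blocking tool applies to each outgoing request: if the request goes to a third-party domain that appears on a block list, it is dropped before any cookie or identifier is sent. This is a simplified illustration of the general technique, not the actual code of Ghostery, AdBlockPlus, or any other tool named above, and the block-list domains are placeholders.

```python
from urllib.parse import urlparse

# Placeholder block list for illustration; real tools ship curated filter lists
# with thousands of entries and match registrable domains, not just hostnames.
BLOCKED_TRACKER_DOMAINS = {"ads.example-network.com", "tracker.example-analytics.net"}

def is_third_party(request_url: str, page_url: str) -> bool:
    """A request is third-party if its host differs from the host of the page being viewed."""
    return urlparse(request_url).hostname != urlparse(page_url).hostname

def should_block(request_url: str, page_url: str) -> bool:
    """Block third-party requests whose host appears on the block list."""
    host = urlparse(request_url).hostname or ""
    return is_third_party(request_url, page_url) and host in BLOCKED_TRACKER_DOMAINS

page = "https://news.example.com/article"
requests = [
    "https://news.example.com/styles.css",               # first-party: allowed
    "https://ads.example-network.com/pixel.gif?id=123",  # listed third-party tracker: blocked
    "https://cdn.example-images.org/photo.jpg",          # unlisted third party: allowed
]

for url in requests:
    verdict = "BLOCK" if should_block(url, page) else "allow"
    print(f"{verdict}: {url}")
```

Browser cookie settings work at a similar choke point: blocking third-party cookies simply refuses to send or store a cookie on any request classified as third-party. That is why the settings and tools above, used together, screen out most ad-network data collection even though no single one of them is perfect.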


Again, this list just scratches the surface. New empowerment solutions like these are constantly turning up. And many other tools and strategies exist that users can tap. See this excellent recent article by Kashmir Hill of Forbes, “10 Incredibly Simple Things You Should Be Doing To Protect Your Privacy.”



Now, let me be clear: These solutions aren’t perfect. There are no silver bullets or simple fixes when it comes to protecting our privacy online. But the exact same thing has always been true for objectionable online content. I find that by using tools and strategies such as those listed above, however, you can eliminate most online advertising and data collection from your digital life. By contrast, as good as online safety tools are, a lot more gets through. That’s because what counts as “objectionable content” is notoriously subjective and, therefore, no tool or strategy can ever work perfectly. “Good enough” seems to be the standard we have to accept here. Again, the same can be said for privacy controls, but it is my contention that, relatively speaking, they actually do a better job if you are willing to live with some inconveniences (as can be the case if you are constantly clearing out your cookies and blocking all scripts, some of which may be important for site functionality). But those are trade-offs you need to accept if you want to ensure all ads are blocked or no data is collected. (Of course, once again, the exact same thing has always been true for objectionable online content. It can be a huge inconvenience for parents and guardians to try to deal with online porn and objectionable content using all those user empowerment tools and strategies, no matter how good they are.) Regardless, my argument here is that, contrary to what many advocates of privacy regulation claim, privacy empowerment tools and strategies can be remarkably effective at screening out almost all online advertising and greatly limiting any collection of personal data.



I can imagine that one response to what I have said here is that, regardless of how well the respective classes of user empowerment tools work, privacy “harms” are more serious and deserve greater government scrutiny and regulation than objectionable online speech/content. But that’s a subjective squabble we’ll never be able to definitively answer. Plenty of people would argue the opposite: that exposure to online porn and objectionable speech will do more harm to minors and society than any amount of online advertising or data collection ever would. Personally, I think both harms are grotesquely inflated “technopanics,” as I noted in this 80-page paper on the topic.



I can anticipate another response that goes like this: “Well, what’s wrong with the government doing a little paternalistic nudging if it’s focused on better empowering users?” First, let’s be clear that groups like ACLU, EFF, and CDT did not adopt that position for objectionable online speech/content. And with good reason. They understood that if we invite the government to come in and create and/or mandate the empowerment tools to be used to address the problem, it could serve as a Trojan Horse that policymakers could later use to expand their influence over speech and speech platforms. But why, then, would the same concern not apply to efforts by the government to mandate certain privacy tools or controls? Such a move would serve as the same sort of open-ended invite to the government to come in and meddle more with online networks.



I suspect what this all comes down to is the artificial distinction between speech rights and economic liberties that the ACLU and other groups have made through the years. If the regulatory proposals are more about speech regulation, then the ACLU and others will say that personal responsibility and user empowerment represent the proper first-order response. But if we are talking about something perceived to be economic regulation (like advertising regulation), then the standard seems to change and all the talk of personal responsibility and user empowerment goes right out the window. (Of course, this is just the classic distinction between “civil libertarians” and actual libertarians manifesting itself in a different way. While the two groups share a mutual distrust of government regulation of speech and social affairs, the civil libertarians distrust free markets and invite regulation of them, whereas the actual libertarians do not.)



But let’s ignore all these other issues and ask a different question: What about the precedent ACLU is setting here by saying user empowerment is hopeless when it comes to privacy? It goes without saying that more than a few social conservatives and regulatory-minded child safety organizations may be listening! Don’t be surprised if those folks throw the ACLU’s words back at them next time controls on speech and expression are being contemplated. They will argue that if people are sheep when it comes to protecting their privacy, then they must also be sheep when it comes to protecting themselves and their families from porn and other objectionable things online.



To me, the consistent and principled position here is this: Personal responsibility and user empowerment should be the first-order solution for all these issues. Governments should only intervene when clear harm can be demonstrated and user empowerment truly proves ineffective as a solution. Conjectural fears must not drive Internet regulation. While there are many legitimate online safety and privacy concerns out there, we can find better, less-restrictive ways of dealing with them than by inviting greater government controls for cyberspace.




Published on August 30, 2012 13:36

August 28, 2012

Nicolas Christin on anonymous online market Silk Road


Nicolas Christin, Associate Director of the Information Networking Institute at Carnegie Mellon University, discusses the Silk Road anonymous online marketplace. Silk Road is a site where buyers and sellers can exchange goods, much like eBay and Craigslist. The difference is that the identities of both buyers and sellers are anonymous and goods are exchanged for bitcoins rather than traditional currencies. Because of this anonymity, the site has developed a reputation as a popular online portal for buying and selling drugs, which has caused some politicians to call for the site to be investigated and closed by law enforcement. Despite all of this, Silk Road remains a very stable marketplace with a very good track record of consumer satisfaction. Christin conducted an extensive empirical study of the site, which he discusses.



Download




“Traveling the Silk Road: A measurement analysis of a large anonymous online marketplace”, by Christin
“Underground Website Lets You Buy Any Drug Imaginable”, Wired
“Study estimates $2 million a month in Bitcoin drug sales”, Ars Technica
“Gavin Andresen on Bitcoin”, Surprisingly Free



Published on August 28, 2012 11:30

August 27, 2012

Ends Justifying Means: Inconsistencies Between FCC Special Access and Verizon-SpectrumCo Orders

To summarize, on August 22, the FCC found it was appropriate to re-impose monopoly price cap regulations developed over twenty years ago because the FCC lacked “reliable” evidence that cable operators are competing in the special access market. On August 23, the very next day, the FCC found cable companies are “well-positioned” to compete in the special access market and are “increasingly successful” competing in that market. . . . It is impossible to reconcile these inconsistent findings.



Last week, the FCC issued two significant orders. Late Wednesday evening, the FCC issued an order suspending its pricing flexibility rules for special access services (“Special Access Order”), and on Thursday afternoon, it issued an order approving multiple transactions between Verizon Wireless and several cable companies (Comcast, Time Warner, Bright House Networks, and Cox) as well as mobile providers T-Mobile and Leap (“Verizon-Cable Order”).



The FCC addressed special access competition in both orders. One would assume two FCC findings regarding special access issued within a single 24-hour period would be consistent with one another, but that would be assuming too much. The findings in these two orders relied on evidence submitted by the same companies to reach contradictory conclusions.



August 22 Special Access Order. In its August 22 order, the FCC found that its pricing flexibility rules were harming consumers and hindering investment in facilities-based competition. To address this concern, the FCC suspended its pricing flexibility rules, which means that price-cap carriers (e.g., Verizon and AT&T) are presumed to be monopolists who are required to offer special access services at FCC regulated rates, while their competitors (e.g., Time Warner) can offer special access services at market rates.



Sprint and Time Warner were two of the most vociferous advocates for suspension of pricing flexibility. Both companies submitted numerous filings in the special access proceeding over a number of years, and the FCC cites their filings over 40 times in the Special Access Order. Neither company has the interests of consumers at heart. Sprint benefits from suspension of pricing flexibility by obtaining special access services at regulated rates when they are lower than market rates, and Time Warner benefits from suspension by gaining a competitive advantage in the special access market when it can undercut the regulated rates of price-cap carriers.



The FCC had no need to address the benefits of its ruling to either Sprint or Time Warner, however, because the FCC didn’t rely on data regarding special access services for wireless backhaul or the provision of special access on a competitive basis by cable operators. Instead, the FCC based its suspension finding solely on outdated data regarding the presence of special access competition collocated in price-cap carrier wire centers in a handful of geographic areas. As Commissioner Pai noted in his dissent, carriers are most likely to collocate if they intend to use the price-cap carrier’s last-mile facilities for retail services offered directly to consumers. Wireless carriers who rely on special access for mobile backhaul don’t offer retail services using price-cap carriers’ last-mile facilities and often rely on cable operators or fixed wireless to meet their backhaul needs. Cable operators typically do not rely on the last-mile facilities of price-cap carriers to provide retail services either, and they don’t need to collocate their facilities in price-cap wire centers when they provide competitive special access services on a wholesale basis to wireless carriers for backhaul.



The FCC recognized these realities when it first adopted its pricing flexibility rules in 1999. The FCC always understood “collocation may underestimate the extent of competitive facilities within a wire center because it fails to account for the presence of competitors that do not use collocation and have wholly bypassed [price-cap carrier] facilities.” Since 1999, special access that bypasses price-cap facilities has increased dramatically. For example, Clearwire has since built an entirely new wireless broadband network that relies almost exclusively on self-provisioned, fixed wireless backhaul. The FCC nevertheless rejected evidence of such competition in its Special Access Order because the FCC lacks “reliable data on the extent or location of this [non-collocated] competition.”



August 23 Verizon-Cable Order. In the Verizon transactions proceeding, several commenters – including Sprint – argued that the commercial agreements between Verizon and the cable companies may lead the cable companies to engage in anticompetitive conduct in their provision of backhaul services to mobile wireless operators. Sprint argued that in many markets, its only sources for backhaul are Verizon and the cable company operating in that market. Sprint argued that Verizon’s commercial agreements with the cable operators would create an “effective monopoly,” which would harm “competition.” Implicit in this argument is Sprint’s belief that there is, at worst, a competitive duopoly in the special access market, not a monopoly. Yet, in the special access proceeding, Sprint convinced the FCC to re-impose monopoly price regulation on Verizon while leaving Verizon’s cable competitors completely unregulated.



In the Verizon-Cable Order, the FCC relied on pricing evidence submitted by Sprint to reject its arguments that a lack of competition in the special access market would raise consumer prices. The FCC found that “even a significant increase in [wireless] backhaul costs is unlikely to have a material impact on [wireless] subscriber rates.” In other words, even if Verizon were able to raise the price of its special access services for backhaul unilaterally, consumers would not be harmed.



The FCC also found that cable operators are playing a very “successful” role in the special access market based on “evidence” from online analyst reports the FCC considered reliable.




“We find that, even if the Cable Companies had the ability to foreclose access to their backhaul service or charge significantly higher prices to Verizon Wireless’s competitors (thereby imposing a competitively significant cost on Verizon Wireless’s competitors), they would not have an incentive to do so. We find that such an action would reduce their own revenue and carry a very significant cost to the Cable Companies, given the large and growing nature of the backhaul services market and the evidence that Cable Companies are both well-positioned to compete in that market and increasingly successful when they do so. We conclude that any incentives the Commercial Agreements might create to favor Verizon Wireless or exclude its rivals in the provision of backhaul services are outweighed by the clear incentives against such behavior.”


To summarize, on August 22, the FCC found it was appropriate to re-impose monopoly price cap regulations developed over twenty years ago because the FCC lacked “reliable” evidence that cable operators are competing in the special access market. On August 23, the very next day, the FCC found cable companies are “well-positioned” to compete in the special access market and are “increasingly successful” competing in that market. The FCC found that cable companies had strong incentives to compete in the special access market, and would suffer “very significant cost[s]” if they were to forgo such competition. Finally, and most importantly, the FCC found that “even a significant increase” in the cost of wireless backhaul would be unlikely to harm consumers.



It is impossible to reconcile these inconsistent findings. Chairman Genachowski pledged the FCC would be a “fact-based, data-driven agency.” Yet, during a hot summer week in August when Congress was out of session, the FCC’s facts and data changed on a daily basis as required to support the FCC’s preferred policy outcome. That’s a data-driven approach of sorts – cherry-picking data to arrive at a predetermined outcome that picks winners and losers rather than protects consumers.




Published on August 27, 2012 09:16

August 23, 2012

FCC Relies on Fallacies, Not Evidence, in Special Access Order

How does the FCC justify taking action without an adequate evidentiary basis? By relying on a series of fallacies to provide an aura of evidence without actually having any. That’s a problem for an agency that wants to be seen as fact-based and data driven. Fallacies are like zeros: No matter how many you have, you still have nothing.




Yesterday the Federal Communications Commission (FCC), our government’s communications industry experts, issued an order that would flunk an introductory college course in logic. Despite issuing multiple data requests, the FCC told the DC Circuit Court of Appeals in October 2011 that it “lacked a sufficient evidentiary record” to document claims that its “pricing flexibility rules” governing special access were flawed. The FCC’s evidentiary record hasn’t improved, but it suspended its pricing flexibility rules on a so-called “interim” basis anyway while it tries to figure out how to obtain the data it needs to do a transparent, data-based analysis.



How does the FCC justify taking action without an adequate evidentiary basis? By relying on a series of fallacies to provide an aura of evidence without actually having any. That’s a problem for an agency that wants to be seen as fact-based and data driven. Fallacies are like zeros: No matter how many you have, you still have nothing.



Consider these fallacies:



Naked Assertion: A fallacy in which a premise in an argument is assumed to be true merely because the person making the argument says it is true.



In his separate statement, Genachowski says the FCC is suspending its rules “Based on the record and the undisputed finding that legacy regulations are not working as intended.” What undisputed finding? Commissioners McDowell and Pai disputed this finding in their dissenting statements (here and here), because the FCC lacks sufficient evidence to determine whether or not the rules are working as intended.



Loaded Question: A fallacy in which someone asks a question that presupposes something that hasn’t been proven or accepted by everyone involved.



Genachowski relies on this fallacy to attack the dissenting Commissioners. He says, “My colleagues’ dissents struggle to explain why the Commission should ignore the record and maintain a broken system,” which presupposes the system is “broken.” Genachowski also says the “dissenters have no answer to the harm their approach would cause,” which presupposes the pricing flexibility rules are causing “harm.” But, as their dissenting statements demonstrate, McDowell and Pai don’t believe the rules are broken or causing harm.



Shifting the Burden of Proof: According to the rules of logic, the burden of proof is always on the person who is asserting something. This fallacy shifts the burden of proof to the person who denies or questions the assertion.



The “evidence” regarding special access hasn’t changed since Genachowski testified under oath, “there is no point in doing something” about special access “that is not based on facts and data.” The only thing that changed was the burden of proof. Rather than require the proponents of special access regulation to prove market failure with solid evidence, the FCC assumed market failure based on fragmented, outdated data regarding collocation in the wire centers of certain local exchange carriers in select geographic areas. It “acknowledge[d] that this evidence is limited,” but nevertheless believed the evidence “suggests . . . the accuracy of the use of collocations as a proxy for actual or potential competition warrants further investigation.” Based on this evidence warranting further investigation, the FCC presumes its pricing flexibility rules “have not worked as intended.”



But, the FCC rejected countervailing evidence of special access competition from competitors that don’t collocate in wire centers, e.g., cable providers. Despite its own finding just last month that 98.5 percent of U.S. homes are served by cable providers – whose networks are typically capable of higher broadband speeds than regulated special access services – the FCC found in its special access order that, because it lacks “reliable data on the extent or location of this [non-collocated] competition, it does not change our conclusion that new pricing flexibility petitions should be suspended at this time.”



In effect, the FCC shifted the burden of proof by relying on flimsy evidence to support its assertion while rejecting countervailing evidence as “unreliable.”



The saddest fallacy of all is Genachowski’s assertion that this order will promote broadband competition. Encouraging potential competitors to lease narrowband special access lines from incumbent telephone companies at government-regulated rates won’t promote “broadband” or provide the benefits of free market competition. Making special access lines available at government-subsidized rates will only encourage potential competitors to become reliant on the services of the incumbents. Why should competitors build innovative, ultra-high-speed fiber networks that would provide real competition when the government is giving them a break on copper wire?




Published on August 23, 2012 06:21
