Adam Thierer's Blog, page 76
January 17, 2013
Mercatus Center at GMU will pay you to get an econ master’s degree
Eli Dourado said it best:
Kind of hard to believe, Mercatus will pay you $20k/year to get an MA in Economics. Plus you might “get” to be my RA. grad.mercatus.org/graduate/ma-fe…
— Eli Dourado (@elidourado) January 17, 2013
More information available here. Some details:
The Mercatus Center’s MA Fellowship program is targeted toward students with an interest in gaining advanced training in economics, but who do not anticipate a career in academia. Students who anticipate working in public policy are ideal candidates for this fellowship. The two-year program offers full tuition towards an MA in applied economics from George Mason University, a generous stipend, and experience publishing policy articles and papers with Mercatus Center senior scholars. For more information please email MAFellows@mercatus.org.
The application deadline for Fall 2013 is March 1, 2013.







Toward a Technology “Watchful Waiting” Principle
When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.
There was a kind of split-personality to how I approached the event this year. Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association. (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)
The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.
I spent Thursday and Friday on the show floor, looking in particular for technologies that satisfy what I have coined The Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.
What I found, as I wrote in a long post-mortem for Forbes, is that such technologies are well-represented at CES, but are mostly found at the edges of the show–literally.
In small booths away from the mega-displays of the TV, automotive, smartphone, and computer vendors, in hospitality suites in nearby hotels, or even in sponsored and spontaneous hackathons going on around town, I found ample evidence of a new breed of innovation and innovators, whose efforts may yield nothing today or even in a year, but which could become sudden, overnight market disrupters.
Increasingly, it’s one or the other, which is saying something all by itself. For one thing, how do incumbents compete with such all-or-nothing innovations?
That, however, is a subject for another day.
For now, consider again the policy implications of such dramatic transformations. As those of us sitting in room N254 debated the finer points of software patents, IP transition, copyright reform, and the misapplication of antitrust law to fast-changing technology industries (increasingly, that means ALL industries), just a few feet away the real world was changing under our feet.
The policy conference was notably tranquil this year, without such previous hot-button topics as net neutrality, SOPA, or the lack of progress on spectrum reform to generate antagonism among the participants. But as I wrote at the conclusion of last year’s Summit, at CES, the only law that really matters is Moore’s Law. Technology gets faster, smaller, and cheaper, not just predictably but exponentially.
As a result, the contrast between what the regulators talk about and what the innovators do gets more dramatic every year, accentuating the figurative if not the literal distance between the policy Summit and the show floor. I felt as if I had moved between two worlds, one that follows a dainty 19th century wind-up clock and the other that marks time using the Pebble watch, a fully-connected new timepiece funded entirely by Kickstarter.
The lesson for policymakers is sobering, and largely ignored. Humility, caution, and a Hippocratic-like oath of first-do-no-harm are, ironically, the most useful tools regulators can bring to bear if, as they repeat at ever-shorter intervals, their true goal is to spur innovation, create jobs, and rescue American entrepreneurialism.
The new wisdom is simple, deceptively so. Don’t intervene unless and until it’s clear that there is demonstrable harm to consumers (not competitors), that there’s a remedy for the harm that doesn’t make things, if only unintentionally, worse, and that the next batch of innovations won’t solve the problem more quickly and cheaply.
Or, as they say to new interns in the Emergency Room, “Don’t just do something. Stand there.”
That’s a hard lesson to learn for those of us who think we’re actually surgical policy geniuses, only to find increasingly we’re working with blood-letting and leeches. And no anesthesia.
In some ways, it’s the opposite of an approach that Adam Thierer calls the Technology Precautionary Principle. Instead of panicking when new technologies raise new (but likely transient) issues, first try to let Moore’s Law sort it out, unless and until it becomes crystal clear that it can’t. Instead of a hasty response, opt for a delayed response. Call it the Watchful Waiting Principle.
Not as much fun as fuming, ranting, and regulating at the first sign of chaos, of course, but far more helpful.
That, in any case, is the thread of my dispatches from Vegas:
“Telcos Race Toward an all-IP Future,” CNET
“At CES, Companies Large and Small Bash Broken Patent System,” Forbes
“FCC, Stakeholders Align on Communications Policy—For Now,” CNET
“The Five Most Disruptive Technologies at CES 2013,” Forbes







Netflix Blocking Internet Access to HD Movies
Unfortunately, most consumers won’t realize that Netflix is trying to impose its costs on all Internet consumers to gain an anticompetitive price advantage against its over-the-top competitors.
At the Consumer Electronics Show two weeks ago, Netflix announced that it would block consumer access to high definition (HD) and 3D movies for customers of Internet service providers (ISPs) that Netflix disfavors. Netflix’s goal is to coerce ISPs into paying for a free Internet fast lane for Netflix content. If Netflix succeeds, it would harm Internet consumers and competition among video streaming providers. It would also fundamentally alter the economics and openness of the Internet, “where consumers make their own choices about what applications and services to use and are free to decide what content they want to access, create, or share with others.”
Ironically, Netflix’s strategy is a variant of the doomsday narrative spun by net neutrality activists over the last decade. Their narrative assumes ISPs will use their gatekeeper control to block their customers from accessing Internet content distributed by competitors. Of course, ISPs have never blocked consumer access to competitive Internet content. Now that the FCC has distorted the Internet marketplace through the adoption of asymmetric net neutrality rules, Netflix, the dominant streaming video provider, has decided to block consumer access to its content.
This may not seem like a big deal given the relatively limited HD content currently available on Netflix. But that’s about to change in a very big way. Netflix recently announced a new multi-year licensing agreement that makes it the “exclusive American subscription TV service for first run live-action and animated features from the Walt Disney Studios.” In addition to Disney-branded content (e.g., The Lion King), the deal includes content produced by Pixar (e.g., Brave), Lucasfilm (e.g., Star Wars), and Marvel (e.g., The Avengers). Netflix also announced a multi-year deal with Turner Broadcasting and Warner Bros. that includes the Cartoon Network and exclusive distribution rights to TNT’s television series Dallas. As an analyst recently told Ars Technica, “These movies, if you’ve got young kids—you’ve got to have Netflix.”
Netflix has decided to use this new market power to force ISPs to pay for its own Internet fast lane. In classic double-speak, Netflix calls its fast lane the “Netflix Open Connect” content delivery network (CDN). Though Netflix uses the word “open” to describe its CDN, it is not part of the open Internet. It is only “open” to Netflix for the delivery of its content, and it is only “open” to ISPs who connect to it on terms dictated by Netflix.
The costs of the ordinary CDNs (e.g., Level 3 and Limelight) that deliver Netflix content are borne by Netflix and incorporated into the price of its retail service. Netflix pays these CDNs to deliver content to Netflix subscribers, and the CDNs pay the costs of delivering Netflix content on the Internet. With this model, the additional costs of delivering Netflix content (due to its desire for distributed content servers) are ultimately borne only by Netflix subscribers.
With its “Open Connect” model, Netflix is withholding content from the customers of ISPs that decline to accede to its demands. Though the details of its demands are unknown, it appears Netflix is requiring that ISPs “peer” with it or pay for the installation of Netflix equipment inside their networks as well as the ongoing costs of operating that equipment.
Netflix’s model is inconsistent with standard Internet peering arrangements, harmful to consumers, and blatantly anticompetitive. By shifting its costs to ISPs, Netflix is distributing the costs of delivering its service across all Internet consumers. ISPs that agree to pay the installation and ongoing operational costs of hosting Netflix equipment inside their networks would have every incentive to pass these costs on to their subscribers as higher rates for Internet access. It would be one thing if ISPs were able to raise Internet access rates only for Netflix subscribers. Due to the FCC’s net neutrality rules, however, an ISP would likely be required to increase its rates for all of its subscribers to cover the additional costs imposed by Netflix – including its subscribers who don’t use the Netflix service. The result: ISP customers who subscribe to competitive streaming video providers would unwittingly be paying for the delivery of Netflix service as well, and Netflix would have a significant price advantage over its competitors.
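The cross-subsidy claim at the heart of this argument can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical figures (subscriber counts, Netflix penetration, and hosting costs are invented for illustration); it simply shows that if a Netflix-imposed cost must be recovered uniformly across all subscribers, non-Netflix customers end up funding part of Netflix’s delivery:

```python
# Hypothetical sketch of the cost-shifting claim: if an ISP must spread a
# Netflix-imposed hosting cost over ALL subscribers (rather than only
# Netflix subscribers), non-Netflix customers fund Netflix delivery.

subscribers = 100_000      # ISP's total subscriber base (hypothetical)
netflix_share = 0.30       # fraction who subscribe to Netflix (hypothetical)
hosting_cost = 50_000.0    # monthly cost of hosting Netflix CDN gear (hypothetical)

netflix_subs = int(subscribers * netflix_share)

# If the cost could be recovered only from Netflix subscribers:
targeted_surcharge = hosting_cost / netflix_subs

# If, as the post argues, it must be spread across everyone:
uniform_surcharge = hosting_cost / subscribers

# Amount non-Netflix subscribers collectively pay toward Netflix delivery:
cross_subsidy = uniform_surcharge * (subscribers - netflix_subs)

print(f"targeted surcharge per Netflix sub: ${targeted_surcharge:.2f}")
print(f"uniform surcharge per sub:          ${uniform_surcharge:.2f}")
print(f"paid by non-Netflix subscribers:    ${cross_subsidy:,.2f}")
```

Under these invented numbers, the 70,000 non-Netflix subscribers collectively contribute $35,000 a month toward delivering a service they do not use.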
Theoretically, streaming video competitors could mimic Netflix and try to force ISPs to cover the costs of private fast lanes for them as well. In reality, the combination of exclusive content arrangements, first mover advantages, and asymmetric net neutrality regulation enjoyed by Netflix make it unlikely that a new competitor could mimic Netflix’s strategy. Netflix admits it is the “world’s leading Internet subscription service for enjoying TV shows and movies,” and that its traffic accounts for more than 30 percent of peak Internet traffic on U.S. networks. According to Dan Rayburn at Streaming Media:
There are maybe half-a-dozen content owners who are delivering enough volume of bits, have the technical expertise and have the money to build out their own CDN. Only companies the size of Google, Apple, Microsoft, Netflix and Facebook can take on such a task.
He also notes that the average family of four likely has ten Netflix-enabled devices in its home today – something that “can be done by others, but it takes time, a lot of money and lots of development.”
The available evidence indicates Netflix is shamelessly leveraging its market power and its subscribers to cajole ISPs into paying for its private fast lane at the ultimate expense of all Internet consumers and its competitors. When I inquired about its “Super HD” service on the Netflix website, the website replied in ominous red text: “Your Internet Provider is not configured for Super HD yet.” (Screenshot available here.) In a subdued, friendly gray, it said:
Super HD requires that your Internet Provider is part of the Netflix Open Connect network. Please contact your Internet Provider to request that they join the Netflix Open Connect network so you can get Super HD.
Neither my ISP nor the open Internet is preventing Netflix from allowing me to access its HD content. Netflix is choosing to block me from accessing its HD content because my ISP hasn’t agreed to host Netflix equipment for free and Netflix doesn’t want to pay another CDN to deliver HD content to my ISP.
Most consumers, unfortunately, are unlikely to realize that Netflix is shifting its costs onto all Internet consumers to gain an anticompetitive price advantage over its over-the-top competitors. If most consumers end up blaming ISPs for Netflix’s choice, I expect Netflix will increase its demands along with its leverage as it secures exclusive access to even more “must have” content. I wouldn’t be surprised if Netflix attempts to graft the “basic tier” model used in traditional video subscription services onto the Internet.
Think Netflix doesn’t have enough muscle to bully ISPs? Think again. Although Netflix won’t disclose the full list of ISPs that have succumbed to its pressure tactics, Cablevision and Google Fiber in the U.S. and a host of global ISPs (including Virgin Media, British Telecom, Telmex, Telus, TDC, and GVT) have already agreed to install a “free” Netflix fast lane in their networks. The revenue and global scale provided by these deals combined with the asymmetric limitations of the net neutrality rules will make it even harder for the remaining ISPs in the U.S. to resist Netflix’s demands.
When the FCC considered adopting net neutrality rules, Commissioner Michael Copps warned of the potential for unintended consequences that attend asymmetric regulation: “In particular, we need to recognize that the gatekeepers of today may not be the gatekeepers of tomorrow.” Copps believed his “job [was] not so much to mediate among giants as it [was] to protect consumers.” Now that it is the Internet gatekeeper of Star Wars and other iconic films, what rule will stop Netflix from demanding additional payments from ISPs if net neutrality rules prevent ISPs from recovering the additional costs only from Netflix subscribers?







January 15, 2013
Daniel Lyons on broadband pricing and data caps
Daniel Lyons, assistant professor at Boston College Law School, discusses his new Mercatus Center Working Paper, “The Impact of Data Caps and Other Forms of Usage-Based Pricing for Broadband Access.” Describing the system most of us are used to as an all-you-can-eat version of Internet access, Lyons explains why it might make more sense for Internet Service Providers (ISPs) to transition to usage-based pricing, a type of metered model for broadband.
According to Lyons, the fixed costs of building up a broadband network are so great that any attempt to create an equitable cost distribution that can recoup these costs forces lighter users to subsidize heavier users. These types of flat rate payment programs often can be a barrier to low-income users. Instead, Lyons advocates for a usage-based system. In response to concerns about possible anti-competitive behavior by ISPs, Lyons further proposes that enforcement of policy transparency among ISPs might be an appropriate role for government.
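Lyons’ subsidy point is, at bottom, simple arithmetic. A hypothetical sketch (all usage figures and costs invented for illustration, not drawn from the paper) of flat-rate versus usage-based cost recovery:

```python
# Illustrative sketch (hypothetical numbers): why flat-rate pricing makes
# light users subsidize heavy users, per Lyons' argument.

# Monthly usage in GB for five hypothetical subscribers.
usage = {"light": 5, "modest": 20, "median": 50, "heavy": 200, "very_heavy": 500}

network_cost = 775.0  # total monthly cost the ISP must recover (hypothetical)

# Flat-rate ("all-you-can-eat"): everyone pays an equal share.
flat_bill = network_cost / len(usage)

# Usage-based: each subscriber pays in proportion to the traffic they generate.
total_gb = sum(usage.values())
usage_bill = {name: network_cost * gb / total_gb for name, gb in usage.items()}

for name, gb in usage.items():
    # Positive subsidy => this user overpays under flat rate.
    subsidy = flat_bill - usage_bill[name]
    print(f"{name:>10}: flat ${flat_bill:6.2f}  usage ${usage_bill[name]:6.2f}  "
          f"subsidy paid ${subsidy:+7.2f}")
```

With these invented numbers, the light user pays $155 under the flat rate but only $5 under usage-based pricing: the difference is a transfer to the heaviest users. The subsidies sum to zero either way, which is the point — flat-rate pricing redistributes the same total cost toward light users.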
Related Links
The Impact of Data Caps and Other Forms of Usage-Based Pricing for Broadband Access, by Lyons
The Specter Of Broadband Price Controls, Forbes
Pricing Experimentation & Broadband Usage-Based Pricing, The Technology Liberation Front







January 11, 2013
The Perils of Parochial Privacy Policies
Here’s a thought experiment. Let’s say you believe the Internet economy needs more regulation to guard against potential privacy violations or what you regard as excessive data aggregation. Further, you believe that no amount of self-regulation, social norms, market pressure, education, empowerment, or anything else could possibly substitute for regulation. I know there are a lot of people out there today who feel this way. Regardless of the merits of such claims, here’s my question for you: Do the ends (enhanced privacy protections) justify any means (regulation at any and every level of government)? For example, what would you think about having all 50 states create their own Privacy Offices or Data Protection Bureaus that issued regulations or recommendations about Internet best practices?
What got me thinking about this was this new blog post by Parker Higgins of EFF, “California Attorney General Releases Mobile Privacy Recommendations.” In the essay, Higgins showers praise on California Attorney General Kamala D. Harris, who just released a document (“Privacy on the Go“) that lays out a long set of privacy “best practices” for mobile app developers. Higgins writes:
EFF applauds this important step forward, and congratulates the California Attorney General on a thorough and clearly written explanation of the importance of mobile privacy and how developers can deliver. It’s true that as technology changes, the specific needs and guidelines for companies will need to adapt. We could well see a time when these principles do not adequately protect the rights and needs of consumers. However, right now these principles represent a huge step forward — going beyond existing law in a way that improves transparency, accountability, and choice for users of mobile devices.
Regardless of the merits of the principles and recommendations contained in that report — and I agree that many of them are quite sensible best practices that industry should be following — I can’t help but wonder whether it is wise for EFF to be cheering on state-based Internet meddling so openly. OK, so I can hear the primary objection: It’s not regulation; it’s just a set of recommendations! Well, yes and no. What AG Harris is doing here is an exercise in soft power or regulatory nudging. It’s a variation of what Tim Wu calls the “agency threats” model of regulating without any formal regulation being promulgated. (Wu enthusiastically endorses such exercises in arbitrary soft power.) Or it’s what Randy Picker refers to as a “non-law law,” which we are seeing more and more of on this front through the use of “best practice” reports or other agency guidance. And this is happening against the backdrop of a gradual expansion of formal privacy law in the state, such as the California Online Privacy Protection Act (OPPA). Moreover, the state also has its own Office of Privacy Protection, and AG Harris recently announced the creation of a Privacy Enforcement and Protection Unit in the Calif. Department of Justice. Last year, she also brokered a Joint Statement of Principles that was adopted by the leading operators of mobile application platforms “to help bring mobile apps in compliance with the California Online Privacy Protection Act.”
Thus, when the AG announces a new set of best practices and strongly suggests industry should be following them, there’s an implied “or else!” threat that hangs like a quasi-regulatory Sword of Damocles over the collective necks of everyone in this sector. Regardless of how you feel about such “administrative arm-twisting,” I would hope we could agree that there is some theoretical limit to efficient state-based regulation of a network that is national or global in scope, such as the Internet. And yet that’s the perilous path we’re heading down if more states begin to mimic AG Harris and the state of California.
I can’t help but think that if AG Harris was issuing best practices on almost any other Internet policy issue — online free speech, copyright, cybersecurity, online authentication, etc. — that EFF would be (rightly) screaming bloody murder or at least raising some tough questions about the potentially slippery slope of increased state-based Internet meddling. But because there’s a bit of selective morality at work here — EFF welcomes more privacy regulation but opposes most other forms of information control — they are willing to turn a blind eye to the danger of a parochial patchwork of Internet policies in the privacy context.
Perhaps such nudging ends in California and doesn’t spread more broadly across the U.S. But that’s a pretty big risk. I hope EFF and others give more thought to what they are sanctioning here. Fifty state Internet Bureaus aren’t likely to help the digital economy or serve the long-term interests of consumers.
Further Reading
Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges
When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed
The ACLU vs. Itself on User Empowerment for Online Safety & Privacy
Privacy as an Information Control Regime: The Challenges Ahead
Isn’t “Do Not Track” Just a “Broadcast Flag” Mandate for Privacy?







January 10, 2013
Crawford’s Misplaced Nostalgia for Utility Regulation
In her new book, Captive Audience, Susan Crawford makes the same argument that the lawyers for AT&T made in Judge Harold H. Greene’s courtroom in response to the government’s antitrust complaint beginning in 1981, i.e., that telephone service was a “natural monopoly.” In those days, AT&T wanted regulation and hated competition, which is the same as Crawford’s perspective with respect to broadband now. Here is what she said today on the Diane Rehm Show:
Diane Rehm: “Is regulation the next step?”
Susan Crawford: “It always has been for these industries, because it really doesn’t make sense to have more than one wire into our homes. It is a very expensive thing to install; once it’s there, it has to be kept up to the highest level of maintenance, it has to allow for lots of competition at the retail level—across this wholesale facility—and it has to be available to consumers at reasonable cost. That kind of result isn’t produced by the marketplace; it doesn’t happen by magic, because … when you can divide markets, and cooperate, you’re not going to come up with the best solution for consumers.”
In her book, Crawford candidly says that “America needs to move to a utility model” for broadband … and “stop treating this commodity as if it were a first-run art film…”
It’s time for a stroll down memory lane.
In the early 1970s, writes Steve Coll in his wonderful book on this subject (one of the most readable ever), there was a “precipitous and unprecedented decline in the quality of AT&T’s basic phone service to the public” and “morale among AT&T’s one million employees was disintegrating into malaise and dissension.” This was when basic phone service was a utility.
By 1970 … the decline had reached crisis proportions in a number of major cities, including New York. The basic problem was one of supply and demand: too much demand for new phone service and not enough AT&T facilities to accommodate all the new customers. The result had been horrendous delays and breakdowns, especially in Manhattan, the nation’s media and financial capital. Television networks, banks, securities underwriters, and publishing companies—all of which wielded great influence over how AT&T was perceived by investors and the public—had experienced long, aggravating delays in obtaining new phone service and having their phone systems repaired … Ma Bell quickly became a favorite object of jokes and political satire. Lily Tomlin, the “Laugh In” comedienne, had developed a popular routine around an insolent telephone operator which seemed to capture perfectly the widespread unrest over deteriorating phone service.
Innovation also suffered during this period, because a 1956 consent decree severely limited AT&T’s ability to develop new business products based on emerging computer technologies. The point is, regulation doesn’t always perform like the textbook model. In telecommunications, the record is extremely spotty. If regulation did not play a direct role in the service quality problems of 1970, for example, it certainly was powerless to stop them.
Coll’s excellent book is entitled The Deal of the Century: The Breakup of AT&T (1986); sadly, I have not been able to find a reference to this volume in Crawford’s source material.
The AT&T divestiture is frequently cited by progressives and populists as a splendid example of government intervention. For Crawford, a similar intervention pointed in the reverse direction is the sort of thing the broadband industry needs now. This is what Judge Greene had to say in 1985:
I have no doubt about the correctness of deregulation. The basic fact of the phone industry is it grew up when it was a natural monopoly: wooden poles and copper wires. Once it became possible to bypass this network through microwaves, AT&T’s [long-distance] monopoly could not survive.
As Judge Greene’s observation makes clear, it’s a gross exaggeration to claim that the government magically created the conditions for competition, as some do; technology did that. The government—which awarded monopoly franchises and encouraged hidden cross-subsidies that were incompatible with competition—merely (as the late Alfred E. Kahn put it) had to “get the hell out of the way.” Today, voice, video and data services can all be bypassed, and monopolies cannot survive.
Coll notes that the government lawyers were “driven by the conviction that AT&T was ‘unregulatable,’ as Walter Hinchman, the former common carrier chief, always put it.” Crawford’s great flaw is her stubborn refusal to accept the frequent occurrence of regulatory failure.
Coll also cites the following observation by Irving Kristol, which captures how I partly felt reading Crawford’s book overall:
Irving Kristol, the former socialist turned neoconservative editor of The Public Interest, once commented that U.S. v. AT&T was less a conventional antitrust case than a “modern day variant on classical Marxist class warfare theories,” because it was fundamentally a struggle for power between a class of bureaucrats in the government—lawyers and technocrats in the Justice department, the FCC’s common carrier bureau, and in Congress—and the class of bureaucrats who ran the nation’s phone system, the one million employees of AT&T.
As Crawford’s book makes clear to me, that struggle for power rages on.







January 8, 2013
Gabriella Coleman on the ethics of free software
Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy in the Art History and Communication Studies Department at McGill University, discusses her new book, “Coding Freedom: The Ethics and Aesthetics of Hacking,” which has been released under a Creative Commons license.
Coleman, whose background is in anthropology, shares the results of her cultural survey of free and open source software (F/OSS) developers, the majority of whom, she found, shared similar backgrounds and world views. Among these similarities were an early introduction to technology and a passion for civil liberties, specifically free speech.
Coleman explains the ethics behind hackers’ devotion to F/OSS, the social codes that guide its production, and the political struggles through which hackers question the scope and direction of copyright and patent law. She also discusses the tension between the overtly political free software movement and the “politically agnostic” open source movement, as well as what the future of the hacker movement may look like.
Related Links
Coding Freedom: The Ethics and Aesthetics of Hacking, by Coleman
Hacker Politics and Publics. Public Culture, by Coleman
Hackers. The Johns Hopkins Encyclopedia of Digital Textuality. Forthcoming, 2012, by Coleman







January 7, 2013
Larry Downes’ “A Rational Response to the Privacy ‘Crisis’”
We don’t expect news reports to exhibit the tightest legal reasoning, of course, but Sunday’s New York Times story on location privacy made a runny omelet of some important legal issues relating to privacy.
The starting point is United States v. Jones, a case the Supreme Court decided last January. The Court held that government agents violated the Fourth Amendment when they attached a GPS tracking device to a vehicle without a warrant and used it to determine the location of a suspect for four weeks. Location information can be revealing.
“Some advocacy groups view location tracking by mobile apps and ad networks as a parallel, warrantless commercial intrusion,” says the story. A location privacy bill forthcoming from Senator Al Franken (D-MN) “suggests that consumers may eventually gain some rights over their own digital footprints.”
Jones was about government agents—their freedom of action specifically disabled by the Fourth Amendment—invading a recognized property right (in one’s car) to gather data. There is little analogy to location tracking by mobile devices, apps, and networks, which are privately provided, voluntarily adopted, and which violate no recognized right. Indeed, their tracking provides various consumer benefits. The Times piece equivocates between the government’s failure to get a legally required search warrant in Jones and uses of data that some may feel “unwarranted,” in the sense of being “uncalled for under the circumstances.”
The first line of Larry Downes’ new Cato Policy Analysis, “A Rational Response to the Privacy ‘Crisis’,” could have been written for the Times‘ sloppy analogy:
“What passes today as a ‘debate’ over privacy lacks agreed-upon terms of reference, rational arguments, or concrete goals,” Downes says. The paper examines how the “creepy factor” permeates privacy debates rather than crisp thinking and clear-headed examination.
It’s not that location tracking doesn’t generate legitimate privacy concerns. It does. People don’t know how location information is collected and used. They don’t always know how to stop its collection. And the future consequence of location information collected today is unclear. But the capacity of private actors to harm individuals with location data is limited. Their incentive to do so is even smaller. And avoiding location tracking is simple enough, albeit at a significant cost in convenience.
As Downes’ piece illustrates, we’ve seen this kind of debate before, and we’ll see it again: A particular innovation spurs privacy concerns and a backlash (whipped by legislators and regulators). A negotiation between consumers and industry, facilitated by the news media, advocates, and a variety of other actors, produces the way forward. As often as not, the way forward is a partial or complete embrace of the technology and its benefits. Plenty of times, the threat never materializes (see pervasive RFID).
Downes explores the legal explanation for what happens when consumers adopt new technologies that use personal information to produce custom content and services—this question of “rights over … digital footprints.” He finds that licensing is the best explanation for what is happening. When consumers use the many online services available to them, they license data that they might otherwise control.
The legal framework Downes puts forward sets the stage for iterative, contract-based development of rules for how data may be used in the information economy. It cuts against top-down dictates like Franken’s proposal to regulate future technologies today, knowing so little of how technology or society will develop.
Ultimately, no legislature can resolve the deep and conflicted cultural issues playing out in the privacy debate. Downes characterizes that debate as a revealed tension between Americans’ Davy Crockett side—the privacy-protective frontiersmen—and our collective Puritanism. We are participants in and parts of a very watchful society.
It’s worth a read, Larry Downes’s “A Rational Response to the Privacy ‘Crisis’.”







January 3, 2013
FTC Deservedly Closes Google Antitrust Investigation Without Taking Action
I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.
While it took the Commission more than a year and a half to reach the same conclusion, ultimately the FTC had no choice but to close a case that was a “square peg, round hole” problem from the start.
Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.
The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.
That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and “search advertising market” aside, Google actually competes in the market for targeted eyeballs: the business of delivering targeted ads to interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.
Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.
We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.
This is progress — creative destruction — not regress, and such changes should not be penalized.
Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting an allegedly unassailable monopoly. But scale comes in lots of ways.
Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend as much as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests is widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”—which might actually create barriers to entry). It’s never been the case that a firm has to generate its own inputs for every product it produces — and there’s no reason to suggest search or advertising is any different.
Additionally, to defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.
Meanwhile unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using creatively sourced data, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches, or purchases, a user has made on Amazon—the world’s largest product search engine.
Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.
Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.
It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”
But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.
While competitors will be competitors—using the regulatory system to accomplish what they can’t in the market—they do a great disservice to the very customers they purport to be protecting in doing so. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse”:
As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.
I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.
Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:
Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.
Among Microsoft’s chief tormentors was Gary Reback. He’s spent the last few years beating the drum against Google—but singing from the same song book. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Bringing one would at least raise the possibility that the agency was acting because of that pressure and not the merits of the case. But declining to do so in the face of such pressure? That can almost only be a function of institutional integrity.
As another of Google’s most-outspoken critics, Tom Barnett, noted:
[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.
As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.
Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in the Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,
They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has leveled the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.
The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.
[Crossposted at Forbes.com]







December 21, 2012
Old arguments about usage-based pricing are still wrong
The following is a guest post by Daniel Lyons, an assistant professor at Boston College Law School who specializes in the areas of property, telecommunications and administrative law.
While much of the broadband world anxiously awaits the DC Circuit’s net neutrality ruling, consumer groups have quietly begun laying the groundwork for their next big offensive, this time against usage-based broadband pricing. That movement took a significant step forward this week as the New America Foundation released a report criticizing data caps, and as Oregon Senator Ron Wyden introduced a bill that would require the Federal Communications Commission to regulate broadband prices.
But as this blog has noted before, these efforts are misguided. Usage-based pricing plans are not inherently anti-consumer or anticompetitive. Rather, they reflect different pricing strategies through which a broadband company may recover its costs from its customer base and fund future infrastructure investment. Usage-based pricing allows broadband providers to force heavier users to contribute more toward the fixed costs of building and maintaining a network. Senator Wyden’s proposal would deny providers this freedom, meaning that lighter users will likely pay more for broadband access and low-income consumers who cannot afford a costly unlimited broadband plan will be left on the wrong side of the digital divide.
In a working paper I published with the Mercatus Center in October, I had already debunked the arguments that the New America Foundation relies upon to make its case. NAF suggests that broadband providers should be unconcerned about costs because gross margins on broadband service are high, and the marginal cost of data transport is relatively low and falling. This is largely true, but also largely irrelevant. For broadband providers, as in many other networked industries, the challenge is generating sufficient revenue to recover their fixed costs and fund future network investment. Broadband providers have invested over $300 billion in private capital in the past decade to build and upgrade the nation’s broadband networks. And because Internet traffic is expected to triple by 2016, analysts expect them to continue to invest $30–40 billion annually to expand and upgrade their networks.
Therefore the key broadband pricing question is not the marginal cost of transport, but the best strategy for recovering those fixed costs. The unlimited flat-rate model that the New America Foundation favors is one solution, but a relatively inefficient one. As the FCC has noted, requiring “all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users.” Heavier users consume significantly more of the network’s total bandwidth each month than the average consumer. This means that light users pay a higher effective rate for broadband service, cross-subsidizing the activities of those who spend more time online.
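The cross-subsidy is easy to see with a toy calculation (all figures here are hypothetical, chosen only to make the arithmetic plain): under flat-rate pricing a light user pays a far higher effective per-gigabyte rate than a heavy user, while usage-based pricing charges both the same rate.

```python
# Toy illustration of the flat-rate cross-subsidy. The fixed cost and
# usage figures are hypothetical, not from the FCC or the NAF report.

FIXED_COST = 100.0                   # monthly network cost to recover ($)
usage = {"light": 10, "heavy": 90}   # GB consumed per month by each user

# Flat-rate pricing: every subscriber splits the fixed cost equally.
flat_bill = FIXED_COST / len(usage)
flat_rate_per_gb = {u: flat_bill / gb for u, gb in usage.items()}

# Usage-based pricing: bills are proportional to consumption.
per_gb = FIXED_COST / sum(usage.values())
usage_bill = {u: per_gb * gb for u, gb in usage.items()}

print(flat_rate_per_gb)  # light: $5.00/GB, heavy: ~$0.56/GB
print(usage_bill)        # light: $10, heavy: $90 — both at $1.00/GB
```

Both schemes recover the same $100, but the flat-rate scheme has the light user paying roughly nine times the heavy user's effective per-gigabyte rate.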
Usage-based pricing allows broadband providers to shift more of those fixed costs onto those who use the network the most. This strategy is known to economists as price discrimination, and despite its sinister-sounding name, it is a relatively common phenomenon. Airlines routinely charge different rates to students and businessmen; movie theaters charge the average movie-goer more than children or seniors; car dealers give a better price to consumers who haggle. In each case, two customers face different prices for the same product, based on their willingness to pay. The practice is common and uncontroversial.
In the broadband industry, price discrimination can help make broadband more affordable to low-income consumers. A new paper by economist Steve Wildman explains that usage-based pricing allows broadband providers to offer cheaper entry-level broadband plans with limited data use to customers who do not need unlimited access and cannot, or will not, pay the higher flat rate for a level of service they do not need. The lower margins on these plans are offset by greater margins on higher-use plans. In that way, usage-based pricing can promote greater broadband penetration, which is one of the most important goals of the FCC’s National Broadband Plan.
The NAF Report also accuses broadband providers of using caps to create artificial scarcity, in an effort to pad profits. This seems an odd critique: under any usage-based pricing model, the broadband provider is paid according to the amount a customer uses the system. Therefore it is in the provider’s interest to get the customer to use the network more, not less. But setting aside this basic critique, NAF’s argument is misplaced. A conspiracy to create scarcity to raise rates only works if the provider has market power: otherwise consumers will simply switch to a better deal elsewhere. But if the provider has market power, there is no need to go through this hypothetical scarcity kabuki dance: the provider can simply raise the price on the unlimited flat rate plan and pocket the additional revenue.
Similarly, Senator Wyden’s bill seems focused on the risk that broadband providers will use usage-based pricing to protect legacy cable services from over-the-top competitors such as Netflix. My Mercatus report addresses this argument at length. In short, some broadband providers have incentives to use data caps to harm competitors. But antitrust law protects competition, not competitors. Vertical restraints on trade can be harmful or beneficial to consumers, depending on the context. For example, AT&T’s exclusive agreement to carry the iPhone gave it an advantage over Verizon and other wireless providers, but this vertical restraint helped consumers by jumpstarting a sleepy smartphone industry and igniting the mobile broadband revolution.
Therefore the movement against usage-based pricing is misdirected. As I have explained before, it is not usage-based pricing that is the villain, but market power—and more specifically, the misuse of market power in ways that harm consumers. Antitrust law already provides a remedy for this harm. Senator Wyden proposes a broad ex ante prohibition on almost all forms of usage-based pricing. But consumers are better protected by antitrust law’s ex post enforcement against specific harmful practices. This remedy safeguards against abuse of usage-based pricing, while allowing broadband providers the freedom to experiment with different pricing strategies in innovative ways, which ultimately gives consumers more options and makes the network more efficient.






