Adam Thierer's Blog
December 1, 2010
The EU tightens the noose around Google
[Cross-posted at Truth on the Market]
Here we go again. The European Commission is after Google, this time more formally than a few months ago (though it has not yet issued a Statement of Objections).
For background on the single-firm antitrust issues surrounding Google I modestly recommend my paper with Josh Wright, Google and the Limits of Antitrust: The Case Against the Antitrust Case Against Google (forthcoming in the Harvard Journal of Law & Public Policy, by the way).
According to one article on the investigation (from Ars Technica):
The allegations of anticompetitive behavior come as Google has acquired a large array of online services in the last couple of years. Since the company holds around three-quarters of the online search and online advertising markets, it is relatively easy to leverage that dominance to promote its other services over the competition.
(As a not-so-irrelevant aside, I would just point out that I found that article by running a search on Google and clicking on the first item to come up. Somehow I imagine that a truly manipulative, monopolistic Google would do a better job of whitewashing the coverage if its ability to tinker with its search results were so complete.)
More to the point, these sorts of leveraging-of-dominance claims are premature at best and most likely woefully off-base. As I noted in commenting on the Google/AdMob merger investigation and similar claims from such antitrust luminaries as Herb Kohl:
If mobile application advertising competes with other forms of advertising offered by Google, then it represents a small fraction of a larger market and this transaction is competitively insignificant. Moreover, acknowledging that mobile advertising competes with online search advertising does more to expand the size of the relevant market beyond the narrow boundaries it is usually claimed to occupy than it does to increase Google's share of the combined market (although critics would doubtless argue that the relevant market is still "too concentrated"). If it is a different market, on the other hand, then critics need to make clear how Google's "dominance" in the "PC-based search advertising market" actually affects the prospects for competition in this one. Merely using the words "leverage" and "dominance" to describe the transaction is hardly sufficient. To the extent that this is just a breathless way of saying Google wants to build its business in a growing market that offers economies of scale and/or scope with its existing business, it's identifying a feature and not a bug. If instead it's meant to refer to some sort of anticompetitive tying or "cross-subsidy" (see below), the claim is speculative and unsupported.
The EU press release promotes a version of the "leveraged dominance" story by suggesting that
The Commission will investigate whether Google has abused a dominant market position in online search by allegedly lowering the ranking of unpaid search results of competing services which are specialised in providing users with specific online content such as price comparisons (so-called vertical search services) and by according preferential placement to the results of its own vertical search services in order to shut out competing services.
The biggest problem I see with these claims is that, well, they make no sense. First, if someone is searching for a specific vertical search engine on Google by typing its name into Google, it will invariably come up as the first result. If one is searching for price comparison sites more generally by searching in Google for "price comparison sites," lots of other sites top the list before Google's own price comparison site shows up. And if one is searching for a specific product and hoping to find price comparisons on Google, why on Earth would that person hope to find not Google's own price comparisons, built right into its search engine, but instead a link to another site, with several more steps before finding the information? As a practical matter, Google doesn't actually do this particularly well (not as well as Bing, in any case, where the link to its own shopping site almost always comes up first; on Google I often get several manufacturer or other retailer sites before Google's comparison shopping link appears further down the page).
But even if it did, it's hard to see how this could be a problem. The primary reason for this? Google makes no revenue (that I know of) from users clicking through to purchase anything from its shopping page. The page has paid search results only at the bottom (rather than the top, as on a normal search page), the information is all algorithmically generated, and retailers do not pay to have their information on the page. If this is generating something of value for Google, it is doing so only in the most salutary fashion: by offering additional resources for users to improve their "search experience" and thus induce them to use Google's search engine. Of course this should help Google's bottom line. Of course this makes it a better search engine than its competitors. These are good things, and the fact that Google offers effective, well-targeted and informative search results, presented in multiple forms, demonstrates its degree of innovation and effort (and the industry's as a whole)–the sort of effort that is typically born of vibrant competition, not the complacency of a fat, happy monopolist. The claim that Google's success harms its competitors should fall on deaf ears.
The same goes for claims that Google favors its own maps, by the way–to the detriment of MapQuest (paging Professor Schumpeter . . . ). Look for the nearest McDonald's in Google and a Google Map is bound to top the list (but not to be the exclusive result, of course). But why should it be any other way? In effect, what Google does is give you the Web's content in as accessible and appropriate a form as it can. By offering not only a link to McDonald's web site (along with various other links) but also a map showing the locations of the nearest restaurants, Google is offering up results in different forms, hoping that one is what the user is looking for. Why on Earth should Google be required to use someone else's graphical presentation of the nearby McDonald's restaurants rather than its own, simply because the presentation happens to be graphical rather than a typed list?
So what's going on?
First off, the EU is essentially taking up the argument put forth by (the EU's very own) Foundem in its complaint against Google. Foundem is a UK price comparison site. It claims that it was targeted by Google and demoted in Google's organic search results. Its argument is laid out here. But Google responds that it is simply applying its algorithm to the site (along with all other sites) and finds some things lacking. In fact, all Foundem does, in essence, is pull information from other sites and present it on its own. While in general this is little different from what Google does (although the quality of the information and its presentation may differ), from the point of view of a user who has already searched once in Google, the prospect of Google serving up sites that require the user to run duplicate searches in other search engines to find the information she is looking for would seem to be pretty poor. In part for this reason, Google disfavors sites in its searches that simply duplicate other sites' content. While Foundem may offer something more than the typical spam site that Google intends to block, this fact is not immediately obvious (and, for what it's worth, apparently Google was eventually convinced of the difference and has lifted the "penalty" formerly imposed on Foundem).
To make an antitrust claim out of this, one has to adopt a sort of "essential facilities" stance with respect to Google, in essence claiming that (Google's users' interests be damned) if Google is the only way users can get to its competitors' sites, it must provide that access. The essential facilities doctrine, dealt a near-death blow by the Supreme Court in Trinko, has long been on the ropes. As Areeda and Hovenkamp said of it, "the essential facility doctrine is both harmful and unnecessary and should be abandoned." That is true in this case, as in the others before it. For one thing, Google does not preclude, nor does it have the power to preclude, users from accessing Foundem's site: all they need do is type "www.foundem.com" into a web browser. To the extent that Google can and does (or did) limit Foundem's access to its search results page, it is not controlling access to an "essential facility" in any sense other than the sense in which Wal-Mart controls access to its own stores. "Google search results generated by its proprietary algorithm and found on its own web pages" is not a market to which access should be forcibly granted by the courts or legislature. While Europe takes a less critical view of the doctrine (see Microsoft), it shouldn't.
And as Josh has pointed out, Microsoft's fingerprints are all over these cases (see also here and here, where Microsoft Deputy General Counsel Dave Heiner essentially lays out the unfortunate state of play in this arena–a state of play that has ensnared Microsoft in the past). The relevance of which is just this: When the EU went after Microsoft itself, many of us decried the case in part as a witch hunt by competitors looking for advantage through regulatory means when they were unable to get it through innovation, marketing and the like. The case against Google in the EU looks to be following the same unfortunate pattern, and even the same unfortunate case law. Even if it is not true that the EU actually behaves in this fashion (indeed, appearances can be deceiving, sometimes a cigar is just a cigar, etc., etc.), it is costly to everyone that it is so widely perceived to do so. This case doesn't help matters. It has always been true that the Holy Grail (to its competitors) of a Section 2 (or dominance) case against Google was a substantive stretch but a near-inevitability nonetheless. But as Josh and I conclude in our paper:
Indeed, it is our view that in light of the antitrust claims arising out of innovative contractual and pricing conduct, and the apparent lack of any concrete evidence of anticompetitive effects or harm to competition, an enforcement action against Google on these grounds creates substantial risk for a "false positive" which would chill innovation and competition currently providing immense benefits to consumers.
The cost of poorly-considered, seemingly politicized, competitor-induced antitrust cases is substantial.







In Uncle Sam, You've Got a Friend… Who Wants Everybody's DNA
In the latest WikiLeaks data dump, around a quarter-million confidential American diplomatic cables were published online. "Cablegate," as it is being called, has revealed some rather startling information. Among the tech-relevant secrets: the State Department tasked agents with collecting DNA and other biometric information on foreigners of interest.
Specifically, U.S. officials were told that in addition to collecting "email addresses, telephone and fax numbers," they should also snap up "fingerprints, facial images, DNA, and iris scans." This directive makes the recent TSA scandal over airport full-body scanners seem like child's play.
Wired joked that this would explain to foreign leaders why the "chief of mission seemed a bit too friendly at the last embassy party."
Jokes aside, access to DNA information is potentially one of the most important privacy issues of the future.
In a world in which DNA sequencing is becoming exponentially faster and cheaper, it won't be long before it is possible to sequence everyone's genomes for medical purposes. Possession of an individual's DNA blueprint will be useful in fighting disease and in personalizing drugs and other therapies. Of course, as with any technology, DNA sequencing can be used for either good or evil purposes, so it will need to be used wisely.
[...]
Read the rest of my column here.







Brief Thoughts on today's Net Neutrality developments at the FCC
Late last night, FCC Chairman Julius Genachowski made explicit what he'd been hinting at for weeks–that he was going to call for a vote in December on the agency's long-running net neutrality proceedings.
Today, the Chairman gave a speech outlining a new version of the rules he has circulated to fellow Commissioners, which will be voted on at the Commission's Dec. 21, 2010 meeting.
The new order itself has not yet been made public, however, and the Chairman's comments didn't give much in the way of details. The latest version appears to reflect the proposed legislation circulated before the mid-term recess by then-Commerce chair Henry Waxman. That version, for those following the ball here, was itself based on the legislative framework proposed by Google and Verizon, which in turn emerged from informal negotiations convened over the summer at the FCC.
So in some sense the agency is moving, albeit non-linearly, toward some kind of consensus.
I have a brief article this morning in the Orange County Register laying out the pros and cons of this latest iteration, to the extent that is possible without seeing the order.
The timing of today's announcement, however, is significant. This was Genachowski's last chance to wrap up the proceedings before the new Congress, with its Republican House and more evenly divided Senate, clocks in. Republicans on their own don't have the votes to pass legislation blocking the FCC from voting on net neutrality later, but Republican leaders had threatened to use their oversight authority to put additional pressure on the FCC not to enact new neutrality rules.
That might still happen, of course, and already today several Republican leaders have promised to do whatever they can to undo today's developments. Assuming the Commission approves the rule at its December 21, 2010 meeting, there's also a strong likelihood of litigation challenging the rules and the FCC's authority to issue them.
So this is not the end of the net neutrality soap opera by any stretch of the imagination. If anything, it suggests a new chapter, one that will take the discussion farther away from the technical architecture of the Internet and the best interests of consumers and closer to pure political theater.







FCC Pulls a Fast One on Net Neutrality, Presenting New Regulations as Fait Accompli to GOP Congress
Last June, when the FCC was careening towards issuing net neutrality rules on its own authority, even on the heels of a tongue-lashing from the DC Circuit in the Comcast decision (holding that the agency lacked authority to impose net neutrality principles on broadband as a deregulated Title I service), Charlie Kennedy (a giant of telecom law who's a partner at Wilkinson Barker and was an adjunct at the now-defunct Progress & Freedom Foundation) made a bold prediction in a PFF paper. That prediction was, sadly, proven true today by the FCC's net neutrality proposal. Charlie wrote:
With no clear consensus to be "restored" and no compelling need to overturn the Commission's de-regulatory classification of Internet access under Title I, there is simply no need for the FCC to undertake—let alone rush—this proceeding. The timing of the NOI's release and the rapid comment schedule suggest that the agency is simply trying to ram reclassification through as quickly as possible so that the 112th Congress—which seems likely to be even more hostile than the current Congress to the imposition of net neutrality regulation by the FCC—will be presented in January with a regulatory fait accompli. If that regulatory endrun around Congress succeeds, it will be remembered for decades as a pivotal moment in the decline of the rule of law and the rise of a regulatory bureaucracy "freed … from its congressional tether," as the D.C. Circuit rightly denounced the FCC's jurisdictional over-reach in Comcast.
In essence, Genachowski is asserting that, despite what the pesky DC Circuit thinks, he already has the authority to impose net neutrality regulation—so reclassification of broadband from deregulated Title I to common-carriage Title II is unnecessary. Talk about an agency freed from its congressional tether! See you in court, Mr. Chairman! And while we duke this out in court, infrastructure investment will stagnate—and consumers will suffer. As Charlie predicted:
Reclassification under the "Third Way" will also be the beginning of the Internet's "Lost Decade" (or more) of stymied investment, innovation, and job creation as all sides do battle over the legality of reclassification and its implementation. To paraphrase President John Adams: "Great is the guilt of an unnecessary regulatory war."
And as Adam Thierer points out, so much for the democratic accountability of administrative agencies and the rule of law!







Net neutrality announcement: No reclassification, good; no details, bad
Written with Jerry Ellig.
Chairman Genachowski's net neutrality announcement today was very short on details. What we learned is that the Chairman plans to buck Congress and the courts in a drive to regulate broadband. He is proceeding against the wishes of hundreds of members of Congress from both parties who have written the FCC demanding that it not adopt net neutrality rules until Congress has an opportunity to review the matter. Also, since he has announced that he will not seek to reclassify broadband as a regulated telecommunications service, he seems to be resisting the D.C. Circuit Court of Appeals, which told the FCC earlier this year that it lacked the authority to regulate broadband.
Genachowski's remarks gave us only a thumbnail sketch of the rules he's advocating the FCC adopt. We don't know what authority would undergird the new rules, and we don't know what the chairman means when he says that the new rules would prohibit "unreasonable" discrimination against content by service providers. The devil is in those details, and they seemingly won't be available until the FCC adopts the rules at its December 21st meeting — days before a new Congress is sworn in.
While taking reclassification off the table is a welcome compromise from the chairman, we don't understand the rush to action. Why the midnight announcement last night? Why the limited announcement today? Why not allow the new Congress to take up the matter?
Now that he has chosen to act, however, it's "put up or shut up" time. Chairman Genachowski said broadband providers have incentives to act as gatekeepers to the Internet and have prevented consumers from using the applications of their choice in the past. But it takes more than these assertions to justify a new regulation. Any net neutrality order needs to offer a coherent, logical theory that explains why broadband providers face systematic incentives to act in non-neutral ways that have no offsetting consumer benefits. And it needs to back up that theory with rigorous empirical evidence that proves a widespread problem exists — not just a repetition of the same handful of anecdotes about bad actors.
It's heartening to see that Chairman Genachowski believes wireless broadband is at a different stage in its development and should be treated differently from landline broadband. But insisting that wireless is too different invites a sleight-of-hand trick that would allow the FCC to claim that broadband faces insufficient competition because wireless doesn't count. The commission has already done this in its National Broadband Plan, which dismisses third-generation wireless as a competitor because it allegedly isn't fast enough. This stacks the deck in favor of regulation by making it easier to claim that wireline broadband doesn't face enough competition.







FTC Endorses "Do Not Track" Information Control Regime for the Internet
This morning, the Federal Trade Commission (FTC) released its eagerly awaited Preliminary FTC Staff Report on Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers. As expected, the agency has generally endorsed an expanded regulatory regime to govern online data collection and advertising efforts in the name of protecting consumer privacy. More specifically, the agency endorsed a so-called "Do Not Track" mechanism that would supposedly help consumers block unwanted data collection or advertising. Here's how the agency describes it:
Such a universal mechanism could be accomplished by legislation or potentially through robust, enforceable self-regulation. The most practical method of providing uniform choice for online behavioral advertising would likely involve placing a setting similar to a persistent cookie on a consumer's browser and conveying that setting to sites that the browser visits, to signal whether or not the consumer wants to be tracked or receive targeted advertisements. To be effective, there must be an enforceable requirement that sites honor those choices. (p. 66)
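The report leaves the technical details open, but the mechanism it sketches is easy to picture. Here is a minimal sketch, in Python, of what a browser-conveyed tracking preference and a site that honors it might look like; the "DNT" header name, its values, and the server logic are my own assumptions for illustration, not anything specified in the staff report:

```python
# Illustrative sketch only: the FTC report does not specify a header name,
# values, or enforcement mechanism. "DNT: 1" is assumed here as a persistent,
# browser-wide signal sent with every request, analogous to the "setting
# similar to a persistent cookie" the report describes.

import urllib.request

def fetch(url: str, do_not_track: bool = True) -> bytes:
    """Request a page, conveying the user's tracking preference to the site."""
    req = urllib.request.Request(url)
    if do_not_track:
        req.add_header("DNT", "1")  # hypothetical: "1" means "do not track me"
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def choose_ad(request_headers: dict) -> str:
    """Server-side sketch: honoring the signal means no behavioral targeting."""
    if request_headers.get("DNT") == "1":
        return "contextual ad (no tracking profile consulted)"
    return "behaviorally targeted ad"

print(choose_ad({"DNT": "1"}))  # -> contextual ad (no tracking profile consulted)
```

Note that everything interesting happens in the second function: the signal itself is trivial to send, but "there must be an enforceable requirement that sites honor those choices," and that is where the regulatory machinery comes in.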
I'm sure we'll have plenty more to say here about the issue in coming weeks and months (comments on the FTC report are due by Jan. 31), but we've already commented on this proposal here before. See 1, 2, 3. To briefly summarize a few of those concerns:
Ironically, depending on how it's implemented, a "Do Not Track" mechanism could potentially require individuals to surrender more personal information about themselves to companies or the government for purposes of authentication and enforcement of the rule.
It would also require a re-architecting of the Internet and the potential regulation of every web browser to ensure compliance. This would give the FCC and other lawmakers far greater control over the Internet's architecture.
For that reason, one can easily imagine would-be Net censors using the "Do Not Track" mechanism as a blueprint to regulate other types of online speech.
One also wonders whether mandatory browser controls would open up a potential new back door for government surveillance snoops to exploit.
Most importantly, if "Do Not Track" really did work as billed, it could fundamentally upend the unwritten quid pro quo that governs online content and services: Consumers get lots of "free" sites, services, and content, but only if we generally agree to trade some data about ourselves and have ads served up. After all, as we've noted many times before here, there is no free lunch. The cornucopia of seemingly free services and content at our fingertips didn't just fall to Earth like manna from heaven. Data collection and advertising made that all happen. If we undercut this goose that lays the Internet's golden eggs, consumers could see charges on many services that they currently pay little to nothing for. Do you want to pay $20 a month for your favorite social networking site? A dime per search on your preferred search engine? Well, that's the future that could await us if we continue down this regulatory road.
Again, more analysis to come.







November 30, 2010
European Commission Should Leave Internet Search Alone
By Ryan Radia and Wayne Crews
Today, the European Commission opened a formal antitrust investigation into Google to probe allegations that the firm rigged its search engine to discriminate against rivals. This intervention in the online search market, however, will distort the market's evolution, discourage competitors from innovating, and ultimately hurt consumers.
Google isn't a monopoly now, but the more it tries to become one, the better it will be for us all. When capitalist enterprises strive to earn a bigger market share, rival firms are forced to respond by trying to improve their offerings. Even if Google is delivering biased search results, it is only paving the way for competitors to break into the search market.
The European Commission is wrong to assume that Google possesses monopoly power. Google accounts for just 6 percent of all dollars spent on advertising in Europe. And even loyal Google users regularly find websites through competing search engines like Bing or through social websites like Facebook and Twitter.
Before resorting to tired old competition laws, European policy makers should remember that the Internet economy is hardly understood by anybody—including by regulators. We are in terra incognita; no one knows how information markets will evolve. But one thing is for sure: Online search technology cannot evolve properly if it is improperly regulated. Why make risky investments in hopes of revolutionizing Internet markets if marvelous success means regulation and confiscation?
The real threat to consumers is not from successful high-tech firms like Google, but from overreaching government interventions into competitive market processes. As economists have documented in scholarly journals, antitrust intervention is especially problematic in the information age, because it severely underestimates the critical role of innovation in dynamic high-tech markets.
In the information age, ingenuity—not market power—is the key to success. America's high-tech sector is strewn with former market leaders who were no match for the relentless forces of creative destruction. Rapid, unpredictable change is the hallmark of the modern digital economy. Google may be on top in many high-tech markets today, but it won't stay there for long unless it keeps innovating and delivering a superior search product.







The promise and limits of e-rulemaking
Earlier today I spoke at the Brookings Institution event "The Future of E-rulemaking: Promoting Public Participation and Efficiency," which was co-sponsored with the Administrative Conference of the United States. I made two points: we have not yet achieved regulatory transparency, and wiki-government does not overcome Hayek's knowledge problem. What follows are my remarks.
When we talk about e-rulemaking, we often think about a first generation and a second generation of e-rulemaking.
The first generation is focused on making available online all of the information related to regulation and the rulemaking process, as well as making it simple for citizens to participate electronically in traditional rulemaking. In this way we improve the transparency and accountability of the regulatory process.
The second generation moves beyond the basics to leverage the new social technologies of the internet to increase citizen participation and enhance agency expertise. This is the exciting stuff of using Twitter and Facebook and wikis and collaborative commenting systems to achieve a truly democratic, efficient, and responsive rulemaking process. And while I'm very excited by the prospect of this transformation, I feel I have to suggest some caution.
For one thing, I'm not sure we have successfully graduated from the first generation. Less than two years ago we launched OpenRegs.com because Regulations.gov did not offer something as simple as RSS feeds and had a less than ideal user interface. Since then it has been much improved, but if we look at the recommendations of the ABA Administrative Law Section's report on e-rulemaking — in which so many of the folks I see here today participated — or the recommendations of OMB Watch's Task Force on e-rulemaking, we can see that we're a long way from where we should be to say that the first generation is complete.
The data that is made available online is often not standardized or structured in a meaningful way. And the interface for public interaction, in my opinion, could be greatly improved. Technology is not the problem. The technology exists — and freely in many cases — to vastly improve the accessibility and transparency of rulemaking dockets online. What's missing are institutional reforms to require meaningful transparency. That's why I'm happy to see Sen. Lieberman's efforts on the E-Rulemaking Act of 2010, which would address many of these concerns including how e-rulemaking is funded.
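To make concrete what structured, searchable formats would buy us, here's a small sketch of the kind of third-party tool an RSS-style docket feed would enable. The feed schema and element names below are my own invention for illustration; nothing like the standardized "commentDeadline" element exists in today's dockets, which is precisely the problem:

```python
# Hypothetical example: the element names below (notably <commentDeadline>)
# are invented to illustrate what a standardized, machine-readable
# rulemaking feed would enable for third-party developers.

import xml.etree.ElementTree as ET

def open_comment_periods(feed_xml: str):
    """Yield (title, comment deadline) for each docket item in an RSS feed."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        deadline = item.findtext("commentDeadline", default="unknown")
        yield title, deadline

# A toy feed standing in for what an agency might publish.
sample = """<rss version="2.0"><channel>
  <item><title>Proposed Rule: Example Docket</title>
    <commentDeadline>2011-01-31</commentDeadline></item>
</channel></rss>"""

for title, deadline in open_comment_periods(sample):
    print(f"{title}: comments due {deadline}")
```

With data like this, sites like OpenRegs.com could be built in a weekend; without it, every tool begins with brittle scraping.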
I'd also love to see reform of federal contracting rules to allow agencies to entice open source communities and the types of small firms that are at the cutting edge of web innovation to help them develop great tools for e-rulemaking. FDMS and Regulations.gov were originally developed by Lockheed Martin, and they are now operated under contract by Booz Allen Hamilton. I'd love to see the sorts of experiences that firms like 37signals or Adaptive Path would create given the opportunity.
Now, all of this is not to say that the first generation of e-rulemaking must be complete before we can begin experimenting with the second generation. But it does mean that, to the extent there are trade-offs, government should allocate resources to making sure we have access to all relevant rulemaking data in structured, searchable formats. If we're given the data, third parties will be able to begin the second-generation experimentation by employing social networks to increase awareness and collaborative tools to distill the wisdom of the crowds.
For example, look at the amazing work that Cynthia Farina has been doing with Cornell's e-Rulemaking Initiative. In partnership with the Department of Transportation, it has developed an experimental platform for citizen outreach through social media, human-moderated discussions, and collaboration on comments. With early access to data from DoT, it has been able to leverage networks like Facebook and Twitter, and off-the-shelf open source tools, including WordPress and Digress-It, to bring together hundreds of interested citizens to collaborate on two live rulemakings. If more agencies made more of their data available early and often in usable formats, I'd like to think we'd see more experiments like Cornell's.
Now, in keeping with the cautionary tone of my remarks, I have to also give a bit of a warning about the second generation of e-rulemaking.
Ideally we turn to regulation only when there is a market failure. One reason we prefer markets is that we recognize that regulators can't possibly have all the information possessed by the myriad individual market actors — information that is communicated by prices. I fear that because new technologies make it easy for regulators to tap into "the wisdom of the crowds," they may believe that they have solved what Friedrich Hayek called "the knowledge problem." That's a conclusion that we must resist.
Another thing the Cornell initiative's experience has taught us is that it is very difficult to engage ordinary citizens in a rulemaking, much less to get them to make useful contributions, and that doing so is very labor-intensive. Now, I understand that Wikipedia is written and edited by an incredibly small fraction of its users, and yet they're able to build a remarkable resource. But this small number of Wikipedians forms a persistent community that has developed over the course of almost a decade, with clear norms and a real culture. While the peer production of knowledge to improve regulation no doubt shows promise, we should understand that we have not solved the knowledge problem, and may never be able to do more than marginally improve regulations.
So let's focus on finishing the first step toward the promise of e-rulemaking — greater online transparency — so that we can facilitate experimentation toward the next.







Peter Thiel on the stagnation of technological innovation
On the podcast this week, Peter Thiel, co-founder of PayPal, early investor in Facebook, and president of Clarium Capital, discusses the stagnation of technological innovation. Thiel gives reasons why innovation has slowed recently — offering examples of stalled sectors such as space exploration, transportation, energy, and biotechnology — while pointing out that growth in internet-based technologies is a notable exception. He also comments on the political undercurrents of Silicon Valley, government regulation, privacy and Facebook, and his new fellowship program that will pay potential entrepreneurs to "stop out" of school for two years.
Related Links
"Technology = Salvation", Wall Street Journal interview with Thiel
"Investor Peter Thiel asks Silicon Valley: Where's the innovation?", L.A. Times
"The Education of a Libertarian", by Thiel
"Peter Thiel Makes Down Payment on Libertarian Ocean Colonies", Wired
To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







November 29, 2010
How Should Libertarians Think about The Master Switch?
Former TLF blogger Tim Lee returns with this guest post. Find him most of the time at the Bottom-Up blog.
Thanks to Jim Harper for inviting me to return to TLF to offer some thoughts on the recent Adam Thierer-Tim Wu smackdown. I've recently finished reading The Master Switch, and I didn't have my friend Adam's viscerally negative reactions.
To be clear, on the policy questions raised by The Master Switch, Adam and I are largely on the same page. Wu exaggerates the extent to which traditional media has become more "closed" since 1980, he is too pessimistic about the future of the Internet, and the policy agenda he sketches in his final chapter is likely to do more harm than good. I plan to say more about these issues in future writings; for now I'd like to comment on the shape of the discussion that's taken place so far here at TLF, and to point out what I think Adam is missing about The Master Switch.
Here's the thing: my copy of the book is 319 pages long. Adam's critique focuses almost entirely on the final third of the book (pages 205-319), in which Wu tells the history of the last 30 years and makes some tentative policy suggestions. If Wu had published pages 205-319 as a stand-alone monograph, I would have been cheering along with Adam's response to it.
But what about the first 200-some pages of the book? A reader of Adam's epic 6-part critique is mostly left in the dark about their contents. And that's a shame, because in my view those pages not only contain the best parts of the book but are also its most libertarian-friendly.
Those pages tell the history of the American communications industries—telephone, cinema, radio, television, and cable—between 1876 and 1980. Adam only discusses this history in one of his six posts. There, he characterizes Wu as blaming market forces for the monopolization of the telephone industry. That's not how I read the chapter in question. Although Wu certainly suggests that market forces tended toward consolidation (which seems obviously correct), he also makes it clear that the government played an active role in the process, through the patent system, the Kingsbury Commitment, turning a blind eye to industrial sabotage, and later through explicit pro-monopoly regulation. Adam's only specific quibble with Wu's history is his failure to mention the nationalization of the telephone network during World War I. Maybe that's an important oversight, but I'm not sure it would have changed Wu's story very much. Certainly I think characterizing this section of the book as an anti-free-market screed is unfair.
The Master Switch takes an even more explicitly libertarian tone in its discussion of broadcasting. Wu makes it plain that everything about the radio (and later television) industries post-1927 was the result of heavy-handed government regulation. He tells how federal regulations robbed the inventor of FM radio of the opportunity to commercialize his invention, and how the FCC delayed the introduction of television by more than a decade to give RCA (then the dominant radio firm) time to perfect its own television technology.
It's easy to imagine chapters 5, 9, and 10 being published by Cato or the Mercatus Center. Consider, for example, this passage describing the FCC's decision to delay the introduction of television (p. 144):
Consider for a moment the oddness of this phenomenon in the putatively free-market economy. The government was deciding, in effect, when a product that posed no hazard to the public health would be "ready" for sale. Consider, too, how incongruous this was in a society under the First Amendment: a medium with great potential to further the exercise of free speech was being stalled until such time as the government could agree it had attained an acceptable technical standard. Rather than letting the market decide what a technology in its present state was worth, a federal agency—not even a democratically elected body—was to forbid its sale outright.
Whatever else you might say about this passage, it's certainly not blaming anything on market forces!
One of Wu's central points is that during the 20th century, the communications policy world was divided along different ideological lines. On one hand were the champions of monopoly and central planning—Wu chooses legendary AT&T president Theodore Vail as its intellectual father. On the other hand were champions of choice and competition. It's worth emphasizing that Adam and Wu are on the same side of this ideological battle. In 1930, 1950, or 1970, all of us would have been teaming up to oppose monopolistic regulations.
We would have regarded AT&T, RCA, and other state-sponsored monopolists as our common enemy. If we'd submitted amicus briefs in the Carterfone or MCI proceedings, we would have made largely the same arguments. Of course, we wouldn't have agreed perfectly on our long-term policy agenda, but we would have regarded that as a relatively minor area of disagreement compared to the pressing problem of repealing blatantly monopolistic government policies and bringing some degree of competition to communications markets. And for most of the 20th century we would have been the underdogs. In 1950, the monopolists were not only utterly dominant in Washington, D.C., but their ideology still had a great deal of cachet with the intellectual class.
Vail's corporatist ideology has fallen so far out of favor that today it's hard to find anyone who's willing to defend it forthrightly. The remnants of the once-great monopolists have been forced to adopt the rhetoric of the free market and pretend to care about choice and competition. And it's only in this new intellectual environment that Adam can plausibly portray Wu as a "cyber-collectivist" at the opposite end of the ideological spectrum from Adam and me. The Master Switch reminds us that much less separates Adam from Wu than separates either of them from Theodore Vail and David Sarnoff.
Adam began his first post by stating that he "disagrees vehemently with Wu's general worldview and recommendations, and even much of his retelling of the history of information sectors and policy." This is kind of silly. In fact, Adam and Wu (and I) want largely the same things out of information technology markets: we want competitive industries with low barriers to entry in which many firms compete to bring consumers the best products and services. We all reject the prevailing orthodoxy of the 20th century, which said that the government should be in the business of picking technological winners and losers. Where we disagree is over means: we classical liberals believe that the rules of property, contract, and maybe a bit of antitrust enforcement are sufficient to yield competitive markets, whereas left-liberals fear that too little regulation will lead to excessive industry concentration. That's an important argument to have, and I think the facts are mostly on the libertarians' side. But we shouldn't lose sight of the extent to which we're on the same side, fighting against the ancient threat of government-sponsored monopoly.
My friend Kerry Howley coined the term "state-worship" to describe libertarians who insist on making the government the villain of every story. For most of history, the state has, indeed, been the primary enemy of human freedom. Liberals like Wu are too sanguine about the dangers of concentrating too much power in Washington, D.C. But to say the state is an important threat to freedom is not to say that it's the only threat worth worrying about. Wu tells the story of Western Union's efforts to use its telegraph monopoly to sway the election of 1876 to Republican Rutherford B. Hayes. That effort would be sinister whether or not Western Union's monopoly was the product of government interference with the free market. Similarly, the Hays code (Hollywood's mid-century censorship regime) was an impediment to freedom of expression whether or not the regime was implicitly backed by the power of the state. Libertarians are more reluctant to call in the power of the state to combat these wrongs, but that doesn't mean we shouldn't be concerned with them.
By casting every argument in terms of a Manichean struggle between "cyber-libertarians" and "cyber-collectivists," Adam misses a lot of the value of The Master Switch. Many of the stories Wu tells are too complicated to fit comfortably at either end of the free-market-vs-regulation spectrum. For example, until I read The Master Switch, I didn't realize how important, and harmful, patents were to the early development of communications markets. Should these stories make libertarians more skeptical of patent law? I'd be interested to hear Adam's take, but he was too busy railing against Wu's alleged cyber-collectivism to discuss the topic.







