Adam Thierer's Blog, page 149
January 14, 2011
new paper: "Unappreciated Benefits of Advertising and Commercial Speech"
Today the Mercatus Center has released a short new paper I have authored on "Unappreciated Benefits of Advertising and Commercial Speech." I begin the piece by noting that:
Federal policy makers, state legislators, and state attorneys general have recently shown interest in regulating commercial advertising and marketing. Several new regulatory initiatives are being proposed, or are already underway, that could severely curtail or restrict advertising or marketing on a variety of platforms. The consequences of these stepped-up regulatory efforts will be profound and will hurt consumer welfare both directly and indirectly.
I go on to note that "advertising can be an easy target for politicians or regulatory activist groups who make a variety of (typically unsubstantiated) claims about its negative impact on society," but then continue on to explain how "the role of commercial speech in a free-market economy is often misunderstood or taken for granted." I outline how, despite regulators' concerns, consumers actually derive three important types of benefits from advertising and marketing: (1) Informational / Educational Benefits; (2) Market Choice / Pro-Competitive Benefits; and (3) Media Promotion / Cross-Subsidization. After discussing each benefit, I conclude that:
For these reasons, a stepped-up regulatory crusade against advertising and marketing will hurt consumer welfare since it will raise prices, restrict choice, and diminish marketplace competition and innovation—both in ad-supported content and service markets, and throughout the economy at large. Simply stated, there is no free lunch.
Read the entire 1,800-word essay here. I have also embedded the document below in a Scribd reader.
Unappreciated Benefits of Advertising and Commercial Speech (Adam Thierer – Mercatus Center)

Adobe Improves Privacy Controls Before Regulators Can Saddle Up
Via @csoghoian (who can be wrathful if you don't attribute), Adobe buries the lede in its blog post about privacy improvements to the Flash player. They're working with the most popular browser vendors on integrating control of "local shared objects"—more commonly known as "Flash cookies"—into the browser interface. Users' control of Flash cookies will soon be similar to their control of ordinary cookies.
It doesn't end there:
Still, we know the Flash Player Settings Manager could be easier to use, and we're working on a redesign coming in a future release of Flash Player, which will bring together feedback from our users and external privacy advocates. Focused on usability, this redesign will make it simpler for users to understand and manage their Flash Player settings and privacy preferences. In addition, we'll enable you to access the Flash Player Settings Manager directly from your computer's Control Panels or System Preferences on Windows, Mac and Linux, so that they're even easier to locate and use. We expect users will see these enhancements in the first half of the year and we look forward to getting feedback as we continue to improve the Flash Player Settings Manager.
Mysterious, sinister "Flash cookies" were Exhibit A in the argument for a Do Not Track regulation. There is no way that people can cope with the endless array of tracking technologies advertisers are willing to deploy, the argument went, so the government must step in, define what it means to be "tracked," and require it to stop—without kneecapping the free Internet. (Good luck with that!)
But Flash cookies are now quickly taking their place as a feature that users can control from the browser (or OS), customizing their experience of the Web to meet their individual privacy preferences. This is not a panacea, of course: People must still be made aware of the importance of controlling Flash cookies, as well as regular cookies. New tracking technologies will emerge, and consumer-friendly information controls meeting those challenges will be required in response.
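For the technically curious, Flash cookies are not mysterious at all: they are just `.sol` files sitting on disk. Here is a minimal sketch that inventories them; the storage paths are the conventional defaults for typical installs and may vary by platform and Flash version, so treat them as assumptions:

```python
import os
from pathlib import Path

# Conventional default locations for Flash "local shared objects" (.sol files).
# These paths are assumptions based on typical installs; they may differ on
# your system or with other Flash Player versions.
CANDIDATE_DIRS = [
    Path.home() / ".macromedia" / "Flash_Player" / "#SharedObjects",  # Linux
    Path(os.environ.get("APPDATA", "")) / "Macromedia" / "Flash Player" / "#SharedObjects",  # Windows
    Path.home() / "Library" / "Preferences" / "Macromedia" / "Flash Player" / "#SharedObjects",  # Mac
]

def list_flash_cookies():
    """Return (domain, filename) pairs for every .sol file found."""
    found = []
    for base in CANDIDATE_DIRS:
        if not base.is_dir():
            continue
        for sol in base.rglob("*.sol"):
            # Layout is typically <random-id>/<domain>/.../<name>.sol
            rel = sol.relative_to(base).parts
            domain = rel[1] if len(rel) > 2 else "(unknown)"
            found.append((domain, sol.name))
    return found

if __name__ == "__main__":
    for domain, name in sorted(list_flash_cookies()):
        print(f"{domain}: {name}")
```

Deleting those files (or blocking the directories) is exactly the kind of control the browser integration will surface for ordinary users.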
But if this is what the drawn-out "war" against tracking technologies looks like, color me pro-war!
In a few short months, Adobe has begun work on the controls needed to put Flash cookies under people's control. The Federal Trade Commission—prospective imposer of peace through complex, top-down regulation—took more than a year to produce a report querying whether a Do Not Track regulation might be a good idea. This problem will essentially be solved (and we'll be on to the next one) before the FTC would have gotten saddled up.
Yes, Adobe may have acted because of the threat of damaging government regulation. That seems always to be what gets these companies moving. Of course it does, when the primary modus operandi of privacy advocacy is to push for government regulation. Were the privacy community to work as assiduously on boycotts as it does on acting through intermediary government regulators, change might come even faster.
We could do without the standing army of regulators. Having a government sector powerful enough to cow the business sector is costly, both in terms of freedom and tax dollars.
With the failure of Do Not Track, the vision of a free and open Internet—populated by aware, empowered individuals—lives on.

January 13, 2011
Old Future Shock Documentary Reflects on Same Pessimistic Fears We Hear about Today
Over at the Brain Pickings blog, Maria Popova has posted an amazing 1972 documentary based on Alvin Toffler's famous 1970 book, Future Shock. The documentary, like the book, focuses on many of the themes we hear Internet optimists and pessimists debating all the time today: "information overload," excessive consumerism, artificial intelligence and robotics, biotechnology, cryonics, the nature of humanity and how technology impacts it, and so on. It's all the same stuff people are still fighting about today.
Popova correctly notes that "The film, darkly dystopian and oozing techno-paranoia, is a valuable reminder that… societies have always feared new technology but ultimately adapted to it." Indeed, at one point in the film we hear, "The future has burst upon us… [but] is technology always desirable?" And that's just in reference to the (now-obsolete) supersonic jet transport, the Concorde! "Changes bombard our nervous systems, clamoring for decisions. New values, new technologies, flood into our lives… Escape from change in today's society become more and more impossible. But change itself is out of control." Geez… how did we ever make it past 1972?
The documentary is narrated by Orson Welles, which makes it even more fun. Welles had a presence that just made everything seem larger than life, and his voice-of-God narration here really added a nice touch to this film.
It's an absolutely great find. Here's the first 10-minute segment from the documentary. Watch all five segments over at Brain Pickings.

January 12, 2011
Understanding the Costs of Regulation
My colleague Dr. Richard Williams, who serves as the Director of Policy Research at the Mercatus Center, has just released an excellent little primer on "The Impact of Regulation on Investment and the U.S. Economy." Those who track and analyze regulation in the communications and high-tech arenas will find the piece of interest, since it provides a framework for evaluating the sensibility of new rules.
Williams, who is an expert in benefit-cost analysis and risk analysis, opens the piece by noting that:
The total cost of regulation in the United States is difficult to calculate, but one estimate puts the cost at $1.75 trillion in 2008. Total expenditures by the U.S. government were about $2.9 trillion in 2008. Thus, out of a total of $4.6 trillion in resources allocated by the federal government, 38% of the total is for regulations.
If regulations always produced goods and services that were valued as highly as market-produced goods and services, then this would not be a cause for alarm. But that is precisely what is not known. In fact, there is evidence to the contrary for many regulations. Where regulations take resources out of the private sector for less valuable uses, overall consumer welfare is diminished. … Regulation also impacts the creation and sustainability of jobs… [which] can have very real consequences for the economy.
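As a quick sanity check on the figures as quoted (note that Williams rounds the $4.65 trillion total down to $4.6 trillion):

```python
# Figures as quoted from Williams's primer (2008 dollars).
regulation_cost = 1.75e12   # one estimate of total regulatory cost
federal_outlays = 2.9e12    # total U.S. government expenditures

total = regulation_cost + federal_outlays
share = regulation_cost / total

print(f"${total/1e12:.2f} trillion total; regulation is {share:.0%}")
# $4.65 trillion total; regulation is 38%
```

So regulation's share of the resources allocated by the federal government does indeed come out to roughly 38 percent.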
He also explains how regulation can affect international competitiveness, especially when burdensome rules limit the ability of companies to attract capital for new innovations and investment.
Of particular interest to students of communications and high-tech policy will be Williams' discussion of the dangers of uncertainty in markets. He notes:
Two types of uncertainty can affect decisions by firms to invest: (a) uncertainty about demand for their products (demand uncertainty) and (b) uncertainty about factor costs (labor and capital) (factor uncertainty). Major regulations—such as those recently authorized regarding financial services, health care, or greenhouse gas rules—can affect both demand and factor uncertainty.
Such regulatory uncertainty has long plagued tech and telecom markets, and it's one of the reasons many of us are concerned about new, open-ended Net neutrality mandates. The FCC's new Net neutrality regime leaves so much unbounded discretion to the agency, and opens up the potential for so much litigation and rent-seeking, that one cannot help but believe it will have a deleterious impact on investment and innovation over the long haul.
Finally, Williams outlines "four questions that will help to identify regulations that may not meet the standards for sound regulations and therefore deserve further scrutiny":
Systemic Problem: How well does the analysis identify and demonstrate the existence of a market failure or other systemic problem the regulation is supposed to solve? If an agency scores poorly on this, there is no evidence that the agency is addressing a real social problem as opposed to regulating for other reasons.
Alternatives: How well does the analysis assess the effectiveness of alternative approaches? If an agency has not identified and analyzed a number of approaches, it may mean the agency has settled on an approach without ever knowing if there are more effective ways to solve the problem.
Benefit-cost analysis: How well does the analysis assess costs and benefits? If an agency has done a poor job on this, it may mean that there is no theory or evidence that the regulation will solve the problem or do so at a reasonable cost.
Net Benefits: Did the agency maximize net benefits or explain why it chose another option? If an agency cannot or chooses not to explain why it has not chosen the option that maximizes net benefits for society, the agency may have ignored the evidence that its analysis has produced.
I think it's safe to say that the FCC often falls short in satisfying many of these tests.
Anyway, make sure to read Richard's entire piece. You can download it here.

Privacy Community Wracked by Controversy—Are DoJ Officials Mustache-Twisting Enemies of Privacy?
I've been bemused by a minor controversy about remarks Ryan Calo of Stanford University made to a New York Times reporter for this story on Internet privacy and government access.
"When your job is to protect us by fighting and prosecuting crime, you want every tool available," said Ryan Calo, director of the consumer privacy project at the Center for Internet & Society at Stanford Law School. "No one thinks D.O.J. and other investigative agencies are sitting there twisting their mustache trying to violate civil liberties. They're trying to do their job."
That apparently didn't sit well in some corners of the privacy community, and Calo felt obligated to explain the comment as though he had implied that DoJ efforts to undercut privacy should not be resisted. He hadn't.
But evidently some people do think DoJ officials, or some relevant segment of them, are mustache-twisting privacy-haters. There are a few genuine oddballs committed to undercutting privacy, but it's not worth casting aspersions on the entire security bureaucracy because of these few.
I believe the motivations of the vast majority of DoJ officials are good. They feel a real sense of honor from doing their self-chosen task of protecting the country from various threats. On average, they'll likely weigh security and safety more heavily than the average privacy advocate or civil libertarian. Because they don't think about privacy as much, they may not understand as well what privacy is and how to protect it consistent with pursuing justice. These are all good-faith reasons why DoJ officials may undervalue and, in their work, undercut privacy. It is not necessary to believe that a dastardly enemy sits on Constitution Avenue mocking the document that street is named after.
The theory of the evil DoJ official says more about the theoretician than the DoJ. Experience in Washington has shown me that incompetence is almost always a better explanation than malice. (That's not very nice, talking about "incompetence," but there are some DoJ officials who lack competence in the privacy area.) Some people apparently need a dramatic story line to motivate themselves.
I'm sure it feels good to cast oneself as a white hat facing down a team of secretive, nefarious, government-sponsored black hats. But this mind-set gives away strategic leverage in the fight for privacy. The story is no longer how to protect privacy; it's who is bad and who is good. Everyone (everyone thoughtful about messaging and persuasion, anyway) recognizes that Wikileaks veered off course by letting Wikileaks itself and Julian Assange become the story. We're not having the discussion we should have about U.S. government behavior because of Assange's self-regard.
I agree with my privacy brethren on the substance of the issues, but those who have similar self-regard, who insist on good-vs.-evil framing in order to cast themselves as heroic—they are closing the ears of DoJ officials they might reach and giving away opportunities to actually improve protections for privacy in the country.

The GOP Can Eclipse Obama on Transparency
So I say in Politico today. Highlights:
During his first two years in office, the president generated a lot of heat in the transparency area — but little sunlight. House Republicans can quickly outshine Obama and the Democratic Senate. It all depends on how they implement the watch phrase of their amendment package: "publicly available in electronic form."
. . .
The House can reach the gold standard for transparency if its new practices make introducing a bill and publishing the bill online the same thing. Moving a bill out of committee and posting the committee-passed version as online data must also be the same thing. Voting on a bill and publishing all data about the vote online must be standard procedure.
. . .
The transparency community owes it to Congress to say how it wants to get the data.
Of course, I've fooled you just a little bit. The whole thing is a highlight! (ahem) Read it.

"Preserving the Internet," But Which One?: Reading the FCC's Net Neutrality Order (Part IV)
This is Part IV of a five-part commentary on the FCC's Dec. 23, 2010 "Open Internet" Report and Order.
Part I looked at the remarkably weak justification the majority gave for issuing the new rules.
Part II explored the likely costs of the rules, particularly the undiscussed costs of enforcement that will be borne by the agency and accused broadband access providers, regardless of the merits. (See Adam Thierer's post on the first attenuated claim of violation, raised before the rules even take effect.)
Part III compared the final text of the rules to earlier drafts and alternative proposals, tracing the Commission's changing and sometimes contradictory reasoning over the last year.
Part IV, (this part), looks at the many exceptions and carve-outs from the rules, and what, taken together, they say about the majority's dogged determination to see the Internet as it was and not as it is or will become.
Part V will review the legal basis on which the majority rests its authority for the rules, likely to be challenged in court.
What does an Open Internet mean?
The idea of the "open Internet" is relatively simple: consumers of broadband Internet access should have the ability to surf the web as they please and enjoy the content of their choice, without interference by access providers who may have financial or other anti-competitive reasons to shape or limit that access.
In the act of trying to translate that idea into enforceable rules—enforceable, inexplicably, by a federal regulatory agency with no legislative authority over any substantial feature of the Internet economy and no real justification for creating rules of any kind for a system that is working nearly flawlessly so far—the FCC has found itself tied in unholy knots.
The rules as enacted carved out exceptions and caveats that, taken together, render the final regulations not meaningless but certainly incoherent.
In exempting from the rules a host of important innovations in network management and infrastructure optimization developed over the last decade, the FCC has stepped back from the brink of its original plan, which would have returned the Internet to the days of unreliable dial-up access and static websites.
But it has also revealed the danger of trying to regulate a rapidly-evolving life form, and risked the unintended consequence of denying it future forms of nutrition and good health. If these rules stand and are vigorously enforced, the Internet's further growth and development may be stunted.
The Mythical Neutrality Principle
Back in the stone age of 1998, I wrote in "Unleashing the Killer App" that one of the fundamental bases on which the Internet became an engine of innovation and even social change was that its basic protocols are non-proprietary. Anyone can make use of them, any device can support them, and every node is a peer—without paying royalties or other tribute to anyone. As the "lowest common denominator" standard, TCP/IP benefited from network effects to overtake several popular proprietary standards, including IBM's SNA.
The technical and legal openness of TCP/IP has been romanticized over the years, particularly by legal scholars and journalists who know less about technology than they think they do, into a view of the Internet as a Platonic ideal; a vehicle for true collaboration and consciousness-raising. The web was nothing less than the fruition, as Tim O'Reilly put it, of "what we were talking about at Esalen in the '70s—except we didn't know it would be technology-mediated."
The ideal of neutrality—of a level playing field in which every website, application, and device is no more prominent than any other—is a persistent and compelling myth. It evokes the heroism of the entrepreneur in the garage, developing the next Yahoo or Google or YouTube or Facebook or Twitter or Groupon, with little more than a great idea, technical skills, and the willingness to sacrifice sleep and social life for the promise of a future liquidity event: optimally, the great IPO. Or perhaps the goal is to change the world and make it a better place by connecting people and information in new and unexpected ways. Wikipedia, for example.
Whatever the motivation, after a grueling race against the clock, the app is released. If all goes well, it reaps the benefit of Metcalfe's Law, goes viral, and becomes the next Big Thing, all in the span of time between one SXSW conference and the next Web 2.0 Summit.
No large corporation can stop the plucky inventor, or ransom a part of her invention. No access provider can hold its invaluable user base hostage. No competing content provider, no matter how giant, can buy up all the available market channels and freeze out the upstart start-up. No government regulator need approve or license the invention before human testing and general use can begin.
When Worlds Collide
A considerably more mundane version of that ideal world did exist in the last half of the 1990s. It still exists today. But it has become much more complex and nuanced in the last decade.
The Internet, the Web, the Cloud and the app-based economy of wireless computing devices, TVs and increasingly other things (including cars and other non-traditional computing platforms such as consumer electronics and home appliances) have evolved in interesting and productive ways, often "under the covers" of the network infrastructure.
Few consumers know or would care to know about the existence, let alone the details, of network optimization algorithms, content delivery networks, complex peering arrangements, caching and edge servers, file torrenting, mirror sites, specialized services, virtual private networks, packet prioritization based on media type, spam and other malware filters, dynamic IP addresses or domain name redirection.
All of these (and more) are mechanisms for speeding up the delivery of the most popular or the most bandwidth-intensive content. Many have been developed by entrepreneurs or by the large access and hosting services, often working in concert with the voluntary protocol and technical committees of the Internet Society.
ISOC keeps the standards alive, flexible, and responsive to new opportunities for expansion and reinvention made possible through the agency of Moore's Law, which continues to drive the basic technological components of digital life into the uncharted realm of the faster, cheaper, and smaller.
Strictly speaking, of course, all of these innovations violate the neutrality principle. They recognize that some packets, whether because of file size or popularity or media characteristics or importance to the recipient, require special treatment in transport from host to client.
Video (YouTube, Hulu, Netflix), for example, can consist of very large files, and the component packets must arrive at their destination with relatively short delays in order to maintain the integrity of streaming display.
Hosted services, such as medical monitoring, use parts of the same infrastructure as the public Internet, but cannot safely be left to the normal ebb and flow of Internet traffic patterns. Limitations of the 3G wireless infrastructure—in large part a result of regulatory restrictions on cell siting and spectrum mismanagement—make it difficult to satisfy exploding customer demand for ever more of the most bandwidth-intensive apps.
When all is said and done, the core problem with the FCC's Open Internet Report and Order comes down to a clash of the idealized view of the neutral Internet with the reality of an always-evolving, always-improving technology infrastructure.
Chairman Genachowski, himself a former venture capitalist, is clinging to the myth of the Internet as virtual frontier, an understandable but highly dangerous indulgence in nostalgia, a remembrance of Internets past. He's not alone. The romance of the American west has persisted more than a hundred years since historian Frederick Jackson Turner famously declared the frontier closed.
As he said in introducing the Open Internet proceeding in September, 2009, shortly after taking office:
"The Internet's creators didn't want the network architecture — or any single entity — to pick winners and losers. Because it might pick the wrong ones. Instead, the Internet's open architecture pushes decision-making and intelligence to the edge of the network — to end users, to the cloud, to businesses of every size and in every sector of the economy, to creators and speakers across the country and around the globe. In the words of Tim Berners-Lee, the Internet is a 'blank canvas' — allowing anyone to contribute and to innovate without permission."
Many of us fortunate enough to have been there at the moment the Internet reached its tipping point and became an unstoppable force, a kind of network gravity, share this nostalgia. It was a moment that changed the trajectory of computing, upended giants, and unleashed tremendous creativity. For me, it utterly transformed my career, much as my first FORTRAN course as an undergraduate had unintentionally started it.
But the effort to translate nostalgia into federal law—assuming, but only for the moment, that the FCC is the appropriate agency to preserve an Internet that has long since passed even if it was ever the way we old-timers remember it—has already fallen down more than its fair share of abandoned mine shafts.
The Exceptions that Expose the Rule
Even the original Notice of Proposed Rulemaking and draft order released for comment in October, 2009 included many (necessary) exceptions from strict adherence to the neutrality principle.
Most important, the proposed rules subjected all six neutrality rules (§§ 8.5-8.15) to an exception for "reasonable network management." Reasonable network management was defined as all "reasonable practices" broadband Internet access providers undertook to, among other things, "reduce or mitigate the effects of congestion on the network or to address quality-of-service concerns." (§ 8.3) And bowing to legal limits on neutrality, reasonable network management did not cover efforts by broadband access providers to "address unlawful conduct on the Internet," including unlicensed sharing of copyrighted content. (¶ 139)
In explaining "reasonable network management," (¶¶ 135-141), the FCC acknowledged that the technology by which a user accessed the Internet could play a significant role in determining when a provider could act "inconsistently" with the neutrality principle but still not violate the rules. Access over coaxial cable follows a different architecture—with different constraints—than fiber, copper, satellite, or cellular access. For purposes of "quality of service," the agency acknowledged that it might be appropriate for an access provider to implement a "network management practice of prioritizing classes of latency-sensitive traffic," such as VoIP, gaming, and streaming media traffic. (¶137)
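The class-based prioritization the Commission alludes to is visible even at the application layer. Here is a minimal sketch, purely illustrative and in no way something the order mandates, of an application tagging its own traffic as latency-sensitive via the IP DiffServ field; whether any router along the path honors the marking is entirely up to each network's policy:

```python
import socket

# DSCP 46 ("Expedited Forwarding") is the conventional DiffServ marking for
# latency-sensitive traffic such as VoIP. The DSCP value occupies the top six
# bits of the IP TOS byte, so we shift left by two: 46 << 2 == 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF marking; routers
# configured for DiffServ may queue it ahead of bulk traffic.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))  # typically 0xb8 on Linux
sock.close()
```

The point is that "prioritizing classes of latency-sensitive traffic" is routine plumbing, not an exotic scheme, which is exactly why the agency had to carve it out.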
Since the FCC has up until now had little role to play in the regulation of the Internet, it's not surprising that the agency began this process with a highly outdated view of how the Internet "worked." So the NPRM, here and in eighty other sections, sought comment on the current state of the Internet ecosystem, the technologies of broadband access, network management principles in place, and the nature of the broadband access market throughout the U.S.—the latter a subject the agency took up again in the National Broadband Plan.
Not surprisingly, the FCC heard plenty. The final report lists over 450 sources of comments and replies to the NPRM, many of which addressed themselves to educating the FCC on the technologies it had undertaken to regulate.
As a result of this formal (and no doubt a great deal of informal) feedback, the final rules added numerous additional exceptions, authorizing a wide range of ways a provider of broadband Internet access could act "inconsistently" with the neutrality principle but still not be thought to have violated them.
The new exceptions include:
Exemption from many of the rules for all providers of mobile broadband Internet access, including the "no unreasonable discrimination" rule and some of the "no blocking" rule. (§§ 8.5, 8.7)
Explicit exemption from the "no blocking" rule for app stores and other control mechanisms used by mobile broadband providers. (¶ 102)
A change from a strict "nondiscrimination" rule for wireline providers to a rule prohibiting only "unreasonable discrimination." (§ 8.7) (See Part III for a discussion of the difference between those two formulations.)
A limited definition of "broadband Internet access service" that applies the rules only to providers of a "mass market retail service" providing "the capability to transmit data to and receive data from all or substantially all Internet endpoints." (§ 8.11(a)) That change leaves out a range of relatively new Internet devices and services—including the Amazon Kindle, game consoles, cars, TVs and refrigerators—that offer some form of web access incidental to their main purpose in connecting to the network. (See ¶ 47)
A broader definition of "reasonable network management" that includes any practice that is "appropriate and tailored to achieving a legitimate network management purpose." (§ 8.11(d); see ¶ 82)
Exemption for virtual private networks, which use much of the same infrastructure as the public Internet. (¶ 47)
Exemption for Content Delivery Networks and co-located servers that put particular content in closer proximity to important network nodes and therefore speed its transmission to requesting users. (see ¶ 47 and ¶ 76 note 235)
Exemption for multichannel video programming services (e.g., U-verse) that use TCP/IP protocols and existing Internet infrastructure. (¶ 47)
Exemption for Internet backbone services. (¶ 47)
Exemption for hosting or data storage services. (¶ 47)
Exemptions for "coffee shops, bookstores, airlines and other entities when they acquire Internet service from a broadband provider to enable their patrons to access the Internet from their establishments." (¶ 52)
Exemption from the discrimination rule for "existing arrangements for network interconnection, including existing peering arrangements." (¶ 67 n. 209)
Exemption (for now) for "specialized services," including multichannel video programming (see above) or facilities-based VoIP, that "share capacity with broadband Internet access services over providers' last-mile facilities." (¶¶ 112-114)
A hedge on whether "paid priority" of some content, either of the access provider or a third party, would necessarily violate the "unreasonable discrimination" rule (¶ 76), and an explicit rejection of the argument that CDNs constitute illegal "pay for priority" though they have the same effect on consumer experience as prohibited prioritization schemes. (¶ 77)
Recognition that end-users may elect to acquire Internet access that limits their choice of content, including services that support parental controls or which "allow end users to choose a service that provides access to the Internet but not to pornographic websites." (¶ 89). Further, "[b]roadband providers are also free under this Order to offer a wide range of 'edited' services," including a "service limited to 'family friendly' materials." (¶ 143, cf. ¶ 141)
Recognition that existing federal law allows all Internet Service Providers to "restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." (¶ 89 n. 279)
Finding the Forest Amid the Exemptions
Of course these exceptions, particularly the measured approach to mobile broadband access and the provisional reprieve for specialized services, generated howls of indignation from advocacy groups hoping for pure neutrality, and led many of the Chairman's initial supporters to abandon him over the course of the year the NPRM was publicly and privately debated.
My concern is quite different. I think each of these exceptions makes good sense, and will keep the new rules, at least in the short-term, from causing life-threatening damage to the Internet ecosystem.
Rather, what the laundry list of exceptions demonstrates is that the majority just isn't seeing the forest for the trees. What the exceptions have in common is that each represents a change to the Internet's architecture and service models that has emerged in the last decade and a half. They are all new services, technologies, or service providers that, in these and other ways, violate the neutrality principle.
But these innovations have been developed for beneficial, not evil purposes. The network is better in every sense imaginable, and will continue to improve in speed, efficiency, and usability so long as future innovations don't run afoul of the rules and their enforcement. The Internet is not "open" in the way it may have been in 1995 (it was never as open as the idealists imagine). But in order for the Internet we have today—faster, cheaper, better—to exist, each of these changes had to be made.
The genius of a virtual infrastructure is that it can absorb redesign without any interruption in service. One unfortunate side-effect of that ease of transformation is that users don't see the construction cones and highway workers. Consumers—and the FCC—don't realize that we're now traveling on a multi-lane highway rather than the old dirt road. The technology is utterly changed, and the rules of the road have changed with it. For better or worse, but largely for the better.
The final rules, with all their exceptions, suggest a majority clinging to the idealized past, and a stubborn refusal in the end to admit that the Internet has changed and continues to change—that it needs to change.
The exceptions for the "inconsistent" behavior of CDNs, specialized services, peering arrangements, e-readers and game consoles, and app stores have no logical rationale, other than that the FCC has now learned that they are part of the current status quo. But they are being exempted because they are in place, and they work.
For example, paying a CDN to replicate your content and co-locate servers at key network access points is surely "paying for priority." It puts a start-up offering similar content but without the funds for similar services at a competitive disadvantage. The cached content will arrive faster when requested by a consumer. But for consumers, that feature is a good thing—an improvement—even though it is not "neutral."
Likewise, the mobile Internet is given special treatment because it is "evolving rapidly." (¶ 8) But the fixed Internet is evolving rapidly as well, as many of these exemptions implicitly recognize.
The majority is fixated on maintaining a neutral Internet even though it now understands that neutrality is a virtue more honored in the breach. The final report uses the word "traditionally" 25 times, the word "historically" 9 times, and the word "typically" 21 times. These are the only justifications for the exceptions, and they undermine the purpose of the rules that remain. There is no neutral Internet to preserve. There's only one that works.
The reality is that we're moving away from websites toward a mobile, app-based economy, with specialized services and high-bandwidth applications such as video that shouldn't all be treated the same. A "level playing field" doesn't mean everyone gets a trophy.
The good news is that the final rules grandfather in many existing technologies that violate the neutrality principle. That's essential, even if each of the exceptions is granted in isolation and begrudgingly at that.
But the bad news is that the open Internet regulations as approved allow little flexibility for future innovations in network optimization. The FCC sees ominous clouds of non-neutral and therefore prohibited behavior on the network horizon, even though tomorrow's violations are only as dangerous as the "traditions" that have been established up until this random moment in Internet time. The vote comes at a politically significant moment, but not a time that has any particular meaning for the network's engineering. The new rules, in the worst case, may arbitrarily freeze today's particular status quo, for no good (and lots of bad) reasons.
Nostalgia can be fun. I enjoy sitting around with my fellow veterans of the pre-bubble dot-com boom talking about the good old days, toasting to our irrational exuberance. But translating that wistfulness into federal law, even as here with rules pockmarked by the blemishes of a reality that looks far different than our idealized view of the past, is a dangerous way to celebrate it.
Next: Not to worry. The FCC has no authority, either.







January 11, 2011
The MetroPCS Net Neutrality Hullabaloo
A group of regulatory advocates that includes Free Press, Media Access Project, and the New America Foundation has fired off a letter to the Federal Communications Commission (FCC) requesting action against the nation's #5 mobile provider, MetroPCS. These regulatory groups claim that "new service plans being offered by mobile provider MetroPCS block and discriminate against Internet content, applications and websites." Wired's Ryan Singel summarizes what the fight is about:
At issue are new, tiered 4G data plans from the nation's fifth largest mobile carrier, which specializes in pay-as-you-go mobile-phone service. The new plans offer "unlimited web usage" for all three tiers, which cost $40, $50 and $60 a month. But MetroPCS's terms exclude video sites other than YouTube from "unlimited web usage," and block the use of internet-telephony services such as Skype and Tango. The terms of service also make it very unclear whether users would be allowed to use online-radio services such as Pandora.
The parties petitioning the FCC for regulatory intervention claim that "MetroPCS appears to be in violation of the Commission's recently adopted open Internet rules" even though they note that "these rules have not yet taken effect."
There are four things I find interesting about this hullabaloo:
(1) The ink isn't even dry on the FCC's Net neutrality order and yet it already has the inside-the-Beltway lobbying machine humming. We're just a few weeks into the FCC's new "light touch" Net neutrality regulatory regime and yet we're already seeing pleadings like this one. If this foreshadows what the future holds, it's a troubling sign of things to come. If the agency's new regulatory regime sticks, I think it's safe to say that such requests for market meddling will only increase as time goes on and the Internet will quickly be wrapped in innovation-stifling red tape. Meanwhile, countless lawyers and lobbyists around the Beltway are licking their chops in anticipation of the lobbying and litigation bonanza that awaits.
(2) Choice is largely irrelevant to the pro-regulation Net neutrality crowd. It seemingly doesn't matter to these regulatory advocates that they and other consumers are free to shop around for alternative mobile plans. In the field of competition policy, the ability to exercise such choice is typically the end of the story and no further discussion / intervention is considered warranted. These advocates, however, seemingly want control over all terms of service for all market competitors, even for the distant #5 players in the field. I mean, for God's sake, we are talking about MetroPCS here! Does anyone seriously believe that there's just no escaping their evil clutches?
And apparently we can look forward to more of this sort of across-the-board, damn-the-consequences market meddling thanks to what Randy May of the Free State Foundation refers to as "Infamous No. 78" of the FCC's Net neutrality order. That provision of the order essentially says that the FCC can dispense with the notion that a showing of actual monopoly power and actual consumer harm should be the litmus tests for regulatory intervention. Instead, May notes:
by disclaiming reliance only on anticompetitive injury and consumer harm (generally present only when an Internet provider possesses market power), the Commission leaves itself largely at sea in enforcing its rules. By "at sea," I mean, of course, that the Commission, as it acknowledges, is leaving itself with nearly unbridled discretion in deciding which Internet provider practices will be permitted and which will not.
Welcome to our brave new world of 'anything goes' Internet regulation.
(3) For Net neutrality proponents, "fairness" always trumps competition / innovation, regardless of the costs. The people who work at these organizations are, no doubt, well-meaning in their pleadings for regulation. They really think they can make communications and broadband market outcomes more "fair" through the application of Net neutrality regulations and other rules.
But regulation is not costless. Micromanaging markets can lead to less innovation, less investment, and less consumer choice. It can also dampen price competition. After all, while the regulatory advocates want us to get hot and bothered about the terms of service in this particular case, we should not forget the fact that, with this latest move, MetroPCS is attempting to inject more competition, new innovation, and lower prices into the mobile marketplace. To reiterate, the company is offering a $40 per month entry level price plan for a new 4G LTE service bundle. Most people would call this innovation. But Free Press, Media Access Project and New America Foundation want us to believe it is a massive anti-consumer scandal. What an astonishing bit of hubris.
Moreover, let's imagine that these regulatory advocates get their way and the FCC preemptively denies this innovative move, or that the agency micro-manages the terms of the offering. Those regulatory groups would like us to believe that MetroPCS can absorb the cost of such meddling and that everything will be just fine and dandy. Back in the real world, however, if you ask just about any serious investment analyst or market expert who monitors mobile markets, most would first convey their shock that MetroPCS has even been able to last as long as it has given the cut-throat competition in this arena. Then they'd tell you that the sort of price and service competition MetroPCS is pursuing here could kill the company. Finally, they'd tell you that an increased regulatory burden at this time could very well result in one less competitor in the long run.
So, while the regulatory advocates will shower us with talk of how they are looking out for our best interests to ensure carriers play "fair," from a consumer perspective, an additional competitor and more price competition are likely of more importance than a perfectly "neutral" mobile service offering.
(4) Net neutrality regulatory proponents seemingly have very little faith in "openness" prospering organically, even though it has. As I've noted before, no one disagrees that the Internet's openness is what made it great, or that consumers benefit from the free flow of traffic and applications over broadband networks. But the regulatory advocates assume that only sweeping controls on broadband networks will make that a reality. The fact is, the Internet has never been more "open" than it is today. There's a simple reason for that: It's what most people demand. It's also smart business. No company ever got rich in this space by blocking traffic.
Having said all that, it may be the case that not everyone cares as much about perfect openness as others do. [See my essay from last year on the many flavors of "openness" and how defining the term is challenging.] As noted above, many consumers would be happier with cheaper price plans and more varied service options. (I bet that is particularly true of many MetroPCS customers since the company seems to target that market niche). And guess what technophiles… not everyone out there is dying to have Skype or Pandora at their fingertips. Personally, I couldn't live without either of those services and would never own a smartphone or calling plan that disallowed them for any reason. But I am not so arrogant as to assume that everyone else has the same values as me or that I should make this trade-off for the rest of the world. If some consumers want to trade functionality off against an affordable entry-level 4G plan, who is to say they should not have that option? Apparently Free Press, Media Access Project and New America Foundation, that's who.







Do-Not-Track: The Recipe for Frosting is Not the Wedding Cake
I laughed out loud when I read the following line in Harlan Yu's post, "Some Technical Clarifications About Do Not Track":
"[T]he Do Not Track header compels servers to cooperate, to proactively refrain from any attempts to track the user."
(Harlan's a pal, but I'm plain-spoken with friends just like everyone else, so here goes, buddy.)
To a policy person, that's a jaw-dropping misstatement. An http header is a request. It has no coercive power whatsoever. (You can learn this for yourself: Take 30 minutes and write yourself a plug-in that charges ten cents to every site you visit. Your income will be negative 30 minutes of your time.)
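To see just how non-coercive the mechanism is, here is a minimal sketch in Python (using a placeholder URL) of everything the proposed Do Not Track signal amounts to on the client side: one extra request header that a server is free to read, honor, or ignore entirely.

```python
import urllib.request

# The Do Not Track "signal" is nothing more than one extra HTTP header
# on an outgoing request. Sending it compels the server to do nothing;
# compliance is entirely voluntary on the server's part.
req = urllib.request.Request("http://example.com/")
req.add_header("DNT", "1")  # the entirety of the "compulsion": one key/value pair

# urllib normalizes the header name to "Dnt" internally.
print(req.get_header("Dnt"))  # -> "1"
```

Nothing in the request gives the client any way to verify, or enforce, what the server does with that header once it arrives.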
Credit goes to the first commenter on his post who said, "What if they ignore the header? . . . Wouldn't there also need to be legal penalties in place for violations, in order for this to work? (To encourage advertising companies to put in those lines of code.) Is this in the works?"
Of course there would be. That is the hard stuff. Just what behaviors amount to "tracking" anyway? It's easy to say you don't want to be tracked—hard to say what that is, especially given the fluidity of information flows and business models in the online environment. Once a definition is in place, might there be some tracking that consumers want, necessitating an exceptions system? Yes.
And what domains would be subject to the Do Not Track regulation after its years of development? The FTC's jurisdiction is very broad in the United States (very narrow elsewhere), so every U.S. company with a web presence—large and small—would have to monitor the regulation's development to determine whether it would be subject to the rules. Web businesses would hold off on development to make sure they're not building to an illegal business model. And what if foreign countries write similar, but not identical, rules? Pity the businesses trying to comply with multiple national Do Not Track regimes.
Coders find it really easy to trivialize coding.
On the server-side, adding code to detect the header is also a reasonably easy task—it takes just a few extra lines of code in most popular Web frameworks. It could take more substantial work to program how the server behaves when the header is "on," but this work is often already necessary even in the absence of Do Not Track.
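The quoted server-side change really is small. A minimal sketch, written as a bare WSGI application (the response bodies here are purely illustrative, not any proposed standard behavior):

```python
# Sketch of the server-side check for the Do Not Track header.
# In WSGI, the header arrives as HTTP_DNT in the environ dict. Detecting
# it is trivial; deciding what "not tracking" actually means once the
# flag is seen is the hard, unspecified part.
def app(environ, start_response):
    do_not_track = environ.get("HTTP_DNT") == "1"
    start_response("200 OK", [("Content-Type", "text/plain")])
    if do_not_track:
        return [b"ok: tracking suppressed (voluntarily)"]
    return [b"ok: tracking enabled"]
```

Reading the flag is a couple of lines, as claimed; everything contentious lives in what the application chooses to do next.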
It's easy, right? Except that people in every web-based business will have to understand what this means and how it might affect them. Just how broadly will server-side coding have to be implemented?
If you're new to public policy, you've caught me overstating the scope of the regulation. Because we all know that it's just going to be ad networks, right? Yeah, and the Social Security Number is only for administering the Social Security system. Have you ever seen Congress act without careful consideration? Have you ever seen it shrink the scope of a regulatory requirement? Come and stay awhile. You'll see one but not the other.
Put all these issues aside, though. Let's say you've taken the years it takes to get a rule in place, and you've got all the legitimate sites subject to the rule re-tooled and prepared to obey. Privacy is a little bit better prote—Wait! We just begged a key question: Which are the legitimate sites and which are not?
To solve that problem, you have to have some permanent institutionalized patrolling of the Internet—much of the information economy, actually—looking for business behavior that is inconsistent with fealty to the Do Not Track regulation. The FTC, or consumer groups and private attorneys general hunting legal fees, will go about creating false identities and salting them into website log files hoping to get some tailored advertising. A-ha! Tracking!
But how will they really know it's a product of tracking? It might make the most sense to give the FTC the power to subpoena data held by web sites. Much of it will be personally identifiable, but . . . oh, crap. We've just created a system where databases of information have to be handed over to the government to ensure that the information remains private…
This is a quick and slightly careless dash through the myriad issues that are involved in Do Not Track on the regulatory side, but perhaps it illustrates that the "technical" issues are the easy ones. The recipe for frosting is not the wedding cake.
I get what our tech-side friends are saying. There are lots of tracking technologies, and more to come in the future. They don't want to fight a drawn-out war to protect people from receiving customized advertising. (Put aside whether that's even a good idea.)
The war doesn't end because you can write code to implement a signal in the http header. It shifts to a different venue. That venue—do you really need to be told?—is crawling with mercenary soldiers who work for the ones with the money. That is generally the business sector. If you push the problem onto Washington, D.C., you'll be even less satisfied than if you have to watch the market slowly discover and build the technologies that actually deliver privacy protection on the terms people want, and that do so by controlling information.







Declan McCullagh on WikiLeaks
On the podcast this week, Declan McCullagh, chief political correspondent for CNET and former Washington bureau chief for Wired News, discusses WikiLeaks. McCullagh gives a quick recap of the WikiLeaks saga so far, comments on the consequences of the leaks themselves, and talks about the broader significance of the affair. He also offers a few insights into Julian Assange's ideology based on his interactions with Assange in early '90s "cypherpunk" circles. Lastly, McCullagh discusses the future of diplomacy and the chance that Assange will be indicted in the United States.
Related Links
"DOJ sends order to Twitter for Wikileaks-related account info", by McCullagh
"Assange legal case could hang on contradiction", by McCullagh
"Amid criticism, WikiLeaks shifts focus", by McCullagh
"Wikileaks' war files disclosure roils Washington", by McCullagh
To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







