Adam Thierer's Blog, page 91

June 21, 2012

We Must Take UN’s Internet Grab Seriously

Thanks to TLFers Jerry Brito and Eli Dourado, and the anonymous individual who leaked a key planning document for the International Telecommunication Union’s World Conference on International Telecommunications (WCIT) on Jerry and Eli’s inspired WCITLeaks.org site, we now have a clearer view of what a handful of regimes hope to accomplish at WCIT, scheduled for December in Dubai, U.A.E.



Although there is some danger of oversimplification, essentially a number of member states in the ITU, an arm of the United Nations, are pushing for an international treaty that would give their governments a much more powerful role in the architecture of the Internet and the economics of cross-border interconnection. Dispensing with the fancy words, it represents a desperate, last-ditch effort by several authoritarian nations to regain control of their national telecommunications infrastructure and operations.



A little history may help. Until the 1990s, the U.S. was the only country where telephone companies were owned by private investors. Even then, from AT&T and GTE on down, they were government-sanctioned monopolies. Just about everywhere else, including western democracies such as the U.K., France and Germany, the phone company was a state-owned monopoly. Its president generally reported to the Minister of Telecommunications.



Since most phone companies were large state agencies, the ITU, as a UN organization, could wield a lot of clout in terms of telecom standards, policy and governance–and indeed that was the case for much of the last half of the 20th century. That changed, for nations as much as the ITU, with the advent of privatization and the introduction of wireless technology. In a policy shift that connects directly to the issues at stake here, just about every country in the world embarked on full or partial telecom privatization and, moreover, allowed at least one private company to build wireless telecom infrastructure. As ITU membership was reserved for governments, not enterprises, the ITU’s political influence as a global standards and policy agency has since diminished greatly. Add to that the concurrent emergence of the Internet, which changed the fundamental architecture and cost of public communications from a capital-intensive hierarchical mechanism to inexpensive peer-to-peer connections, and the stage was set for today’s environment where every smartphone owner is a reporter and videographer. Telecommunications, once part of the commanding heights of government control, was decentralized down to street level.





There’s no going back. Even authoritarian regimes understand this. Fifty years ago, when a third-world dictatorship faced civil strife, it could control real-time information by shutting off its international telephone gateway switch. Not so today. So much commerce, banking, transportation and logistics depends on up-to-the-second cross-border data flow that no country, save for truly isolated regimes such as North Korea, can afford to cut itself off from the global Internet, even for one day.



That’s why it’s no surprise that the authoritarian regimes of China and Russia, supported by even more despotic states such as Iran, are spearheading the UN/ITU effort. Their politically repressive regimes can’t function with the Internet, but their economic regimes, tied as they are to world trade, can’t function without it. That’s why attempts at Internet control have to be more nuanced and cloaked in diplomacy.



As we see in the leaked documents, their agenda is masked as concerns about computer security and virus and malware detection, or in arguments that nation-states have a historically justifiable regulatory responsibility for setting technical standards for IP-to-IP connections. But dig deeper and you find their proposed solutions would give them the power to read emails, record browsing habits and extort fees from web sites and services such as Google, Facebook and Twitter (if they aren’t going to block them completely).



In the long run, the effort is doomed to fail. As an organism, the Internet defies top-down control. Every time a country attempts to impede certain types of Internet communications, via firewalls, filters, or outright domain name blocks, individuals create workarounds. It’s not that difficult.



That simple fact might engender complacency among netizens here in the U.S. And besides, speaking out against ominous plots by UN agencies makes us sound too much like the nutty neighbor with the backyard bunker.



But there are serious risks to what the ITU and the UN are attempting. Even if it gets only part of what it wants, the ITU’s Internet grab stands to seriously damage the global free and open Internet.



First, as a multi-lateral “international” agreement, the ITU plan will give repressive regimes cover for Internet clampdowns. Even if the U.S. does not sign on, all it will take is buy-in from a few other Western governments, who might just see the treaty as convenient (see the U.K.’s recent Home Office ideas), to allow the more egregious dictatorships in the world to take repressive action.



The U.S. should be leading all democratic governments in speaking out against the ITU plan. A weak-willed “I’m-OK-you’re-OK” approach, or worse, a non-judgmental relativism that suggests American ideas of Internet freedom should defer to a more repressive country’s “national culture,” is simply not acceptable.



It seeks to displace multi-stakeholder development. The collaborative culture of the Internet, driven by consensus and undergirded with a commitment to open standards and platforms, is the ITU’s primary target. When nation-states make rules for phone networks, they can specify equipment, favor their domestic manufacturers, create cumbersome compliance rules, and ban possession of non-compliant devices, all with the force of heavy-handed law. This is hardly far-fetched. Ethiopia has made Internet phone calls (e.g., Skype) illegal.



It seeks to normalize government regulation of the Internet. For more than 30 years, deregulation has been the predominant policy toward the Internet. This trend has managed to hold on despite numerous attempts at censorship, “neutrality” regulation and price controls. The most common proposition we hear runs to the effect that the Internet has become so important that it needs regulation. Frankly, the Internet has survived and thrived since its beginning without top-down state regulation. Worldwide access continues to grow. By and large, international data networks operate reliably and inexpensively. If anything, the burden of proof for regulation of the ‘Net should be ever higher. Why, exactly, do we need an international regulatory regime for the Internet? So far, those who would impose one haven’t offered an answer. And sorry to say, the fact that citizens are taking to the streets with their iPhones and demanding basic freedoms is not an acceptable reason.



More Coverage:



WCITLeaks Gets Results



 



The UN’s “Internet Takeover” and the Politics of Kumbaya



WCIT is About People vs. Their Governments



 



 




Published on June 21, 2012 14:51

The UN’s ‘Internet takeover’ and the politics of Kumbaya

When it comes to the UN exerting greater control over Internet governance, all of us who follow Internet policy in the U.S. seem to be on the same page: keep the Internet free of UN control. Many folks have remarked how rare this moment of agreement among all sides–right, left, and center–can be. And Congress seized that moment yesterday, unanimously approving a bi-partisan resolution calling on the Secretary of State “to promote a global Internet free from government control[.]”



However, below the surface of this “Kumbaya moment,” astute observers will have noticed quite a bit of eye-rolling. Adam Thierer and I wrote a piece for The Atlantic pointing out the obvious fact that when a unanimous Congress votes “to promote a global Internet free from government control,” they are being hypocrites. That’s a pretty uncontroversial statement, as far as I can tell, but of course no one likes a skunk at the garden party.



Here’s our friend Steve DelBianco writing at CircleID:




Today a key committee in the US Congress approved a resolution opposing United Nations “control over the Internet.” While some in the Internet community have dismissed the bipartisan effort as mere political grandstanding, recent actions by some UN Member States show that lawmakers have good reason to be worried.




For the record, I fully support, commend, and endorse the Congressional resolution and the idea that the UN and all governments should keep their paws off the Internet. I certainly don’t dismiss the effort. That said, because I am capable of critical thought, I can simultaneously entertain the idea that politicians in Congress are also engaging in grandstanding and will likely forget their august resolution next time they vote on cybersecurity, copyright, privacy, net neutrality, or child safety bills.



So what is the recent action that DelBianco says should have lawmakers worried?




Last month, UN voting member Ethiopia made it a crime — punishable by 15 years in prison — to make calls over the Internet. The Ethiopian government cited national security concerns, but also made it clear that it wants to protect the revenues of the state-owned telecom monopoly.




And this gets to the next point of contention. Milton Mueller has been getting some heat because he is pointing out the equally obvious fact that the UN is not about to take over the Internet, and that the issues around WCIT are much more subtle than the headlines would lead you to believe. The fact that Ethiopia is enforcing such a terrible law is evidence itself that state governments are the real threat to the Internet, and that they don’t need permission from the UN to regulate the Internet.



And even if they did need it, they have it. As ITU Secretary-General Hamadoun Touré pointed out in his speech yesterday, “Such restrictions are permitted by article 34 of the ITU’s Constitution, which provides that Member States reserve the right to cut off, in accordance with their national law, any private telecommunications which may appear dangerous to the security of the State, or contrary to its laws, to public order or to decency.”



But none of this means that folks like Milton, Adam, Eli Dourado, and I disagree with DelBianco, FCC Commissioner Robert McDowell, Gigi Sohn, and the rest who are sounding the alarm about WCIT. It’s just that we want to be more specific about what the exact threats are, and we don’t want to overstate the case because we fear that could eventually backfire.



The real threat is not that the UN will take over the Internet per se, but that autocratic states like Russia, China and Iran will use the process to further legitimize their existing programs of censorship, as well as the idea of interconnection charges.



In a happy accident of history, the Internet was designed by academics and engineers, not governments and telcos. Now governments want to say, “Thanks for setting it up, we’ll take it from here.” They can’t take control overnight at a single conference–and maybe never, given the Internet’s decentralized architecture–but they can start setting the stage for more and more government regulation, perhaps even resulting in splinternets (an issue beyond the scope of this post). That’s the subtle threat WCIT and subsequent conferences pose.



The question then is, why update the ITRs at all? All evidence suggests that the only reason to revisit the ITRs is to bring the Internet under their umbrella. The main thing to be negotiated at WCIT, it seems, is how much regulation of the Internet the ITRs will legitimize, not whether to do so at all. I know it will be hard for our diplomats to acknowledge other states’ concerns about security, piracy, fairness, etc., and at the same time be firm that WCIT is not the place to deal with those issues, but that’s what they should do.




Published on June 21, 2012 11:36

June 20, 2012

Five Reasons the DoJ’s Investigation of Data Caps is Misguided

Count me among those who are rolling their eyes as the Department of Justice initiates an investigation into whether cable companies are using data caps to strong-arm so-called “over-the-top” on-demand video providers like Netflix, Walmart’s Vudu, Amazon.com, and YouTube.



The Wall Street Journal reported last week that DoJ investigators “are taking a particularly close look at the data caps that pay-TV providers like Comcast and AT&T Inc. have used to deal with surging video traffic on the Internet. The companies say the limits are needed to stop heavy users from overwhelming their networks.”



Internet video providers like Netflix have expressed concern that the limits are aimed at stopping consumers from dropping cable television and switching to online video providers. They also worry that cable companies will give priority to their own online video offerings on their networks to stop subscribers from leaving.


Here are five reasons why the current anticompetitive Sturm und Drang is an absurd waste of time and might end up leading to more harm than good.





Cable companies set data caps high, really high. Comcast, for example, currently sets its residential data limit at 250 or 300 gigabytes (GB), depending on the market. Searching around the ‘Net, I’ve found that a rule of thumb is 1 GB equals 1 hour of video, although quality, frame rate and resolution may affect this measure. However, this 1 GB = 1 hour rule tracks with my own household downloading, which mixes YouTube and iTunes downloads with Netflix HD.



Also, despite the use of the word “caps,” residential Internet users are not cut off when they reach the 250 GB threshold. Comcast customers are just charged $10 for another 50 GB, that is, another 50 hours of video.
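
To make the arithmetic concrete, here is a minimal sketch of the cap-plus-overage billing described above. The 250 GB cap, the $10-per-50-GB overage block, and the rough 1 GB = 1 hour of video conversion come from this post; the base price is a hypothetical placeholder, not a quoted Comcast rate.

```python
import math

# Figures cited in this post: a 250 GB monthly threshold, $10 per extra
# 50 GB block, and a rule of thumb of roughly 1 GB per hour of video.
# BASE_PRICE is a hypothetical placeholder, not an actual Comcast rate.
BASE_PRICE = 50.00   # USD per month (hypothetical)
CAP_GB = 250
BLOCK_GB = 50
BLOCK_PRICE = 10.00
GB_PER_HOUR = 1.0

def monthly_bill(usage_gb):
    """Base price plus $10 for each 50 GB block used beyond the cap."""
    overage = max(0, usage_gb - CAP_GB)
    return BASE_PRICE + math.ceil(overage / BLOCK_GB) * BLOCK_PRICE

# Even a household streaming four hours of video a day (~120 GB/month)
# sits far below the threshold; 400 GB adds only $30 to the bill.
for usage in (120, 250, 300, 400):
    hours = usage / GB_PER_HOUR
    print(f"{usage} GB (~{hours:.0f} hours of video): ${monthly_bill(usage):.2f}")
```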



The idea of a threshold is not unreasonable. Video delivery requires greater network management to address issues such as latency and error correction, adding costs to network operation. The alternative is for service providers to raise the base price of “unlimited downloads” for all users, essentially spreading the cost of a small percentage of high-volume users across the entire subscriber population. That in itself raises fairness questions. It’s a simple trade-off: do we want higher prices across the board, or should the top-tier users bear the costs they impose?



The pay-per-view market is crowded, and cable has a right to compete. Consumers have a fair degree of choice among pay-per-view providers. Cable companies, along with Amazon, Vudu, Cinema Now and Apple’s iTunes, all have solid selections in terms of recent film releases and TV episodes. These virtual rentals generally cost $5 to $7 for a 24- to 48-hour period. Most also offer viewers an option to buy a digital copy. While Netflix’s on-demand menu lacks the timeliness of the others, it compensates in terms of depth, and thousands of titles are available for a relatively low $8 per month subscription fee. Then there’s Google’s YouTube, the largest server of Internet video, which is trying to expand off the desktop PC and onto the living room big-screen by funding development of new “channels.” Although the first fruits of this venture, channels such as The Nerdist, come across as a bit, well, nerdy, we should know by now not to discount anything Google attempts.



The point is that cable is not somehow shutting down “low-cost” access to video, as the DoJ claims. In truth, alternatives to cable pay-per-view can be less expensive and more varied. At the same time, the argument that cable companies should count their own programming against data caps doesn’t have much traction. It is their infrastructure after all, built with their capital, and the cost of programming and carriage is factored into the “cable TV” portion of the overall bill. Forcing cable companies to count the cost of their own programming in the data pipe also seems to penalize their customers rather than lead to any sort of level playing field. (See Berin Szoka’s post for more discussion of this topic.)



“Cutting the cable cord” involves a value proposition. Like telephone service before it, household video delivery no longer depends solely on a single hard-wired monopoly infrastructure. For someone not particularly interested in watching TV news, reality shows and real-time sports events, it is possible to do away with cable TV entirely and get one’s video fix through DVDs or digital downloads via wireless service providers. Or one could even choose the lowest-priced cable tier, essentially local channels, paired with Internet access.



But the landline telephone analogy has limits. Wireless service is replacing wireline because it offers much more value than the old home phone. For starters, wireless makes communications truly personal: your phone is associated with you, not a geographic location like your home office. Today’s wireless phones also are as much information appliances as they are communications tools.



True, cable companies don’t get much love from consumers, but there’s still something to be said for watching the NBA playoffs on a 50-inch HD big screen. And contrary to what the DoJ thinks, there is no consumer “right” or entitlement to this service at below-market prices. Saying so, and assigning the social cost to one segment of the value chain, namely the infrastructure owners, stands to create all sorts of problems. For example, why hold the cable company responsible for low-cost video and not the TV manufacturers? Why shouldn’t all DVD rentals be priced at $1 rather than $3 for “new releases”? Why should Apple’s iTunes be permitted to charge extra for TV episodes delivered “free” the night before?



The answer is that there are costs and trade-offs associated with each. An Amazon customer may be able to buy all of season one of Game of Thrones for the cost of one month’s subscription to HBO, but she must wait almost a year for the opportunity to do so. In this model, the cable company leverages timeliness, HBO protects its distribution partners, and yet, in the long run, the programming is available to those who don’t want to pay the cable premium. It’s difficult to see where consumer rights are being violated.



Wireless is the wild card. As alluded to above, wireless service may yet be a substitute for cable connections. Spectrum scarcity, however, makes wireless connections more expensive, and therefore usage caps are much lower (unless you’re piggybacking on household WiFi connected to a cable modem). But this is just one more reason to speed ahead with spectrum re-allocation.



Here’s where current policy works at cross-purposes. Fostering greater consumer choice is a laudable goal. But that goal can be achieved faster and more cost-effectively if policy is aimed at increasing market mechanisms – which spectrum auctions, unencumbered by conditions, will do. It beats creating a cumbersome regime of subsidized service and mandated prices, which, at the end of the day, is nothing but raiding the wallets of average users to pay the cost of the heaviest bandwidth consumers.



The TV game is changing. Looking at the current video landscape, I have trouble seeing the cable companies as having any sort of advantageous position right now. Their big competitive differentiator, wide scope of programming, is becoming commoditized. Television audiences are fragmenting, which means even the most popular shows draw lower ratings than in the past. DVRs, DVDs and iTunes allow audiences to avoid advertising, which means the one-time stalwart business model that has supported free content since the beginning of broadcasting is changing.



Truth be told, no one really knows exactly how TV programming content and delivery will change over the next ten years–only that it will. As broadband data capacity and management become more germane to video delivery, bandwidth tiers may yet be an important way to pay for it and keep content free. At the same time, there is enormous potential for unintended consequences if unwise policy courses are taken. The worst thing right now would be for any government agency to start fumbling around with mandates, regulations and directives on broadband video entertainment, whether they address pricing, platforms or business models.




Published on June 20, 2012 15:26

WCITLeaks Gets Results

This morning, the Secretary-General of the ITU, Hamadoun Touré, gave a speech at the WCIT Council Working Group meeting in Geneva in which he said,




It has come as a surprise — and I have to say as a great disappointment — to see that some of those who have had access to proposals presented to this working group have gone on to publicly mis-state or distort them in public forums, sometimes to the point of caricature.



These distortions and mis-statements could be found plausible by credulous members of the public, and could even be used to influence national parliaments, given that the documents themselves are not officially available — in spite of recent developments, including the leaking of Document TD 64.



As many of you surely know, a group of civil society organizations has written to me to request public access to the proposals under discussion.



I would therefore be grateful if you could consider this matter carefully, as I intend to make a recommendation to the forthcoming session of Council regarding open access to these documents, and in particular future versions of TD 64.



I would also be grateful if you would consider the opportunity of conducting an open consultation regarding the ITRs. I also intend to make a recommendation to Council in this regard as well.




Jerry and I commend Dr. Touré for reversing his position on open access to these documents. We like to think that WCITLeaks.org played a role in precipitating this sudden change. Like Dr. Touré, we lament that WCIT planning documents have been subject to so much unhelpful speculation and possibly misrepresentation, but we think that this has happened precisely because they were not available to the public. We’re glad the ITU seems to be recognizing this fact and we look forward to reading the documents once they become public. That said, they remain inaccessible for now and we will continue to solicit leaks as long as that is the case.



Despite these salutary possible changes in ITU policy, I want to highlight some of Dr. Touré’s remarks, which are full of political spin. He said,




There have also been a number of accounts stating that there is some sort of barrier, conflict or even war between telecommunications and the Internet.



In the converged world of the 21st century, this is plainly ridiculous. Who can stand up today and tell me the difference, in terms of traffic passing across networks, between voice, video, and data?




Nobody denies that convergence between IP networks and traditional networks is happening. More and more voice and video is being carried over data networks. But Dr. Touré would have us infer from this that the ITU’s mandate should automatically expand to cover these new data networks. What we deny is that the ITU is needed to regulate data connections at all. The ITU is increasingly becoming obsolete. But like any other bureaucratic organization, it constantly seeks a new justification for its existence. Internet users should refuse to let Internet governance become that justification, because we have our own, “native” Internet governance institutions.



I read a striking fact yesterday in a document that we posted on WCITLeaks. As of 2009, only 6% of US-originated telephone traffic was settled according to the charging and accounting provisions of Article 6 of the ITRs; the other 94% was settled according to private contracts. The proportions are similar for other countries. Even in its traditional niche, voice telephony, the world has moved on from the ITU. And of course, 0% of global Internet traffic today is settled according to the ITRs. It’s not “ridiculous” to demand a rationale for the expanded role the ITU sees for itself.



There are some other distortions in Dr. Touré’s speech. For instance, he lists some “important” ITU activities, such as developing standards for cable modems. This claim is way overstated. DOCSIS was developed by CableLabs, a non-profit R&D consortium run by cable operators. True, it was later ratified by the ITU, but it was already in use when the ITU ratified it. It is not the case that, but for the ITU, we would not have standardized cable modems. Dr. Touré also mentions “the radio frequencies used to implement WiFi.” Again, Wi-Fi was not developed by the ITU, and it seems misleading to suggest that without the ITU, we would not have standardized wireless Internet capabilities.



Perhaps the most exaggerated claim that Dr. Touré makes is that the ITU is a bottom-up organization:




I am proud of the ITU’s tradition of open discussion amongst its membership, and I am proud that the ITU works bottom-up, thanks to inputs from its 193 Member States and 552 Sector Members.




Give me a break. Compare the ITU to a truly bottom-up organization, like the IETF:




The IETF is completely open to newcomers. There is no formal membership, no membership fee, and nothing to sign. By participating, you do automatically accept the IETF’s rules, including the rules about intellectual property (patents, copyrights and trademarks). If you work for a company and the IETF will be part of your job, you must obviously clear this with your manager. However, the IETF will always view you as an individual, and never as a company representative.




When the ITU adopts policies similar to the IETF’s, I’ll be happy to call it bottom-up.



It’s clear that the ITU feels threatened by the increased attention that WCITLeaks has sent its way. Good. We believe that political institutions should be transparent and required to justify their continued existence in the face of social change.




Published on June 20, 2012 14:07

UMG-EMI Deal Is No Threat To Innovation In Music Distribution

By Geoffrey Manne and Berin Szoka



Everyone loves to hate record labels. For years, copyright-bashers have ranted about the “Big Labels” trying to thwart new models for distributing music in terms that would make JFK assassination conspiracy theorists blush. Now they’ve turned their sights on the pending merger between Universal Music Group and EMI, insisting the deal would be bad for consumers. There’s even a Senate Antitrust Subcommittee hearing tomorrow, led by Senator Herb “Big is Bad” Kohl.



But this is a merger users of Spotify, Apple’s iTunes and the wide range of other digital services ought to love. UMG has done more than any other label to support the growth of such services, cutting licensing deals with hundreds of distribution outlets—often well before other labels. Piracy has been a significant concern for the industry, and UMG seems to recognize that only “easy” can compete with “free.” The company has embraced the reality that music distribution paradigms are changing rapidly to keep up with consumer demand. So why are groups like Public Knowledge opposing the merger?



Critics contend that the merger will elevate UMG’s already substantial market share and “give it the power to distort or even determine the fate of digital distribution models.” For these critics, the only record labels that matter are the four majors, and four is simply better than three. But this assessment hews to the outmoded, “big is bad” structural analysis that has been consistently demolished by economists since the 1970s. Instead, the relevant touchstone for all merger analysis is whether the merger would give the merged firm a new incentive and ability to engage in anticompetitive conduct. But there’s nothing UMG can do with EMI’s catalogue under its control that it can’t do now. If anything, UMG’s ownership of EMI should accelerate the availability of digitally distributed music.



To see why this is so, consider what digital distributors—whether of the pay-as-you-go, iTunes type, or the all-you-can-eat, Spotify type—most want: Access to as much music as possible on terms on par with those of other distribution channels. For the all-you-can-eat distributors this is a sine qua non: their business models depend on being able to distribute as close as possible to all the music every potential customer could want. But given UMG’s current catalogue, it already has the ability, if it wanted to exercise it, to extract monopoly profits from these distributors, as they simply can’t offer a viable product without UMG’s catalogue.



The merger with EMI—the smallest of the four major labels, with a US market share of around 9%—does nothing to increase UMG’s incentive or ability to extract monopoly rents. UMG’s ability to raise prices on Lady Gaga’s music is hardly affected by the fact that it might also own Lady Antebellum’s music, any more than it is affected by its current ownership of Ladyhawke’s music. But, regardless, UMG has viewed digital distribution as a friend, not a foe.



Even on their own structural terms, the critics’ analysis is flawed. The argument against the merger is based largely on the notion that the critical, relevant antitrust market comprises album sales by the four major labels. But this makes no sense.



In fact, UMG currently distributes only about 30% of the music consumed in the US, and because, like all the majors, it distributes some music over which it has no ownership rights (including no ability to set prices), it owns only 24% of music purchased in the US. EMI’s share of distribution, as we noted, is around 9%, and it has experienced significant turmoil in recent years. Meanwhile, the independent labels that some critics seek to exclude from the market (and which, ironically, probably distribute the bulk of the music they listen to) sell 30% of the records sold in the US today and do so digitally largely through a single distributor, Merlin—essentially a fifth major record label. This is far beyond trivial.



What matters for antitrust market definition is substitutability: If customers would purchase eight singles off an album in response to an increase in the 12-track album price, singles and albums are surely in the same market. Ditto consumption of singles and entire albums through streaming services in lieu of outright purchase—and it’s clear that this mode of distribution is increasingly popular. There is no principled defense of an album-only market, nor one that excludes independent labels or streaming services. And once you appreciate these market dynamics, the concerns over this merger disappear.



The reality is closer to this: EMI is effectively a failing firm. Its current owner (Citigroup) inherited the company when its previous owner defaulted, and it promptly put it up for auction. Warner and UMG both bid on EMI and UMG won. Now Warner leads the effort to stymie the deal, deploying a time-tested strategy of trying to accomplish by regulation what it couldn’t manage through genuine competition.




Critics worry that a larger UMG will stifle innovative distribution services. While that’s theoretically possible, UMG’s past practice and the industry’s changing dynamics—including the significant increase in buyer power from large retailers like Apple, Amazon and Wal-Mart—suggest the concern is speculative, at best. Albums are simply not the dominant marketing vehicles they once were for most artists, and, increasingly, consumers are content to “rent” their music through streaming and other online services rather than own it outright.



A slightly larger UMG poses no threat to the evolving distribution of music. In fact, UMG has increasingly championed digital distribution as it has grown in size. UMG’s history with digital distribution should please anyone concerned about the deal: it has been both aggressive and progressive in the digital space. UMG is often the first to license its catalogue to new services and it has financially supported the creation of some of the largest of these services. When online giant Slacker Radio added a subscription service to its Web radio offering, UMG not only licensed its catalogue for the new service but also renegotiated (and lowered) its terms for Slacker’s webcasting license in order to ease Slacker’s move into subscription services. And UMG was instrumental in getting Muve—the second largest subscription music service in the US today—off the ground. Again—the industry’s best defense against “free” is “easy,” and that doesn’t change for UMG if it gains another few percentage points of market share.



To paraphrase Timbuk 3 (from an album originally released on the famed I.R.S. label): Music’s future is so bright, it’s gotta wear shades. Music has never been cheaper, easier to access, more widely distributed, nor available in more forms and formats. And the digital distribution of music—significantly facilitated by UMG—shows no signs of slowing down. What has slowed down, thanks largely to these advances in digital and online distribution, is music piracy. Anyone looking for an explanation why UMG has been so progressive in its support for innovation in music distribution need look no further than that fact. This merger does nothing to change UMG’s critical incentives to continue to support digital distribution of its catalogue: fighting piracy and effectively distributing its music.



[Cross posted at Forbes.com]




Published on June 20, 2012 11:52

June 19, 2012

Internet Security without Law

That is the title of my new working paper, out today from Mercatus. The abstract:




Lichtman and Posner argue that legal immunity for Internet service providers (ISPs) is inefficient on standard law and economics grounds. They advocate indirect liability for ISPs for malware transmitted on their networks. While their argument accurately applies the conventional law and economics toolkit, it ignores the informal institutions that have arisen among ISPs to mitigate the harm caused by malware and botnets. These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms.



In this paper, I document the informal institutions that enforce network security norms on the Internet. I discuss the enforcement mechanisms and monitoring tools that ISPs have at their disposal, as well as the fact that ISPs have borne significant costs to reduce malware, despite their lack of formal legal liability. I argue that these informal institutions perform much better than a regime of formal indirect liability. The paper concludes by discussing how the fact that legal polycentricity is more widespread than is often recognized should affect law and economics scholarship.




While I frame the paper as a reply to Lichtman and Posner, I think it also conveys information that is relevant to the debate over CISPA and related Internet security bills. Most politicians and commentators do not understand the extent to which Internet security is peer-produced, or why security institutions have developed in the way they have. I hope that my paper will lead to a greater appreciation of the role of bottom-up governance institutions on the Internet and beyond.



Comments on the paper are welcome!




Published on June 19, 2012 07:20

June 18, 2012

John Palfrey on interoperability


John Palfrey of the Berkman Center at Harvard Law School discusses his new book, written with Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems. Interoperability is a term used to describe the standardization and integration of technology. Palfrey explains how the term can describe many relationships in the world and need not be limited to technical systems. He also describes the potential pitfalls of too much interoperability. Palfrey finds that greater levels of interoperability can lead to greater competition, collaboration, and the development of standards. It can also mean less protection for privacy and security. The trick is to get to the right level of interoperability. If systems become too complex, then nobody can understand them and they can become unstable; Palfrey suggests the current financial crisis could be an example of this. He also describes the difficulty of finding the proper role for government in encouraging or discouraging interoperability.




Published on June 18, 2012 23:30

Why Mandatory Online Age Verification is So Problematic: What Expert Task Forces Have Found

There was an important article about online age verification in The New York Times yesterday entitled, “Verifying Ages Online Is a Daunting Task, Even for Experts.” It’s definitely worth a read since it reiterates the simple truth that online age verification is enormously complicated and hugely contentious (especially legally). It’s also worth reading since this issue might be getting hot again as Facebook considers allowing kids under 13 on its site.



Just five years ago, age verification was a red-hot tech policy issue. The rise of MySpace and social networking in general had sent many state AGs, other lawmakers, and some child safety groups into full-blown moral panic mode. Some wanted to ban social networks in schools and libraries (recall that a 2006 House measure proposing just that actually received 410 votes, although the measure died in the Senate), but mandatory online age verification for social networking sites was also receiving a lot of support. This generated much academic and press inquiry into the sensibility and practicality of mandatory age verification as an online safety strategy. Personally, I was spending almost all my time covering the issue between late 2006 and mid-2007. The title of one of my papers on the topic reflected the frustration many shared about the issue: “Social Networking and Age Verification: Many Hard Questions; No Easy Solutions.”



Simply put, too many people were looking for an easy, silver-bullet solution to complicated problems regarding how kids get online and how to keep them safe once they get there. For a time, age verification became that silver bullet for those who felt that “we must do something” politically to address online safety concerns. Alas, mandatory age verification was no silver bullet. As I summarized in this 2009 white paper, “Five Online Safety Task Forces Agree: Education, Empowerment & Self-Regulation Are the Answer,” all previous research and task force reports looking into this issue have concluded that a diverse toolbox and a “layered approach” must be brought to bear on these problems. There are no simple fixes. Specifically, here’s what each of the major online child safety task forces that have been convened since 2000 had to say about the wisdom of mandatory age verification:



2000 – Commission on Online Child Protection (“COPA Commission”)



“[Age verification] imposes moderate costs on users, who must get an I.D. It imposes high costs on content sources that must install systems and might pay to verify I.D.s. The adverse effect on privacy could be high. It may be lower than for credit card verification if I.D.s are separated from personally-identifiable information. Uncertainty about the application of a harmful to minors standard increases the costs incurred by harmful to minors sites in connection with such systems.  An adverse impact on First Amendment values arises from the costs imposed on content providers, and because requiring identification has a chilling effect on access. Central collection of credit card numbers coupled with the “embarrassment effect” of reporting fraud and the risk that a market for I.D.s would be created may have adverse effect on law enforcement.”


2002 – Youth, Pornography, and the Internet (“Thornburgh Commission”)



“In an online environment, age verification is much more difficult because a pervasive nationally available infrastructure for this purpose is not available. […] Note that each of these [age verification] methods imposes a cost in convenience of use, and the magnitude of this cost rises as the confidence in age verification increases.” (p. 63-4)


2008 – Safer Children in a Digital World (“Byron Review”)



“[N]o existing approach to age verification is without its limitations, so it is important that we do not fixate on age verification as a potential ‘silver bullet.’” (p. 99)


2009 – Internet Safety Technical Task Force (ISTTF)



“Age verification and identity authentication technologies are appealing in concept but challenged in terms of effectiveness.  Any system that relies on remote verification of information has potential for inaccuracies.  For example, on the user side, it is never certain that the person attempting to verify an identity is using their own actual identity or someone else’s.  Any system that relies on public records has a better likelihood of accurately verifying an adult than a minor due to extant records.  Any system that focuses on third-party in-person verification would require significant political backing and social acceptance.  Additionally, any central repository of this type of personal information would raise significant privacy concerns and security issues.” (p. 10)


2009 – “Point Smart. Click Safe” Blue Ribbon Working Group



“The task force acknowledges that the issues of identity authentication and age verification remain substantial challenges for the Internet community due to a variety of concerns including privacy, accuracy, and the need for better technology in these areas.”


2010 – Youth Safety on a Living Internet: Report of the Online Safety and Technology Working Group (“OSTWG”)



“There is no quick fix or “silver bullet” solution to child safety concerns, especially given the rapid pace of change in the digital world. A diverse array of protective tools are currently available today to families, caretakers, and schools to help encourage better online content and communications. They are most effective as part of a “layered” approach to child online safety. The best of these technologies work in tandem with educational strategies, parental involvement, and other approaches to guide and mentor children, supplementing but not supplanting the educational and mentoring roles.”  [...] “age verification is not only not effective but not necessarily advisable. There was some evidence presented to the (ISTTF) Task Force that it might actually endanger youth by keeping adult guidance or supervision out of online spaces where peer-on-peer harassment or cyberbullying could occur.” (p. 7, 27)


This makes it clear that there is near-universal consensus that mandatory age verification is not the smart path forward. In my closing statement to the Harvard Berkman Center Internet Safety Technical Task Force, of which I was a member, I actually went even further and argued that mandatory age verification represents a dangerous solution to concerns about online child safety because it:




Won’t Work: Mandatory age verification will not work as billed. For the reasons detailed below, it will fail miserably and create more problems than it will solve.
Will Create a False Sense of Security: Because it will fail, mandatory age verification will create a false sense of security for parents and kids alike. It will lead them to believe they are entering “safe spaces” simply because someone has said users are “verified.”
Is Not a Background Check: Moreover, even if age verification did work as billed, it is important to realize it is not synonymous with a complete background check. In other words, even if the verification process gets the age part of the process right, that tells us little else about the person being verified.
Is a Grave Threat to Privacy: Mandatory age verification is dangerous because it would require that even more personal information (about kids, no less) be put online at a time when identity theft and privacy violations continue to be a major concern.
Will Seriously Misallocate Resources: Devising and enforcing age verification regulations might also divert valuable time and resources that could be better used to focus on education and awareness-building efforts, especially K-12 online safety and media literacy education. Moreover, it might divert law enforcement energy and resources away from policing serious crimes or more legitimate threats to children.


I went on to post “10 Questions about Age Verification that the AGs Must Answer” if they continued their foolish pursuit of this misguided silver bullet (non-)solution. Instead of repeating them all here, I have simply appended my closing statement to this post [see Scribd embed below].



In closing, I remain convinced that nothing on the ground has changed since then. All the traditional age verification schemes remain highly flawed, and the more sophisticated age verification systems (tapping school records and using biometric identifiers to create “digital passports,” for example) would have rather obvious downsides and would still not likely be effective in practice. In the end, there is simply no substitute for an education and awareness-based approach to online safety that relies on parental mentoring, digital literacy / digital citizenship, and better social norms and self-regulation. Techno-silver bullets will always fail.



ISTTF Thierer Closing Statement






Published on June 18, 2012 12:40

WCIT is about People vs. Their Governments

As Jerry noted ten days ago, our little side project got some good press right after we launched it. I am delighted to report that the media love continues. On Saturday, WCITLeaks was covered by Talking Points Memo, and a Wall Street Journal article appeared online last night and in print this morning.



I think it’s great that both left- and right-of-center publications are covering WCIT and the threat to our online freedoms posed by international bureaucracy. But I worry that people will infer that since this is not a left vs. right issue, it must be a USA vs. the world issue. This is an unhelpful way to look at it.



This is an Internet users vs. their governments issue. Who benefits from increased ITU oversight of the Internet? Certainly not ordinary users in foreign countries, who would then be censored and spied upon by their governments with full international approval. The winners would be autocratic regimes, not their subjects. And let’s not pretend the US government is innocent on this score; it intercepts and records international Internet traffic all the time, and the SOPA/PIPA kerfuffle shows how much some interests, especially Big Content, want to use the government to censor the web.



The bottom line is that yes, the US should walk away from WCIT, but not because the Internet is our toy and we want to make the rules for the rest of the world. The US should walk away from WCIT as part of a repentant rejection of Internet policy under Bush and Obama, which has consistently carved out a greater role for the government online. I hope that the awareness we raise through WCITLeaks will not only highlight how foolish the US government is for playing the lose-lose game with the ITU, but how hypocritical it is for preaching net freedom while spying on, censoring, and regulating its own citizens online.




Published on June 18, 2012 10:51

June 14, 2012

Troubling Internet Regulations Proposed for WCIT

Today, WCITLeaks.org posted a new document called TD-62. It is a compilation of all the proposals for modification of the International Telecommunication Regulations (ITRs), which will be renegotiated at WCIT in Dubai this December. Some of the most troubling proposals include:




The modification of section 1.4 and addition of section 3.5, which would make some or all ITU-T “Recommendations” mandatory. ITU-T “Recommendations” compete with standards from bodies like the Internet Engineering Task Force (IETF), which proposes new standards for protocols and best practices on a completely voluntary and transparent basis.
The modification of section 2.2 to explicitly include Internet traffic termination as a regulated telecommunication service. Under the status quo, Internet traffic is completely exempt from regulation under the ITRs because it is a “private arrangement” under article 9. If this proposal—supported by Russia and Iran—were adopted, Internet traffic would be metered along national boundaries and billed to the originator of the traffic, as is currently done with international telephone calls. This would create a new revenue stream for corrupt, autocratic regimes and raise the cost of accessing international websites and information on the Internet. (A toy sketch of this sender-pays model appears after this list.)
The addition of a new section 2.13 to define spam in the ITRs. This would create an international legal excuse for governments to inspect our emails. This provision is supported by Russia, several Arab states, and Rwanda.
The addition of a new section 3.8, the text of which is still undefined, that would give the ITU a role in allocating Internet addresses. The Internet Society points out in a comment that this “would be disruptive to the existing, successful mechanism for allocating/distributing IPv6 addresses.”
The modification of section 4.3, subsection a) to introduce content regulation, starting with spam and malware, in the ITRs for the first time. The ITRs have always been about the pipes, not the content that flows through them. As the US delegation comments, “this text suggests that the ITU has a role in content related issues. We do not believe it does.” This is dangerous because many UN members do not have the same appreciation for freedom of speech that many of us do.
The addition of a new section 8.2 to regulate online crime. Again, this would introduce content regulation into the ITRs.
The addition of a new section 8.5, proposed by China, that would give member states what the Internet Society describes as “a very active and inappropriate role in patrolling and enforcing newly defined standards of behaviour on telecommunication and Internet networks and in services.”
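
To make the stakes of the section 2.2 proposal concrete, here is a toy sketch (mine, not from the proposals themselves) of the sender-pays settlement model it would permit, in which traffic is billed to the originating country as international telephone calls are today. All traffic volumes and the per-gigabyte rate below are hypothetical; under the status quo “private arrangement” regime, the equivalent ITR charge is simply zero.

```python
from collections import defaultdict

# Hypothetical monthly cross-border flows in GB, keyed (origin, destination).
flows = {
    ("US", "RU"): 1000,
    ("RU", "US"): 200,
    ("US", "IR"): 50,
}

def sender_pays(rate_per_gb):
    """Bill each flow to its originating country, as with phone calls."""
    bills = defaultdict(float)
    for (origin, _destination), gb in flows.items():
        bills[origin] += gb * rate_per_gb
    return dict(bills)

# A $0.05/GB settlement rate would bill the US $52.50 and Russia $10.00
# for this traffic; today, no such per-border charge exists at all.
print(sender_pays(0.05))  # {'US': 52.5, 'RU': 10.0}
```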


These proposals show that many ITU member states want to use international agreements to regulate the Internet by crowding out bottom-up institutions, imposing charges for international communication, and controlling the content that consumers can access online.


Published on June 14, 2012 11:16
