
Adam Thierer's Blog

May 2, 2013

Internet Analogies: Twice as Many Americans Lack Access to Public Water-Supply Systems as to Fixed Broadband

If broadband Internet infrastructure had been built to the same extent as public water-supply systems, more than twice as many Americans would lack fixed broadband Internet access.



After abandoning the “information superhighway” analogy for the Internet, net neutrality advocates began analogizing the Internet to waterworks. I’ve previously discussed the fundamental difference between infrastructure that distributes commodities (e.g., water) and the Internet, which distributes speech protected by the First Amendment – a difference that is alone sufficient to reject any notion that governments should own and control the infrastructure of the Internet. For those who remain unconvinced that the means of disseminating mass communications (e.g., Internet infrastructure) is protected by the First Amendment, however, there is another flaw in the waterworks analogy: If broadband Internet infrastructure had been built to the same extent as public water-supply systems, more than twice as many Americans would lack fixed broadband Internet access.



Advocates who would prefer that the government (whether local, state, or federal) own and operate the Internet often use the lack of broadband access in rural America as a justification. They point to an FCC report finding that 19 million Americans (6% of the population) lack access to a fixed broadband network and that less than 1% of Americans lack access to a mobile broadband network. Government broadband advocates fail to acknowledge, however, that more than twice as many Americans lack access to public water-supply systems. According to the most recent report from the US Geological Survey,* 43 million Americans (14% of the population) lack access to public water-supply systems and instead must self-supply their own water (e.g., they have to drill a well on their property).
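
The “more than twice” comparison is straightforward arithmetic on the two reports’ figures. As a quick sanity check, here is the calculation in Python, assuming a round 2010 population of roughly 308 million (an assumption on my part; the FCC and USGS reports use slightly different population baselines):

```python
# Back-of-the-envelope comparison of the FCC broadband figures and the
# USGS water-supply figures cited above. Population is approximate.
US_POPULATION = 308_000_000

no_fixed_broadband = 19_000_000   # FCC: Americans lacking fixed broadband access
no_public_water = 43_000_000      # USGS: Americans lacking public water supply

print(f"lack fixed broadband: {no_fixed_broadband / US_POPULATION:.0%}")  # ~6%
print(f"lack public water:    {no_public_water / US_POPULATION:.0%}")     # ~14%
print(f"ratio: {no_public_water / no_fixed_broadband:.2f}x")              # ~2.26x
```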



Self-supplied water systems are common in rural areas and neighborhoods that lie outside the jurisdictional boundaries of a municipality. The Virginia Department of Health notes that the “majority of households in 60 of Virginia’s 95 counties rely on private water supply systems” and that in “52 counties, the number of households using private wells is increasing faster than the number of households connecting to public water supply systems.” For example, my neighborhood in northern Virginia, which is served by two fixed broadband providers and several mobile broadband providers, has no access to a public water-supply system. In my neighborhood, every homeowner must drill their own well (at a cost ranging from $3,500 to over $50,000 depending on geological conditions and local regulations).



The jurisdictional limitations of municipal water-supply systems can be overcome by self-supply in most areas of the United States because the value of a water system to a particular household is not directly increased by interconnecting it with another water system. In contrast, the Internet is a network of networks (the term “Internet” was shortened from internetwork) that exhibits both positive and negative direct network effects – i.e., its value for all users is affected by the addition of new users or content to the internetwork. By definition, an individual homeowner cannot self-supply Internet access without interconnecting with at least one other network.
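
To make that contrast concrete, here is a toy model (mine, not anything from the post’s sources): a water hookup is assumed to deliver constant value regardless of how many other households are connected, while an Internet connection’s value is assumed to scale with the number of endpoints it can reach.

```python
# Illustrative toy model contrasting a commodity utility with a
# communications network. Both value functions are stylized
# assumptions, not empirical estimates.

def water_hookup_value(n_connected: int) -> float:
    """Water service is worth the same to a household no matter how
    many other households share (or don't share) the supply system."""
    return 1.0

def internet_connection_value(n_connected: int) -> float:
    """A connection's value scales with the endpoints it can reach,
    so a one-node 'network' is worth nothing."""
    return float(n_connected - 1)

for n in (1, 10, 1_000_000):
    print(n, water_hookup_value(n), internet_connection_value(n))
```

Note that at n = 1 the water hookup still has value but the “network” has none, which is the point about self-supply: a lone well works, a lone Internet connection does not.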



This fundamental difference between waterworks and the Internet is critical to understanding why state legislatures often treat municipal waterworks differently than municipal broadband networks. In addition to the First Amendment issues that are involved when local governments own and control the primary means of mass communications, many states have recognized the potential for municipal broadband networks to result in a form of “cherry picking.” If every municipality built its own broadband network, substantial portions of most states would still lack access to broadband, but the ability of private broadband network operators to profitably serve those areas would likely be reduced. As noted above, public water-supply systems reach a significantly smaller share of the population than private broadband networks do.



Of course, advocates who would prefer that the government own and operate the Internet typically don’t mention the jurisdictional limitations of municipalities or the potential impact of municipal broadband networks on citizens who don’t live in a municipality. Some of these advocates actually imply that cronyism must be the primary motivation for state legislation governing municipal broadband networks. Fortunately, state legislators representing citizens who lack access to municipal services have a better understanding of the needs of their citizens than some urban lobbyists and bureaucrats living in Washington.



*                      *                      *



*Note that the broadband data in the FCC report is current through mid-2011, and the public water-supply data in the US Geological Survey report is current only through 2005. The US Geological Survey releases its water use reports every five years, but does not intend to release its 2010 water use report until fiscal year 2014. Based on previous trends, however, it is unlikely that the percentage of Americans who have access to public water-supply systems has increased significantly in the last six years, if at all. The percentage of Americans who self-supplied their water dropped only three percentage points in the twenty-year period from 1985 (17%) to 2005 (14%).





April 30, 2013

Alex Tabarrok on innovation


Alex Tabarrok, author of the ebook Launching The Innovation Renaissance: A New Path to Bring Smart Ideas to Market Fast, discusses America’s declining growth rate in total factor productivity, what this means for the future of innovation, and what can be done to improve the situation.



According to Tabarrok, patents, which were designed to promote the progress of science and the useful arts, have instead become weapons in a war for competitive advantage with innovation as collateral damage. College, once a foundation for innovation, has been oversold. And regulations, passed with the best of intentions, have spread like kudzu and now impede progress to everyone’s detriment. Tabarrok puts forth simple reforms in each of these areas and also explains the role immigration plays in innovation and national productivity.



Download



Related Links


Launching The Innovation Renaissance: A New Path to Bring Smart Ideas to Market Fast, Tabarrok
VIDEO: Innovations in Most Fields Are Not Patented, Tabarrok
VIDEO: End Software Patents, Tabarrok
Patent Policy on the Back of a Napkin, Tabarrok




April 24, 2013

My Senate Testimony on Privacy, Data Collection & Do Not Track

Today I’ll be testifying at a Senate Commerce Committee hearing on online privacy and commercial data collection issues. In my remarks, I make three primary points:




First, no matter how well-intentioned, restrictions on data collection could negatively impact the competitiveness of America’s digital economy, as well as consumer choice.
Second, it is unwise to place too much faith in any single, silver-bullet solution to privacy, including “Do Not Track,” because such schemes are easily evaded or defeated and often fail to live up to their billing.
Finally, with those two points in mind, we should look to alternative and less costly approaches to protecting privacy that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.


The testimony also contains 4 appendices elaborating on some of these themes.



Down below, I’ve embedded my testimony, a list of 10 recent essays I’ve penned on these topics, and a video in which I explain “How I Think about Privacy” (which was taped last summer at an event up at the University of Maine’s Center for Law and Innovation). Finally, the best summary of my work on these issues can be found in this recent Harvard Journal of Law & Public Policy article, “The Pursuit of Privacy in a World Where Information Control is Failing.” (This is the first of two complementary law review articles I will be releasing this year dealing with privacy policy. The second, which will be published early this summer by the George Mason University Law Review, is entitled, “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.”)



Testimony of Adam D. Thierer before the Senate Committee on Commerce, Science & Transportation hearing…





Some of My Recent Essays on Privacy & Data Collection


A Better, Simpler Narrative for U.S. Privacy Policy – March 19, 2013
On the Pursuit of Happiness… and Privacy – March 31, 2013 (condensed from Harvard Journal of Law & Public Policy article, “The Pursuit of Privacy in a World Where Information Control is Failing”)
Isn’t “Do Not Track” Just a “Broadcast Flag” Mandate for Privacy? – Feb. 20, 2011
Two Paradoxes of Privacy Regulation – Aug. 25, 2010
Privacy as an Information Control Regime: The Challenges Ahead – Nov. 13, 2010
When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed – Apr. 29, 2011
Lessons from the Gmail Privacy Scare of 2004 – March 25, 2011
Who Really Believes in “Permissionless Innovation”? – March 4, 2013 (condensed from Minnesota Journal of Law, Science & Technology law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”)
The Problem of Proportionality in Debates about Online Privacy and Child Safety – Nov. 28, 2009
Obama Admin’s “Let’s-Be-Europe” Approach to Privacy Will Undermine U.S. Competitiveness – Jan. 5, 2011






April 23, 2013

Hate the Idea of a National ID? Wanna Do Something About it?

The Cato Institute is seeking a “researcher to support a campaign to educate the public and policymakers on the implications of biometric identification systems related to immigration policy reforms.”



The better applicants will know how the many different governmental systems work—legislation, appropriation, regulation, procurement, grant-making, and so on—and have the zeal to chase down all the ways the national ID builders are using them to advance their cause.



Immigration reform legislation in the Senate that features a vast expansion of E-Verify is yet another reason to join the fight against having a national ID in the United States.





WTF? WTPF! The continuing battle over Internet governance principles

Remember all the businesses, internet techies and NGOs who were screaming about an “ITU takeover of the Internet” a year ago? Where are they now? Because this time, we actually need them.



May 14 – 21 is Internet governance week in Geneva. We have declared it so because there will be three events in that week for the global community concerned with global internet governance. From 14-16 May the International Telecommunication Union (ITU) holds its World Telecommunication Policy Forum (WTPF). This year it is devoted to internet policy issues. With the polarizing results of the Dubai World Conference on International Telecommunications (WCIT) still reverberating, the meeting will revisit debates about the role of states in Internet governance. Next, on May 17 and 18, the Graduate Institute of International and Development Studies and the Global Internet Governance Academic Network (GigaNet) will hold an international workshop on The Global Governance of the Internet: Intergovernmentalism, Multi-stakeholderism and Networks. Here, academics and practitioners will engage in what should be a more intellectually substantive debate on modes and principles of global Internet governance.



Last but not least, the UN Internet Governance Forum will hold its semi-annual consultations to prepare the program and agenda for its next meeting in Bali, Indonesia. The IGF consultations are relevant because, to put it bluntly, it is the failure of the IGF to bring governments, the private sector and civil society together in a commonly agreed platform for policy development that is partly responsible for the continued tension between multistakeholder and intergovernmental institutions. Whether the IGF can get its act together and become more relevant is one of the key issues going forward.





Internet Governance Principles



The Dubai WCIT meeting last year grafted an Internet governance principles debate onto negotiations over an old telecommunications treaty that had little to do with the internet. That muddled the debate considerably. This time, we are actually having a debate about Internet governance principles, specifically the role of states and intergovernmental institutions.



In preparation for the WTPF, the ITU’s Secretary-General has released a 38-page report and five “Draft Opinions” on policy. The stated aim of the WTPF report is “to provide a basis for discussion at the Policy Forum…focusing on key issues on which it would be desirable to reach conclusions.” This is what the IGF ought to be doing but was prevented from doing by key stakeholders in the Internet technical and business communities, because they wanted to make sure the IGF could not be used to challenge the status quo.



The ITU SG’s report contains a fairly balanced survey of many internet-related policy controversies. After digesting it, however, one realizes that its main purpose is to re-assert and strengthen the role of governments in Internet governance. In particular, it proposes a definition of multi-stakeholderism that reserves to states a ‘sovereign right’ to make ‘public policy for the Internet;’ a definition that relegates the private sector and civil society to secondary, subordinate roles rather than empowering them as equal-status participants in new institutions for Internet governance. In keeping with this philosophy, the discussions at WTPF will be confined to ITU member states and sector members. Ordinary citizens cannot speak; they can only watch.



A flawed debate



What’s troubling about this looming debate is the intellectual weakness of so many of the supposed defenders of internet freedom. The Internet Society, ICANN and the U.S. government have increasingly re-branded Internet freedom as “The Multistakeholder Model” (TMM). So the choice we are given is not between a free Internet and a restricted, censored one, or between centralized, hierarchical internet governance and a more distributed, participatory, open and decentralized governance. No, we are given a choice between the ITU and a status quo that is vaguely defined as TMM. This not only implies that there is a single, well-defined “Multistakeholder Model” (in fact, there is not), but it conflates the results of good governance (freedom, openness, innovation, globalized connectivity, widespread access) with a particular model. It also tends to exempt many of the existing Internet governance institutions from deserved criticism and reform.



The lack of intellectual substance underlying the principles debate played out with stark clarity in the U.S. two weeks ago, when Congress proposed a bill “to Affirm the Policy of the United States Regarding Internet Governance.” The bill originally said



“It is the policy of the United States to promote a global Internet free from government control and to preserve and advance the successful multistakeholder model that governs the Internet.”



For reasons that we outlined in an earlier blog, the “government control” language was deemed too controversial and the bill was amended to read:



“It is the policy of the United States to preserve and advance the successful multistakeholder model that governs the Internet.”



So the United States has officially refused to endorse freedom from government control as a policy underlying its approach to Internet governance. It does not, apparently, have any principled objection to censorship, state surveillance to facilitate political manipulation of the population, over-regulation, over-taxation, economic protectionism and other destructive forms of governmental intervention. All those things are fine, apparently, as long as we manage to “preserve and advance” multistakeholder governance. What an uninspiring stance!



Why should anyone support TMM if it is devoid of any substantive meaning regarding the role of states and freedom from governmental control? TMM inspires support only if it is presented as a better alternative to a form of governance that is authoritarian, repressive, ineffective and unrepresentative of Internet users’ interests. In other words, we should support TMM only insofar as it contains and limits the power of nation-states to interfere unduly with the use and operation of the Internet, and empowers individuals worldwide to govern themselves. TMM is not an end in itself. In fact, once it is stripped of substantive policy norms, dogmatic support for TMM seems indistinguishable from unqualified support for existing Internet institutions.



As we enter into this crucial debate about principles of Internet governance, we need to have a better understanding of why global Internet governance institutions need to be shielded from national governments. Below we provide some simple bullet points as a guide to the ongoing debate over principles regarding the role of states in Internet governance.




The political unit – the polity – for Internet governance should be the transnational community of Internet users and suppliers, not a collection of states.


There is a fundamental, lasting conflict between territorial jurisdiction and the global Internet. There is a fundamental difference between a collection of leaders of national polities and a global polity. Though national governments can provide legitimate and rights-respecting modes of ordering society within their jurisdiction, at the transnational level there is anarchy, a space where the problems of governance are best addressed by new institutions with direct participation and more open channels of communication. National governments are not ‘just another stakeholder’ in a multistakeholder system: they represent a competing, alternative institutional framework.




A system of Internet governance based on states is inherently biased toward greater restriction and control of the Internet’s capabilities.


States are by nature oriented toward control. More specifically, they are concerned about maintaining their own control qua sovereign entity in a territory. They will, therefore, act to limit forms of choice and access that provide alternatives to their control of communication and information. In the international arena they will bargain with other states to maintain their security and control in relation to other states. They will not be optimal representatives of the interests of Internet users in freedom, access and openness. Ever.




The threats to Internet freedom posed by states are more serious than those posed by private actors.


Get over the stuff about ‘Googledom’ and ‘Facebookistan.’ It’s a cute metaphor but there is really no comparison between sovereigns and these businesses. States have the power to tax and expropriate, they have a monopoly on the use of force, they generate armed conflicts that result in war; they fund and deploy weapons. You do not choose to use their services. However much you might think you are locked in to Google, there is still a huge qualitative difference between your ability to use or not use its services and the choice you have with respect to states. This doesn’t mean that the private sector is perfect or that states never need to order or regulate what private actors do, but it helps to keep your priorities straight.




Multi-stakeholderism is not a panacea


Multistakeholderism as an ideology originated as a pragmatic means of opening up intergovernmental organizations (IGOs) to broader representation and participation. As a transitional mechanism for infusing IGOs with more information, expertise and voice, it has worked wonderfully. But it is not a well-defined, ultimate solution to the problem of Internet governance. The organically evolved Internet institutions were not originally conceived as “multistakeholder” but as private sector and contractually based governance. Some forms of Internet governance, such as the IETF, are truly bottom up, based on individualized representation, decentralized and largely voluntary in effect. Others, like ICANN, are highly centralized, largely coercive, and deeply enmeshed with states in a hybrid form of global governance. The virtues (and faults) of one should not be visited upon the other.





Making airspace available for ‘permissionless innovation’

Today, Jerry Brito, Adam Thierer and I filed comments on the FAA’s proposed privacy rules for “test sites” for the integration of commercial drones into domestic airspace. I’ve been excited about this development ever since I learned that Congress had ordered the FAA to complete the integration by September 2015. Airspace is a vastly underutilized resource, and new technologies are just now becoming available that will enable us to make the most of it.



In our comments, we argue that airspace, like the Internet, could be a revolutionary platform for innovation:



Vint Cerf, one of the “fathers of the Internet,” credits “permissionless innovation” for the economic benefits that the Internet has generated. As an open platform, the Internet allows entrepreneurs to try new business models and offer new services without seeking the approval of regulators beforehand.

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators.


And in Wired today, I argue that preemptive privacy regulation is unnecessary and unwise:



Regulation at this juncture requires our over-speculating about which types of privacy violations might arise. Since many of these harms may never materialize, pre-emptive regulation is likely to overprotect privacy at the expense of innovation.

Frankly, it wouldn’t even work. Imagine if we had tried to comprehensively regulate online privacy before allowing commercial use of the internet. We wouldn’t have even known how to. We wouldn’t have had the benefit of understanding how online commerce works, nor could we have anticipated the rise of social networking and related phenomena.


I expect we’ll all be hearing more about commercial drones in the near future. See Jerry’s piece in Reason last month or Larry Downes’s great post at the HBR blog for more.





DOJ Files Political Screed Asking FCC to Rig Spectrum Incentive Auction

The DOJ’s recommendation would likely reduce the amount of revenue produced by the incentive auction and risk leaving the public safety network unfunded (as the economist who led the design of the most successful auction in FCC history will explain in this webinar on Thursday). The unsubstantiated, speculative increase in commercial competition the DOJ says could occur if the FCC picks winners and losers in the incentive auction is a poor justification for continuing to deny our nation’s first responders the network they need to protect the safety of every American.



Beyond enforcing the antitrust laws, the Antitrust Division of the Department of Justice (DOJ) advocates for competition policy in regulatory proceedings initiated by Executive Branch and independent agencies, including the Federal Communications Commission (FCC). In this role, the DOJ works with the FCC on mergers involving communications companies and occasionally provides input in other FCC proceedings. The historical reputation of the DOJ in this area has been one of impartial engagement and deliberate analysis based on empirical data. The DOJ’s recent filing (DOJ filing) on mobile spectrum aggregation jeopardizes that reputation, however, by recommending that the FCC “ensure” Sprint Nextel and T-Mobile obtain a nationwide block of mobile spectrum in the upcoming broadcast incentive auction.



The new “findings” in the DOJ filing fail to cite any factual record and are inconsistent with the DOJ’s factual findings in recent merger proceedings that contain extensive factual records. The DOJ filing blithely relies on a discriminatory evidentiary presumption to insinuate that Verizon and AT&T are “warehousing” spectrum, and then uses that presumption to support a proposed remedy that bears no rational relationship to factual findings that the DOJ has actually made. The absence of any empirical evidence supporting the relevant conclusions in the DOJ filing gives it the appearance of a political document rather than a deliberative work product crafted with the traditionally substantive and impartial standards of the Justice Department. The FCC, the independent agency that prides itself on being fact-based and data-driven, should give this screed no weight.



DOJ Flip-Flops on Competition in the Mobile Market



The DOJ filing concludes – without citing any evidence – that Verizon and AT&T have “the ability and, in some cases, the incentive to exercise at least some degree of market power, particularly given that there is already significant nationwide concentration in the wireless industry.” This conclusion directly contradicts the representations of the DOJ in its federal court complaint to block the merger of AT&T and T-Mobile. The companies had argued that the absence of T-Mobile would not have a significant impact on the mobile marketplace because, as a standalone company, T-Mobile faced substantial commercial and spectrum challenges. The DOJ refuted this rationale by finding that, “Due to the advantages arising from their scale and scope of coverage, each of the Big Four nationwide carriers is especially well-positioned to drive competition, at both a national and local level, in this industry.” Now, however, the DOJ asserts that Verizon and AT&T are “dominant firms” in the mobile market and that Sprint Nextel and T-Mobile cannot compete in the upcoming incentive auction unless the FCC adopts rules granting them special privileges. “Each” of the “Big Four” could hardly have been “well-positioned” to “drive competition” if half of them require government subsidies to compete – something I expect the federal court judge would have been interested to hear.



DOJ Recommends a Discriminatory Evidentiary Presumption



After flip-flopping on competition, the DOJ theorizes that Verizon and AT&T could use their “dominant” positions in the mobile market to “foreclose” competition by aggregating excessive mobile spectrum. Rather than rely on actual evidence to support its foreclosure theory, the DOJ presumes that Verizon and AT&T are not using their spectrum “efficiently” while assuming that Sprint Nextel and T-Mobile could “effectively” make use of more spectrum. The filing concludes – again without citing any evidence – that Sprint Nextel and T-Mobile would make the “highest value use” of new spectrum “absent compelling evidence that the largest incumbent carriers are already using their existing spectrum licenses efficiently and their networks are still capacity-constrained.” The DOJ offered no explanation for its outrageous suggestion that the FCC should hold certain companies to a higher evidentiary standard (a presumption that must be rebutted by “compelling evidence”) than others (for which efficiency is assumed) when evaluating whether they are using their spectrum efficiently.



The DOJ also failed to offer any actual evidence supporting the notion, reiterated in testimony before the Senate Judiciary Committee, that the FCC “take a close look at whether some of the spectrum already available to some providers is being warehoused and not being used.” The FCC established “build out” requirements to ensure spectrum is not being “warehoused” and has previously found that a “single objective metric” of spectrum efficiency is “neither possible nor appropriate.” If the FCC now intends to consider the efficiency of current spectrum use as a factor in developing its spectrum aggregation and auction rules, fundamental principles of justice require that it establish an open and transparent process for defining spectrum efficiency and apply the new metric equally to all mobile providers after a full factual investigation of their actual spectrum use. Among all agencies, one would expect the Justice Department to understand that without a reminder.



DOJ Recommends an Irrational Remedy



The DOJ’s factual flip-flop on competition in the mobile market and its proposed adoption of a discriminatory evidentiary presumption merely lay the predicate for its ultimate policy recommendation: that the FCC distinguish between “low” and “high” frequencies in its spectrum aggregation rules in order to “ensure” Sprint Nextel and T-Mobile obtain a nationwide block of spectrum in the upcoming broadcast incentive auction. In the past, the FCC could simply have decreed that only Sprint Nextel and T-Mobile were eligible to bid on certain spectrum blocks or to participate in the auction at all. But, based on the disastrous results of previous spectrum auctions that limited the eligibility of certain types of companies to participate, Congress prohibited the FCC from imposing eligibility restrictions on the broadcast incentive auction.



So how does the DOJ propose that the FCC “ensure” Sprint Nextel and T-Mobile are “winners”? Although it cannot directly limit the participation of Verizon and AT&T, the FCC retains jurisdiction to limit the overall amount of mobile spectrum any one provider can hold. Most economists agree that, if a single provider were able to aggregate a significant amount of the total available mobile spectrum, that provider could use its spectrum holdings to engage in anticompetitive behavior. The potential for excessive aggregation of mobile spectrum would not normally be relevant to a particular spectrum auction because the FCC has traditionally treated all spectrum bands the same in this respect. Now, however, the FCC has proposed to apply different rules to “low” frequency spectrum (i.e., frequencies less than 1 GHz) on a nationwide basis – even though the only new mobile spectrum the FCC has proposed to auction in the last five years happens to be the broadcast spectrum, which, coincidentally of course, is below 1 GHz.



In its recent FCC filing, the DOJ is, also coincidentally, recommending this new approach, even though its own factual findings don’t support that outcome. Unlike the FCC, the DOJ has traditionally distinguished between mobile spectrum bands below and above 1 GHz, but only in rural areas. After conducting detailed market-by-market analyses in merger proceedings with voluminous factual records, the DOJ found that mobile providers that lack access to spectrum below 1 GHz “generally have found it less attractive to build out in rural areas.” After expressly considering the question in multiple merger proceedings, the DOJ has never considered the distinction between “lower” and “higher” frequency mobile spectrum competitively relevant in urban areas. As it admits in its filing with the FCC, the DOJ considers spectrum above 1 GHz “just as effective as low-frequency spectrum” when a provider “is attempting to augment the capacity of its network in dense urban areas.”



If mobile providers required spectrum below 1 GHz to compete successfully in non-rural areas, the DOJ could not have truthfully told the federal court that T-Mobile was “well-positioned” to “drive competition” in the mobile market, because T-Mobile has never held substantial “low” frequency spectrum. Despite the fact that T-Mobile has always relied on mobile spectrum above 1 GHz, the DOJ found that T-Mobile managed to build a mobile network covering 90% of the US population and is using that network to compete successfully in the mobile market on a nationwide basis. The DOJ’s factual findings have repeatedly affirmed that, to the extent spectrum below 1 GHz is competitively relevant, its relevance is limited to sparsely populated rural areas where capacity is not a substantial issue. The “spectrum crunch” the incentive auction is intended to ameliorate is a capacity issue caused by the massive growth in data traffic, not a coverage issue, and capacity issues primarily impact areas with high population densities.



The DOJ’s factual findings regarding the competitive relevance of spectrum below 1 GHz would, at best, support a rule limiting the amount of “low” frequency spectrum that a particular mobile provider could hold in low-density rural areas where the distinction between higher and lower frequencies may actually have competitive relevance. Suggesting that the FCC should nevertheless apply such a distinction on a nationwide basis is an irrationally overbroad remedy for potential competition issues that are limited to sparsely populated rural areas, especially when the “spectrum crunch” harms the most densely populated areas the most.



It’s also too clever by half in this context. When Congress enacted legislation prohibiting the FCC from imposing eligibility restrictions on the incentive auction, it did so with knowledge that the FCC had not traditionally distinguished among spectrum bands suitable for mobile use. Although the DOJ has recognized a distinction in rural areas during its case-by-case merger reviews, the FCC’s chosen remedy for rural coverage issues has been to mandate by rule that Verizon and AT&T enter into roaming agreements that allow other providers to use their networks, in part because it is often uneconomic for more than one or two providers to build separate networks in areas with low population densities. If the FCC’s findings supporting its roaming orders remain valid, it would presumably be uneconomic for T-Mobile to substantially increase its current rural coverage even if it held spectrum below 1 GHz on a nationwide basis.



DOJ Contradicts Congressional Priorities



Even if “ensuring” Sprint Nextel and T-Mobile “win” spectrum in the incentive auction would prompt those companies to spend the capital necessary to substantially improve their mobile coverage in rural areas (a particularly unlikely outcome for Sprint Nextel, which already holds a nationwide block of spectrum below 1 GHz), picking winners in the incentive auction is inconsistent with Congressional priorities. Among other things, Congress intended that the incentive auction raise $7 billion for the construction of an interoperable public safety network first recommended by the 9/11 Commission Report nearly a decade ago. The DOJ’s recommendation would likely reduce the amount of revenue produced by the incentive auction and risk leaving the public safety network unfunded (as the economist who led the design of the most successful auction in FCC history will explain in this webinar on Thursday). The unsubstantiated, speculative increase in commercial competition the DOJ says could occur if the FCC picks winners and losers in the incentive auction is a poor justification for continuing to deny our nation’s first responders the network they need to protect the safety of every American. For that reason alone, I expect a thoughtful and independent FCC to reject the politically motivated recommendations of a DOJ that considers itself unaccountable to Congress.





Paul Heald on the public domain


Paul J. Heald, professor of law at the University of Illinois Urbana-Champaign, discusses his new paper “Do Bad Things Happen When Works Enter the Public Domain? Empirical Tests of Copyright Term Extension.”



The international debate over copyright term extension for existing works turns on the validity of three empirical assertions about what happens to works when they fall into the public domain. Heald discusses a study he carried out with Christopher Buccafusco that found that all three assertions are suspect. In the study, they show that audio books made from public domain bestsellers are significantly more available than those made from copyrighted bestsellers. They also demonstrate that recordings of public domain and copyrighted books are of equal quality.



Since copyrighted works will once again begin to fall into the public domain starting in 2018, Heald says, it’s likely that content owners will ask Congress for yet another term extension. He argues that his empirical findings suggest it should not be granted.



Download



Related Links


Do Bad Things Happen When Works Enter the Public Domain?: Empirical Tests of Copyright Term Extension, Heald and Buccafusco
More Music in Movies: What Box Office Data Reveals About the Availability of Public Domain Songs in Movies from 1968-2008, Heald, Shi, Stoiber, and Zheng
Property Rights and the Efficient Exploitation of Copyrighted Works: An Empirical Analysis of Public Domain and Copyrighted Fiction Best Sellers, Heald




April 18, 2013

Silk Road proprietor: Bitcoin’s volatility doesn’t really matter

A couple of weeks ago I wrote that bitcoin’s valuation doesn’t really matter for the currency to effectively function as a medium of exchange. Now comes word from none other than the proprietor of the notorious Silk Road encrypted black market that indeed the recent wild volatility has not affected the transactions on his site. As Andy Greenberg reports:




In a rare (and brief) public statement sent to me, the Dread Pirate Roberts (DPR) said that despite Silk Road’s reliance on Bitcoin, commerce on the site hasn’t been seriously hurt by Bitcoin’s wild rise and fall. “Bitcoin’s foundation, its algorithms and network, don’t change with the exchange rate,” the pseudonymous site administrator writes. “It is just as important to the functioning of Silk Road at $1 as it is at $1,000. A rapidly changing price does have some effect, but it’s not as big as you might think.”



Silk Road’s customers, after all, aren’t generally interested in Bitcoin’s worth as an investment vehicle, so much as in how it makes it possible to privately buy heroin, cocaine, pills or marijuana. They use Bitcoin because it’s not issued or stored by banks and doesn’t require any online registrations, and thus offers a certain amount of anonymity. …



Silk Road has built-in protections against Bitcoin’s spikes and crashes. Although purchases on Silk Road can only be made with Bitcoin, sellers on the site have the option to peg their prices to the dollar, automatically adjusting them based on Bitcoin’s current exchange rate as defined by the central Bitcoin exchange Mt. Gox. To insulate those sellers against Bitcoin fluctuations, the eBay-like drug site also offers a hedging service. Sales are held in escrow until buyers receive their orders via mail, and vendors are given the choice to turn on a setting that pegs the escrow’s value to the dollar, with Silk Road itself covering any losses or taking any gains from Bitcoin’s swings in value that occur while the drugs are in transit. So while Bitcoin’s crash last week from $237 to less than $100 means that the Dread Pirate Roberts was likely forced to pay out much of the extra gains Silk Road made from Bitcoin’s rise, most of his sellers were protected from those price changes and continued to trade their drugs for Bitcoins despite the currency’s plummeting value.




What this shows is that Silk Road is separating the “unit of account” function of money from the “medium of exchange” function. Prices are denominated in dollars (as a unit of account) but payments are made in bitcoin (as a medium of exchange). Hedging is used to smooth out volatility.
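
A minimal sketch of how that separation can be implemented (hypothetical code; Silk Road’s actual internals aren’t public): the stored unit of account is the USD price, and bitcoin amounts are derived from it only at each settlement, using the $237-to-$100 swing from the quote above as the example rates.

```python
# Hypothetical dollar-pegged escrow: USD is the unit of account,
# BTC is only the medium of exchange. Rates are illustrative.

def usd_to_btc(usd_amount: float, usd_per_btc: float) -> float:
    """Convert a dollar-denominated amount into BTC at the given rate."""
    return usd_amount / usd_per_btc

list_price_usd = 50.0  # the seller prices the item in dollars

btc_paid = usd_to_btc(list_price_usd, usd_per_btc=237.0)  # buyer pays ~0.211 BTC
btc_owed = usd_to_btc(list_price_usd, usd_per_btc=100.0)  # seller is owed ~0.500 BTC

# With pegging enabled, the escrow service absorbs the difference
# (a top-up here; it would pocket the surplus had the rate risen instead).
print(f"paid into escrow:   {btc_paid:.3f} BTC")
print(f"released to seller: {btc_owed:.3f} BTC")
print(f"escrow top-up:      {btc_owed - btc_paid:.3f} BTC")
```

The seller’s receivable stays denominated in dollars until release, so exchange-rate risk during transit falls on the escrow service rather than on the vendor.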





Some folks still don’t grok the distinction. Here is CBS MoneyWatch getting it wrong in a story yesterday:




Well, if you did want to buy a house using Bitcoins as your medium of exchange, it would have been best to arrange for a closing no later than Wednesday morning. Because later that day, when the value of one Bitcoin reached an all-time high of $266 to a U.S. dollar, the digital money took a swan-dive. By late Wednesday, if you were still committed to the transaction, your house would have cost you twice as much – and even more if you had waited until the following day, because Bitcoins just kept losing value.




No. If you were using bitcoin merely as a medium of exchange, your house would not have cost twice as much. What the reporter means is that it would have cost twice as much if you had used bitcoin as your unit of account for the mortgage, which would be a ridiculous thing to do. As Steve Hanke, quoted in the piece, says, “One of the functions of money is a unit of account for future payments, like a mortgage payment, so if you’ve got a lot of instability it’s a big problem.”



By the way, in that same article the reporter says I “backed off from the notion that [bitcoin] can serve as a stable store of value[.]” But I don’t see how I can back off from a view I’ve never held. Yes, there is this idea that bitcoin can be a stable store of value, it is an idea that excites many bitcoin enthusiasts, and it’s an idea I explain when I talk about bitcoin, but it’s not one I’ve ever held or promoted. Quite the opposite, I’ve written about how it’s not what makes Bitcoin interesting.





Broadband and Competition Conference at GMU Law tomorrow

The Information Economy Project at the George Mason University School of Law is hosting a conference tomorrow, Friday, April 19. The conference title is From Monopoly to Competition or Competition to Monopoly? U.S. Broadband Markets in 2013. There will be two morning panels featuring discussion of competition in the broadband marketplace and the social value of “ultra-fast” broadband speeds.



We have a great lineup, including keynote addresses from Commissioner Joshua Wright of the Federal Trade Commission and Dr. Robert Crandall of the Brookings Institution.



The panelists include:



Eli Noam, Columbia Business School
Marius Schwartz, Georgetown University, former FCC Chief Economist
Babette Boliek, Pepperdine University School of Law
Robert Kenny, Communications Chambers (U.K.)
Scott Wallsten, Technology Policy Institute



The panels will be moderated by Kenneth Heyer, Federal Trade Commission, and Gus Hurwitz, University of Pennsylvania, respectively. A continental breakfast will be served at 8:00 am and a buffet lunch is provided. We expect to adjourn at 1:30 pm. You can find an agenda here and can RSVP here. Space is limited and we expect a full house, so those interested are encouraged to register as soon as possible.




