Adam Thierer's Blog
May 7, 2012
Resource Database for WCIT / ITU / “U.N. Taking over the Net”
While preparing my latest Forbes column, “Does the Internet Need a Global Regulator?” I collected some excellent resources. I figured I would just post all the links here since others might find them useful as we work our way up to the big U.N. International Telecommunication Union (ITU) World Conference on International Telecommunications (WCIT) in Dubai this December. Please let me know of other things that I should add to this resource database. I’ve divided the database into “General Resources” and “Opinion Pieces”:
General Resources:
World Conference on International Telecommunications (WCIT-12) official website
White House statement on WCIT: “Ensuring an Open Internet,” by Lawrence Strickling, Philip Verveer, and Daniel Weitzner, May 2, 2012.
Internet Society “What is the WCIT?” F.A.Q.
Internet Society’s Scoop.It page curating news on WCIT
Center for Democracy & Technology briefing paper: “ITU Move to Expand Powers Threatens the Internet,” March 12, 2012.
David A. Gross & Ethan Lucarelli, “The 2012 World Conference On International Telecommunications: Another Brewing Storm Over Potential UN Regulation Of The Internet,” November 2011.
Michael Joseph Gross, “World War 3.0,” Vanity Fair, May 2012.
[BOOK] Milton Mueller – Networks and States: The Global Politics of Internet Governance (2010).
Internet Governance Project blog.
[RADIO] National Public Radio: “Who — If Anyone — Should Control The Internet?” January 12, 2012.
Opinion Pieces:
Robert McDowell, “The U.N. Threat to Internet Freedom,” Wall Street Journal, February 21, 2012.
ITU Secretary General Dr. Hamadoun Touré, “Securing the Future Benefits of Technology,” The Guardian (U.K.), March 6, 2012. [+ a recent speech on the issue.]
Andrea Renda, “The U.N., Internet Regulator?” Wall Street Journal Europe, April 25, 2012.
Gordon Crovitz, “The U.N. Wants to Run the Internet,” Wall Street Journal, May 6, 2012.
Adam Thierer, “Does the Internet Need a Global Regulator?” Forbes, May 6, 2012.
Gregory Francis, “UN Moves on Internet Governance: Latest Dispatch,” CircleID, April 26, 2012.
Aspen Institute, Toward a Single Global Digital Economy: The First Report of the Aspen Institute IDEA Project, (April 24, 2012).
Edward J. Black, “UN’s ITU Could Become Next Internet Freedom Threat,” Huffington Post, March 9, 2012.
Jerry Brito, “The Case Against Letting the U.N. Govern the Internet,” Time TechLand, February 13, 2012.







May 2, 2012
Nothing to Fear From Pricing Freedom For Broadband Providers
The airline would not let coach passenger Susan Crawford stow her viola in first class on a crowded flight from DC to Boston, she writes at Wired (“Be Very Afraid: The Cable-ization of Online Life Is Upon Us”).
Just imagine trying to run a business that is utterly dependent on a single delivery network — a gatekeeper — that can make up the rules on the fly and knows you have nowhere else to go. To get the predictability you need to stay solvent, you’ll be told to pay a “first class” premium to reach your customers. From your perspective, the whole situation will feel like you’re being shaken down: It’s arbitrary, unfair, and coercive.
Most people don’t own a viola, nor do they want to subsidize viola travel. They want to pay the lowest fare. Differential pricing (prices set according to the differing costs of supplying products and services) has democratized air travel since Congress deregulated the airlines in 1978. First class helps make it possible for airlines to offer both lower economy ticket prices and more frequent service. Which is probably why Crawford’s column isn’t about airlines.
For one thing, Crawford seems to be annoyed that the “open Internet protections” adopted by the Federal Communications Commission in 2010 do not curtail specialized services — such as an offering from Comcast that lets Xbox 360 owners get thousands of movies and TV shows from XFINITY On Demand. As the commission explained,
“[S]pecialized services,” such as some broadband providers’ existing facilities-based VoIP and Internet Protocol-video offerings, differ from broadband Internet access service and may drive additional private investment in broadband networks and provide end users valued services, supplementing the benefits of the open Internet. (emphasis mine)
Since XFINITY on Xbox is a specialized service similar to traditional cable television service, it doesn’t have to count towards the data usage threshold that applies to broadband Internet access services provided by Comcast. Netflix doesn’t want to be “shaken down” or pay “tribute” to get similar treatment, according to Crawford.
For the data usage threshold exemption to be provided at no charge to Netflix, however, Comcast would have to recover the cost and/or the value from somewhere else. Broadband providers invested nearly $65 billion in 2010 alone. FCC staff have estimated the cost of universal broadband availability is $350 billion for 100 Mbps or faster.
Neither taxpayers nor lenders are going to sustain this level of investment in the current economic and political environment. It will have to come from private investors, who have many options for managing their money and demand competitive returns on equity. Since specialized services share last-mile facility capacity with broadband Internet access services, they provide a valuable additional source of revenue for fueling investment in the network. The concept is the same as first class and economy class passengers sharing the cost of air travel.
Increasing broadband adoption is justifiably a major objective of FCC Chairman Julius Genachowski, who estimated that in 2010 more than 100 million Americans (roughly 35% of U.S. households) had access to broadband but did not subscribe, in part because they felt they could not afford it.
Making broadband universally affordable and preventing businesses from having the option to pay a first class premium to reach their customers (if they want) are not compatible goals. If anything, there is a need to reduce broadband prices, not subsidize Netflix sales. Broadband providers must be allowed to let customers who value their products and services pay more, so that providers are in a stronger position to appeal to price-conscious consumers.
Kindle users, for example, pay for the content and get the wireless connectivity for free.
Crawford falsely claims that broadband is a “single delivery network — a gatekeeper — that can make up the rules on the fly and knows you have nowhere else to go.” She makes this untrue claim because “natural monopoly” is the classic legal justification for close government scrutiny and pervasive regulation.
The fact is that cable, telephone and mobile wireless providers all compete to offer similar broadband Internet access services. Fourth-generation wireless technologies being deployed now are believed to be capable of delivering peak download speeds of 100 Mbps or higher, comparable to DOCSIS 3.0 and Verizon’s FiOS service. There is no gatekeeper problem, only a desire on the part of some firms to seek political favors instead of undertaking the difficult and uncertain task of creating real consumer value.
Netflix’s success derives in large part from the fact that FedEx, UPS, and the U.S. Postal Service did not rent out DVDs. Delivering video is a Comcast specialty, however, and Netflix has no obvious source of competitive advantage. That’s unfortunate, but a bailout would impose hidden costs on consumers in the form of higher prices for broadband Internet access.
When government intrudes in the free market to perform a rescue of the type Netflix is seeking, it is picking winners and losers. Capricious government intervention frightens private investors and can lead to crony capitalism and corruption.
I’m not sure what Susan Crawford can do to avoid having to gate check her viola in the future. But not even she is advocating that Congress repeal airline deregulation so airlines are treated the same as telecommunications carriers again. Which is exactly what she is advocating for broadband.
May 1, 2012
APR 2011: Universal Service, Spectrum Policy, Online Privacy and Internet Sales Taxes
The Reason Foundation today has published the Telecommunications and Internet section of its 2011 Annual Privatization Review.
Although there’s been a bit of lead time since the articles were written, they are still timely. Notable is the discussion on the collection of state sales taxes from Internet retailers, back in the news now that Amazon.com has reached an agreement with the state of Texas to collect sales taxes from consumers in the Lone Star State. The settlement concludes a lengthy battle in Austin as to whether Amazon’s distribution facility in Ft. Worth constitutes a “nexus” as defined in previous court cases.
While a blow to Amazon’s Texas customers (full disclosure: I count myself as one), the action may shed further light on the debate over how much of an advantage Amazon gains from not having to collect sales tax. Competitors such as ailing Best Buy have said it’s enough to hurt brick-and-mortar retailers. Amazon points to findings that in New York, the most populous state where it collects sales tax, sales have not fallen off. Soon we’ll see if Texas tracks with that data as well. If it does, it will further validate the view that Amazon and other online retailers are succeeding because they have fundamentally changed the way people shop, not simply because they can avoid collecting sales taxes.
Also in the report, look for updates on the FCC’s options for the next spectrum auction, state and federal policymaking on search engines and social networking sites, and how priorities may change as the FCC migrates from the current Federal Universal Service Fund to its new, more broadband-oriented Connect America Fund.
The telecom section of APR 2011 can be found here.







Jennifer Shkabatur on transparency reform
On the podcast this week, Jennifer Shkabatur, Fellow at the Berkman Center for Internet & Society at Harvard University, discusses her new paper, “Transparency With(out) Accountability: The Effects of the Internet on the Administrative State.” Shkabatur begins by discussing the focus of her paper, a critical look at open government initiatives. Shkabatur believes promises of transparency in government fall short and do not promote accountability. She then discusses innovations in accountability facilitated by the Internet, which she divides into three categories: mandatory transparency, discretionary transparency, and involuntary transparency. Shkabatur then sets forth the types of reforms that she believes would improve government transparency. According to Shkabatur, context and details about agency processes are necessary, along with details about how an agency performs various tasks.
Related Links
“Transparency With(out) Accountability,” by Shkabatur
“Transparency Through Technology: Evaluating Federal Open Government Efforts,” Mercatus.org
“The Power of Open Government,” Brookings Institution
To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







April 27, 2012
Big Data, Innovation, Competitive Advantage & Privacy Concerns
This morning I spoke at a U.S. Chamber of Commerce event on “Responsible Data Uses: Benefits to Consumers, Businesses and the Economy.” In preparing for the event, I dusted off some old working notes for speeches I had delivered at other events about privacy policy and “big data” and expanded them a bit to account for recent policy developments. For what it’s worth, I figured I would post those notes here. (I apologize for the informality, but I never write out my speeches; I just work from bullet points.)
—————–
Benefits of “Big Data”
“big data” has numerous micro- and macroeconomic benefits
Micro benefits:
data aggregation of all varieties has powerful social and economic benefits that are sometimes invisible to consumers and citizens but are nonetheless enjoyed by them
big data can positively impact the 3 key micro variables – quality, quantity & price – and benefit consumers / citizens in the process
Macro benefits:
Data is the lifeblood of the information economy and it has an increasing bearing on the global competitiveness of companies and countries
In the old days, when we talked about comparative and competitive advantage, the focus was on natural resources, labor, and capital.
Today, we increasingly talk about another variable: information
Data is increasingly one of the most important resources that can benefit economic growth, innovation, and the competitive advantage of firms and nations.
Privacy Concerns
of course, “big data” also raises big privacy concerns for many groups and individuals
this has led to calls for regulatory action and virtually all levels of government – federal, state, local, and international – are considering expanded controls on data collection and aggregation
America’s Privacy Regime
I want to address what I regard as the most powerful myth that governs this debate
namely, I speak of the myth that America doesn’t have a privacy framework that can balance these goals and concerns about “big data” and data collection in general
we hear various advocates say that America needs a new privacy regime, and many of these advocates suggest that that regime should be more like Europe’s
Europe’s Regime
first, what is that European regime?
a more preemptive top-down approach / data “directives” / stringent requirements on data use
basically, under the EU regime, privacy trumps almost all other considerations, regardless of cost or complexity.
It’s more of a “Mother, May I” regime in which innovation needs to be “permissioned”
what’s wrong with European approach?
We can relate this back to the question of competitive advantage
The European approach leaves less room for innovative uses of data and ongoing marketplace experimentation
There’s also some evidence that this regime might influence industry structure and competitiveness as well as the quality and quantity of choices for the consumer
Anecdotally-speaking, we can ask ourselves this simple question: Can any of us name a global leader in the modern digital economy that was born in Europe?
I suppose there are a few, but I struggle to name them
Now, why is that?
It could be high taxes and the lack of a healthy market for venture capital.
But it also must have something to do with the regulatory structure that Europe has adopted.
America’s Current Advantages
Regardless, here’s what we do know: America’s digital economy innovators and social media operators are household names across the globe. Our firms are the envy of the world
Moreover, while many sectors of the U.S. economy are struggling, I bet if you stopped average Americans on the street and asked them to name one sector of America’s economy that is currently thriving and an example of innovation that others should emulate, most of them would probably mention information technology and the digital economy.
Again, many factors may contribute to our current success relative to Europe but certainly our “light-touch” legal and regulatory approach must have had some bearing on that outcome
America’s Privacy Regime
So, what exactly is America’s privacy regime?
Again, some say we don’t have one and that regulation is, therefore, needed
I beg to differ
America does have a privacy regime; it is one that is:
governed by a set of evolutionary norms,
ongoing online marketplace interactions and experiments, contractual negotiations,
public and press pressures,
self-regulatory systems,
educational efforts and user empowerment,
personal responsibility,
and targeted legal enforcement and the use of state torts when true harms can be demonstrated.
compared with Europe, our legal regime:
More bottom-up enforcement
Issue-specific / sectoral approach
Relies on common law / case law / torts
States have role; often more stringent than fed law
evolving industry self-regulation
That’s been the uniquely American approach to privacy protection and we should not abandon it lightly.
It’s the Same Regime We’ve Used to Address Online Safety
Importantly, it’s largely the same approach we have taken in this country toward online speech and child safety matters.
There, too, we have focused on what I call the “3-E” approach:
Education
Empowerment, and
Enforcement against particularly bad apples
Thus, in both the online child safety space as well as the privacy policy space, we have made great strides in pushing both personal responsibility and corporate responsibility as the first line of defense, not the last.
Now, it has always been true, and will always be the case, that “more can be done.”
Consumers could do more: We need to constantly encourage consumers to take more care to protect the personal data they care most about and to take steps to safeguard that which they do not want collected in the first place
Companies could do more: And we also need to constantly encourage companies who collect data to take greater steps to:
first consider asking permission to collect and use that data
second, to be transparent about what data they are collecting and what they are using it for
and third, to ensure adequate safeguards are in place to guard against unauthorized use of that data
The Difference between the Traditional American Model & the Emerging “Co-Regulatory” Model
in a sense, this vision tracks the Obama Administration’s proposed model for privacy and data collection
but here’s the difference: the Obama Administration wants to force this process in a more heavy-handed way by involving various federal agencies in the day-to-day management of how all these decisions get made
in essence, it’s a small but certain step toward the European model of “co-regulation”
government steers, industry rows
“multi-stakeholder process”
Everyone has a “seat at the table”
But we don’t need “a table” if the table is being set by government
there’s nothing wrong with truly voluntary “multi-stakeholder” processes, but when the government is the one setting the “seats at the table” and talking about enforcing the “codes” that the committee comes up with, it opens the door to a co-regulation model that has some real dangers:
If every decision about how information is used or aggregated becomes the equivalent of a committee decision — with everyone “at the table” getting a vote or a veto – then it will almost certainly be the case that less innovation occurs
The process could lack traditional democratic accountability / due process if more of an “agency threats” model evolves out of this. After all, if certain officials are in charge of who gets a “seat at the table” and also responsible for enforcing whatever is decided “at the table,” it raises the question of how much pressure they can bring to bear on the process. (File this under “regulation by raised eyebrow”).
Any way you cut it, regulation by committee (in this case, the “multistakeholder” process) could become the equivalent of a tax on innovation and have detrimental impacts on the quality and price of online services
Conclusion
For these reasons, we should instead continue to rely on the uniquely American model of privacy policy that balances diverse goals and values in a more spontaneous, evolutionary, and voluntary way without incessant government oversight and intervention.
Again, the traditional American model isn’t perfect and sometimes we will need targeted statutes, torts, and even FTC (Sec. 5) enforcement to handle the bad apples out there who cause the most serious problems in terms of privacy violations or data breaches.
But that more targeted approach to enforcement, along with the education and empowerment-based approaches I have outlined, can adapt to new challenges in this space and the child safety space while also ensuring our global competitive advantage is not sacrificed in the process.
To sum up: let’s not casually trade in the American model for Europe’s. America’s more flexible, evolutionary model of privacy protection has served us well so far and can adapt to balance competing needs without crushing our innovative information economy or America’s global competitiveness.
Additional Reading:
my big Mercatus Center filing to the FTC last year on privacy and Do Not Track regulation
my recent Forbes op-ed, “The Problem with Obama’s ‘Let’s Be More Like Europe’ Privacy Plan”
Initial Thoughts on FTC’s Final Privacy Report
video & slides from Hill Briefing on Online Privacy Policy
Isn’t “Do Not Track” Just a “Broadcast Flag” Mandate for Privacy?
Privacy as an Information Control Regime: The Challenges Ahead
Obama Admin’s “Let’s-Be-Europe” Approach to Privacy Will Undermine U.S. Competitiveness
Lessons from the Gmail Privacy Scare of 2004
When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed
And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars
Book Review: Solove’s Understanding Privacy







April 25, 2012
Book Review: Infrastructure: The Social Value of Shared Resources, by Brett Frischmann
The folks at the Concurring Opinions blog were kind enough to invite me to participate in a 2-day symposium they are holding about Brett Frischmann’s new book, Infrastructure: The Social Value of Shared Resources. In my review, I noted that it’s an important book that offers a comprehensive and highly accessible survey of the key issues and concepts, and outlines much of the relevant literature in the field of infrastructure policy. Frischmann’s book deserves a spot on your shelf whether you are just beginning your investigation of these issues or have covered them your entire life. Importantly, readers of this blog will also be interested in the separate chapters Frischmann devotes to communications policy and Net neutrality regulation, as well as his chapter on intellectual property issues.
However, my review focused on a different matter: the book’s almost complete absence of “public choice” insights and Frischmann’s general disregard for thorny “supply-side” questions. Frischmann is so focused on making the “demand-side” case for better appreciating how open infrastructures “generate spillovers that benefit society as a whole” and facilitate various “downstream productive activities,” that he short-changes the supply-side considerations regarding how infrastructure gets funded and managed. I argue that:
When one begins to ponder infrastructure management problems through the prism of public choice theory, the resulting failures we witness become far less surprising. The sheer scale of many infrastructure projects opens the door to logrolling, rent-seeking, bureaucratic mismanagement, and even outright graft. Regulatory capture is an omnipresent threat, too. . . any system big enough and important to be captured by special interests and affected parties often will be. Frischmann acknowledges the problem of capture in just a single footnote in the book and admits that “there are many ways in which government failures can be substantial.” (p. 165) But he asks the reader to quickly dispense with any worries about government failure since he believes “the claims rest on ideological and perhaps cultural beliefs rather than proven theory or empirical fact.” (p. 165) To the contrary, decades of public choice scholarship has empirically documented the reality of government failure and its costs to society, as well as the plain old-fashioned inefficiency often associated with large-scale government programs. For infrastructure projects in particular, the combination of these public choice factors usually adds up to massive inefficiencies and cost overruns.
From there I launch into a fuller discussion of public choice insights and outline why it is essential that such considerations inform debates about infrastructure policy going forward. Again, read my entire review here.







April 24, 2012
Naomi Cahn on the digital afterlife
On the podcast this week, Naomi Cahn, John Theodore Fey Research Professor of Law at George Washington University, discusses her new paper entitled, “Postmortem Life Online.” Cahn first discusses what could happen to online accounts like Facebook once a person dies. According to Cahn, technology is outpacing the law in this area and it isn’t very clear what can happen to an online presence once the account holder passes away. She discusses the various problems family members face when trying to access a deceased loved one’s account, and also the problems online companies face in trying to balance the deceased’s privacy rights with the need to settle an estate. Cahn claims that terms of service often dictate what will happen to an online account after death, but these terms may not be in line with account holder wishes. She then suggests some steps to take in making sure online accounts are taken care of after death, including taking inventory of all online accounts and determining who should have access to those accounts after death.
Related Links
“Postmortem Life On-Line,” by Cahn
“What happens to your Facebook account when you die?”, The Digital Beyond
“Deathless data: What happens to our digital property after we die?”, The Economist
To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







Why Do ISPs Impose Data Caps?
There is a Senate Commerce Committee hearing today on online video, and our friends at Free Press, Consumers Union, Public Knowledge, and New America Foundation argue that it should be used to investigate ISP-imposed data caps.
If data caps had a legitimate economic justification, they might be just a necessary annoyance. But they do not have such a justification. Arbitrary caps and limits are imposed by multichannel video providers that also provide broadband Internet access, because the providers have a strong incentive and ability to protect their legacy, linear video distribution models from emerging online video competition.
Since I use an ISP with a data cap and am a paid subscriber to three different online video services, you might think that I too would be concerned about these caps. But to the contrary, I think there are some legitimate economic reasons ISPs might impose data caps, and I don’t see a reason to stop ISPs from setting the price and policies for the services they offer.
The first and most basic reason that ISPs might want to implement a cap is to price discriminate. The term “price discrimination” makes a lot of people uneasy because it contains the word “discrimination.” While that sounds nefarious, price discrimination usually increases economic welfare. If a firm has to charge all consumers $100/month for service, some consumers who can only afford $50/month will be left out. In contrast, if it can charge $50/month to those customers and $150/month to other customers, more customers will use the product. Price discrimination especially benefits those customers at the low end of the consumption spectrum, who would otherwise have to go without.
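To make that arithmetic concrete, here is a minimal sketch of the welfare comparison, using made-up numbers for two customer groups (nothing here reflects any actual broadband pricing):

```python
# Illustrative only: two groups of would-be subscribers with different
# willingness to pay. Under a single $100 price the budget group is priced
# out; under two-tier pricing both groups are served and revenue rises.

customers = {
    "budget":  {"count": 60, "willing_to_pay": 50},
    "premium": {"count": 40, "willing_to_pay": 150},
}

def outcomes(prices):
    """Return (subscribers, revenue) given a price for each group."""
    subs = revenue = 0
    for group, c in customers.items():
        if c["willing_to_pay"] >= prices[group]:
            subs += c["count"]
            revenue += c["count"] * prices[group]
    return subs, revenue

print(outcomes({"budget": 100, "premium": 100}))  # uniform price -> (40, 4000)
print(outcomes({"budget": 50,  "premium": 150}))  # two tiers     -> (100, 9000)
```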
Comcast and AT&T have 250 GB/month caps on data usage for their residential service. Very few residential customers are likely to bump up against this cap. In fact, residential service is so adequate for most uses that some business customers might be tempted to use residential-class service. By imposing a cap on usage, ISPs are really trying to force these customers to buy their more-expensive business-class service, which does not have a usage cap. This is good for residential customers because it means that businesses are paying a greater share of the fixed costs associated with providing Internet service; business customers are cross-subsidizing residential customers.
Some people argue that price discrimination is bad because it is a sign of market power. However, relatively recent developments in the economics literature do not support automatic linkage between these two elements. Consider for instance that the most notorious price discriminators are airlines, who seem to be continuously going bankrupt! The best paper on the subject is Michael Levine’s “Price Discrimination Without Market Power” (gated published version, ungated working paper). Levine argues that firms in industries with large fixed costs or networks to maintain will be forced by competition to price discriminate to efficiently allocate their fixed costs. From this perspective, it is bans on price discrimination that can be thought of as a restraint of trade.
The second reason that ISPs might legitimately impose data caps is that doing so is easier for consumers than other, superficially more rational, approaches to handling congestion. If I were designing a bandwidth pricing scheme for homo economicus, I would impose two-part pricing. Every consumer would pay a fixed fee just to be part of the network, and then a per-bit metered fee so that they bear the costs associated with their own use. Even better, the per-bit fee would be higher when the network was more congested! However, it could be the case that such pricing imposes “mental accounting costs” on ordinary homo sapiens consumers. As Public Knowledge writes in their new paper on Internet data caps and usage-based pricing:
The strongest arguments for flat rates are best explained by the concept of “mental accounting costs.” As the world gets increasingly complicated, people are overwhelmed by the available choices and the need to devote mental efforts to sorting them out, and therefore search for simplicity. They are willing to pay extra for the peace of mind that flat rates offer them.
A 250 GB cap eliminates mental accounting costs for most consumers, relative to the two-part pricing scheme that I proposed above, while eliminating the congestion created by network hogs. My argument is not that a 250 GB cap is optimal, necessarily. ISPs probably have to do some experimentation to find solutions that strike the right tradeoff between congestion management and consumer value. But a 250 GB cap might work relatively well because Grandma doesn’t have to worry about running up her Internet bill. Nor would the vast majority of consumers, for that matter.
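For the curious, here is a rough sketch of the difference between the two-part, congestion-sensitive tariff described above and a flat rate with a cap. All fees and thresholds are invented for illustration and are not any ISP's actual prices:

```python
# Hypothetical pricing, for illustration only.
FLAT_RATE = 50.0        # $/month for the flat-rate plan
CAP_GB = 250.0          # monthly usage cap on the flat-rate plan

FIXED_FEE = 20.0        # $/month just to be connected to the network
PEAK_PER_GB = 0.30      # metered rate during congested hours
OFFPEAK_PER_GB = 0.05   # metered rate when the network is lightly loaded

def two_part_bill(peak_gb, offpeak_gb):
    """Two-part tariff: fixed fee plus congestion-sensitive metered usage."""
    return FIXED_FEE + peak_gb * PEAK_PER_GB + offpeak_gb * OFFPEAK_PER_GB

def flat_rate_bill(total_gb):
    """Flat rate with a cap: simple, but heavy users hit the ceiling."""
    return FLAT_RATE if total_gb <= CAP_GB else None  # None = over the cap

# A light user never has to think about the meter under either plan.
print(two_part_bill(peak_gb=5, offpeak_gb=15), flat_rate_bill(20))      # 22.25  50.0
# A heavy user pays for the congestion he causes, or blows through the cap.
print(two_part_bill(peak_gb=300, offpeak_gb=400), flat_rate_bill(700))  # 130.0  None
```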
A third reason that ISPs might impose data caps is not actually about Internet service at all, but rather about copyright. ISPs quite understandably do not want to be the copyright police. They don’t want to get roped (further) into the ongoing battle between the content-Congressional complex and ordinary consumers. It is in ISPs’ interest, therefore, to find ways to make the content piracy problem go away without alienating the majority of their users. A 250 GB/month data cap probably strikes a nice balance for them on that score. Uncapped service would result in some users running their BitTorrent clients at full speed for 23+ hours per day, consuming terabytes of data and distributing lots of copyrighted content. It would also result in further calls by policymakers and content producers for ISPs to do more with respect to copyright. By imposing a cap, ISPs eliminate the most egregious file-sharing practices without overburdening casual file-sharers and without having to monitor users directly.
The bottom line is that off the top of my head, I can think of three legitimate and sensible reasons for ISPs to impose caps. I’m not saying that any one of these is the answer; the real answer might be a combination of these and other reasons. I have no doubt that our friends have the best of intentions, but their claim that data caps have no possible legitimate economic justification suggests to me that either they don’t know much about economics or they’re not trying very hard.







April 20, 2012
The Kochs, Cato, and Miscalculation—Part III
In previous posts about the battle for control of the Cato Institute, I’ve noted (Part I) that the “Koch side” is a variety of different actors with different motivations who collectively seem not to apprehend the Cato Institute’s value. Next (Part II), I looked at why the Koch side is fairly the object of the greater scrutiny: their precipitous filing of the original lawsuit.
My premise has been that the Koch side cares. That is, I’ve assumed that they want to preserve Cato and see its role in the libertarian movement continue. Some evidence to undercut that assumption has come around, namely, their filing of a second lawsuit—and now a third! [Update: Mea culpa: there hasn’t been a third lawsuit, just a new report of the second one. I had assumed the second was filed in state court and thus thought this was distinct. I’m not following the legal issues, obviously, which matter very little.]
The Koch side may be “on tilt.” Lawsuit-happy, win-at-any-cost. We will just have to wait and see.
For the time being, I will continue to assume that the Koch side has the best interests of liberty in mind and explore the dispute from that perspective. I owe the world some discussion of Cato-side miscalculation—of course, there is some—but before I get to that in my next post, I think it’s worth talking about the burden of proof in the Kochs’ campaign to take control of Cato.
Only fringies will deny that the Cato Institute adds some value to the liberty movement. It does. The question—if preservation of liberty is the goal—is how well it will do so in the future. The central substantive issue in the case—there are many side issues—is how Cato will operate in the future.
Now, here’s a quick primer on public campaigns and the difference between the “yes” side and the “no” side.
A “yes” campaign is hard. The moving side—the “yes” side—has to make the case that there is a problem, and it also has to make the case that it offers the best available solution.
A “no” campaign is easy. The “no” side can choose to dispute the existence of the problem, or it can dispute that the “yes” side’s solution is the right one.
In 1994, I worked for a campaign to defeat a single-payer health care initiative, California’s Prop. 186. The most memorable work we did—and the most fun—was a weekly release we faxed out (yes, faxed!) called the “Whopper of the Week.” Our side would take any dimension of the other side’s campaign and pound on it as hard as we could with mocking disdain and a smattering of the facts as we saw them.
By the end of the campaign, the “yes” side was arguing that their losses in battles like this were becoming more narrow each time around. Pathetic. We blasted out an Alice-in-Wonderland-themed Whopper. No, health-care socializers, a loss is not a win.
In the battle for control of Cato, the Kochs are the moving party, the “yes” campaign. But they have done almost none of the work that a “yes” campaign should.
As I wrote previously, they didn’t even make the case that there is a problem:
In terms of communications and public relations, this is kind of jaw-dropping stuff. It looks as though the Koch side laid little or no groundwork for public discussion of their move to take control of Cato. They didn’t register a public complaint about the direction of Cato’s research. They didn’t enlist a single ally or proxy into raising questions about Cato’s management.
And it’s becoming conspicuous with the passage of time that the Koch side isn’t putting forward a solution.
When the Kochs filed their original lawsuit, their public messaging was that it was a narrow contract dispute. “Nothing to see here.”
Then, the Koch messaging aimed at Ed Crane’s personality and management style. A statement from David Koch cited Ed’s rudeness. A pair of unsigned stories on Breitbart.com expanded on that theme a little breathlessly (using a picture of Ed that makes him look mean and fat!). I presume the Kochs helped with the placement of these stories, though I could certainly be wrong.
[UPDATE: (4/23/12) A third Breitbart story went up today, but is no longer available at its original source. A mirror of the story, "The Crane Chronicles, Part III: Ed Gone Wild," is available here.]
You only have to look at that “mean and fat” picture of Ed Crane to know he was going to be out the door soon anyway. Hopefully, to a chaise lounge and a mai tai with a little umbrella in it. Ironically, the instant dispute may keep Ed at Cato longer than he would have stayed if someone had just said “thank you” and thrown him a nice going-away party.
Attacking Ed Crane does nothing to make the Kochs’ case for taking over Cato. It is at best one-third of the first half of a “yes” campaign.
What about the other two-thirds of the “problem” statement? Has Cato’s fundraising lagged? Is the scholarship weak? Has Cato failed to strike the right balance between principle and relevance? These are important, substantive questions … that the Koch side has barely raised.
Much less has the Koch side put forward the solutions that it thinks are the right ones. PR statements won’t do for the people who dedicate their every work-day to advancing liberty. What is the Koch vision for Cato? Who do the Kochs think should be at the helm? How can we know that Cato will remain a distinct, non-partisan voice in Washington? It takes something more than words when the devil we know has a 35-year track record.
The evidence of miscalculation I bring to bear in this post is the dog that didn’t bark. By all appearances, the Kochs didn’t prepare for the campaign to take over Cato. A fair inference is that the Kochs aren’t prepared to run it.
It is fascinating to me that, in writing this post, I feel the need to explain to whoever is running this issue for the Kochs what they should have done in the effort to get control of Cato. It’s not because I wish the Koch side success. It’s because the evidence we have indicates fairly strongly that the Koch side is not prepared to run the Cato Institute. What happens if the dog catches the car?







April 19, 2012
The Closing of the Spectrum Frontier

Frederick Jackson Turner (1861-1932)
On Fierce Mobile IT, I’ve posted a detailed analysis of the NTIA’s recent report on government spectrum holdings in the 1755-1850 MHz. range and the possibility of freeing up some or all of it for mobile broadband users.
The report follows from a 2010 White House directive issued shortly after the FCC’s National Broadband Plan was published, in which the FCC raised the alarm of an imminent “spectrum crunch” for mobile users.
By the FCC’s estimates, mobile broadband will need an additional 300 MHz. of spectrum by 2015 and 500 MHz. by 2020, in order to satisfy increases in demand that have only amped up since the report was issued. So far, only a small amount of additional spectrum has been allocated. Increasingly, the FCC appears rudderless in efforts to supply the rest, and to do so in time.
It’s not entirely their fault. At the core of the problem, the FCC is simply not constituted to resolve this increasingly urgent crisis. That’s because, as I write in the article, the management of radio frequencies has entered new and uncharted territory.
For the first time since the FCC and its predecessor agencies began licensing spectrum nearly 100 years ago, there is no unassigned spectrum available, or at least none of which current technology can make effective use.
The spectrum frontier is now closed. But the FCC, as created by Congress, is an agency that only functions at all on the frontier.
So it’s worth remembering what happened a hundred years earlier, when a young historian named Frederick Jackson Turner showed up at the 1893 annual meeting of the American Historical Association to present his paper on “The Significance of the Frontier in American History.”
The meeting took place that year on the grounds of the World’s Columbian Exposition in Chicago. The weather was unspeakably hot, and Turner’s talk was poorly attended. (The President of the AHA, Henry Adams, was in attendance but appears not to have heard Turner’s talk or ever to have read the paper—he was meditating in the Hall of Turbines, as he wrote in his autobiography, “The Education of Henry Adams,” having a nervous breakdown.) But the paper has had an outsized and long-lasting impact, launching the field of western or frontier history.
Turner’s thesis was simple and unassailable. Citing census data that showed there was no longer a recognizable line of American territory beyond which there was no settlement, Turner declared that by 1890 the frontier had “closed.” The era of seemingly endless supplies of readily-available cheap land, dispensed for free or for nominal cost by the federal government, had come to an end.
For Turner, the history of the west was the history of the American experience. And the defining feature of American life—shaping its laws, customs, culture and economy–had disappeared. A new phase, with new rules, was beginning.
The FCC Only Functions, When it Functions at All, on the Frontier
Our problem, at least, is equally easy to describe. The FCC, as created by Congress, is an agency that only functions, when it functions at all, on the frontier.
All the talk of “spectrum crunch” boils down to a simple but devastating fact: it’s no longer possible to add capacity to existing mobile networks by assigning them unused ranges of radio frequencies. While technology continues to expand the definition of “usable” frequencies, demand for mobile broadband is increasing faster than our ability to create new supply.
We need more spectrum. And the only way to put more spectrum to use for the insatiable demands of mobile consumers is to reallocate spectrum that has already been licensed to someone else.
In the American west, reallocation of land was easy. Land grants were given with full legal title, and holders were under no lasting obligation to use their land for any specific purpose or in any particular way.
The various acts of Congress that authorized the grants were intended to foster important social values—populating the frontier, developing agriculture, compensating freed slaves, building the railroads. But those intentions were never translated into the kind of limited estates that plagued modern Europe after the feudal age came to an end. (For a good example of the mischief a conditional estate can cause hundreds of years later, watch “Downton Abbey.” Watch it even if you don’t want to see an example of inflexible estate law.)
Speculators sold to farmers, farmers to ranchers, ranchers to railroads and miners and oil drillers, and from there to developers of towns and other permanent settlements. The market established the transfer price, and the government stood behind the change of title and its enforcement, where necessary. Which was rarely.
So the closing of the western frontier, while it changed the nature of settlement in the American west, never threatened to bring future development to a screeching halt.
Reallocation Options are Few and Far Between
Unfortunately, spectrum licensing has never followed a property model, even though one was first proposed by Ronald Coase as early as 1959. Under the FCC’s command-and-control model, spectrum assignments have historically been made to foster new technologies or new applications that the FCC deems likely to advance national interests. Spectrum has been licensed, usually at no or nominal cost to the licensee, for particular uses, with special conditions (often unrelated) attached.
In theory, of course, the FCC could begin revoking the licenses of public and private users who aren’t using the spectrum they already have, or who aren’t using it effectively or, to use the legal term of art, “in the public interest.” Legally and politically, however, revoking (or even refusing to renew) licenses is a non-starter.
Consequently, the most disastrous side-effect of the “public interest” approach to licensing has been that when old technologies grow obsolete, there is no efficient way to reclaim the spectrum for new or more valuable uses. The FCC must by law approve any transfer of an existing license on the secondary market, slowing the process at best and creating an opportunity to introduce new criteria and new conditions for the transfer at worst.
Even when the agency approves a transfer, the limitations on use and the existing conditions of the original license apply in full force to the new user. That means that specific ranges of spectrum, more or less arbitrarily set aside for a particular application, remain forever set aside for that application, unless and until the FCC undertakes a rulemaking to reassign them.
That also takes time and effort, and offers the chance for new regulatory mischief. (Only since 1999 has the FCC had the power, under limited circumstances, to grant flexible use licenses. That power cannot be applied retroactively to existing licenses.)
With the spectrum frontier closed, mobile broadband providers must find additional capacity from existing license holders. But because of the use restrictions and conditions, the universe of potential acquisition targets immediately and drastically shrinks to those making similar use of their licenses–that is, to current competitors.
So it’s no surprise that since 2005, as mobile use has exploded with the advent of 2G, 3G, and now 4G networks, the FCC has been called upon to approve over a dozen significant transfers within the mobile industry, including Sprint/Nextel, Verizon/Alltel, and Sprint Nextel/Clearwire. Indeed, expanding capacity through merger seemed to be the agency’s preferred solution, and the one that required the least amount of time and effort.
But with the rejection last year of AT&T’s proposed merger with T-Mobile USA, the FCC has signaled that it no longer sees such transactions as a preferred or perhaps even potential avenue for acquiring additional capacity. At least not for AT&T–and perhaps as well for Verizon, which is currently fighting to acquire unused spectrum held by a consortium of cable providers.
What other avenues are left? With the approval of “voluntary incentive auction” legislation earlier this year, the FCC can now begin the process of gently coercing over-the-air television broadcasters to give up some or all of their licensed capacity in exchange for a share of the proceeds of any auctions the agency conducts to repurpose that spectrum for mobile broadband.
(Broadcast television seems the obvious place to start freeing up spectrum. With the transition to digital TV, every station was given a 6 MHz. allocation in the 700 MHz. range. But over-the-air viewership has collapsed to as few as 10% of homes in favor of cable and fiber systems, which today reach nearly every home in the country and offer far greater selection and services. Many local broadcasters remain in business largely through the regulatory arbitrage of the FCC’s retransmission consent and must-carry rules.)
Those auctions will likely take years to complete, however, and the agency and Congress have already fallen out over how and how much the agency can “shape” the outcomes of these future auctions by disqualifying bidders who the agency feels already have too high a concentration of existing licenses.
And it’s far from clear that the broadcasters will be in any hurry to sign up, or that enough of them will to make the auctions worthwhile. Participation is, at least so far, entirely voluntary. Just getting Congress to agree to give the FCC even limited new auction authority took years.
There’s also the possibility of reassigning other kinds of spectrum to mobile use—increasing the pool of usable spectrum allocated to mobile, in other words. That option, however, has also failed to produce results. For example, the FCC initially gave start-up LightSquared a waiver that would allow it to repurpose unused spectrum allocated for satellite use for a new satellite and terrestrial-based LTE network.
But after concerns were raised by the Department of Defense and the GPS device industry about possible interference, the waiver was revoked and the company now stands on the brink of bankruptcy. (Allegations of political favoritism in the granting of the waiver are holding up the nominations of two FCC commissioners.)
So when Dish Network recently asked for a similar waiver, the agency traded speed and flexibility for the relative safety of full process. The FCC has now published a formal Notice of Proposed Rulemaking to evaluate the request. If the rulemaking is approved, Dish will be able to repurpose satellite spectrum for a terrestrial mobile broadband network (possibly a wholesale network, rather than a new competitor). That, of course, will take time. And given enough time, anything can and will happen.
Finally, there’s the potential to free up unused or underutilized spectrum currently licensed to the federal government, one of the largest holders of usable spectrum and a notoriously poor manager of this valuable resource.
That was the subject of the NTIA’s recent report, which seemed to suggest that the high-priority 1755-1850 MHz. range (internationally targeted for mobile users) could be cleared of government users within ten years—some in five years, and in some cases, with possible sharing of public and private use during a transitional phase.
But as I point out in the article, the details behind that encouraging headline suggest rather that some if not all of the twenty agencies who currently hold some 1,300 assignments in this band are in no hurry to vacate it. Having paid nothing for their allocations and with no option to get future auction proceeds earmarked to their agency, the feds have little incentive to do so. (NTIA can’t make them do much of anything.) The offer to share may in fact be a stalling tactic to ensure they never actually have to vacate the frequencies.
What’s Left? Perhaps Nothing, at Least as Far as the FCC is Concerned
The color-coded map of current assignments is so complicated it can’t actually be read at all except on very large screens. There are currently some 50,000 active licenses. The agency still doesn’t even have a working inventory of them. This is the legacy of the FCC’s command-and-control approach to spectrum allocation over nearly 100 years.
Almost everyone agrees that even with advances in hardware and software that make spectrum usage and sharing more efficient, large quantities of additional spectrum must be allocated soon if we want to keep the mobile ecosystem healthy and the mobile revolution in full and glorious swing.
With the closing of the spectrum frontier, the easy solutions have all been extinguished. And the century-long licensing regime, which tolerated tremendous inefficiency and waste when spectrum was cheap, has left the FCC, the NTIA, the mobile industry and consumers dangerously hamstrung in finding alternative methods to meet demand. Existing spectrum, by and large, can’t be repurposed even when everyone involved wants to do so and where the market would easily catalyze mutually-beneficial transactions.
Given the law as it stands and the FCC’s current policy choices, carriers can’t get spectrum from outside the mobile industry, nor can they get it from their competitors. They can’t get it from the government, and may not be allowed to participate in future auctions of spectrum agonizingly pried loose from broadcasters who aren’t using what they have cost-effectively—assuming those auctions ever take place. They also can’t put up more towers and antennae to make better use of what they have, thanks to the foot-dragging and NIMBY policies of local zoning authorities.
And even when network operators do get more usable spectrum, it comes burdened with inflexible use limits and unrelated conditions that attach like barnacles at every stage of the process—from assignment to auction to transfer—and which require regular reporting, oversight, and supervision by the FCC.
A New Approach to Spectrum Management–Following an Old Model that Worked
The frontier system for spectrum management is hopelessly and dangerously broken. It cannot be repaired. For the mobile broadband economy to continue its remarkable development (one bright spot throughout the sour economy), Congress and the FCC must transition quickly to a new model that makes sense in a world without a spectrum frontier.
That model would look much more like the 19th century system of federal land management than the FCC’s legacy command-and-control system. The new approach would start by taking the FCC out of the middle of every transaction, and leave to the market to determine the best and highest use of our limited range of usable frequencies. It would treat licenses as transferable property, just like federal land grants in the 18th and 19th centuries.
It would leave to the market—with the legal system as backup—to work out problems of interference, just as the common law courts have stood as backup for land disputes.
And it would deal with any genuine problems of over-concentration (that is, those that cause demonstrable harm to consumers) through modern principles of antitrust applied by the Department of Justice, not the squishy and undefined “public interest” non-standard of the FCC. It would correct problems once it was clear the market had failed to do so, not short-circuit the market at the first hint of theoretical trouble. (Hello, net neutrality rules.)
That’s the system, according to Frederick Jackson Turner, that formed American culture and values, shaped American law and provided the fuel to create the engine of capitalism.
For starters.







