Adam Thierer's Blog, page 77
December 20, 2012
Tears for Tiers: Wyden’s “Data Cap” Restrictions Would Hurt, not Help, Internet Users
By Geoffrey Manne & Berin Szoka
As Democrats insist that income taxes on the 1% must go up in the name of fairness, one Democratic Senator wants to make sure that the 1% of heaviest Internet users pay the same price as the rest of us. It’s ironic how confused social justice gets when the Internet’s involved.
Senator Ron Wyden is beloved by defenders of Internet freedom, most notably for blocking the Protect IP bill—sister to the more infamous SOPA—in the Senate. He’s widely celebrated as one of the most tech-savvy members of Congress. But his latest bill, the “Data Cap Integrity Act,” is a bizarre, reverse-Robin Hood form of price control for broadband. It should offend those who defend Internet freedom just as much as SOPA did.
Wyden worries that “data caps” will discourage Internet use and allow “Internet providers to extract monopoly rents,” quoting a New York Times editorial from July that stirred up a tempest in a teapot. But his fears are straw men, based on four false premises.
First, US ISPs aren’t “capping” anyone’s broadband; they’re experimenting with usage-based pricing—service tiers. If you want more than the basic tier, your usage isn’t capped: you can always pay more for more bandwidth. But few users will actually exceed that basic tier. For example, Comcast’s basic tier, 300 GB/month, is so generous that 98.5% of users will not exceed it. That’s enough for 130 hours of HD video each month (two full-length movies a day) or between 300 and 1000 hours of standard (compressed) video streaming.
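The arithmetic behind those figures is easy to reproduce. Here is a minimal back-of-the-envelope sketch; the per-hour data rates are assumptions chosen to roughly match the numbers above, not Comcast’s published figures:

```python
# Rough conversion of a 300 GB monthly tier into viewing hours.
# The GB-per-hour rates below are assumptions, not official figures.
TIER_GB = 300
HD_GB_PER_HOUR = 2.3           # assumed data rate for an HD stream
SD_GB_PER_HOUR = (0.3, 1.0)    # assumed range for compressed standard-definition video

hd_hours = TIER_GB / HD_GB_PER_HOUR             # ~130 hours of HD video per month
movies_per_day = hd_hours / 2 / 30              # ~2 two-hour movies every day
sd_hours_high = TIER_GB / SD_GB_PER_HOUR[0]     # ~1000 hours at the lighter rate
sd_hours_low = TIER_GB / SD_GB_PER_HOUR[1]      # ~300 hours at the heavier rate

print(f"HD hours per month: {hd_hours:.0f}")
print(f"Two-hour HD movies per day: {movies_per_day:.1f}")
print(f"SD hours per month: {sd_hours_low:.0f} to {sd_hours_high:.0f}")
```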
Second, Wyden sets up a false dichotomy: Caps (or tiers, more accurately) are, according to Wyden, “appropriate if they are carefully constructed to manage network congestion,” but apparently for Wyden the only alternative explanation for usage-based pricing is extraction of monopoly rents. This simply isn’t the case, and propagating that fallacy risks chilling investment in network infrastructure. In fact, usage-based pricing allows networks to charge heavy users more, thereby recovering more costs and actually reducing prices for the majority of us who don’t need more bandwidth than the basic tier permits—and whose usage is effectively subsidized by those few who do. Unfortunately, Wyden’s bill wouldn’t allow pricing structures based on cost recovery—only network congestion. So, for example, an ISP might be allowed to price usage during times of peak congestion, but couldn’t simply offer a lower price for the basic tier to light users.
That’s nuts—from the perspective of social justice as well as basic economic rationality. Even as the FCC was issuing its famous Net Neutrality regulations, the agency rejected proposals to ban usage-based pricing, explaining:
prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.
It is unclear why Senator Wyden thinks the FCC—no friend of broadband “monopolists”—has this wrong.
Third, charging heavy users more isn’t just more equitable; it’s actually a solution to the very problem Wyden worries about: ensuring that ISPs have an incentive to encourage Internet use. Tiered pricing means they actually benefit from heavy use. So rather than try to slow use or discriminate against bandwidth-heavy applications—which is how the Net Neutrality fight started—ISPs will continue to build out faster networks.
Now, it’s certainly possible that, if the basic tier were set low enough or if additional data were expensive enough, cable companies could discourage their subscribers from canceling a cable subscription and switching to a competing service like Netflix. But it’s hard to see how a 300 GB basic tier deters anyone, especially when users can buy additional blocks of 50 GB for just $10/month—enough for nearly two more hours a day of streamed video. If there actually were a problem here, antitrust law could address it far better than blunt pricing restrictions. Indeed, such an investigation is already ongoing.
Finally, Wyden would require that broadband providers count content downloaded from them against your usage—fearing that a “discriminatory cap” would harm competing video providers. But if the “cap” is high enough, who cares? Under antitrust law, such “discrimination” is illegal only if it harms consumers—and it’s hard to see how consumers suffer from being able to download more video. Would they really be better off if every hour of video they streamed from their cable company meant an hour less they could stream from Netflix? That’s what Wyden’s bill would require.
The recent kerfuffle over Comcast’s decision in October to make some of its television (pay per view) content available through Xbox without counting against Internet usage limits brought this point into stark relief. While activists like Public Knowledge decried the decision for the same reasons Wyden does now, they missed the fact that by removing some of its content from usage limits Comcast was actually freeing up users to access more content at lower prices.
If Wyden’s concern is that usage-based pricing would allow ISPs to extract “monopoly profits” from users who bump up against tiers, then “preferencing” some of their own content will reduce, not increase, that risk: Users would be able to access, say, bandwidth-heavy video content just as they do television content now—without it counting against Internet usage limits. That this might “discriminate” against other Internet-based content providers does not mean that it harms consumers—quite the opposite, in fact. Again, to the extent that it might, antitrust rules are more than sufficient to discourage such practices in the first place or punish them if they arise—without restricting firms’ ability to price their content and manage their networks to ensure a reasonable return on their investments.
Pricing structures for broadband are still evolving. Just this year, Comcast moved from its original 250 GB cap—which it never enforced—to today’s 300 GB basic tier, and other broadband providers will likely follow suit. Those plans will probably continue to evolve towards pricing structures that minimize network congestion—like offering periods of unmetered use in the middle of the night, when network use plummets. That would go a long way to allaying concerns about the effect of tiered plans on competition, since Netflix could send your favorite shows and the next movies in your queue to the device of your choice while you sleep. But pricing structures also have to allow sensible, fair recovery of costs—which the Wyden bill would simply ban.
So much for not blithely regulating the Internet, Senator!
[Cross-posted at Truth on the Market]







Should We Use the “One Ring” to Control the Internet?
Three rings for the broadcast-kings filling the sky,
Seven for the cable-lords in their head-end halls,
Nine for the telco-men doomed to die,
One for the White House to make its calls
On Capitol Hill where the powers lie,
One ring to rule them all, one ring to find them,
One ring to bring them all and without the Court bind them,
On Capitol Hill where the powers lie.
Myths resonate because they illustrate existential truths. In J.R.R. Tolkien’s mythical tale, the Lord of the Rings, the evil Lord Sauron imbued an otherwise very ordinary ring – the “One Ring”– with an extraordinary power: It could influence thought. When Sauron wore the One Ring, he could control the lords of the free peoples of Middle Earth through lesser “rings of power” he helped create. The extraordinary power of the One Ring was also its weakness: It eventually corrupted all who wore it, even those with good intentions. This duality is the central truth in Tolkien’s tale.
It is also central to current debates about freedom of expression and the Internet.
Since the invention of the printing press, those who control the means of mass communication have had the ability to influence thought. The printing press enabled the rapid and widespread circulation of ideas and information for the first time in history, including ideas that challenged the status quo (e.g., sedition and heresy). Governments viewed this new technology as a threat and responded by establishing control over the machinery of the printing press through state monopolies, press licenses, and special taxation.
The Framers knew that freedom of expression is the foundation of freedom. They also recognized that governments could control thought by controlling the printing press, and included a clause in the First Amendment prohibiting government interference with the “freedom of the press.” Though this clause was aimed at the printing press, its protection is not limited to the mass communications media of the Eighteenth Century. The courts have held that the First Amendment encompasses new mass media technologies, including broadcast television and cable.
Several public interest groups, academics, and pundits across the political spectrum nevertheless argue that the latest mass communications technology – the Internet – does not merit protection from government interference on First Amendment grounds. They assert that neither the dissemination of speech by Internet service providers (ISPs) nor the results of Internet search engines (e.g., Google) are entitled to First Amendment protection. They fear that Internet companies will use the First Amendment to justify the exercise of editorial control over the free expression of their consumers.
Others (including the Competitive Enterprise Institute) argue that the First Amendment applies to both ISPs and search engines. They believe a government with unrestrained control over the means of mass communications has the incentive and the ability to use that power to control the thoughts of its people, which inevitably leads to authoritarianism. They point to Internet censorship by China, Syria, and other authoritarian governments as current proof of this principle.
Both sides in the Internet debate raise legitimate concerns. I suspect many consumers do not want ISPs and search engines to exercise unfettered control over the Internet. I suspect that just as many consumers do not want government to exercise unfettered control over the Internet either. How can we resolve these dual concerns?
The free peoples of Middle Earth struggled with a similar duality at the Council of Elrond, where they decided what should be done with the One Ring. “Why not use this ring?” wondered Boromir, a bold hero who had long fought the forces of Sauron and believed the ring could save his people. Aragorn, a cautious but no less valiant hero, abruptly answered that no one on the Council could safely wield it. When Elrond suggested that the ring must be destroyed, mutual distrust drove the Council to chaos. Order was restored only when Frodo, a hobbit with no armies to command and no physical power, volunteered for the dangerous task of destroying the ring.
The judicial branch is our Frodo. It has no armies to command and no physical power. It must rely on the willingness of others to abide by its decisions and their strength to enforce them. Like the peoples of Middle Earth who relied on Frodo, we rely on the courts to protect us from abuse of government power because the judicial branch is the least threatening to our liberty.
This is as true today as it was when the Constitution was signed. Changes in technology do not change the balance of power among our branches of government. As we have in the earlier eras of the printing press, broadcast television, and cable, we must trust the courts to apply the First Amendment to mass communications in the Internet era.
Providing ISPs and search engines with First Amendment rights would prevent dangerous and unnecessary government interference with the Internet while permitting the government to protect Internet consumers within Constitutional bounds. Although some advocates imply otherwise, application of the First Amendment to Internet companies would not preclude the government from regulating the Internet. The courts uphold regulations that limit freedom of expression so long as they are narrowly tailored to advance a compelling or substantial government interest.
We have always trusted the courts to balance the right to freedom of expression with other rights and governmental interests, and there is no reason to believe they cannot appropriately balance competing concerns involving the Internet. If the courts cannot be trusted with this task, no one can.







Time for Congress to Cancel the FTC’s Section 5 Antitrust Blank Check
By Geoffrey Manne and Berin Szoka
A debate is brewing in Congress over whether to allow the Federal Trade Commission to sidestep decades of antitrust case law and economic theory to define, on its own, when competition becomes “unfair.” Unless Congress cancels the FTC’s blank check, uncertainty about the breadth of the agency’s power will chill innovation, especially in the tech sector. And sadly, there’s no reason to believe that such expansive power will serve consumers.
Last month, Senators and Congressmen of both parties sent a flurry of letters to the FTC warning against overstepping the authority Congress granted the agency in 1914 when it enacted Section 5 of the FTC Act. FTC Chairman Jon Leibowitz has long expressed a desire to stake out new antitrust authority under Section 5 over unfair methods of competition that would otherwise be legal under the Sherman and Clayton antitrust acts. He seems to have had Google in mind as a test case.
On Monday, Congressmen John Conyers and Mel Watt, the top two Democrats on the House Judiciary Committee, issued their own letter telling us not to worry about the larger principle at stake. The two insist that “concerns about the use of Section 5 are unfounded” because “[w]ell established legal principles set forth by the Supreme Court provide ample authority for the FTC to address potential competitive concerns in the relevant market, including search.” The second half of that sentence is certainly true: the FTC doesn’t need a “standalone” Section 5 case to protect consumers from real harms to competition. But that doesn’t mean the FTC won’t claim such authority—and, unfortunately, there’s little by way of “established legal principles” to stop the agency from overreaching.
The Conyers-Watt letter cites four Supreme Court cases (Aspen Skiing, Otter Tail Power, Lorain Journal and Indiana Federation of Dentists), the latest decided in 1986, that deal only with the Sherman Act or that reference Section 5 only as the statutory basis by which the FTC enforces, indirectly, the Sherman Act. But what conduct does Section 5 allow the FTC to prosecute beyond the Sherman Act? The fifth case cited, Sperry & Hutchinson, from 1972, was the last time the Supreme Court directly addressed this critical question, holding that the FTC “does not arrogate excessive power to itself if, in measuring a practice against the elusive, but congressionally mandated standard of fairness, it, like a court of equity, considers public values beyond simply those enshrined in the letter or encompassed in the spirit of the antitrust laws.” Yet, even there, the Court concluded the FTC would have prevailed under the Sherman Act—thus leaving unresolved what a standalone Section 5 case could cover. Fourteen years later, the Court dodged the question again in Indiana Federation of Dentists, noting that, although Section 5 covers something more than the Sherman and Clayton acts, the Sherman Act provided the sole basis for liability in that case. Of Section 5, the Court in Indiana Federation of Dentists said merely that “the standard of ‘unfairness’ under the FTC Act is, by necessity, an elusive one.”
Elusive. Try telling that to your shareholders—or investors looking for The Next Big Thing—when asked how the FTC might regulate innovative business methods!
The FTC has been down this road before—starting with the same Sperry & Hutchinson decision cited by Conyers and Watt. The FTC interpreted that 1972 decision as a blank check to use its authority over unfair trade practices (distinct from, but related to, its authority over unfair methods of competition) to regulate everything from funeral parlors to children’s advertising. But the FTC’s overreach provoked widespread outcry, causing the Washington Post to blast the agency as the “National Nanny.” The Democratic Congress briefly closed the agency, slashed its budget and, in 1980, ordered the Commission to establish legal limiting principles in the form of a formal policy statement on unfairness (followed in 1983 by one on deception). That statement bars the FTC from banning a practice as unfair simply because a majority of Commissioners decide it is “immoral” or in violation of public policy; instead the Commission must show that it violates public policy that is “widely-shared” and “clear and well-established” in law or that causes a substantial injury to consumers without countervailing benefits and which consumers cannot reasonably avoid. Congress enshrined this doctrine into law in 1994.
But the Commission has never issued any such policy statement about Section 5’s unfair competition language—and Congress has never bothered to intervene, even though the FTC has begun exploiting this uncertainty as additional leverage in “convincing” companies to settle shaky antitrust cases. That’s precisely what happened in the Intel case, where, as we’ve explained, Intel settled a questionable complaint, probably because it concluded that settling the case was less costly than litigating it. While such outcomes may bolster the agency’s power, they do nothing to protect consumers and serve instead to chill business conduct that would benefit consumers.
That dynamic is a major reason why the FTC gets away with pushing the boundaries of its authority. Litigation in court is costly enough, but the agency can always threaten companies with an administrative “Part III” litigation—meaning the company would have to spend upwards of a year litigating before the FTC’s Administrative Law Judge and then the full Commission, almost certainly suffering two losses, both PR disasters, before ever getting to an independent, neutral tribunal. So it’s not surprising that most companies settle. Sure, they might win in court eventually, but if the FTC is talking to you about a standalone Section 5 case while pressuring you to settle a case in a consent decree… well, “you’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?”
High-tech companies are particularly likely to find themselves the targets of Section 5 sabre-rattling. Cutting-edge companies are often antitrust test-cases because technological innovation goes hand-in-hand with innovations in business practices, from consumer pricing to “coopetition” partnerships between rivals. They’re more likely to settle rather than litigate because they’re terrified of squandering money, investor goodwill and management time on litigation—lest, like Microsoft, they fall behind their rivals even as they are demonized as rapacious monopolists in the press. At the same time, Internet-related cases tend to attract a unique degree of popular attention, driving antitrust regulators to show they’re “doing something” about a perceived problem. Even the best regulators all too easily fall prey to the costly tendency described by Nobel economist Ronald Coase: “if an economist finds something—a business practice of one sort or another—that he does not understand, he looks for a monopoly explanation.”
We can’t wait for the courts to fix this problem—not least because the tendency for these cases to settle out of court means it may be a long while before any court gets the chance. At a minimum, Congress should insist that the FTC convene a public workshop aimed at identifying what a valid standalone Section 5 case could cover—followed by formal guidelines, as we’ve urged. If the FTC cannot rigorously define an interpretation of Section 5 that will actually serve consumer welfare—which the Supreme Court has defined as the proper aim of antitrust law—Congress should expressly limit Section 5’s prohibition of unfair competition only to invitations to collude (which aren’t cognizable under the Sherman Act).
As the FTC’s policy statement on unfairness puts it, “[t]he Supreme Court has stated on many occasions that the definition of ‘unfairness’ is ultimately one for judicial determination.” But for the courts to play that vital role in defining the “elusive,” Congress may need to reassess how the FTC operates. That might start with requiring the agency to bring suit directly in federal court, just as the Department of Justice does. But it also means much more careful Congressional oversight of what the FTC does across the board. Otherwise, the Commission may once again, as it did in the 1970s, become a second national legislature—with three political appointees deciding what’s “fair” for the entire economy, especially the high-tech sector.
[Crossposted at Forbes.com]







December 19, 2012
FCC Should State the Obvious: Telephone Service Is Not a Monopoly
Today, the United States Telecom Association (USTA) asked the Federal Communications Commission (FCC) to declare that incumbent telephone companies are no longer monopolies. Ten years ago, when most households had “plain old telephone service,” this request would have seemed preposterous. Today, when only one in three homes has a phone line, it is merely stating the obvious: Switched telephone service has no market power at all.
The FCC already knows that plain old telephone service is no longer a “dominant” service (“dominance” is more likely when a service has a market share exceeding 60%). Last year, the FCC’s Technological Advisory Council found that the legacy, circuit switched telephone network “no longer functions as a universal communications infrastructure” and telephone service “does not provide anything close to the services and capabilities” of wired and wireless broadband Internet access services.
The FCC also knows that outdated regulations premised on the historical primacy of telephone networks are discouraging investment in the modern Internet infrastructure that is necessary for the United States to remain competitive in a global economy. To its credit, the FCC has begun “eliminating barriers to the transformation of today’s telephone networks into the all-IP broadband networks of the future.” Based on an idea pioneered by Commissioner Ajit Pai, the FCC recently formed an agency-wide Technology Transitions Task Force to provide recommendations for modernizing our nation’s communications policies.
The USTA petition has a very limited scope compared to the Task Force’s broader review. The petition does not include broadband or “special access” services and does not seek to deregulate telephone service. It asks only that incumbent telephone companies providing plain old telephone service receive regulatory treatment similar to that received by wireless providers, cable operators, and VoIP providers. Today, telephone companies designated as “dominant” are subject to unique regulatory requirements regarding pricing, tariff filings, and market entry and exit that are inapplicable to their competitors.
These unique regulatory requirements are premised on the presumption that telephone companies have “market power” – i.e., that they can raise prices without losing customers to competitors. Telephone companies may have possessed such market power during the Carter Administration when the current regulatory regime was adopted. But today, incumbent telephone companies whose prices are capped by the FCC are losing 10% of their customers to competitive alternatives every year. Given the rate at which telephone companies are losing customers when they cannot raise prices as a regulatory matter, it is preposterous to continue presuming that they could raise prices as an economic matter. It is more realistic to presume that plain old telephone service will lose customers at any price as consumers migrate to services with superior capabilities.
Though the relief sought by USTA is a small step toward regulatory modernization, it is an essential one that the FCC can take immediately under existing precedent. In 1995, the FCC concluded that AT&T should be reclassified as “non-dominant” in the “long distance” market after its share of that market declined from approximately 90% to 60% during the preceding decade. Last October, the FCC eliminated the presumption prohibiting cable operators from entering into exclusive programming arrangements with their affiliates because cable’s share of the video market had dropped from approximately 95% to 57% since the presumption was adopted. It is obvious that switched telephone service – with a national market share that is approximately half that of the long distance and cable services the FCC found lacked market power – should receive similar treatment.
“There is nothing more deceptive than an obvious fact.” It is obvious that switched telephone services are no longer capable of supporting the economic and social goals of our nation. It is also obvious that our future success depends on a rapid transition to an all-Internet infrastructure.
The USTA petition asks the FCC to state the obvious while the FCC’s new Task Force conducts a more holistic review of our nation’s outdated communications policies. Eliminating the presumption that plain old telephone service is “dominant” would promote confidence for private investment in Internet infrastructure and bring us one step closer to realizing the full potential and opportunity of Internet transformation for consumers. That’s progress that benefits everyone.







December 18, 2012
Wendell Wallach on robot ethics
Wendell Wallach, lecturer at the Interdisciplinary Center for Bioethics at Yale University, co-author of “Moral Machines: Teaching Robots Right from Wrong,” and contributor to the new book, “Robot Ethics: The Ethical and Social Implications of Robotics,” discusses robot morality.
Though many of those interested in the ethical implications of artificial intelligence focus largely on the ethical implications of humanoid robots in the (potentially distant) future, Wallach’s studies look at moral decisions made by the technology we have now.
According to Wallach, contemporary robotic hardware and software bots routinely make decisions based upon criteria that might be differently weighted if decided by a human actor working on a case-by-case basis. The sensitivity these computers have to human factors is vital to ensuring they make ethically sound decisions.
In order to build a more ethically robust AI, Wallach and his peers work with those in the field to increase the sensitivity displayed by the machines making the routine calculations that affect our daily lives.
Related Links
Moral Machines: Teaching Robots Right from Wrong, by Wallach and Collin Allen
Robot Ethics: The Ethical and Social Implications of Robotics, by Patrick Lin, Keith Abney, and George A. Bekey
From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies, by Wallach







December 14, 2012
Ending Transaction ‘Mission Creep’ at the FCC
by Larry Downes and Geoffrey A. Manne
Now that the election is over, the Federal Communications Commission is returning to the important but painfully slow business of updating its spectrum management policies for the 21st century. That includes a process the agency started in September to formalize its dangerously unstructured role in reviewing mergers and other large transactions in the communications industry.
This followed growing concern about “mission creep” at the FCC, which, in deals such as those between Comcast and NBCUniversal, AT&T and T-Mobile USA, and Verizon Wireless and SpectrumCo, has repeatedly been caught with its thumb on the scales of what is supposed to be a balance between private markets and what the Communications Act refers to as the “public interest.”
Commission reviews of private transactions are only growing more common—and more problematic. The mobile revolution is severely testing the FCC’s increasingly anachronistic approach to assigning licenses for radio frequencies in the first place, putting pressure on carriers to use mergers and other secondary market deals to obtain the bandwidth needed to satisfy exploding customer demand.
While the Department of Justice reviews these transactions under antitrust law, the FCC has the final say on the transfer of any and all spectrum licenses. Increasingly, the agency is using that limited authority to restructure communications markets, beltway-style, elevating the appearance of increased competition over the substance of an increasingly dynamic, consumer-driven mobile market.
Given the very different speeds at which Silicon Valley and Washington operate, the expanding scope of FCC intervention is increasingly doing more harm than good.
Deteriorating Track Record
We’re trapped in a vicious cycle: the commission’s mismanagement of the public airwaves is creating more opportunities for the agency to insert itself into the internet ecosystem, largely to fix problems caused by the FCC in the first place. That is happening despite the fact that Congress clearly and precisely circumscribed the agency’s authority here, a key reason the internet has blossomed while heavily regulated over-the-air broadcasting and wireline telephone fade into history.
Desperate for continued relevance, the FCC can’t resist the temptation to tinker with one of the only segments of the economy that is still growing and investing. The agency, for example, fretted over Comcast’s merger with NBCUniversal for 10 months, approving it only after imposing a 30-page list of conditions, including details about which channels had to be offered in which cable packages.
Regulating-by-merger-condition has become a popular sport at the FCC, one with dangerous consequences. While it conveniently allows the agency to get around the problem of intervening where it has no authority, the result is a regulatory crazy quilt with different rules applying to different companies in different markets. Consumers, the supposed beneficiaries of this micromanagement, cannot be expected to understand the resulting chaos.
For example, Comcast also agreed to abide by an enhanced set of “net neutrality” rules even if, as appears likely, a federal appeals court throws out the FCC’s 2010 industry-wide rulemaking for exceeding the agency’s jurisdiction. As with all voluntary concessions, Comcast’s acquiescence isn’t reviewable in court.
The FCC made an even bigger hash in its review of AT&T’s proposed merger with T-Mobile. Once it became clear that the FCC was bowing to political pressure to reject the deal, the companies pulled their applications for license transfers to focus on winning over the Department of Justice first. But FCC Chairman Julius Genachowski, determined to have his say, simply released an uncirculated draft of the agency’s analysis of the deal anyway.
The report found that the combination, as initially proposed, would control too much spectrum in too many local markets. But that was only after the formula, known as the “spectrum screen,” was manipulated to substantially reduce the amount of spectrum included in the denominator. In a footnote, the report cryptically noted that the reduction was being made (and would be explained) in an unrelated order yet to be published.
When the other order was released months later, however, it made no mention of the change. It never actually happened. With the T-Mobile deal off the table, apparently, the chairman found it more expedient to leave the screen as it was, at least until further gerrymandering proved useful. Unwittingly, Genachowski had exposed his hand in rigging a supposedly objective test applied by a supposedly independent agency.
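To see why the denominator matters so much, here is a simplified sketch of how a spectrum screen works; all of the megahertz figures are hypothetical and chosen purely for illustration:

```python
# Simplified illustration of a spectrum screen: a carrier's apparent share of
# "available" spectrum depends entirely on what counts in the denominator.
# All MHz figures are hypothetical.
carrier_holdings_mhz = 95          # post-merger holdings in some local market
full_denominator_mhz = 300         # spectrum the screen originally treated as available
reduced_denominator_mhz = 240      # after quietly excluding some bands from the count
screen_threshold = 1 / 3           # a carrier "fails" the screen above roughly this share

share_before = carrier_holdings_mhz / full_denominator_mhz     # ~32%: passes
share_after = carrier_holdings_mhz / reduced_denominator_mhz   # ~40%: fails

print(f"Share under the full count: {share_before:.0%} (fails screen: {share_before > screen_threshold})")
print(f"Share under the reduced count: {share_after:.0%} (fails screen: {share_after > screen_threshold})")
```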
Leave it to the Experts
This amateurish behavior, unfortunately, is increasingly the norm at the FCC. Politics aside, part of the problem is that while federal antitrust regulators enforce statutes under a long line of interpretive case law, the FCC’s review of license transfers is governed by an undefined and largely untested public interest standard.
Now the commission is asking interested parties how, if at all, it needs to formalize its transaction review process, particularly the spectrum screen calculation it blatantly manipulated in the AT&T/T-Mobile review. It’s even asking whether it should re-impose a rigid cap on the amount of spectrum any one carrier can license, a bludgeon of a regulatory tool the agency wisely abandoned in 2003.
We have a better idea. Do away with easily forged formulae and proxies with no scientific relevance. Instead, review transactions in the broader context of a dynamic broadband ecosystem that is disciplined not only by inter-carrier competition, but increasingly by device makers, operating system providers, app makers and ultimately by consumers.
Every user with an iPhone 5 knows perfectly well how complex and competitive the mobile marketplace has become. It’s now time for the government to abandon its 19th century toolkit and look at actual data—data that the FCC already collects and dutifully reports, then ignores when political expediency beckons.
Thanks to the FCC’s endemic misadventures in spectrum management, we can expect more, not fewer, mergers—necessitating more, not fewer, commission reviews. Rather than expanding the agency’s unstructured approach to transaction reviews, we should be reining it in. As the FCC embarks on its analysis of T-Mobile’s takeover of MetroPCS and Sprint’s acquisition by SoftBank, it’s time to put an end to dangerous mission creep at the FCC.
That, at least, would better serve the public interest.
(Reprinted, with permission, from Bloomberg BNA Daily Report for Executives, Dec. 6, 2012. Our recent paper on FCC transaction review can be found at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2163169.)







Why Sell Phones With Subscriptions?
Why do mobile carriers sell phones with a subscription? My roommate and I were debating this the other night. Most other popular electronic devices aren’t sold this way. Cable and satellite companies don’t sell televisions with their video service. ISPs don’t sell laptops and desktops with their Internet service. Bundling phones with mobile service subscriptions is nearly unique. (The only mass-market analogs I can think of are satellite radio and GPS service.)
Why might this be? Some might think that US carriers need control over the phones sold to their customers because roughly half of US subscribers use GSM phones (AT&T and T-Mobile) and half use CDMA phones (Verizon and Sprint), but that can’t be the reason because GSM is the standard in Europe, yet bundling phones with subscriptions occurs there as well.
Some say it occurs because it benefits carriers at the expense of consumers. A law review article written a few years ago said bundling profitably exploits the misperceptions of consumers and the value they place on mobile services. Tim Wu has said that selling phones is an anticompetitive response that allows carriers to control the platform and disable features (WiFi, Bluetooth, VoIP) that might eat into the carriers’ existing revenue streams. But even if that’s true I don’t think that’s the whole answer. If network services have that much control over the devices used for their services, why don’t cable, satellite, and Internet service providers sell TVs and computers that only work with their service? At the very least, if we assume, as Wu does, that carrier control removes features consumers really want, consumers could simply purchase phones directly from phone makers–Apple, Motorola, Samsung, LG–with full functionality intact.
I don’t know the best answer, and maybe commenters can chime in, but I suspect phones and contracts are primarily sold together because of the engineering challenges presented by a device using radio spectrum. (This would explain why GPS and satellite radio service providers also bundle devices with service.) Different carriers purchase licenses to use different swaths of spectrum, and these different frequencies require different radio receivers. Phones, then, need to have radios installed that are tailored for the particular carrier.
In any case, throughout most of the world, phones are sold with subscriptions. Some on the left, like Wu, say that bundling shouldn’t be permitted because it enables large carriers to exclude competitors and remove functionality consumers want. To that end, he proposes regulations that require all handsets to work with all carriers. Despite these objections, I’ll push back on the claim that consumers are being duped or that competition is seriously harmed. Bundling handsets with subscriptions has several pro-competitive and pro-consumer justifications.
1. Acts as an installment plan
This may be the most powerful reason selling phones with subscriptions is near-universal: consumers like it. Modern smartphones are expensive consumer products costing hundreds of dollars. Wherever you see expensive consumer products (home appliances, furniture, computers, clothes) you find retailers offering installment plans so that consumers don’t have to pay hundreds or thousands of dollars up-front. By locking consumers into a two-year contract, carriers can offer heavily subsidized advanced handsets–that they usually sell at an initial loss–and charge more for services over two years.
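To make the financing framing concrete, here is a rough sketch; the prices are hypothetical round numbers, not any carrier’s actual figures:

```python
# Hypothetical illustration of a handset subsidy as an implicit installment plan.
# Prices are invented round numbers, not actual carrier figures.
unsubsidized_price = 650     # what the phone would cost outright
contract_price = 200         # up-front price with a two-year contract
contract_months = 24

subsidy = unsubsidized_price - contract_price        # $450 the carrier fronts at sale
implicit_installment = subsidy / contract_months     # ~$18.75/month recovered through service fees

print(f"Carrier subsidy per handset: ${subsidy}")
print(f"Implicit monthly installment: ${implicit_installment:.2f}")
```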
Consumers seem to prefer bundling since it acts as a de facto financing agreement. Noncontract prepaid plans are offered by every US carrier, yet the vast majority of Americans still use post-paid plans with contracts, in large part because the (subsidized) phones offered are so much cheaper up front and the deals more attractive. (See my prior post on the subject.) Further evidence that consumers really value this installment plan option comes from Belgium, where bundling phones with subscriptions was illegal for years. That all changed in 2008 when the iPhone 3G came out. Belgians complained about the fact that their iPhones started at €525 when their neighbors, like those in the Netherlands (who allowed bundling), could get a subsidized phone for as little as €1. Within a year, with support from a competition minister, the law was changed to allow phones to be sold with subscriptions. Predictably, the up-front costs of Belgian phones subsequently dropped as carriers subsidized the phones, and broadband penetration increased.
2. Reduces transactions costs for consumers
Consumers also benefit from having a one-stop shop for their mobile needs. Instead of needing to go to a phone retailer like Best Buy and then to a carrier’s retail store, consumers can get everything at the carrier’s retail store. This may sound like a small benefit, but I imagine this especially benefits rural Americans who don’t have the retail options city-dwellers do.
3. Aids carriers’ marketing and improves competition
It’s probable that bundling phones with subscriptions makes carriers more competitive. There’s a textbook antitrust justification for why this is true. Vertical contracts with suppliers align the interests of the retailer (carrier) with those of the supplier (phone maker). DROID is a good example. It’s a brand used by Verizon to market higher-end Android smartphones to tech-savvy early adopters. This is a case of vertical restraints that prevent free-riding on Verizon’s brand promotion since no other carrier can offer DROID phones. By most accounts, creating the DROID brand was a lucrative marketing move that helped Verizon’s Android phones compete with iPhones. While DROID is probably the most successful example, all carriers have phones they market and sell exclusively.
4. Improves carriers’ bargaining power with handset makers (and improves phones)
Selling phones with subscriptions allows carriers to strengthen their position in the value chain. Carriers don’t want to be passive bit-pipes. They know crushing price competition between carriers would result. (Not to mention, being “dumb pipes” would make carriers more susceptible to net neutrality rules.) Carriers are already being squeezed by handset suppliers, namely Apple, with high prices, so it’s to their benefit to make the handsets complementary to a specific network and not easily interoperable with other carriers. And by selling differentiated handsets to their customers, the carriers demand innovative handsets from suppliers to differentiate their brand from other carriers and make their network ecosystem attractive to consumers. If phones worked on all networks, a mandate Wu and others seek, each carrier’s demand for innovative phones from their suppliers would subside. (Then competition would be driven by consumer demands, but it’s my impression that phone makers prefer to deal with carriers. Responding directly to consumer demands would tend to fragment the hardware market even more than the existing market, which would add to their costs.)
5. Smooths revenue streams for carriers (and improves networks)
Finally, locking consumers into a two-year contract, with a subsidized phone as a carrot, gives some predictability to carriers’ revenue streams. Lumpy revenue streams and high churn are a killer for long-term network investment plans. Without the ability to sell phones with subscriptions, churn rates would be much higher since few customers would want to be locked into a long contract.
This is what happened for years in Finland, where regulators banned bundling. Finland had one of the best networks when cell phones first became popular in the late 1990s, but what followed was intense price competition for voice and text. And while Finnish prices were low, investment in a 3G data network fell far behind other countries. No bundling led to very high churn rates and made price competition–not advanced services like broadband–the focus of carriers. Seeing that the lack of network investment was brought on by the ban on bundling, the Finnish equivalent of the FCC repealed the anti-bundling law in 2005. With the new ability to lock customers into contracts, phone prices fell and network investment in mobile broadband improved.
I expect selling phones with subscriptions will continue for the foreseeable future, absent regulation. And, for the reasons I’ve outlined, the ability to sell phones with subscriptions is likely a good thing for consumers and the industry.
Finally, though, I’ll note that inexpensive high-end smartphones could upset this entire bundling regime. Cheap phones would mean carriers are less able to lock consumers into contracts. We’re not there yet, but phones like the LG Nexus 4–an unlocked high-end Android starting at $300–indicate the day may come when consumers can’t be bribed into contracts by subsidized phones any longer. Consumers, at that point, will prefer to pay full price up-front and have the ability to switch carriers at any time. I don’t know how the radio engineering issues would be overcome, but this would be a major disruption of the wireless market and would have some ambiguous effects on competition, network investment, and consumers. And, it’s important to note that we may enter Wu’s desired world of phone interoperability without regulatory mandates.







December 12, 2012
CFTC Targets Prediction Markets; Hits First Amendment
Would you pay good money for accurate predictions about important events, such as election results or military campaigns? Not if the U.S. Commodity Futures Trading Commission (CFTC) has its way. It recently took enforcement action against overseas prediction markets run by InTrade and TEN. The alleged offense? Allowing Americans to trade on claims about future events.
The blunt version: If you want to put your money where your mouth is, the CFTC wants to shut you up.
A prediction market allows its participants to buy and sell claims payable upon the occurrence of some future event, such as an election or Supreme Court opinion. Because they align incentives with accuracy and tap the wisdom of crowds, prediction markets offer useful information about future events. InTrade, for instance, accurately called the recent U.S. presidential vote in all but one state.
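A hypothetical example makes the incentive mechanism concrete; the contract price and probabilities below are invented purely for illustration:

```python
# Hypothetical binary prediction-market contract: pays $1.00 if the event occurs, $0 otherwise.
payout_if_true = 1.00
market_price = 0.62        # current price implies a ~62% market consensus
trader_estimate = 0.75     # a trader who believes the true probability is 75%

# If the trader's estimate is right, buying is profitable in expectation,
# and that buying pressure pushes the price toward the better estimate.
expected_profit = trader_estimate * payout_if_true - market_price   # $0.13 per contract

print(f"Market-implied probability: {market_price:.0%}")
print(f"Expected profit per contract for the better-informed trader: ${expected_profit:.2f}")
```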
As far as the CFTC is concerned, people buying and selling claims about political futures deserve the same treatment as people buying and selling claims about pork futures: Heavy regulations, enforcement actions, and bans. Co-authors Josh Blackman, Miriam A. Cherry, and I described in this recent op-ed why the CFTC’s animosity to prediction markets threatens the First Amendment.
The CFTC has already managed to scare would-be entrepreneurs away from trying to run real-money prediction markets in the U.S. Now it threatens overseas markets. With luck, the Internet will render the CFTC’s censorship futile, saving the marketplace in ideas from the politics of ignorance.
Why take chances, though? I suggest two policies to protect prediction markets and the honest talk they host. First, the CFTC should implement the policies described in the jointly authored Comment on CFTC Concept Release on the Appropriate Regulatory Treatment of Event Contracts, July 6, 2008. (Aside to CFTC: Your web-based copy appears to have disappeared. Ask me for a copy.)
Second, real-money public prediction markets should make clear that they fall outside the CFTC’s jurisdiction by deploying notices, setting up independent contractor relations with traders, and dealing in negotiable conditional notes. For details, see these papers starting with this one.
[Aside to Jerry and Adam: per my promise.]
[Crossposted at Technology Liberation Front, and Agoraphilia.]







December 11, 2012
How to Rig an FCC Spectrum Auction in 5 Easy Steps
Tomorrow the Federal Communications Commission (FCC) is testifying at a House Energy and Commerce Committee oversight hearing on spectrum auctions. The hearing is focused on the implementation of the broadcast incentive auction required by the Middle Class Tax Relief and Job Creation Act of 2012 (“Spectrum Act”), though the members will likely address other issues as well, including mobile spectrum aggregation.
I expect several questions regarding the FCC’s commitment to comply with the legislation as enacted by Congress. FCC Commissioner Ajit Pai has questioned whether several of the agency’s proposals in its auction proceeding are consistent with the Spectrum Act. The FCC’s recent proceeding to consider mobile spectrum aggregation has since raised troubling new questions regarding the agency’s willingness to comply with Congressional directives regarding spectrum auctions. If the FCC adopts new limits on spectrum holdings as suggested by its mobile competition reports, Verizon and AT&T would be prohibited from bidding in the incentive auction. Contrary to Congressional intent, the incentive auction would be rigged before it even begins.
Here is how the FCC could rig the auction in 5 easy steps.
1. Recognize that Verizon and AT&T have substantial mobile spectrum holdings below 1 GHz.
2. Propose an auction of spectrum below 1 GHz.
3. Arbitrarily decide that spectrum below 1 GHz is competitively relevant in both rural and urban areas.
4. Decide that Verizon and AT&T already hold too much (or just enough) spectrum below 1 GHz.
5. Discourage dissent by basing this arbitrary conclusion on highly technical (though arguably irrelevant) “data” that “only an engineer” can understand.
But, didn’t Congress prohibit the FCC from rigging auctions? Mostly. Section 6404 of the Spectrum Act prohibits the FCC from preventing a person from participating in an auction if that person meets the technical, financial, and character requirements to hold a spectrum license. This provision was clearly intended to prevent the FCC from imposing “eligibility restrictions” in an auction. In the past, the FCC had used seemingly neutral eligibility restrictions to pick winners and losers in auctions, though it was the public that lost the most. For example, in the so-called “entrepreneur auction” completed in 1996, the FCC restricted the bidding to certain businesses and allowed them to pay 90% of their winning bids through installment payments. Most bidders defaulted on their payments, and it took nearly ten years to free the licenses from bankruptcy and reassign them to operators capable of providing service to the public. The FCC has lost a total of approximately $26 billion in auction revenue through this and similarly failed policies.
Though Section 6404 was intended to prevent bad results in the incentive auction, it has a loophole. It says that the prohibition against eligibility restrictions doesn’t affect “any authority the Commission has to adopt and enforce rules of general applicability, including rules concerning spectrum aggregation that promote competition.” This exception gives the FCC the flexibility to adjust the amount of spectrum that can be held by any mobile provider on a band-by-band basis before every auction. If this flexibility is abused, it could become the exception that swallows the rule.
Congressional oversight will be necessary to ensure the FCC doesn’t use this exception to pick winners and losers in the mobile marketplace. If the FCC intends to distinguish among different spectrum bands when measuring spectrum aggregation, the FCC must do more than examine the technical characteristics of the spectrum. It must obtain sufficient facts and data to accurately assess the potential impact of distinctions among different spectrum bands on competition – the concern addressed by spectrum aggregation policies. Competition is primarily an economic issue, and competition rules should be based on economic analysis. Until the FCC conducts a rigorous economic analysis of the impact of differences among spectrum bands on competition, if any, it should follow its traditional practice of treating mobile spectrum the same.







Video of Copyright Unbalanced event now available
Last week, Jim Harper was kind enough to host a book forum at the Cato Institute for Copyright Unbalanced: From Incentive to Excess. Video of the event is now available online.
I presented the case for why conservatives and libertarians should be skeptical of our current copyright system, and Tom Bell, a contributor to the book, made the case for reform. Mitch Glazier of the RIAA, a former Republican senior staffer on the House Judiciary Committee, served as respondent and engaged us in some lively debate.
I hope you will check out the video and that it might compel you to pick up a copy of the book, which also includes excellent essays from Reihan Salam, Patrick Ruffini, David Post, Tim Lee, Christina Mulligan, and Eli Dourado.
Also, this Thursday at 3 p.m. on the Hill, TechFreedom will host a panel discussion on free market thinking on copyright featuring yours truly, Geoff Manne, Larry Downes, Ryan Radia, and Adam Mossoff.






