Adam Thierer's Blog, page 47

March 26, 2014

The End of Net Neutrality and the Future of TV

Some recent tech news provides insight into the trajectory of broadband and television markets. These stories also indicate a poor prognosis for net neutrality. Political and ISP opposition to new rules aside (which is substantial), even net neutrality proponents point out that “neutrality” is difficult to define and even harder to implement. Now that the line between “Internet video” and “television” delivered via Internet Protocol (IP) is increasingly blurring, net neutrality goals are suffering from mission creep.


First, there was the announcement that Netflix, like many large content companies, was entering into a paid peering agreement with Comcast, prompting a complaint from Netflix CEO Reed Hastings, who argued that ISPs have too much leverage in negotiating these interconnection deals.


Second, Comcast and Apple discussed a possible partnership whereby Comcast customers would receive prioritized access to Apple’s new video service. Apple’s TV offering would be a “managed service” exempt from net neutrality obligations.


Interconnection and managed services are generally not considered net neutrality issues. They are not “loopholes.” They were expressly exempted from the FCC’s 2010 (now-defunct) rules. However, net neutrality proponents are attempting to bring both to the FCC’s attention as it crafts new rules. Those proponents already face an uphill battle, and the following trends won’t help.


1. Interconnection becomes less about traffic burden and more about leverage.


The ostensible reason that content companies like Netflix (or third parties like Cogent) pay ISPs for interconnection is that video content unloads a substantial amount of traffic onto ISPs’ last-mile networks.


Someone has to pay for network upgrades to handle the traffic. Typically, the parties seem to abide by the equity principle that whoever is sending the traffic–in this case, Netflix–should bear the costs via paid peering. That way, the increased expense is incurred by Netflix, which can spread costs across its subscribers. If ISPs incurred the expense of upgrades, they’d have to spread costs over their subscriber bases, but many of their subscribers are not Netflix users.
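

To make the cost-spreading arithmetic concrete, here is a minimal sketch; every number in it is an invented assumption for illustration, not data from Netflix or any ISP:

```python
# Hypothetical illustration of the cost-spreading argument above.
# Every number is an assumption for the arithmetic, not real data.

upgrade_cost = 100_000_000       # assumed network-upgrade cost, in dollars
isp_subscribers = 20_000_000     # assumed ISP subscriber base
netflix_share = 0.3              # assumed fraction of subscribers using Netflix
netflix_users = int(isp_subscribers * netflix_share)

# If Netflix pays via paid peering, the cost falls only on its users.
per_netflix_user = upgrade_cost / netflix_users

# If the ISP pays, the cost is spread over everyone, users and non-users.
per_subscriber = upgrade_cost / isp_subscribers
non_user_burden = upgrade_cost * (1 - netflix_share)

print(f"Netflix pays: ${per_netflix_user:.2f} per Netflix user")
print(f"ISP pays:     ${per_subscriber:.2f} per subscriber, with "
      f"${non_user_burden:,.0f} effectively borne by non-users")
```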


That principle doesn’t seem to hold for WatchESPN, which is owned by Disney. WatchESPN is an online service that provides live streams of ESPN television programming, like ESPN2 and ESPNU, to personal computers and also includes ESPN3, an online-only livestream of non-marquee sports. If a company has leverage in other markets, as Disney does in TV programming markets, I suspect ISPs can’t or won’t charge for interconnection. These interconnection deals are non-public, but Disney probably doesn’t pay ISPs for transmitting WatchESPN traffic onto ISPs’ last-mile networks. The existence of a list of ESPN’s “Participating Providers” indicates that ISPs actually have to pay ESPN for the privilege of carrying WatchESPN content.


Netflix is different from WatchESPN in significant ways (it generates substantially more traffic, for one). However, it is a popular service and seems to be flexing its leverage muscle with its Open Connect program, which provides higher-quality video to participating ISPs. It’s plausible that someday video sources like Netflix will gain leverage, especially as broadband competition increases, and ISPs will have to pay content companies for traffic rather than the reverse. When competitive leverage is the issue, antitrust agencies, not the FCC, have the appropriate tools to police business practices.


2. The rise of managed services in video.


Managed services are services that ISPs provide to customers, like VoIP and video-on-demand (VOD). They ride on data streams that receive priority to guarantee quality, since customers won’t tolerate a jittery phone call or movie stream. Crucially, managed services are carried on the same physical broadband network but on separate data streams that don’t interfere with a customer’s Internet service.
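

For readers curious about the underlying mechanism, here is a minimal sketch of one packet-level technique, DSCP marking under DiffServ, that networks can use to separate priority streams from best-effort traffic. It illustrates the general idea only; how any particular ISP actually provisions its managed services is not public, and the address and port below are placeholders:

```python
import socket

# Minimal sketch of DSCP marking (DiffServ), one mechanism IP networks
# use to separate priority streams (e.g., VoIP) from best-effort traffic.
# Illustrative only: this is not how any specific ISP's managed services
# are implemented, and IP_TOS handling is platform-dependent (Linux here).

DSCP_EF = 46              # "Expedited Forwarding": low-loss, low-jitter class
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the top 6 bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark outgoing packets with EF; DiffServ-aware routers can then queue
# them ahead of ordinary best-effort data on the same physical link.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"voip-frame", ("192.0.2.10", 5004))  # placeholder doc address
```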


The Apple-Comcast deal, if it comes to fruition, would be the first major video offering provided as a managed service. (Comcast has experimented with managed services affiliated with Xbox and TiVo.) Verizon is also a potentially influential player, since it just bought Intel’s streaming TV service. Future plans are uncertain, but Verizon might launch a TV product it could sell outside of the FiOS footprint with a bundle of cable channels, live television, and live sports.


Net neutrality proponents decry managed services as exploiting a loophole in the net neutrality rules, but it’s hardly a loophole. The FCC views managed services as a social good that ISPs should invest in. The FCC’s net neutrality advisory committee released a report last August concluding that managed services provide “considerable” benefits to consumers. The report went on to articulate principles that resemble a safe harbor for ISPs contemplating managed services. Given this consensus view, I see no reason why the FCC would threaten managed services with new rules.


3. Uncertainty about what is “the Internet” and what is “television.”


Managed services and other developments are blurring the line between the Internet and television, which makes “neutrality” on the Internet harder to define and implement. We see similar tensions in phone service. Residential voice service is already largely carried via IP. According to FCC data, 2014 will likely be the year that more people subscribe to VoIP service than plain-old-telephone service. The IP Transition reveals the legal and practical tensions when technology advances make the FCC’s regulatory silos–”phone” and “Internet”–anachronistic.


Those same technology changes and legal ambiguity are carrying over into television. TV is also increasingly carried via IP and it’s unclear where “TV” ends and “Internet video” begins. This distinction matters because television is regulated heavily while Internet video is barely regulated at all. On one end of the spectrum you have video-on-demand from a cable operator. VOD is carried over a cable operator’s broadband lines but fits under the FCC’s cable service rules. On the other end of the spectrum you have Netflix and YouTube. Netflix and YouTube are online-only video services delivered via broadband but are definitely outside of cable rules.


In the gray zone between “TV” and “Internet video” lie several services and physical networks that are not entirely in either category. These services include WatchESPN and ESPN3, which are owned by a cable network and included in traditional television negotiations but delivered via a broadband connection.


IPTV, likewise, is neither entirely TV nor Internet video. AT&T’s U-verse, Verizon’s FiOS, and Google Fiber’s television product are pure or hybrid IPTV networks that “look” like cable or satellite TV to consumers but are not. AT&T, Verizon, and Google voluntarily assent to many, but not all, cable regulations even though their services occupy a legally ambiguous area.


Finally, on the horizon are managed video and gaming services and “virtual MSOs” like Apple’s or Verizon’s video products. These are probably outside of traditional cable rules–like program access rules and broadcast carriage mandates–but there is still regulatory uncertainty.


Broadband and video markets are in a unique state of flux. New business models are slowly emerging and firms are attempting to figure out each other’s leverage. However, as phone and video move out of their traditional regulatory categories and converge with broadband services, companies face substantial regulatory compliance risks. In such an environment, more than ever, the FCC should proceed cautiously and give certainty to firms. In any case, I’m optimistic that experts’ predictions will be borne out: ex ante net neutrality rules are looking increasingly rigid and inappropriate for this ever-changing market environment.


Related Posts


1. Yes, Net Neutrality is a Dead Man Walking. We Already Have a Fast Lane.

2. Who Won the Net Neutrality Case?

3. If You’re Reliant on the Internet, You Loathe Net Neutrality.


March 25, 2014

Video Double Standard: Pay-TV Is Winning the War to Rig FCC Competition Rules

Most conservatives and many prominent thinkers on the left agree that the Communications Act should be updated based on the insight provided by the wireless and Internet protocol revolutions. The fundamental problem with the current legislation is its disparate treatment of competitive communications services. A comprehensive legislative update offers an opportunity to adopt a technologically neutral, consumer-focused approach to communications regulation that would maximize competition, investment, and innovation.


Though the Federal Communications Commission (FCC) must continue implementing the existing Act while Congress deliberates legislative changes, the agency should avoid creating new regulatory disparities on its own. Yet that is where the agency appears to be heading at its meeting next Monday.


A recent ex parte filing indicates that the FCC is proposing to “deem joint retransmission consent negotiations by two of the top four Free-TV stations in a market a per se violation of the FCC’s good-faith negotiation standard and adopt a rebuttable presumption that joint negotiations by non-top four station combinations constitute a failure to negotiate in good faith.” The intent of this proposal is to prohibit broadcasters from using a single negotiator during retransmission consent negotiations with Pay-TV distributors.


This prohibition would apply in all TV markets, no matter how small, including markets that lack effective competition in the Pay-TV segment. In small markets without effective competition, this rule would result in the absurd requirement that marginal TV stations with no economies of scale negotiate alone with a cable operator who possesses market power.


In contrast, cable operators in these markets would remain free to engage in joint negotiations to purchase their programming. The Department of Justice has issued a press release “clear[ing] the way for cable television joint purchasing” of national cable network programming through a single entity. The Department of Justice (DOJ) concluded that allowing nearly 1,000 cable operators to jointly negotiate programming prices would not facilitate retail price collusion because cable operators typically do not compete with each other in the sale of programming to consumers.


Joint retransmission consent negotiations don’t facilitate retail price collusion either. Free-TV distributors don’t compete with each other for the sale of their programming to consumers — they provide their broadcast signals to consumers for free over the air. Pay-TV operators complain that joint agreements among TV stations are nevertheless responsible for retail price increases in the Pay-TV segment, but have not presented evidence supporting that assertion. Pay-TV’s retail prices have increased at a steady clip for years irrespective of retransmission consent prices.


To the extent Pay-TV distributors complain that joint agreements increase TV station leverage in retransmission consent negotiations, there is no evidence of harm to competition. The retransmission consent rules prohibit TV stations from entering into exclusive retransmission consent agreements with any Pay-TV distributor — even though Pay-TV distributors are allowed to enter into such agreements for cable programming — and the FCC has determined that Pay- and Free-TV distributors do not compete directly for viewers. The absence of any potential for competitive harm is especially compelling in markets that lack effective competition in the Pay-TV segment, because the monopoly cable operator in such markets is the de facto single negotiator for Pay-TV distributors.


It is even more surprising that the FCC is proposing to prohibit joint sales agreements among Free-TV distributors. This recent development apparently stems from a DOJ Filing in the FCC’s incomplete media ownership proceeding.


A fundamental flaw exists in the DOJ Filing’s analysis: It failed to consider whether the relevant product market for video advertising includes other forms of video distribution, e.g., cable and online video programming distribution. Instead, the DOJ relied on precedent that considers the sale of advertising in non-video media only.


Similarly, the Department has repeatedly concluded that the purchase of broadcast television spot advertising constitutes a relevant antitrust product market because advertisers view spot advertising on broadcast television stations as sufficiently distinct from advertising on other media (such as radio and newspaper). (DOJ Filing at p.8)


The DOJ’s conclusions regarding joint sales agreements are clearly based on its incomplete analysis of the relevant product market.


Therefore, vigorous rivalry between multiple independently controlled broadcast stations in each local radio and television market ensures that businesses, charities, and advocacy groups can reach their desired audiences at competitive rates. (Id. at pp. 8-9, emphasis added)


The DOJ’s failure to consider the availability of advertising opportunities provided by cable and online video programming renders its analysis unreliable.


Moreover, the FCC’s proposed rules would result in another video market double standard. Cable, satellite, and telco video programming distributors, including DIRECTV, AT&T U-verse, and Verizon FIOS, have entered into a joint agreement to sell advertising through a single entity: NCC Media (owned by Comcast, Time Warner Cable, and Cox Media). NCC Media’s Essential Guide to planning and buying video advertising says that cable programming has surpassed 70% of all viewing to ad-supported television homes in Prime and Total Day, and 80% of Weekend daytime viewing. According to NCC, “This viewer migration to cable [programming] is one of the best reasons to shift your brand’s media allocation from local broadcast to Spot Cable,” especially with the advent of NCC’s new consolidated advertising platform. (Essential Guide at p. 8) The Essential Guide also states:



“It’s harder than ever to buy the GRP’s [gross rating points] you need in local broadcast in prime and local news.” (Id. at p. 16)
“[There is] declining viewership on broadcast with limited inventory creating a shortage of rating points in prime, local news and other dayparts.” (Id. at p. 17)
“The erosion of local broadcast news is accelerating.” (Id. at p. 18)
“Thus, actual local broadcast TV reach is at or below the cume figures for wired cable in most markets.” (Id. at p. 19)

This Essential Guide clearly indicates that cable programming is part of the relevant video advertising product market and that there is intense competition between Pay- and Free-TV distributors for advertising dollars. So why is the FCC proposing to restrict joint marketing agreements among Free-TV distributors in local markets when virtually the entire Pay-TV industry is jointly marketing all of their advertising spots nationwide?


The FCC should refrain from adopting new restrictions on local broadcasters until it can answer questions like this one. Though it is appropriate for the FCC to prevent anticompetitive practices, adopting disparate regulatory obligations that distort competition in the same product market is not good for competition or consumers. Consumer interests would be better served if the FCC decided to address video competition issues more broadly — or there might not be any Free-TV competition to worry about.

 •  0 comments  •  flag
Share on Twitter
Published on March 25, 2014 10:44

New Book Release: “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.


First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.


One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.


The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.


I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.


The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.


I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head off these concerns, the consequences will be profoundly deleterious.


The central lesson of the booklet is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.


Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:



education and empowerment efforts (including media literacy and digital citizenship efforts);
social pressure from activists, academics, the press, and the public more generally;
voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and
increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future and head off hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.


To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to serve as a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has brought and settled many cases under its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.


In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.


If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available. I’ll be doing more blogging about the book in coming weeks and months. The debate between the “permissionless innovation” and “precautionary principle” worldviews is just getting started and it promises to touch every tech policy debate going forward.


_______________


Related Essays:



“The Growing Conflict of Visions over the Internet of Things & Privacy,” Technology Liberation Front, January 14, 2014.
“CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?” Technology Liberation Front, January 8, 2014.
“Who Really Believes in ‘Permissionless Innovation’?” Technology Liberation Front, March 4, 2013.
“What Does It Mean to ‘Have a Conversation’ about a New Technology?” Technology Liberation Front, May 23, 2013.
“On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
“Planning for Hypothetical Horribles in Tech Policy Debates,” Technology Liberation Front, August 6, 2013.
“Edith Ramirez’s ‘Big Data’ Speech: Privacy Concerns Prompt Precautionary Principle Thinking,” Technology Liberation Front, August 29, 2013.
“When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed,” Technology Liberation Front, April 29, 2011.
“Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges,” Technology Liberation Front, April 10, 2012.
“Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
“Why Do We Always Sell the Next Generation Short?” Forbes, January 8, 2012.
“The Six Things That Drive ‘Technopanics,’” Forbes, March 4, 2012.

March 18, 2014

New Mercatus Paper from Daniel Lyons about Wireless Net Neutrality

The Mercatus Center at George Mason University has released a new working paper by Daniel A. Lyons, professor at Boston College Law School, entitled “Innovations in Mobile Broadband Pricing.”


In 2010, the FCC passed net neutrality rules for mobile carriers and ISPs that included a “no blocking” provision (since struck down in Verizon v. FCC). The FCC prohibited mobile carriers from blocking Internet content and promised to scrutinize carriers’ non-standard pricing decisions. These broad regulations had a predictable chilling effect on firms trying new business models. For instance, Lyons describes how MetroPCS was hit with a net neutrality complaint because it allowed YouTube but not other video streaming sites on its budget LTE plan (something I’ve written on). Some critics also allege that AT&T’s Sponsored Data program is a net neutrality violation.


In his paper, Lyons explains that the FCC might still regulate mobile networks but advises against a one-size-fits-all net neutrality approach. Instead, he encourages regulatory humility in order to promote investment in mobile networks and devices and to allow new business models. For support, he points out that several developing and rich countries have permitted commercial arrangements between content companies and carriers that arguably violate principles of net neutrality. Lyons makes the persuasive argument that these “non-neutral” service bundles and pricing decisions on the whole, rather than harming consumers, expand online access and ease non-connected populations into the Internet Age. As Lyons says,


The wide range of successful wireless innovations and partnerships at the international level should prompt U.S. regulators to rethink their commitment to a rigid set of rules that limit flexibility in American broadband markets. This should be especially true in the wireless broadband space, where complex technical considerations, rapid change, and robust competition make for anything but a stable and predictable business environment.


Further,


In the rapidly changing world of information technology, it is sometimes easy to forget that experimental new pricing models can be just as innovative as new technological developments. By offering new and different pricing models, companies can provide better value to consumers or identify niche segments that are not well-served by dominant pricing strategies.


Despite the January 2014 court decision striking down the FCC’s net neutrality rules, the issue hasn’t died. Lyons’ research supports the position that a fixation on enforcing net neutrality, however defined, distracts policymakers from serious discussion of how to expand online access. Rules should be written with consumers and competition in mind. Wired ISPs get the lion’s share of scholars’ attention when discussing net neutrality; in an increasingly wireless world, Lyons’ paper provides important research to guide future US policies.


March 17, 2014

Toward a Post-Government Internet

The Internet began as a U.S. military project. For two decades, the government restricted access to the network to government, academic, and other authorized non-commercial use. In 1989, the U.S. gave up control—it allowed private, commercial use of the Internet, a decision that allowed it to flourish and grow as few could imagine at the time.


Late Friday, the NTIA announced its intent to give up the last vestiges of U.S. control over the Internet, the last real evidence that the network began as a government experiment. Control of the Domain Name System’s (DNS’s) Root Zone File has remained with the agency despite the creation of ICANN in 1998 to perform the other high-level domain name functions, called the IANA functions.


The NTIA announcement is not a huge surprise. The U.S. government has always said it eventually planned to devolve IANA oversight, albeit with lapsed deadlines and changes of course along the way.


The U.S. giving up control over the Root Zone File is a step toward a world in which governments no longer assert oversight over the technology of communication. Just as freedom of the printing press was important to the founding generation in America, an unfettered Internet is essential to our right to unimpeded communication. I am heartened to see that the U.S. will not consider any proposal that involves IANA oversight by an intergovernmental body.


Relatedly, next month’s global multistakeholder meeting in Brazil will consider principles and roadmaps for the future of Internet governance. I have made two contributions to the meeting: a set of proposed high-level principles that would limit the involvement of governments in Internet governance to facilitating participation by their nationals, and a proposal to support experimentation. I view these proposals as related: the first keeps governments away from Internet governance and the second provides a check against ICANN simply becoming another government in control of the Internet.


March 11, 2014

Shane Greenstein on bias in Wikipedia articles


Shane Greenstein, Kellogg Chair in Information Technology at Northwestern’s Kellogg School of Management, discusses his recent paper, Collective Intelligence and Neutral Point of View: The Case of Wikipedia, coauthored by Harvard assistant professor Feng Zhu. Greenstein and Zhu’s paper takes a look at whether Linus’ Law applies to Wikipedia articles. Do Wikipedia articles have a slant or bias? If so, how can we measure it? And do articles become less biased over time, as more contributors become involved? Greenstein explains his findings.
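

As a rough illustration of what “measuring slant” can mean, here is a toy sketch in the spirit of the phrase-counting slant index (due to Gentzkow and Shapiro) that Greenstein and Zhu adapt; the phrase lists below are tiny stand-ins, not the actual lexicons from the paper:

```python
# Toy sketch of a phrase-counting slant measure, in the spirit of the
# Gentzkow-Shapiro index that Greenstein and Zhu adapt for Wikipedia.
# The phrase lists are tiny illustrative stand-ins, not the real lexicons.

DEM_PHRASES = ["estate tax", "workers rights"]   # assumed left-coded phrases
REP_PHRASES = ["death tax", "illegal aliens"]    # assumed right-coded phrases

def slant(text: str) -> float:
    """Score in [-1, 1]: negative leans Democratic, positive Republican."""
    t = text.lower()
    dem = sum(t.count(p) for p in DEM_PHRASES)
    rep = sum(t.count(p) for p in REP_PHRASES)
    total = dem + rep
    return 0.0 if total == 0 else (rep - dem) / total

article = "Critics call it a death tax; supporters call it an estate tax."
print(slant(article))  # 0.0 -- one phrase from each side, so balanced
```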


Download


Related Links

Is Wikipedia Biased?, Greenstein, Zhu
Harvard Business School Biography, Greenstein
Harvard Business School Biography, Zhu
The Irony of Public Funding, The Virulent Word of Mouse


March 10, 2014

In His Bid to Buy T-Mobile, Sprint Chairman Slams US Wireless Policies that Sprint Helped Create

Sprint’s Chairman, Masayoshi Son, is coming to Washington to explain how wireless competition in the US would be improved if only there were less of it.


After buying Sprint last year for $21.6 billion, he has floated plans to buy T-Mobile. When antitrust officials voiced their concerns about the proposed plan’s potential impact on wireless competition, Son decided to respond with an unusual strategy that goes something like this: The US wireless market isn’t competitive enough, so policymakers need to approve the merger of the third and fourth largest wireless companies in order to improve competition, because going from four nationwide wireless companies to three will make things even more competitive. Got it? Me neither.


An argument like that takes nerve, especially now. When AT&T attempted to buy T-Mobile a few years ago, Sprint led the charge against it, arguing vociferously that permitting the market to consolidate from four to only three nationwide wireless companies would harm innovation and wireless competition. After the Administration blocked the merger, T-Mobile rebounded in the marketplace, which immediately made it the poster child for the Administration’s antitrust policies.


It also makes Son’s plan a non-starter. Allowing Sprint to buy T-Mobile three years after telling AT&T it could not would take incredible regulatory nerve. It would be hard to convince anyone that such an immediate about-face in favor of the company that fought the previous merger the hardest isn’t motivated by a desire to pick winners and losers in the marketplace or even outright cronyism. That would be true in almost any circumstance, but is doubly true now that T-Mobile is flourishing. It’s hard to swallow the idea that it would harm competition if a nationwide wireless company were to buy T-Mobile — unless the purchaser is Sprint.


The special irony here is that Son has built his reputation on a knack for relentless innovation. When he bought Sprint, he expressed confidence that Sprint would become the number one company in the world. But a year later, it is T-Mobile that is rebounding in the marketplace, even though T-Mobile has fewer customers and less spectrum than Sprint. Buying into T-Mobile’s success now wouldn’t improve Son’s reputation for innovation, but it would double down on his confidence. I expect US regulators will want to see how he does with Sprint before betting the wireless competition farm on a prodigal Son.



March 7, 2014

TacoCopters are Legal (for Now)

Yesterday, an administrative judge ruled in Huerta v. Pirker that the FAA’s “rules” banning commercial drones don’t have the force of law because the agency never followed the procedures required to enact them as an official regulation. The ruling means that any aircraft that qualifies as a “model aircraft” plausibly operates under laissez-faire. Entrepreneurs are free for now to develop real-life TacoCopters, and Amazon can launch its Prime Air same-day delivery service.


Laissez-faire might not last. The FAA could appeal the ruling, try to issue an emergency regulation, or simply wait 18 months or so until its current regulatory proceedings culminate in regulations for commercial drones. If the agency opts for the last of these, the drone community has an interesting opportunity to show that regulations for small commercial drones do not pass a cost-benefit test. So start new drone businesses, but as Matt Waite says, “Don’t do anything stupid. Bad actors make bad policy.”


Kudos to Brendan Schulman, the attorney for Pirker, who has been a tireless advocate for the freedom to innovate using drone technology. He is on Twitter at @dronelaws, and if you’re at all interested in this issue, he is a great person to follow.



March 4, 2014

Repeal Satellite Television Law

The House Subcommittee on Communications and Technology will soon consider whether to reauthorize the Satellite Television Extension and Localism Act (STELA), set to expire at the end of the year. A hearing scheduled for this week has been postponed on account of weather.


Congress ought to scrap the current compulsory license in STELA that governs the importation of distant broadcast signals by Direct Broadcast Satellite providers. STELA is redundant and outdated. The 25-year-old statute invites rent-seeking every time it comes up for reauthorization.


At the same time, Congress should also resist calls to use the STELA reauthorization process to consider retransmission consent reforms.  The retransmission consent framework is designed to function like the free market and is not the problem.


Those advocating retransmission consent changes exaggerate the fact that retransmission consent fees have been on the increase and that blackouts occasionally occur when content producers and pay-tv providers fail to reach agreement. They are also at fault for attempting to pass the blame. DIRECTV dropped the Weather Channel in January, for example, rather than agree to pay “about a penny a subscriber” more than it had in the past.


A DIRECTV executive complained at a hearing in June that “between 2010 and 2015, DIRECTV’s retransmission consent costs will increase 600% per subscriber.” As I and others have noted in the past, retransmission consent fees account for an extremely small share of pay-tv revenue. Multichannel News has estimated that only two cents of the average dollar of cable revenue goes to retransmission consent.


According to SNL Kagan, retransmission-consent fees were expected to be about 1.2% of total video revenue in 2010, rising to 2% by 2014. At that rate, retrans currently makes up about 3% of total video expenses.


Among other things, DIRECTV recommended that Congress use the STELA reauthorization process to outlaw blackouts or permit pay-tv providers to deliver replacement distant broadcast signals during local blackouts.  In effect, DIRECTV wants to eliminate the bargaining power of content producers, and force them to offer their channels for retransmission at whatever price DIRECTV is willing to pay.


There is a need for regulatory reform in the video marketplace.  Unfortunately, proposals such as these do not advance that goal.  The government intervention DIRECTV is seeking would simply add to the problem by forcing local broadcasters to subsidize pay-tv providers instead of being allowed to recover the fair market value of their programming.  Broadcaster Marci Burdick was correct when she observed that regulation which unfairly siphons local broadcast revenue could have the unintended effect of reducing the “quality and diversity of broadcast programming, including local news, public affairs, severe weather, and emergency alerts, available both via [pay-tv providers] and free, over-the-air to all Americans.”


Broad regulatory reform of the video marketplace can and should be considered as part of the process recently announced by House Energy and Commerce Committee Chairman Fred Upton (R-MI) and Communications and Technology Subcommittee Chairman Greg Walden (R-OR), by which the committee will examine and update the Communications Act.



February 24, 2014

What’s Wrong with Two-Sided Markets?

It seems to me that a lot of the angst about the Comcast-Netflix paid transit deal results from a general discomfort with two-sided markets rather than any specific harm caused by the deal. But is there any reason to be suspicious of two-sided markets per se?


Consider a (straight) singles bar. Men and women come to the singles bar to meet each other. On some nights, it’s ladies’ night, and women get in free and get a free drink. On other nights, it’s not ladies’ night, and both men and women have to pay to get in and buy drinks.


There is no a priori reason to believe that ladies’ night is more just or efficient than other nights. The owner of the bar will benefit if the bar is a good place for social congress, and she will price accordingly. If men in the area are particularly shy, she may have to institute a “men’s night” to get them to come out. If women start demanding too many free drinks, she may have to put an end to ladies’ night (even if some men benefit from the presence of tipsy women, they may not be as willing as the women to pay the full cost of all of the drinks). Whether a market should be two-sided or one-sided is an empirical question, and the answer can change over time depending on circumstances.
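

As a toy illustration of the owner’s pricing problem, here is a sketch of a simple two-sided market model; the demand functions and every parameter are assumptions invented for this example:

```python
# Toy model of the bar owner's two-sided pricing problem. All functional
# forms and parameters are assumptions for illustration, not from the post.
import itertools

def attendance(own_price, other_side, base, price_sens, cross):
    """Linear demand: falls in own price, rises with the other side's turnout."""
    return max(0.0, base - price_sens * own_price + cross * other_side)

def best_prices(prices, base_m, base_w):
    best = None
    for pm, pw in itertools.product(prices, repeat=2):
        m = w = 50.0                  # initial guess for attendance
        for _ in range(100):          # iterate to a fixed point
            m = attendance(pm, w, base_m, 5.0, 0.5)
            w = attendance(pw, m, base_w, 5.0, 0.5)
        profit = pm * m + pw * w
        if best is None or profit > best[0]:
            best = (profit, pm, pw)
    return best

# If men are "shy" (low base demand), the profit-maximizing prices are
# asymmetric: the owner charges the reluctant side less -- the post's
# "men's night" scenario.
profit, pm, pw = best_prices([0, 5, 10, 15, 20], base_m=40, base_w=100)
print(f"men pay ${pm}, women pay ${pw}, profit ${profit:.0f}")
```

With these made-up numbers the search lands on asymmetric prices (the shy side pays less), and changing the base-demand or cross-side parameters flips which side gets the discount, which is the post’s point: whether pricing should be lopsided is an empirical question.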


Some commentators seem to be arguing that two-sided markets are fine as long as the market is competitive. Well, OK: what if the singles bar is the only singles bar within a 100-mile radius? How does that change the analysis above? Not at all, I say.


Analysis of two-sided markets can get very complex, but we shouldn’t let that complexity turn into reflexive opposition.


