Adam Thierer's Blog, page 50

December 23, 2013

Important Cyberlaw & Info-Tech Policy Books (2013 Edition)

I didn’t have nearly as much time this year to review the steadily growing stream of information policy books that were released. The end-of-year lists I put together in the past were fairly comprehensive (see 2008, 2009, 2010, 2011 and 2012), but I got sidetracked this year with 7 law review articles and an eBook project and had almost no time for book reviews, or even general blogging for that matter.


So, I’ve just listed some of the more notable titles from 2013 even though I didn’t find the time to describe them all.  The first couple are the titles that I believe will have the most lasting influence on information technology policy debates. Needless to say, just because I believe that some of these titles will have an impact on policy going forward does not mean I endorse the perspectives or recommendations in any of them. And that would certainly be the case with my choice for most important Net policy book of the year, Ian Brown and Chris Marsden’s Regulating Code. Their book does a wonderful job mapping the unfolding universe of Internet “co-regulation” and “multi-stakeholderism,” but their defense of a more politicized information policy future leaves lovers of liberty like me utterly demoralized.


The same could be said of many other titles on the list. As I noted in concluding several reviews over the past year, liberty is increasingly a loser in Internet policy circles these days. And it’s not just neo-Marxist rants like McChesney’s Digital Disconnect or Lanier’s restatement of the Unabomber Manifesto, Who Owns the Future? The sad reality is that pretty much everybody these days has a pet peeve they want addressed through pure power politics because, you know, something must be done! The very term “Internet freedom” has already been grotesquely contorted into something akin to an open mandate for governments to meticulously plan virtually every facet of economic and social activity in the Information Age.


Anyway, despite that caveat, many interesting books were released in 2013 on an ever-expanding array of specific information policy topics.  Here’s the list of everything that landed on my desk over the past year.



Ian Brown & Christopher T. Marsden – Regulating Code: Good Governance and Better Regulation in the Information Age   [ my review ]
Ronald Deibert – Black Code: Inside the Battle for Cyberspace   [ my review ]
Anupam Chander – The Electronic Silk Road: How the Web Binds the World Together in Commerce [ my review ]
Marvin Ammori – On Internet Freedom
Jaron Lanier – Who Owns the Future?
Ethan Zuckerman – Rewire: Digital Cosmopolitans in the Age of Connection
Eric Schmidt & Jared Cohen – The New Digital Age: Reshaping the Future of People, Nations and Business
Abraham H. Foxman & Christopher Wolf – Viral Hate: Containing Its Spread on the Internet [ my review ]
Nicco Mele – The End of Big: How the Internet Makes David the New Goliath
Clive Thompson – Smarter Than You Think: How Technology is Changing Our Minds for the Better [ my review ] (** note: This was my favorite book of the year, but it didn’t have a lot to say about policy.)
Evgeny Morozov – To Save Everything, Click Here: The Folly of Technological Solutionism [ my review ]
Viktor Mayer-Schonberger & Kenneth Cukier – Big Data: A Revolution That Will Transform How We Live, Work, and Think
Thomas Rid – Cyber War Will Not Take Place
Nate Anderson – The Internet Police: How Crime Went Online, and the Cops Followed
Robert W. McChesney – Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy
Giovanni Ziccardi – Resistance, Liberation Technology and Human Rights in the Digital Age
John O. McGinnis – Accelerating Democracy: Transforming Governance Through Technology
Scott Shackelford – Managing Cyber Attacks in International Law, Business and Relations: In Search of Cyber Peace
Paul Rosenzweig – Cyber Warfare: How Conflicts in Cyberspace Are Challenging America and Changing the World
Alice E. Marwick – Status Update: Celebrity, Publicity, and Branding in the Social Media Age
Dorothea Kleine – Technologies Of Choice? ICTs, Development, and the Capabilities Approach

Published on December 23, 2013 10:21

December 20, 2013

The Overblown Case For Retrans Reform

Retransmission consent came under attack again this month, and two long-awaited bills on the subject have finally been introduced—the Next Generation Television Marketplace Act (H.R. 3720) by Rep. Steve Scalise, and the Video CHOICE (Consumers Have Options in Choosing Entertainment) Act (H.R. 3719) by Rep. Anna G. Eshoo.


The American Cable Association’s Matthew M. Polka has reiterated his view that the process whereby cable and satellite TV providers negotiate with broadcasters for the right to retransmit broadcast signals is a “far cry from the free market,” and Alan Daley and Steve Pociask with the American Consumer Institute claim that retransmission consent jeopardizes the Broadcast Television Spectrum Incentive Auction.


As Jeff Eisenach pointed out at the Hudson Institute, “Congress created retransmission consent in 1992 to take the place of the property rights that it and the FCC abrogated.  Prior to 1992, broadcasters weren’t permitted to charge anyone for retransmitting their signals.”


After 1992, compensation for broadcasters was typically in-kind.  For example, Mark Robichaux writes in Cable Cowboy: John Malone and the Rise of the Modern Cable Business (2002) that in the ‘90s,


TCI, for one, refused to pay cash to any of the big networks, but it indicated it might be willing to make room on its systems for a new cable channel a broadcaster might like to start. One of TCI’s first deals was with the Fox network, owned by Rupert Murdoch’s News Corporation, Ltd. He was eager to forgo carriage fees for Fox’s TV stations in exchange for a slot on the cable dial, where he could start a new Fox cable network that would receive a separate fee from TCI cable systems.


As the years went by, channel space became far less scarce, and that’s why broadcasters and cable and satellite providers started negotiating cash compensation about ten years ago.  SNL Kagan projects that the fees will amount to $3.3 billion this year, and that they could increase to $7.15 billion in 2019.  There is nothing sinister about this.


Buyers and sellers are homing in on the true and sustainable fair market value of broadcast content using a new method of compensation.  Although the year-to-year increases look dramatic, retransmission consent fees account for only about two cents out of every dollar of cable revenue and make up only about 3% of total video content expenses on average.  This isn’t a big deal.
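Those proportions are easy to sanity-check. Here is a quick back-of-envelope sketch in Python; the $3.3 billion and $7.15 billion projections and the two-cents share come from the figures above, while the derived totals are my own illustrative arithmetic, not numbers from SNL Kagan:

```python
# Sanity-checking the retransmission fee figures cited above.
# Inputs come from the post (SNL Kagan projections, "two cents per
# dollar of cable revenue"); derived values are illustrative only.

fees_2013 = 3.3e9              # projected 2013 retransmission fees ($)
share_of_revenue = 0.02        # "about two cents out of every dollar"

implied_cable_revenue = fees_2013 / share_of_revenue
print(f"Implied total cable revenue: ~${implied_cable_revenue / 1e9:.0f}B")

# Annual growth rate implied by the $7.15B projection for 2019
# (six years out from 2013):
fees_2019 = 7.15e9
cagr = (fees_2019 / fees_2013) ** (1 / 6) - 1
print(f"Implied annual fee growth, 2013-2019: ~{cagr:.0%}")
```

Even with fees growing at a double-digit annual rate, the starting base is a small slice of a roughly $165 billion implied revenue pool, which is the point of the "two cents" comparison.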


Polka gets it wrong when he says that in a “truly free market, a cable operator would be allowed to negotiate a carriage deal with any TV station.”  He acknowledges in the next paragraph that the exclusivity rules merely “use government resources to enforce private contracts” that protect local affiliates of a particular network, such as ABC, CBS, NBC or FOX, in one market from competition by affiliates in other markets.  Even without the exclusivity regulation, in other words, the private exclusivity contracts would still remain in effect.  We don’t need duplicative government enforcement mechanisms.  But in a free market economy the government does enforce valid private contracts as well as protect private property.  Exclusive dealing is common throughout the economy, and it can be enormously pro-competitive.


Although an unfortunate regulatory thicket has grown in and around how broadcasters monetize their product, one wonders if part of the angst surrounding this issue stems from a need for some people to overcome the fact that, although TV used to be “free,” times are changing.  The broadcast business model is evolving now that broadcasters compete for advertising dollars and cannot produce high-quality programming if they can’t sell it.


I respectfully disagree with my friends Alan and Steve, with whom I normally see eye-to-eye, if they are suggesting that limiting retransmission compensation would be a good thing because it would diminish the value of spectrum in the hands of broadcasters so they would be more likely to sell it for mobile wireless use.  That would be an example of central planning and letting the outdated, bureaucratic, cronyistic FCC pick winners and losers and block innovation.  Ideally, the solution is to let the broadcasters go into the mobile wireless business and vice versa.


There is plenty of room to reform FCC broadcast and media ownership rules, but unfortunately the focus of the current debate mostly seems to be about diminishing the fair market value of broadcast television.


Published on December 20, 2013 15:24

Understanding the False Equivalency of the Free State Foundation’s Views on Retransmission Consent and the Free Market

My response to the Free State Foundation’s blog post, “Understanding the Un-Free Market for Retrans Consent Is the First Step for Reforming It”


The Free State Foundation (FSF) questioned my most recent blog post at RedState, which noted that the American Television Alliance’s (ATVA) arguments supporting FCC price regulation of broadcast television content are inconsistent with the arguments its largest members make against government intervention proposed by net neutrality supporters. FSF claimed that my post created a “false equivalency” between efforts to modify an existing regulatory regime and efforts to impose new regulations in a previously free market.


FSF’s “false equivalence” theory is a red herring that is apparently intended to distract from the substantive issues I raised. The validity of the economic arguments related to two-sided markets discussed in my blog doesn’t depend on the regulatory status of the two-sided markets those arguments address. The notion that the existence of regulation in the video marketplace gives ATVA a free pass to say anything it wants without heed for intellectual consistency is absurd.


I suspect FSF knows this. Its blog post does not dispute that ATVA’s arguments at the FCC are inconsistent with the arguments its largest members make against net neutrality; in fact, FSF failed to address the ATVA petition at all. Though the FSF blog was ostensibly prompted by my post at RedState, FSF decided to “leave the merits of ATVA’s various proposals to others” (except me, apparently).


FSF’s decision to avoid the merits of ATVA’s arguments at the FCC (the subject of my blog post) begs the question: What was the FSF blog actually about? It appears FSF wrote the blog to (1) reiterate its previous (and misleading) analyses of the video programming market, and (2) argue that the Next Generation Television Marketplace Act “represents the proper direction” for reforming it.


To be clear, I haven’t previously addressed either issue. But, in the spirit of collegial dialogue initiated by FSF, I discuss them briefly in this blog.


Retransmission Consent


FSF is right that, “In a truly free marketplace, private parties have the liberty to pursue [or not pursue] commercial deals with whomever they choose.” I also agree that the market for video programming is not a “truly free marketplace,” and that the rules governing retransmission consent “restrict private bargaining.” But, FSF’s one-sided characterization of retransmission consent as granting “special rights” to broadcasters only is flatly misleading.


FSF highlights how local broadcasters benefit from (1) “must carry” rules and (2) non-duplication and syndication agreements.


The must carry rules require for-pay video distributors (e.g., cable operators) to carry the programming of broadcasters who elect mandatory program carriage while prohibiting distributors from charging such broadcasters for that carriage. Although I agree with FSF that the must carry rules are particularly intrusive, they are also irrelevant to retransmission consent negotiations. Once a broadcaster elects to engage in retransmission consent negotiations for carriage, it cannot take advantage of must carry for three years. Even if it could, the existence of must carry wouldn’t provide the broadcaster any pricing advantage in negotiations with for-pay video distributors, whose goal is to carry the programming at the lowest possible cost (which must carry sets at zero).


FSF correctly notes that non-duplication and syndication agreements limit the ability of for-pay video distributors (e.g., cable operators) to bargain with non-local broadcasters for new and syndicated broadcast programming, respectively. But FSF sidesteps the fact that these limitations are created in the free market by private contractual arrangements between broadcast stations and the providers of network or syndicated programming, not the government. The FCC’s non-duplication and syndication “rules do not create these rights but rather provide a means for the parties to exclusive contracts to enforce them through the Commission rather than the courts.”


Finally, FSF fails to mention, either in its blog post or its scholarly papers, that the retransmission consent rules limit the ability of broadcasters to choose with whom they bargain by prohibiting broadcasters from entering into exclusive program carriage agreements with for-pay video distributors – a limitation on bargaining that does not apply to programming owned by for-pay video distributors. Unlike non-duplication and syndication, this exclusivity prohibition is not grounded in private contractual arrangements.


FSF does not address whether the potential negotiating advantages conferred on broadcasters by FCC enforcement of network non-duplication and syndication agreements are more valuable in retransmission consent negotiations than the potential disadvantages imposed by the prohibition on exclusive program carriage agreements. To the extent the value of exclusive carriage agreements (the opportunity cost of the retransmission consent regime for broadcasters) outweighs the value of network non-duplication and syndication enforcement (the benefit to broadcasters), for-pay video distributors benefit more from the retransmission consent regime than broadcasters.


Next Generation TV Act


To be sure, even if for-pay video distributors benefit more from retransmission consent than broadcasters, retransmission consent negotiations do not occur in a “truly free market.” I agree with FSF that, “The ultimate goal should be to eliminate regulatory intrusion in this space – and to thereby eliminate occasions for debate over whether this or that particular modification to the old regulations will tip the scales in favor of one class of competitors over another.” Unfortunately, the modifications proposed by the Next Generation TV Act (the Bill) would not eliminate such debates.


FSF describes the Bill as a “comprehensive free market reform.” It would indeed eliminate FCC enforcement of network non-duplication and syndication agreements (and compulsory copyright licenses—an issue that merits additional discussion), but it is far from comprehensive.


First, the Bill doesn’t eliminate must carry for non-profit (e.g., religious and educational) broadcasters – the broadcasters most likely to elect mandatory carriage. Retaining such protections for religious and educational broadcasters is certainly reasonable when viewed from a political perspective; however, it falls short of being a free market approach to video regulation generally.


More importantly, the Bill wouldn’t eliminate any of the underlying reasons for which broadcasters enter into non-duplication and syndication agreements. Broadcasters negotiate exclusive distribution rights in local markets because government regulations require broadcasters to provide their programming for free. As a result of this government mandate, broadcasters rely on local advertising revenue to generate profit. If for-pay video distributors could retransmit duplicative programming (syndicated or otherwise) from non-local broadcasters (e.g., because the local broadcaster had not negotiated exclusive distribution rights), the local broadcaster would lose a substantial portion (if not all) of its advertising revenue. In a “truly free market,” the local broadcaster could respond to the potential loss of advertising revenue by charging subscription fees for its over-the-air video programming delivery or repurposing its spectrum for an alternative use. But broadcasters today don’t operate in a truly free market, and the government generally won’t allow them to pursue other business models.


Although the Bill aims toward a more vibrant free market, my primary concern is that it would leave in place the intrusive business model restrictions on broadcasters while eliminating rules that help make the government-mandated business model work. Perhaps FSF would agree that, if the goal is to “eliminate regulatory intrusions in this space,” the Bill should also eliminate government restrictions on broadcast business models and spectrum use. Anything less is better described as “picking winners and losers,” not “comprehensive free market reform.”


Published on December 20, 2013 08:19

December 17, 2013

Want Drones for the Little Guy? Don’t Overregulate

In an op-ed at CNN, Ryan Calo argues that the real drone revolution will arrive when ordinary people can own and operate app-enabled drones. Rather than being dominated by a few large tech companies, drones should develop along the lines of the PC model: they should be purchasable by consumers and they should run third-party software or apps.


The real explosion of innovation in computing occurred when devices got into the hands of regular people. Suddenly consumers did not have to wait for IBM or Apple to write every software program they might want to use. Other companies and individuals could also write a “killer app.” Much of the software that makes personal computers, tablets and smartphones such an essential part of daily life now have been written by third-party developers.


[...]


Once companies such as Google, Amazon or Apple create a personal drone that is app-enabled, we will begin to see the true promise of this technology. This is still a ways off. There are certainly many technical, regulatory and social hurdles to overcome. But I would think that within 10 to 15 years, we will see robust, multipurpose robots in the hands of consumers.


I agree with Ryan that a world where only big companies can operate drones is undesirable. His vision of personal drones meshes well with my argument in Wired that we should see airspace as a platform for innovation.


This is why I am concerned about the overregulation of drones. Big companies like Amazon, Apple, and Google will always have legal departments that will enable them to comply with drone regulations. But will all of us? There are economies of scale in regulatory compliance. If we’re not careful, we could regulate the little guy out of drones entirely—and then only big companies will be able to own and operate them. This is something I’m looking at closely in advance of the FAA proceedings on drones in 2014.


Published on December 17, 2013 12:34

How Much Carbon Does It Take to Keep Ben Bernanke Alive?

Everyone seems to be worried about Bitcoin’s carbon footprint lately. Last week, an article on Quartz claimed that Bitcoin miners are spending $17 million per day on electricity in order to reap $4.4 million worth of bitcoins. And yesterday, Pando Daily ran a piece that ominously warned about Bitcoin’s carbon footprint.


One problem with both of these pieces is that they seem to rely on electricity consumption estimates from blockchain.info. While this site is great for getting stats about the Bitcoin network, it’s not such a great site for estimating electricity consumption. Blockchain.info clearly states that it is using an estimate of 650 Watts per gigahash [per second, I assume] in its electricity calculations. While this may have been a good estimate of the efficiency of the Bitcoin network when the page was first created, the network has become much more efficient since then. Archive.org shows that the 650W/GH/s figure was used on the earliest cached copy of the page, from December 2, 2011; yes, that is over two years ago.


Furthermore, we can use data from current-generation mining hardware to see how absurd the 650W/GH/s number is. In recent months, the Bitcoin network has mostly switched to application-specific integrated circuits, or ASICs. These devices are much more efficient at mining than previous generations of hardware. A look at this table of mining hardware shows that ASICs all seem to mine at less than 10W/GH/s. Some discontinued models seem to mine as efficiently as 2W/GH/s, and some models that are shipping next year will use less than 0.5W/GH/s. Not everyone in the Bitcoin network is using the latest-generation models of ASICs, and of course botnet mining is based on stealing electricity, so it’s not likely that the network averages 2W/GH/s or less. Nevertheless, it seems that the electricity estimates that these articles are based on may be off by a factor of close to 100.
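To make the scale of the discrepancy concrete, here is a rough sketch of the correction. The 650 W/GH/s assumption and the 2–10 W/GH/s ASIC range come from the sources discussed above; the mid-range ASIC figure I plug in is my own assumption for illustration, not a measured network average:

```python
# How far off is an electricity estimate built on 650 W/GH/s if the
# network actually runs on ASIC-era hardware? The 650 figure is
# blockchain.info's stale 2011 assumption; the ASIC figure below is a
# hypothetical mid-range value within the 2-10 W/GH/s range cited above.

OLD_ASSUMPTION = 650.0   # W per GH/s (blockchain.info, Dec. 2011)
ASIC_MIDRANGE = 6.5      # W per GH/s (illustrative assumption)

overstatement = OLD_ASSUMPTION / ASIC_MIDRANGE
print(f"Old assumption overstates power draw by ~{overstatement:.0f}x")

# Applying that factor to the $17M/day figure from the Quartz piece:
corrected_daily_cost = 17e6 / overstatement
print(f"Corrected daily electricity spend: ~${corrected_daily_cost:,.0f}")
```

Under these assumptions, the $17 million-per-day headline shrinks to something on the order of a couple hundred thousand dollars per day, which changes the carbon story considerably.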


Furthermore, we should always ask “compared to what?” Yes, the Bitcoin network uses a lot of electricity, but the computations that use this electricity are used to clear transactions, move money around the blockchain, increment the money supply, etc. In order to make a fair comparison to non-cryptocurrency payment systems, we need to ask how many resources (and how much carbon) are used to keep those systems going. And I think the answer is quite a lot. Banks, too, use computers, sometimes ancient ones, to process transactions. Furthermore, humans use a lot of carbon. Since our financial system uses a lot more human intervention than Bitcoin, much of those humans’ carbon use is due to the financial system. (Another way to put this is that if we all switched to cryptocurrency, those humans would get other jobs and produce other social benefits in exchange for the carbon used to keep them alive.) And there are of course costs of physically moving cash around, for example on armored trucks.


The relevant calculations are admittedly difficult, but it seems quite possible to me, when all is accounted for, that Bitcoin is the green alternative to Federal Reserve Notes. Cryptoanarchy and the environment don’t have to be enemies.


Published on December 17, 2013 08:14

Robert Scoble on Wearable Computers


Robert Scoble, Startup Liaison Officer at Rackspace, discusses his recent book, Age of Context: Mobile, Sensors, Data and the Future of Privacy, co-authored by Shel Israel. Scoble believes that over the next five years we’ll see a tremendous rise in wearable computers, building on interest we’ve already seen in devices like Google Glass. Scoble predicts that wearable computers, much like the desktop, laptop, and smartphone before them, represent the next wave of groundbreaking innovation. Scoble answers questions such as: How will wearable computers help us live our lives? Will they become as common as the cellphone is today? Will we have to sacrifice privacy for these devices to better understand our preferences? How will sensors in everyday products help companies improve the customer experience?


Download


Related Links

Age of Context: Mobile, Sensors, Data and the Future of Privacy, Amazon
Naked Conversations: How Blogs are Changing the Way Businesses Talk with Customers, Amazon
The Rackspace Blog & Newsroom, Scoble
Published on December 17, 2013 03:00

December 16, 2013

Crovitz Nails It on Software Patents and the Federal Circuit

Gordon Crovitz has an excellent column in today’s Wall Street Journal in which he accurately diagnoses the root cause of our patent litigation problem: the Federal Circuit’s support for extensive patenting in software.


Today’s patent mess can be traced to a miscalculation by Jimmy Carter, who thought granting more patents would help overcome economic stagnation. In 1979, his Domestic Policy Review on Industrial Innovation proposed a new Federal Circuit Court of Appeals, which Congress created in 1982. Its first judge explained: “The court was formed for one need, to recover the value of the patent system as an incentive to industry.”


The country got more patents—at what has turned out to be a huge cost. The number of patents has quadrupled, to more than 275,000 a year. But the Federal Circuit approved patents for software, which now account for most of the patents granted in the U.S.—and for most of the litigation. Patent trolls buy up vague software patents and demand legal settlements from technology companies. Instead of encouraging innovation, patent law has become a burden on entrepreneurs, especially startups without teams of patent lawyers.


I was pleased that Crovitz cites my new paper with Alex Tabarrok:


A system of property rights is flawed if no one can know what’s protected. That’s what happens when the government grants 20-year patents for vague software ideas in exchange for making the innovation public. In a recent academic paper, George Mason researchers Eli Dourado and Alex Tabarrok argued that the system of “broad and fuzzy” software patents “reduces the potency of search and defeats one of the key arguments for patents, the dissemination of information about innovation.”


Current legislation in Congress makes changes to patent trial procedure in an effort to reduce the harm caused by patent trolling. But if we really want to solve the trolling problem once and for all, and to generally have a healthy and innovative patent system, we need to get at the problem of low-quality patents, especially in software. The best way to do that is to abolish the Federal Circuit, which has consistently undermined limits on patentable subject matter.


Published on December 16, 2013 08:38

December 13, 2013

A New Kingsbury Commitment: Universal Service through Competition?

Join TechFreedom on Thursday, December 19, the 100th anniversary of the Kingsbury Commitment, AT&T’s negotiated settlement of antitrust charges brought by the Department of Justice that gave AT&T a legal monopoly in most of the U.S. in exchange for a commitment to provide universal service.


The Commitment is hailed by many not just as a milestone in the public interest but as the bedrock of U.S. communications policy. Others see the settlement as the cynical exploitation of lofty rhetoric to establish a tightly regulated monopoly — and the beginning of decades of cozy regulatory capture that stifled competition and strangled innovation.


So which was it? More importantly, what can we learn from the seventy-year period before the 1984 break-up of AT&T, and from the last three decades of efforts to unleash competition? With fewer than a third of Americans relying on traditional telephony and Internet-based competitors increasingly driving competition, what does universal service mean in the digital era? As Congress contemplates overhauling the Communications Act, how can policymakers promote universal service through competition, by promoting innovation and investment? What should a new Kingsbury Commitment look like?


Following a luncheon keynote address by FCC Commissioner Ajit Pai, a diverse panel of experts moderated by TechFreedom President Berin Szoka will explore these issues and more. The panel includes:



Harold Feld, Public Knowledge
Rob Atkinson, Information Technology & Innovation Foundation
Hance Haney, Discovery Institute
Jeff Eisenach, American Enterprise Institute
Fred Campbell, Former FCC Commissioner

Space is limited so RSVP now if you plan to attend in person. A live stream of the event will be available on this page. You can follow the conversation on Twitter on the #Kingsbury100 hashtag.  


When:

Thursday, December 19, 2013

11:30 – 12:00  Registration & lunch

12:00 – 1:45  Event & live stream


The live stream will begin on this page at noon Eastern.


Where:

The Methodist Building

100 Maryland Ave NE

Washington D.C. 20002


Questions?

Email contact@techfreedom.org.


Published on December 13, 2013 12:02

How Cable and Satellite TV Providers Are Using the Net Neutrality Playbook to Regulate Broadcast Television Content



The American Television Alliance (ATVA), a coalition composed primarily of cable and satellite TV operators, is using the playbook of net neutrality proponents in a bid to convince the Federal Communications Commission (FCC) to regulate prices for broadcast television content. The goal of ATVA’s cable and satellite members is to increase their profit margins by convincing the government to artificially lower the cost of programming they resell to consumers. I suspect the goal of ATVA’s non-profit members, e.g., Public Knowledge and the New America Foundation, is to solidify the FCC’s flawed rationale for adopting net neutrality rules in 2010, which imposed restrictions on market arrangements between Internet Service Providers (ISPs) and Internet content providers without finding a market failure.


Many of ATVA’s cable members are also ISPs that have routinely argued against the imposition of net neutrality regulations in the market for Internet services. By supporting ATVA, these same companies appear to have abandoned the intellectual foundation for opposition to net neutrality. Are they now signaling their intent to embrace net neutrality regulation of the Internet?


An analysis of the similarities between the cable and Internet services markets illuminates this apparent inconsistency. Both cable and Internet services exhibit the characteristics of two-sided markets, and the economic relationships among the participants in both of these markets are substantially similar. All else being equal, consumers prefer distribution platforms (i.e., cable or ISP networks) that provide access to more rather than less content, and content providers prefer distribution on platforms with more rather than fewer users. As a result, either side of the market has the potential to behave anticompetitively, but only if it has substantial market power relative to the other. Recent economic literature demonstrates that, in the absence of market failure, permitting full pricing flexibility on both sides of two-sided communications markets maximizes consumer welfare by increasing investment in both network infrastructure and content.


Prominent ATVA members who are also ISPs recognized as much in their fight against net neutrality at the FCC. In its comments opposing net neutrality, Time Warner Cable argued that the “critical gap in the [FCC]‘s selective proposal to regulate broadband Internet access service providers is the absence of any assertion that they possess market power—without which, it is unclear that even manifestly harmful discrimination would warrant regulatory intervention.” (Time Warner Cable Comments at 27 (emphasis in original)) Yet, the ATVA petition, filed by Time Warner Cable at the FCC, fails to provide any economic analysis or cite any precedent finding that broadcasters exercise market power warranting government intervention in retransmission consent negotiations.


The core of ATVA’s argument is a straightforward attack on the ordinary functioning of any two-sided market – the same attack on the previously unregulated Internet made by net neutrality proponents. ATVA argues that, when a cable operator asks a broadcaster for consent to retransmit broadcast content (which is known as “retransmission consent”), the cable operator must either agree to pay the broadcasters or forgo distribution of that broadcaster’s content. Net neutrality advocates similarly argue that, if an Internet content provider were required to pay an ISP for Internet content distribution, the Internet content provider would either have to agree to pay the ISP or forgo distribution of its content. The decision to forgo distribution is referred to as a “blackout” in the cable context and “blocking” in the Internet context, but the economic considerations affecting such negotiations are substantially the same.


ATVA’s attack on retransmission consent agreements suffers from the same infirmity as the net neutrality attack on ISPs: It is a “solution in search of a problem.” As Time Warner Cable noted in its comments on net neutrality:


“Consumers have come to expect that they can access the content and services they want, when they want. Service providers almost invariably meet those expectations, and in those isolated instances when they have not, the marketplace has exerted the discipline necessary to rectify matters.” (Time Warner Cable Comments at 18)


Those who believe in free markets should exhibit the same trust in the marketplace when addressing the issue of “blackouts” for video content as they do when addressing the issue of “blocking” Internet content. Broadcasters have no greater incentive to “black out” cable viewers (and potentially lose advertising revenue) than ISPs have to “block” Internet content (and potentially lose subscription revenue).


Of course, ATVA doesn’t complain about blackouts, per se. Every blackout to date has been resolved by the marketplace without restrictive FCC rules, and even if one were not, consumers could still access broadcast programming over the air free of charge. ATVA’s real complaint is that broadcasters are demanding “excessive” retransmission consent fees due to the popularity of their programming – an allegation that is uncomfortably similar to the “gatekeeper” theory the FCC relied on in its net neutrality order. There, the FCC concluded that an ISP could “force” edge providers to pay “inefficiently high fees” because that ISP is “typically” an Internet content provider’s “only option” for reaching a particular end user. Both theories reflect a desire to intervene in the ordinary pricing mechanisms of two-sided markets without engaging in a thorough market power analysis. They also ignore the fact that, in a two-sided market, charging for content distribution “may well have important pro-competitive effects.” (Time Warner Cable Comments at 31)


The apparent inconsistency of ATVA members who support regulation of retransmission consent agreements while opposing net neutrality is not a new or surprising phenomenon in Washington. It is essential, however, for those who believe in liberty to recognize the danger that ATVA’s theory represents to free market principles: An ATVA win on retransmission consent would continue the expansion of FCC authority unbounded by rigorous analysis that began with the net neutrality order. With a rewrite of the Communications Act on the horizon, free market advocates cannot afford to lose this battle. If we do, we risk losing the war before it even begins.


Published on December 13, 2013 11:36

New House bill pays federal agencies to clear spectrum

It’s encouraging to see more congressional movement in repurposing federal spectrum for commercial use. This week, a bill rewarding federal agencies for ending or moving their wireless operations passed a House committee. The bipartisan Federal Spectrum Incentive Act of 2013 allows agencies to benefit when they voluntarily give up their spectrum for FCC auction.


In the past, an agency could receive a portion of auction proceeds, but only to compensate the agency for relocating its systems. Agencies complained, sensibly, that this arrangement did little to encourage them to give up spectrum. Federal agencies had to go through the hassle of modifying their wireless equipment and sharing spectrum with another agency but were left no better off than before. In some cases, the complications of sharing spectrum made them worse off, so agencies faced downside risk with no upside.


This House bill provides that an agency can keep 1% of auction proceeds in addition to relocation costs. The hope is that this additional carrot will make agencies more willing to modify their equipment and make room for mobile broadband carriers.
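The incentive change is simple arithmetic: under the old rules an agency recovered only its relocation costs, while under the bill it also keeps 1% of the auction proceeds. A minimal sketch, with hypothetical dollar figures (the bill itself specifies only the relocation reimbursement and the 1% share):

```python
def agency_payout(relocation_costs, auction_proceeds, incentive_share=0.0):
    """Amount an agency receives for vacating spectrum.

    Under the old rules, agencies were reimbursed only for relocation
    costs (incentive_share = 0); the bill adds a 1% share of proceeds.
    """
    return relocation_costs + incentive_share * auction_proceeds

# Hypothetical auction: $50M in relocation costs, $2B in proceeds.
old_payout = agency_payout(50e6, 2e9)                        # reimbursement only
new_payout = agency_payout(50e6, 2e9, incentive_share=0.01)  # adds ~$20M
```

Under the old regime the agency nets nothing beyond its costs; under the bill, the larger the auction, the larger its reward for clearing out.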


The bill is a good start, but I think it’s a little too restrictive. A one percent claim on auction receipts seems insufficient to induce dramatically greater agency participation. Given how poorly federal agencies use spectrum, Congress should be doing much more to force agencies to justify their spectrum usage. Additionally, the permitted uses of that 1% benefit seem too limited. The bill allows the funds to be used (1) to offset sequestration cuts and (2) to compensate other agencies that agree to share spectrum. Some journalists report that agencies can use the funds to expand existing programs, but I don’t see that language in the proposed bill. It wouldn’t be a bad idea, though, to place fewer restrictions on the payments, since doing so would likely increase agency participation.


Further Reading:


See my Mercatus paper on the subject of repurposing federal spectrum.


Published on December 13, 2013 11:22
