Adam Thierer's Blog, page 162

October 26, 2010

Private Ownership of Public Law

Carl Malamud is a breakthrough thinker and doer on transparency and open government. In the brief video below, he makes the very interesting case that various regulatory codes are wrongly withheld from the public domain while citizens are expected to comply with them. It's important, mind-opening stuff.



It seems a plain violation of due process that a person might be presumed to know laws that are not publicly available. I'm not aware of any cases finding that inability to access the law for want of money is a constitutional problem, but the situation analogizes fairly well to Harper v. Virginia, in which a poll tax that would exclude the indigent from voting was found to violate equal protection.



Regulatory codes that must be purchased at a high price will tend to cartelize trades by raising a barrier to entry against those who can't pay for copies of the law. Private ownership of public law seems plainly inconsistent with due process, equal protection, and the rule of law. You'll sense in the video that Malamud is no libertarian, but an enemy of an enemy of ordered liberty is a friend of liberty.






 •  0 comments  •  flag
Share on Twitter
Published on October 26, 2010 05:03

William Powers on taking control of our technology


William Powers, a writer who has been a columnist and media critic for such publications as The Washington Post, The New Republic, and National Journal, discusses his new book, Hamlet's BlackBerry: A Practical Philosophy for Building a Good Life in the Digital Age. In the book, Powers writes, "You can allow yourself to be led around by technology, or you can take control of your consciousness and thereby your life." On the podcast, he discusses historical philosophers' ideas that can offer shelter from our present deluge of connectedness, how to create gaps that allow for currently elusive depth and inward reflection, and strategies that help him and his family regain control over their technology.



Related Links


"Born to Check Mail", The New York Times
"To Tweet, Or Not to Tweet", The Wall Street Journal
"Stop the World", by George Packer


To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on October 26, 2010 05:00

October 25, 2010

Television More Competitive, Diverse & Fragmented Than Ever

I've grown increasingly tired of the fight over not just retransmission consent, but almost all TV regulation in general.  Seriously, why is our government still spending time fretting over a market that is more competitive, diverse and fragmented than most other economic sectors?  It's almost impossible to keep track of all the innovation happening on this front, although I've tried here before. Every metric — every single one — is not just improving but exploding. Just what's happening on the kids' TV front is amazing enough, but the same story is playing out across other programming genres and across multiple distribution platforms.



More proof of just how much more diverse and fragmented content and audiences are today comes in this excellent new guest editorial over at GigaOm, "The Golden Age of Choice and Cannibalization in TV," by Mike Hudack, CEO of Blip.tv. Hudack notes that, compared to the Scarcity Era, when we had fewer choices and were all forced to watch pretty much the same thing, today's media cornucopia is overflowing, and audiences are splintering as a result.  "Media naturally trends towards fragmentation," he notes.  "As capacity increases so does choice. As choice increases audiences fragment. When given a choice people generally prefer media that speaks to them as individuals over media that speaks to the 'masses.'"



Indeed, he cites Nielsen numbers I've used here before illustrating how the top shows of the '50s (like Texaco Star Theater) netted an astonishing 60-80% of U.S. television households, while a more recent hit like American Idol is lucky if it can manage over 15% audience share. He concludes, therefore, that:



While American Idol remains strong, the trend is clear. Americans have been abandoning broadcast television in favor of cable's niche shows for thirty years.  Historical trends like these do not disappear, they accelerate. Internet video is growing at a significant pace. It has not yet taken a chunk out of the broadcast and cable audiences, but the trend is there. Shows on the web are infinitely more targeted than the shows broadcast and cable companies deliver. [...]

The broadcast distribution model, which dictates that only one show can air at any given time, makes it impossible for a niche show to thrive. The opportunity cost is too high. And the corporate structures, cost structures, business models and cultures of the network and cable companies make change far too difficult. Thus the Internet will do to broadcast and cable what cable did to broadcast. It's inevitable. And it's already beginning to happen.


Too bad nobody bothered telling Washington policymakers that the world has changed so radically.




Published on October 25, 2010 08:56

Should the military have a role in civilian cybersecurity?

In the current issue of Foreign Affairs, Deputy Defense Secretary William J. Lynn III has one of the more sober arguments for government involvement in cybersecurity. Naturally, his focus is on military security and the Pentagon's efforts to protect the .mil domain and military networks. He does, however, raise the question of whether and how much the military should be involved in protecting civilian networks.



One thing that struck me about Lynn's article is the wholesale rejection of a Cold War metaphor for cybersecurity. "[The United States] must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack's perpetrator," he writes. Given the fact that attribution is nearly impossible on the internet, he suggests that the better strategy would be "denying any benefits to attackers [rather] than imposing costs through retaliation."



What's interesting about this is that it is in utter contrast to the recommendations of cybersecurity enthusiasts like former NSA chief Michael McConnell, who wrote earlier this year in a 1,400-word op-ed in the Washington Post:




We need to develop an early-warning system to monitor cyberspace, identify intrusions and locate the source of attacks with a trail of evidence that can support diplomatic, military and legal options—and we must be able to do this in milliseconds. More specifically, we need to reengineer the Internet to make attribution, geolocation, intelligence analysis and impact assessment—who did it, from where, why and what was the result—more manageable.




It's good to see that DoD is facing the fact that "reengineering the internet" in the name of attribution is not a practical possibility. Lynn seems to be saying that what the military needs to focus on is better security hygiene and network resiliency. It's therefore interesting that the two data points he provides as evidence of a threat are




The oft-cited factoid that "Every day, U.S. military and civilian networks are probed thousands of times and scanned millions of times."


A now declassified episode in 2008 in which classified military networks were severely compromised by a foreign intelligence agency. How? "[A]n infected flash drive was inserted into a U.S. military laptop at a base in the Middle East."




Probing and scanning networks are the digital equivalent of trying doorknobs to see if they are unlocked—a maneuver available to even the most unsophisticated hackers. And since the days of War Games, the Pentagon has been a favorite target. That a major attack had to rely on social engineering—that is, tricking an insider into connecting an infected USB thumb drive—gives me some reassurance about the military's ability to protect against "probes and scans." (Note also that the attack vector of the recently discovered Stuxnet worm was also a flash drive.) It also tells me that the best defense against any kind of security breach is still an educated computer user.
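The doorknob analogy is easy to make concrete: a network "probe" is often nothing more than an attempted TCP connection that either succeeds (door unlocked) or fails (door locked). A minimal sketch in Python's standard library follows; the host and ports here are arbitrary examples for illustration, not anything from Lynn's article.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Try the 'doorknob': attempt a TCP connection and report success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on a successful connection (port open),
        # or an error number on refusal/timeout (closed or filtered).
        return s.connect_ex((host, port)) == 0

# Probe a few well-known ports on the local machine.
for port in (22, 80, 443):
    state = "open" if port_is_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

That a loop this trivial counts as a "scan" is exactly why raw probe-and-scan counts make for a weak threat metric.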



Lynn also writes that,




The U.S. government has only just begun to broach the larger question of whether it is necessary and appropriate to use national resources, such as the defenses that now guard military networks, to protect civilian infrastructure. Policymakers need to consider, among other things, applying the National Security Agency's defense capabilities beyond the ".gov" domain, such as to domains that undergird the commercial defense industry. U.S. defense contractors have already been targeted for intrusion, and sensitive weapons systems have been compromised. The Pentagon is therefore working with the Department of Homeland Security and the private sector to look for innovative ways to use the military's cyberdefense capabilities to protect the defense industry.




For folks like McConnell, the answer is obvious. "[T]he reality is that while the lion's share of cybersecurity expertise lies in the federal government, more than 90 percent of the physical infrastructure of the Web is owned by private industry," he wrote in the Post. As a result, intermingling is inevitable.



First, I'm not sure I'm willing to stipulate that the federal government is the technical leader in network security. What is the evidence for that claim? (Jim Harper has previously pointed this out.) Second, if DoD is concerned about the network security of defense contractors, it can rely on them less or it can contractually require more stringent practices. As the ACLU recently warned, a partnership between DHS and DoD (read NSA) could pose a threat to civil liberties. Let's never forget this is the agency that made warrantless domestic surveillance possible. Finally, while it may start with defense contractors, regulation and "public-private partnerships" tend to have a ratcheting effect that grows bureaucracies and crowds out innovation. It's time to slow down this cybersecurity train before it loses control.




Published on October 25, 2010 08:53

Thoughts on Tim Wu's Master Switch, Part 1

Tim Wu's new book, The Master Switch: The Rise and Fall of Information Empires, will be released next week and it promises to make quite a splash in cyberlaw circles.  It will almost certainly go down as one of the most important info-tech policy books of 2010 and will probably win the top slot in my next end-of-year list.



Of course, that doesn't mean I agree with everything in it.  In fact, I disagree vehemently with Wu's general worldview and recommendations, and even much of his retelling of the history of information sectors and policy.  Nonetheless, for reasons I will discuss in this first of many critiques, the book's impact will be significant because Wu is a rock star in this academic arena as well as a committed activist in his role as chair of the radical regulatory group Free Press. Through his work at Free Press as well as the New America Foundation, Professor Wu is attempting to craft a plan of action to reshape the Internet and cyberspace.



I stand in opposition to almost everything that Wu and those groups stand for; thus, I will be spending quite a bit of time addressing his perspectives and proposals here in coming months, just as I did when Jonathan Zittrain's hugely important The Future of the Internet & How to Stop It was released two years ago (my first review is here and my latest critique is here).  In today's essay, I'll provide a general overview and foreshadow my critiques to come.  (Note: Tim was kind enough to have his publisher send me an advance uncorrected proof of the book a few months ago, so I'll be using that version to construct these critiques. Please consult the final version for cited material and page numbers.)



The Master Switch & the Cyber-Collectivist Trilogy of Terror

As I noted in my essay on "Two Schools of Internet Pessimism," what I find most lamentable about the state of cyberlaw and high-tech policy debates today is the foreboding sense of gloom and doom that haunts so many narratives.  To crack open most Net policy books these days is to step into a world of corporate conspiracies, nefarious industry schemers, closed systems, "kill switches," squashed consumer rights, and so on.  Let's face it, Chicken Little doesn't need an agent; pessimism sells. The world loves a good tale of villainy and misery, and that's exactly what Columbia Law School professor Tim Wu delivers in his new book, The Master Switch: The Rise and Fall of Information Empires.



Wu's book is important if for no other reason than that he is considered one of the intellectual godfathers of modern cyberlaw, and The Master Switch is best understood as the final installment in an important trilogy that began with the publication of Lawrence Lessig's seminal 1999 book, Code and Other Laws of Cyberspace, and continued in Jonathan Zittrain's much-discussed 2008 book, The Future of the Internet & How to Stop It.



To better understand where Wu wants to take us in The Master Switch, we must first return to the central tenet of Lessig's Code:  "Left to itself," Lessig predicted, "cyberspace will become a perfect tool of control." (pg 5-6)  Code quickly became a sort of cyber-collectivist Bible, and today Lessig's many disciples in academia and a wide variety of regulatory advocacy organizations continue to preach this gloomy gospel of impending digital doom and "perfect control."  Zittrain and Wu are Lessig's most notable intellectual descendants; the Peter and Paul of the Church of Cyber-Doom that he founded.  And despite their insistence that they really aren't all that pessimistic—or, more humorously, that they are actually libertarians in disguise—this crew persists with frightful tales and lugubrious warnings that all is lost unless someone or something—quite often, the State—intervenes to set us on a better course or protect those things that they regard as sacred.



Zittrain's Future of the Internet, for example, brought Lessig's Code up to date by giving us a fresh set of villains.  Gone was Lessig's old foil AOL and its worrisome walled gardens. Instead, the new face of evil became Apple, Facebook, and TiVo.  Zittrain worries about "sterile and tethered" digital "appliances" that foreclose digital generativity and the rise of "a handful of gated cloud communities whose proprietors control the availability of new code."



Wu simply extends this narrative in The Master Switch when he ominously warns that there are "forces threatening the Internet as we know it" (p. 7) and then goes on to craft an enemies list that reads like a "Who's Who" of high-tech corporate America. No one, it seems, can be trusted—at least not if that someone has a ".com" behind their name.  Wu hopes to convince us that history proves that concentrations of private power in information industries inevitably follow a period of openness and competition.  He refers to this as "The Cycle." Thus, he trots out the old collectivist saw that freedom is really slavery — slavery to The Man:



If the stories in this book tell us anything… it is that the free market can also lead to situations of reduced freedom. Markets are born free, yet no sooner are they born than some would-be emperor is forging chains.  Paradoxically, it sometimes happens that the only way to preserve freedom is through judicious controls on the exercise of private power.  If we believe in liberty, it must be freedom from both private and public coercion. (p. 310)


This is the heart of Wu's critique in The Master Switch: The real threat is not Big Brother but Big Corporate Brother. It's certainly not a new critique. Wu is simply steering the Lessig-ite, cyber-collectivist school of cyberlaw into line with traditional "progressive" perspectives and recommendations.  Indeed, although he and other so-called progressives don't always come right out and say it, they often suggest that private power – however defined – is so insidious and threatening that greatly amplified State power to counter it becomes essential, even a good.



The cyber-collectivist movement that Lessig began with Code, and that Zittrain and Wu continue in their books, is fueled by that dour, depressing "the-Net-is-about-to-die" fear. Again and again their message comes down to this: "Enjoy the good old days of the open Internet while you can, because any minute now it will be crushed and closed-off by corporate marauders!"  This crowd wants us to believe that the corporate big boys are — someday very soon — going to toss the proverbial "master switch," suffocating Internet innovation and digital freedom, and making us all cyber-slaves within their commercialized walled gardens.



We might think of this fear as "The Great Closing," or the notion that, unless radical interventions are pursued — usually of a regulatory nature – a veritable Digital Dark Age of Closed Systems will soon unfold, complete with myriad AOL-like walled gardens, "sterile and tethered devices," corporate censorship, and consumer gouging. Again, it's really just a restatement of the old Lessig vision of an unfettered cyberspace leading to "perfect (corporate) control."  In other words, most information systems, networks and devices will be bottled up by corporate "gatekeepers" if markets aren't steered in a better direction by wise philosopher-regulators.  And these "Openness Evangelicals," as I will call them, believe they are the sagacious chosen few who will serve as the self-appointed janissary of the supposed dying order of openness.



My critique of this cyber-collectivist thinking and "Great Closing" thesis was more fully developed in these two essays [1, 2] and will be more robustly developed in a chapter for an upcoming book.  Much of what I'll have to say in response to Wu's new book will be drawn from those essays as well as my two-part exchange [1, 2] with Lessig upon the 10th anniversary of the publication of Code. Basically, I do not buy – not for one minute – the notion that "the Internet is dying" or that "openness" is evaporating.  The Internet has never been more vibrant or open.  Again, please read those previous essays for my complete response.  I'll be teasing out some of those themes in future essays here.



More specifically, my response to Wu's new book comes down to this:




Rarely is there any discussion of the nature of the respective forms of "power" or the coercive nature of State power, in particular.  The fact that the State has a monopoly on force in society and, thus, can penalize or even imprison, is either ignored or treated as irrelevant compared to the supposed "power" of private actors.
Rarely in their analysis — and never in Wu's book — is there a serious cost-benefit analysis of the trade-off associated with an aggrandizement of State power in the name of countering the supposed evils of private power.  The solutions offered – to the extent they rise above amorphous calls to "do something" – are presented as cost-free options.
Rarely is there any mention of the dangers of "regulatory capture" or the massive inefficiencies associated with the sort of regulatory regimes that progressives and modern cyber-collectivists like Wu would substitute for market mechanisms.


In my next installment, I'll take on Wu's critique of the fictional "purely economic laissez-faire approach" he derides – an approach that has never existed in American communications or media markets.  In a forthcoming installment, I'll also be challenging Tim to a Simon-Ehrlich wager on this front and ask him to put his money where his mouth is to see just how serious he is about his dour worldview and extreme technological pessimism!  So, stay tuned.




Published on October 25, 2010 06:57

If the Premises are Wrong, Why Read On?

(Second in a series.)



I recently picked up a copy of Robert Wuthnow's Be Very Afraid: The Cultural Response to Terror, Pandemics, Environmental Devastation, Nuclear Annihilation, and Other Threats. According to the dust cover, the Princeton sociologist's book "examines the human response to existential threats…" Contrary to common belief, we do not deny such threats but "seek ways of positively meeting the threat, of doing something—anything—even if it's wasteful and time-consuming." Interesting batch of ideas, no?



Well, the fifth paragraph of the book joins up with some pretty obscure and disorganized writing in the introduction to disqualify it from absorbing any more of my precious time. That paragraph contains this sentence: "Millions could die from a pandemic or a dirty bomb strategically placed in a metropolitan area."



It's probably true that millions could die from a pandemic. Two million deaths would be just under 0.03% of the world's population—not quite existential. But the killer for the book is Wuthnow saying that millions could die from a dirty bomb placed in a metropolitan area. There will never be that many deaths from a dirty bomb, placed anywhere, ever.
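The back-of-the-envelope arithmetic here is easy to verify. A quick check in Python, assuming a 2010 world population of roughly 6.9 billion (my figure, not the author's):

```python
# Two million deaths as a share of the world's population.
deaths = 2_000_000
world_population = 6_900_000_000  # assumed 2010 estimate

share_pct = deaths / world_population * 100
print(f"{share_pct:.4f}%")  # roughly 0.029%, i.e. just under 0.03%
```

The point stands under any reasonable population estimate: two million deaths, however horrific, is a small fraction of one percent of humanity.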



One suspects that the author doesn't know what a dirty bomb is. A dirty bomb is a combination of conventional explosives and radioactive material that is designed to disperse the radioactive material over a wide area. A dirty bomb is not a nuclear explosive, and its lethality is little greater than a conventional weapon's, as the radiological material is likely to be too dispersed and too weak to cause serious health issues.



Dirty bombs are meant to scare. Incautious discussion of dirty bombs has induced more fright in our society than any actual bomb. Professor Wuthnow asserts, as fact, that a dirty bomb could kill millions, which is plainly wrong. If he doesn't know his subject matter, he doesn't get any more time from this reader.



Given my brief experience with the book, I advise you to be very afraid of Be Very Afraid.




Published on October 25, 2010 05:34

October 22, 2010

Celebrating the COPA Report Ten Years Later: A Charter for Sound Consumer Protection Online

An important anniversary just passed with little more notice than an email newsletter about the report that played a pivotal role in causing the courts to strike down the 1998 Child Online Protection Act (COPA) as an unconstitutional restriction on the speech of adults and website operators. (COPA required all commercial distributors of "material harmful to minors" to restrict their sites from access by minors, such as by requiring a credit card for age verification.)



The Congressional Internet Caucus Advisory Committee is pleased to report that even 10 years after its release the COPA Commission's final report to Congress is still being downloaded at an astounding rate – between 700 and 1,000 copies a month. Users from all over the world are downloading the report from the COPA Commission, a congressionally appointed panel mandated by the Child Online Protection Act. The primary purpose of the Commission was to "identify technological or other methods that will help reduce access by minors to material that is harmful to minors on the Internet." The Commission released its final report to Congress on Friday, October 20, 2000.

As a public service the Congressional Internet Caucus Advisory Committee agreed to virtually host the deliberations of the COPA Commission on the Web site COPACommission.org. The final posting to the site was the COPA Commission's final report itself, making it available for download. In the subsequent 10 years it is estimated that close to 150,000 copies of the report have been downloaded.


The COPA Report played a critical role in fending off efforts to regulate the Internet in the name of "protecting our children," and marked a shift towards focusing on what, in First Amendment caselaw, are called "less restrictive" alternatives to regulation. This summary of the report's recommendations bears repeating:



After consideration of the record, the Commission concludes that the most effective current means of protecting children from content on the Internet harmful to minors include: aggressive efforts toward public education, consumer empowerment, increased resources for enforcement of existing laws, and greater use of existing technologies. Witness after witness testified that protection of children online requires more education, more technologies, heightened public awareness of existing technologies and better enforcement of existing laws.


In case you haven't noticed, this is the message Adam Thierer and I have hammered home relentlessly in all the work we do concerning not only child protection but also privacy, data security and other areas of concern about online consumer protection.



On the child protection side, check out our recent joint comments with CDT and EFF warning the FTC not to expand the Child Online Privacy Protection Act (COPPA), lest  it converge with COPA, which the courts have found unconstitutional—and also the lengthy paper we wrote on this subject back in June 2009 well before COPPA reemerged as an issue.



On the privacy side, allow me to quote from my November 2009 comments to the FTC on its Privacy Roundtables (primarily concerning online advertising). Specifically, I laid out a "Principled Pro-Consumer Alternative to Further Regulation:"



The "Privacy Wars" that have waged over how government should regulate online collection and use of data might better be referred to as the "Privacy Proxy Wars" because the most clearly demonstrated "harm" at issue seems to be from government itself, not the private sector.  The Fourth Amendment guarantees that "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated…"  Americans have a legitimate expectation that this "security" extends to their digital "papers and effects," yet that expectation is not given effect by current restraints on government access to consumer data in American law.  Thus, we have proposed the following layered approach to concerns about online privacy, focusing on restraining government access to data, rather than crippling the private sector uses of data that directly benefit consumers:


Erect a higher "Wall of Separation between Web and State" by increasing Americans' protection from government access to their personal data—thus bringing the Fourth Amendment into the Digital Age.
Educate users about privacy risks and data management in general as well as specific practices and policies for safer computing.
Empower users to implement their privacy preferences in specific contexts as easily as possible.
Enhance self-regulation by industry sectors and companies to integrate with user education and empowerment.
Enforce existing laws against unfair and deceptive trade practices as well as state privacy tort laws.



I look forward to the day when Adam and I aren't so alone in calling for a unified, consistent approach to online consumer protection across all these issues that begins with demanding a showing of genuine harm or true market failure, but also insists on using (or at least starting with) the least "restrictive" measures to address that problem. In privacy, as with child protection, that means starting with these E-words before rushing to R-words like "regulate, restrict, remove (options)," because those things ultimately retard, rather than encourage, Progress for digital consumers.




Published on October 22, 2010 10:28

Why exactly do we have a "spectrum crunch"?

It's wonderful to see that the FCC is putting spectrum front and center on its agenda. Yesterday it held a spectrum "summit" at which it released several papers looking at the challenges and opportunities mobile broadband faces, and it was announced that at its November meeting, the chairman will introduce several items related to spectrum reallocation. NTIA is keeping pace, identifying over 100 MHz now in federal hands (mostly DoD) to be moved over for commercial use.



The consensus that has led us to this happy time is that there is a spectrum "shortage" or spectrum "crunch," as many said yesterday. Here's how Chairman Genachowski explained it:




The explosive growth in mobile communications is outpacing our ability to keep up. If we don't act to update our spectrum policies for the 21st century, we're going to run into a wall—a spectrum crunch—that will stifle American innovation and economic growth and cost us the opportunity to lead the world in mobile communications.



Spectrum is finite. Demand will soon outpace the supply available for mobile broadband.




Every natural resource is finite, however. So how exactly did we end up with this "spectrum crunch"?



Spectrum is a vital input to the mobile industry, just as steel is for the automobile industry or timber is for housing and paper products. Yet even during the productive peaks of those industries, we never saw any meaningful shortages of resources. That is because (aside from some government interference here and there) those resources are freely traded in a market. If demand for them increases, prices rise accordingly, and supply moves to better uses. Higher prices will also create an incentive for entrepreneurs to develop more efficient uses of finite resources.
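The price mechanism described above can be sketched with a toy linear market model. The numbers are purely illustrative, but they show the point: when demand shifts out and prices can adjust, the market clears at a higher price rather than producing a "shortage."

```python
# Toy linear market: demand D(p) = a - b*p, supply S(p) = c + d*p.
# The market-clearing price solves D(p) = S(p)  =>  p* = (a - c) / (b + d).
def clearing_price(a: float, b: float, c: float, d: float) -> float:
    return (a - c) / (b + d)

p_before = clearing_price(a=100, b=2, c=10, d=1)  # baseline demand
p_after = clearing_price(a=130, b=2, c=10, d=1)   # demand shifts outward

print(p_before, p_after)  # 30.0 40.0
```

The higher price is the signal that pulls the resource toward its most valued uses; it is exactly this signal that administrative allocation of spectrum suppresses.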



In contrast, spectrum is barely traded in a market. Its uses are largely mandated by government fiat. For example, less than 15 percent of U.S. households depend on over-the-air TV broadcasts because they do not subscribe to cable or satellite. Yet our most valuable spectrum is in the hands of broadcasters, with no easy exit, thanks to government regulation. This is the cause of the "shortage," and we shouldn't forget that as we move to reallocate spectrum.



The incentive auctions and secondary market rules the FCC will consider are steps in the right direction. But once the spectrum is released from the grasp of old and inefficient technology, we should make sure we don't make the same mistake again. Television and radio were the most important technologies in the world at one point, which is why they were given so much spectrum by government. Today it's mobile broadband, and that's where we want the spectrum to go. But let's be careful we don't earmark the spectrum in any way so that fifty years from now we find that we have a spectrum crunch for teleportation because it's in the hands of broadband.



Reallocated spectrum should be made as property-like as possible. Exclusive, flexible, and tradable. The spectrum "crunch" is another instance of the government stepping in to clean up a mess it made. Let's hope they get it right this time.




Published on October 22, 2010 08:24

FCC and its Technological Advisory Council: Shut Them Down and Use the Money to Reduce Debt

The Federal Communications Commission has established a new advisory group called the "Technological Advisory Council." Among other things it will advise the agency on "how broadband communications can be part of the solution for the delivery and cost containment of health care, for energy and environmental conservation, for education innovation and in the creation of jobs."



This is an agency that is radically overspilling its bounds. It has established goals that it has no proper role in fulfilling and that it has no idea how to fulfill. As we look for cost-cutting measures at the federal level, we could end the pretense that the communications industry should be regulated as a public utility. Shuttering the FCC would free up funds for better purposes such as lowering the national debt or reducing taxes.




Published on October 22, 2010 06:26

October 21, 2010

Calling in the FCC to solve a mess it helped create

As we enter day 5 of the standoff between Cablevision and News Corp. over the retransmission of local Fox stations, the controversy over a supposed net neutrality violation has died down, but pressure on the FCC to interfere with the parties' negotiations is mounting. Sen. Kerry has also released a draft bill [PDF] that would reform the Cable Act's retransmission consent rules to force TV stations to accept FCC mediation and allow carriage of their signals during a contract dispute.



It's almost ironic that some would call for more FCC interference to solve a problem that is at least partly caused by FCC regulation. Cablevision is in New York, and what it wants is to carry Fox programming. The local Fox stations, owned and operated by News Corp., are demanding what Cablevision considers too high a price. So why wouldn't Cablevision just turn to a Fox affiliate in Michigan for the content? The answer is that FCC regulations authorized by the Cable Act take that excellent bargaining chip away from video providers.



Randy May explains this in a great little primer on retransmission consent:




The FCC's network non-duplication regulations allow local stations to block cable systems from importing network programming from another affiliate of the same broadcast network—even if the out-of-market broadcast affiliate and the cable network otherwise could reach a negotiated agreement. Similarly, syndicated exclusivity regulations allow local stations providing syndicated broadcast programming to prevent cable systems from carrying the same programs broadcast by out-of-market broadcast stations.




Without this prohibition, we might already have seen a resolution to the Cablevision-Fox dispute. So it's amazing that the FCC is being called on to interfere in the negotiations when it already has a thumb on the scales.



The lesson of this latest confrontation should not be that we need to "reform" retransmission consent rules to add FCC arbitration as Sen. Kerry and some broadcasters and video distributors are suggesting. Instead, it's that given a competitive market for programming, as the FCC has acknowledged exists in New York, we should plain and simply get rid of must-carry and retransmission consent rules altogether and allow a real free market to work. Without such a move, I see a lot more blackouts in the future.




Published on October 21, 2010 07:50
