Adam Thierer's Blog
May 17, 2012
Funding the Future: Advertising’s Role in Sustaining Culture & the Alternatives
My most recent Forbes column is entitled, “We All Hate Advertising, But We Can’t Live Without It.” It’s my attempt to briefly (a) defend the role advertising has traditionally played in sustaining news, entertainment, and online services, and (b) discuss some possible alternatives to advertising that could be tapped if advertising starts failing us as a media cross-subsidy.
What got me thinking about this issue again was the controversy over satellite video operator DISH Network offering its customers a new “Auto Hop” capability for its Hopper whole-home HD DVR system. Auto Hop will give viewers the ability to automatically skip over commercials for most recorded prime time programs shown on ABC, CBS, FOX and NBC when viewed the day after airing. It makes the viewing experience feel like the ultimate free lunch. Alas, something still must pay the bills. As innovative as that technology is, we can be certain that it will not make content consumption cost-free. We’ll just pay the price in some other way. The same is true for online services since it’s never been easier to use technology to block ads.
So, what is going to pay the bills for content as ad-skipping becomes increasingly automated and effortless? Stated differently, what are the other possible methods of picking up the tab for content creation? Here’s a rough taxonomy:
I. CHARGES
A. Direct Fees (Periodic billing / Pay-per-view)
B. Indirect Charges (Tiers / Bundles / Package pricing)
II. ADVERTISING
A. General / Mass market ads (Billboards / Banner ads / Pop-up online ads)
B. Targeted ads (Directed pitch)
C. Integrated (Product placement / Payola)
D. Sponsorship / Underwriting
III. PHILANTHROPIC
A. Individual (ex: Arts & opera funding)
B. Foundational (ex: Knight Foundation)
C. Governmental (ex: CPB / BBC model)
IV. INTERNAL CROSS-SUBSIDY (Profitable division subsidizes unprofitable / “loss leader” strategies)
There are probably other ways of subsidizing content creation, but those are the primary methods. I have no idea what combination of strategies will sustain content going forward, but I think advertising is likely to play a diminished role in the mix as it becomes increasingly easy for us to filter it out. Content creators will just shift costs elsewhere and raise the prices for programming through direct and indirect pricing techniques. Do you like HBO’s pricing model? Pay-per-view? Paywalls? Well, it doesn’t make a difference whether you do or not, because you’ll likely be seeing a lot more of those models in coming years if advertising fades as a subsidization method.
Alternatively, as I also note in my Forbes piece, “we could see a lot more Texaco Star Theaters in our future, with major companies essentially owning specific shows or networks.” Such program sponsorship and content underwriting has always been with us, but it could really explode as a cross-subsidy method if traditional advertising starts failing. “But it will be challenging for every show or website to find its own corporate benefactor, and it will also raise issues about undue influence and bias,” I note in my essay.
I hope no one seriously believes that philanthropic models can fill the gaps. Even if we saw a significant uptick in voluntary charitable giving or even taxpayer support for the arts and media, there’s no way in hell it will possibly begin to cover the bill for what advertising support covers today.
In the end, I can’t help but think how great we’ve had it when it comes to advertising. As I also noted in my essay, advertising has historically been “the great subsidizer of the press, entertainment, and online services” and has benefited us tremendously even if we haven’t appreciated that fact. “It’s possible that no single industry — not newspapers nor search engines nor anything else — has done as much to advance the storehouse of accessible human knowledge in the 20th century as advertisers,” argues Washington Post columnist Ezra Klein. Klein is exactly right, yet advertising’s importance won’t matter much if we fail to appreciate it and increasingly take steps to exclude advertising from our lives.
As that becomes easier and easier to accomplish, we shouldn’t bitch and whine when the bills (literally) come due for the content we all desire. As always, there is no free lunch. We’ll pay the price one way or another.
Additional Reading:
my recent Charleston Law Review article on “Advertising, Commercial Speech & First Amendment Parity”
Ezra Klein on the Importance of Advertising to Media
The Hidden Benefactor: How Advertising Informs, Educates & Benefits Consumers
There is No Free Lunch! No Advertising, No Media
PFF’s Mega-Filing in the FCC’s “Future of Media” Proceeding







May 15, 2012
Still More Confusion in the Debate over Retrans & Video Marketplace Deregulation
Writing over at the conservative Big Government blog (part of the Breitbart.com network of blogs), someone who goes by the pseudonym “Capitol Connection” has posted an editorial about the debate over retransmission consent reform that is full of misinformation and misguided policy prescriptions, at least if you believe in truly limited government. The piece is entitled, “Big Cable Would Prefer if You Paid Their Bills,” and the problems are almost immediately evident from that headline alone. First, what is a supposedly small government-oriented blog doing using a silly label like “Big Cable” to describe a vigorously competitive sector of our capitalist economy? Using terms like “Big Cable” is a silly lefty tactic. Second, no one in the cable industry is proposing anyone “pay their bills” except for the customers who enjoy their services. Isn’t a fee for service part of capitalism?
Anyway, that’s just the problem with the title of the essay. Sadly, the rest of the piece is filled with even more erroneous information and arguments about the retransmission consent regulatory process as well as the bill that aims to reform that process, “The Next Generation Television Marketplace Act” (H.R. 3675 and S. 2008). That bill, which is sponsored by Senator Jim DeMint (R-SC) and Rep. Steve Scalise (R-LA), represents a comprehensive attempt to deregulate America’s heavily regulated video marketplace. In a recent Forbes op-ed, I argued that the DeMint-Scalise effort would take us “Toward a True Free Market in Television Programming” by eliminating a litany of archaic media regulations that should have never been on the books to begin with. The measure would:
eliminate “retransmission consent” regulations (rules governing contractual negotiations for content);
end “must carry” mandates (the requirement that video distributors carry broadcast signals even if they don’t want to);
repeal “network non-duplication” and “syndicated exclusivity” regulations (rules that prohibit distributors from striking deals with broadcasters outside their local communities);
end various media ownership regulations; and
end the compulsory licensing requirements of the Copyright Act of 1976, which essentially forced a “duty to deal” upon content owners to the benefit of video distributors.
This represents genuine and much-needed deregulation of a market that has been encumbered with far too much top-down control and micro-management by the FCC over the past several decades. To be clear, none of these rules apply to any other segment of our modern information economy. Every day of the week, deals are cut between content creators and distributors in many other segments of the media industry without these rules encumbering the process. The DeMint-Scalise bill is an attempt to get big government out of the way and let these deals be cut in a truly free market without regulators putting their thumb on the scale in one direction or the other.
Thus, it came as a bit of a shock to me to see a blog that rails against (and is self-titled) Big Government suggesting that we should retain a form of big government regulation! Indeed, the author gets the intent of the DeMint-Scalise bill exactly backward. The author says The Next Generation Television Marketplace Act:
would strip broadcasters of their ability to negotiate in the free marketplace. Some cable operators, it turns out, would love to provide Americans with the quality content American broadcast companies churn out. They just don’t happen to want to pay for it.
The author of the piece also says that cable industry representatives:
are lobbying in Washington for key provisions in legislation that would allow the Federal government to intervene in what is otherwise a sound, private sector marketplace that benefits consumers each and every day. And they’re doing so under the guise of “deregulation.”
This is all utter poppycock. While I am sure that the cable industry would love to get all that content free of charge, that’s not what the DeMint-Scalise bill would do. It doesn’t end free-market contracting; it bolsters it. Again, the bill would get the government out of the business of setting rules for how these deals get cut and instead allow these big boys to come to the bargaining table and hammer out these deals on their own. That is called deregulation and true capitalism!
The author of the misguided Big Government editorial seems to be resting their case on a letter that the American Conservative Union (ACU) sent to members of Congress in late March. I addressed the claims found in that letter in this essay and pointed out that ACU had almost everything exactly backward. Both the ACU letter and the Big Government essay just keep erroneously assuming that the end of the regulatory retrans process means that “broadcasters [will] be forced to simply give away their signals and content.” Again, nothing could be further from the truth. As I noted in my response to the ACU letter:
nothing in this bill forces content creators or broadcasters to deal their content to other distributors. And nothing in the bill gives those other video distributors the right to freely distribute content without the permission of its owners. In sum, the bill does not repeal copyright law — it only repeals the compulsory licensing rules that force content owners to deal their programming against their consent on government regulated terms. That means copyright is actually strengthened under this bill and that content owners have more bargaining power than they do today. Thus, the ACU is horribly mistaken in asserting that the DeMint-Scalise bill would “allow an uncompensated use of broadcast signals and content.” The exact opposite is the case.
Finally, if nothing else convinces the folks at the Big Government blog and the ACU of the error in their thinking, consider this: The preservation of the current retransmission consent regime and all its corresponding regulations means the preservation and growth of the Federal Communications Commission as a federal regulatory agency overseeing the information economy. Is that a truly free market-oriented position? Do we need federal bureaucrats overseeing free market contractual negotiations in this or any other sector? Because that’s what the law allows today. By contrast, the DeMint-Scalise bill offers us the chance to finally get real deregulation rolling and get FCC downsizing back on track. You will never get a smaller FCC by advocating the retention of regulation.
Thus, I think it’s pretty clear which approach is the most liberty-enhancing. I hope, therefore, that the ACU and the folks at the Big Government blog will reconsider their position.







May 12, 2012
I’ll See Your Hayek and Raise You a Friedman
Tim Lee responds to my last post on net neutrality by invoking one of my favorite economists, Friedrich Hayek. As a matter of logic, a perfectly price discriminating monopoly can be as efficient as a competitive industry, at least in a static sense, but Tim wonders if any firm can ever know enough to price discriminate well, and whether in a dynamic sense these outcomes can really be equated.
In short, a market involving numerous competing over-the-top video providers will be fundamentally, qualitatively different from a market in which one or two large broadband incumbents decide which video content to provide to consumers. In the long run, the open Internet is likely to offer a radically broader range of video content than any single cable company’s proprietary video service, just as is true for text and audio content today. But Eli’s model can’t accommodate this difference, because it requires us to treat content as homogenous and service providers as omniscient in order to make the math tractable.
It’s a fair point that a basic price discrimination model like a simple graph with demand and marginal cost is not going to capture the texture of economic change over time. Nevertheless, I think Tim’s criticism is misplaced, and in fact it’s in a dynamic sense that laissez-faire really shines. Here are a few reasons:
Contra Tim, firms don’t need to be omniscient to price discriminate well. There are lots of techniques, such as bundling, quantity discounts, and tiering, that induce self-selection among consumers. These techniques are forms of price discrimination.
The efficiency properties of price discrimination kick in if the monopolist is able to price discriminate at the low end of the price spectrum, even if it prices poorly to higher-value consumers. There is good evidence that cable companies do this well. For instance, I called Comcast 9 months ago to cancel my economy cable TV package, and they offered me a $15/month credit for a year to keep it. I’m basically getting cable TV for free. Furthermore, as Adam Ozimek pointed out on Twitter last night, almost everyone has cable TV, so the cable company must know how to price it to get low-value consumers on board.
In a dynamic sense, monopoly profit can act as a prize for outcompeting everyone else. As long as competition is taking place without entry barriers or favoritism by the state, competition that admits a possibility of monopoly ex post is fiercer, more Schumpeterian, than that which does not.
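The static-efficiency claim that opened this exchange is easy to check with a toy model. Here is a minimal Python sketch (all numbers are made up purely for illustration) comparing total surplus under marginal-cost pricing with total surplus under perfect price discrimination, given linear demand and constant marginal cost; the point is that the totals coincide, and only the split between consumers and the firm differs.

```python
# Toy linear-demand model (hypothetical numbers) illustrating the static
# claim that a perfectly price-discriminating monopolist can generate the
# same total surplus as a competitive market.

A = 100.0  # demand intercept: willingness to pay of the most eager buyer
B = 1.0    # demand slope: inverse demand is P = A - B*Q
C = 20.0   # constant marginal cost

# Competitive outcome: price equals marginal cost, so Q solves A - B*Q = C.
q_comp = (A - C) / B
consumer_surplus = 0.5 * (A - C) * q_comp  # triangle under demand, above cost
producer_surplus = 0.0                     # price equals cost, so no profit
total_comp = consumer_surplus + producer_surplus

# Perfect price discrimination: the monopolist charges each buyer his exact
# willingness to pay and serves everyone whose valuation exceeds C. Output
# is the same, but all of the surplus shows up as profit.
q_pd = (A - C) / B
profit_pd = 0.5 * (A - C) * q_pd
total_pd = profit_pd  # consumers are left with zero surplus

print(total_comp, total_pd)  # identical totals; only the distribution differs
```

None of this settles the dynamic questions Tim raises; it simply makes precise what the static equivalence does and does not say.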
Whether or not you buy the above arguments, I think my broad point in favor of laissez-faire in broadband is supported by the Hayekian view of competition, to which I am quite sympathetic. Here’s Hayek in his essay, “The Meaning of Competition,” available in Individualism and Economic Order (free pdf):
The argument in favor of competition does not rest on the conditions that would exist if it were perfect. Although, where the objective facts would make it possible for competition to approach perfection, this would also secure the most effective use of resources, and, although there is therefore every case for removing human obstacles to competition, this does not mean that competition does not also bring about as effective a use of resources as can be brought about by any known means where in the nature of the case it must be imperfect. Even where free entry will secure no more than that at any one moment all the goods and services for which there would be an effective demand if they were available are in fact produced at the least current expenditure of resources at which, in the given historical situation, they can be produced, even though the price the consumer is made to pay for them is considerably higher and only just below the cost of the next best way in which his need could be satisfied, this, I submit, is more than we can expect from any other known system. The decisive point is still the elementary one that it is most unlikely that, without artificial obstacles which government activity either creates or can remove, any commodity or service will for any length of time be available only at a price at which outsiders could expect a more than normal profit if they entered the field.
The practical lesson of all this, I think, is that we should worry much less about whether competition in a given case is perfect and worry much more whether there is competition at all. What our theoretical models of separate industries conceal is that in practice a much bigger gulf divides competition from no competition than perfect from imperfect competition. Yet the current tendency in discussion is to be intolerant about the imperfections and to be silent about the prevention of competition. We can probably still learn more about the real significance of competition by studying the results which regularly occur where competition is deliberately suppressed than by concentrating on the shortcomings of actual competition compared with an ideal which is irrelevant for the given facts. I say advisedly “where competition is deliberately suppressed” and not merely “where it is absent,” because its main effects are usually operating, even if more slowly, so long as it is not outright suppressed with the assistance or the tolerance of the state. The evils which experience has shown to be the regular consequence of a suppression of competition are on a different plane from those which the imperfections of competition may cause. Much more serious than the fact that prices may not correspond to marginal cost is the fact that, with an intrenched monopoly, costs are likely to be much higher than is necessary. A monopoly based on superior efficiency, on the other hand, does comparatively little harm so long as it is assured that it will disappear as soon as anyone else becomes more efficient in providing satisfaction to the consumers.
Hayek’s position is my position. Let’s put aside simplistic notions of competition like “how many firms are there in the industry.” The important question is whether, as Hayek writes earlier in the essay, “only people licensed by authority [are] allowed to produce particular things, or prices [are] fixed by authority, or both.” Unless I am misreading Tim, he is at least sympathetic to using authority to forbid people from producing a particular thing, a private network, and charging what they like for its use.
Since I know that Tim is fond of quoting Milton Friedman, I’ll point out that Friedman’s position on natural monopoly is also consistent with my own. Here he is in Capitalism and Freedom:
When technical conditions make a monopoly the natural outcome of competitive market forces, there are only three alternatives that seem available: private monopoly, public monopoly, or public regulation. All three are bad so we must choose among evils. … I reluctantly conclude that, if tolerable, private monopoly may be the least of the evils.
If society were static so that the conditions which give rise to a technical monopoly were sure to remain, I would have little confidence in this solution. In a rapidly changing society, however, the conditions making for technical monopoly frequently change and I suspect that both public regulation and public monopoly are likely to be less responsive to such changes in conditions, to be less readily capable of elimination, than private monopoly.
My reading of Friedman is that he became even more hostile to competition policy over time, as economists discovered new, efficient rationales for illegal practices and analyzed cases, like United Shoe and Coors, where the government and the courts got it wrong.
[UPDATE] Tim did not find the preceding Friedman quotation impressive, so here is a more forceful one from later in his life, supporting my claim that he became more hostile to competition policy over time:
My own views about the antitrust laws have changed greatly over time. When I started in this business, as a believer in competition, I was a great supporter of antitrust laws; I thought enforcing them was one of the few desirable things that the government could do to promote more competition. But as I watched what actually happened, I saw that, instead of promoting competition, antitrust laws tended to do exactly the opposite, because they tended, like so many government activities, to be taken over by the people they were supposed to regulate and control. And so over time I have gradually come to the conclusion that antitrust laws do far more harm than good and that we would be better off if we didn’t have them at all, if we could get rid of them.
[/UPDATE]
Whatever the shortcomings of my view of efficiency, I know dozens of economists even more steeped in the work of Hayek than I am. I can’t think of a single one who would support a government-imposed top-down net neutrality policy framework. For Tim to argue for such a policy on Hayekian grounds seems to me to be quite a stretch.







May 10, 2012
Network Access Regulation 4.0
More this week on the efforts of Reed Hastings of Netflix to reignite the perennial debate over network access regulation, courtesy of the New York Times. Hastings is seeking a free ride on Comcast’s multi-billion-dollar investment in broadband Internet access.
Times columnist Eduardo Porter apparently believes that he has seen the future and thinks it works: The French government forced France Télécom to lease capacity on its wires to rivals for a regulated price, he reports, and now competitor Iliad offers packages that include free international calls to 70 countries and a download speed of 100 megabits per second for less than $40.
It should be noted at the outset that the percentage of French households with broadband in 2009 (57%) was less than the percentage of U.S. households (63%) according to statistics cited by the Federal Communications Commission.
There is a much stronger argument for unbundling in France – which lacks a fully-developed cable TV industry – than in the U.S. As the Berkman Center paper to which Porter’s column links notes on pages 266-68, DSL subscriptions – most of which ride France Télécom’s network – make up 95% of all broadband connections in France. Cable constitutes only about 5% of the overall broadband market. Competition among DSL providers has produced lower prices for consumers, but at the expense of private investment in fiber networks.
Despite commitments by several of the major broadband companies … to invest in fiber roll-out, fiber-based broadband connections remain marginal in France …. In part, this may be due to the public controversy regarding access to the infrastructure of France Télécom … The delayed investment is also consistent with the argument that requiring open access to incumbent facilities delays investment.
This observation is from the same Berkman Center paper. As a result of the delayed private investment, the paper acknowledges that “the French government has announced its intention to help finance the deployment of fiber networks.” Public subsidy is frequently the only option after politicians tax and/or regulate something to death.
The U.S. has already experimented with unbundling, and the trial was unsuccessful. Prior to 2003, new entrants could purchase the high-frequency portion of local telephone loops to provide their own DSL service. In February of 2003, the FCC eliminated line-sharing, which had allowed new entrants to offer DSL – but not voice – over incumbent loops (henceforth, new entrants could either purchase the entire loop or partner with a voice provider).
“There is no evidence that network sharing has increased competition in U.S. broadband markets,” according to Robert W. Crandall of the Brookings Institution. “At the end of 2003, the FCC reported that only 1.7 percent of all broadband lines were DSL lines offered by nonincumbent telephone companies.” (See Crandall, Competition and Chaos, 2005.)
Porter also claims that cable is often the only choice for consumers who desire very high speeds. He is insinuating that there is a monopoly problem in broadband, which might justify common carrier regulation pursuant to ancient legal theory. The legal scholar Blackstone wrote an early textbook on this subject in the 18th century. Common carrier regulation guarded against monopolist misbehavior, but it also defended government-awarded monopolies from “ruinous” competition or unlimited liability. It turned out to be a sweet deal for monopolists. The fact that it victimized consumers became apparent by the 1970s.
Although telecommunications carriers are not investing in fiber-to-the-premises at the moment, they are investing in 4G wireless technologies that promise download speeds of 100 megabits per second or higher. Verizon Chairman and CEO Lowell C. McAdam predicted earlier this week in Tampa that “mobile devices will generate more Internet traffic than all wired devices combined” by the middle of this decade. And Wall Street Journal columnist Holman W. Jenkins, Jr. wrote this week that it seems, at least for now, that “wireless is the future of broadband.”
None of us can be sure what this market will look like in the future. If big cable companies seem frightening now, it is worth recalling that for years doomsayers predicted that telecommunications carriers would monopolize data processing, video services, classified advertising, alarm monitoring, etc. None of these predictions proved accurate. Most successful commercial enterprises are one-trick ponies.
What is clear is that we never seem to tire of the network access regulation debate. After many years of consideration, the FCC ruled in 1984 that providers of “computer enhanced services” would not be regulated as common carriers. Under pressure to reverse course in the late 1990s, FCC Chairman William E. Kennard (Democrat) declared that “the best decision government ever made with respect to the Internet was the decision that the FCC made 15 years ago NOT to impose regulation on it.” In 2010, the FCC voted along party lines to “preserve the Internet as an open network.” That decision is the subject of pending litigation.
Hastings apparently hopes to write the next version of this debate.







If You Meet a Censor, Ask Them This One Question
Via Twitter, Andrew Grossman brought to my attention this terrifically interesting interview with a Kuwaiti censor that appeared in the Kuwait Times (“Read No Evil – Senior Censor Defends Work, Denies Playing Big Brother“). In the interview, the censor, Dalal Al-Mutairi, head of the Foreign Books Department at the Ministry of Information, speaks in a remarkably candid fashion and casual tone about the job she and other Kuwaiti censors do every day. My favorite line comes when Dalal tells the reporter how working as a censor is so very interesting and enlightening: “I like this work. It gives us experience, information and we always learn something new.” I bet! But what a shame that others in her society will be denied the same pleasure of always learning something new. Of course, like all censors, Dalal probably believes that she is doing a great public service by screening all culture and content to make sure the masses do not consume offensive, objectionable, or harmful content.
But here’s where the reporter missed a golden opportunity to ask Dalal the one question that you must always ask a censor if you get to meet one: If the content you are censoring is so destructive to the human soul or psyche, how then is it that you are such a well-adjusted person? And Dalal certainly seems like a well-adjusted person. Although the reporter doesn’t tell us much about her personal life or circumstances, Dalal volunteers this much about herself and her fellow censors: “Many people consider the censor to be a fanatic and uneducated person, but this isn’t true. We are the most literate people as we have read much, almost every day. We receive a lot of information from different fields. We read books for children, religious books, political, philosophical, scientific ones and many others.” Well of course you do… because you are lucky enough to have access to all that content! But you are also taking steps to make sure the rest of your society doesn’t consume it on the theory that it would harm them or harm public morals in some fashion. But, again, how is it that you have not been utterly corrupted by it all, Ms. Dalal? After all, you get to consume all that impure, sacrilegious, and salacious stuff! Shouldn’t you be some kind of monster by now?
How can this inconsistency be explained? The answer to this riddle can be found in the “Third-Person Effect Hypothesis.” First formulated by psychologist W. Phillips Davison in 1983, “this hypothesis predicts that people will tend to overestimate the influence that mass communications have on the attitudes and behavior of others. More specifically, individuals who are members of an audience that is exposed to a persuasive communication (whether or not this communication is intended to be persuasive) will expect the communication to have a greater effect on others than on themselves.” While originally formulated as an explanation for how people convinced themselves “media bias” existed where none was present, the third-person-effect hypothesis has provided an explanation for other phenomena and forms of regulation, especially content censorship. Indeed, one of the most intriguing aspects about censorship efforts historically is that it is apparent that many censorship advocates desire regulation to protect others, not themselves, from what they perceive to be persuasive or harmful content. That is, many people imagine themselves immune from the supposedly ill effects of “objectionable” material, or even just persuasive communications or viewpoints they do not agree with, but they claim it will have a corrupting influence on others.
In his brilliant paper, Davison tells this wonderful story of one of the last censor boards in America (and think about that Kuwaiti censor as you read this):
The phenomenon of censorship offers what is perhaps the most interesting field for speculation about the role of the third-person effect. Insofar as faith and morals are concerned, at least, it is difficult to find a censor who will admit to having been adversely affected by the information whose dissemination is to be prohibited. Even the censor’s friends are usually safe from pollution. It is the general public that must be protected. Or else, it is youthful members of the general public, or those with impressionable minds. When Maryland’s State Board of Censors, which had been filtering smut from motion pictures since 1916, was finally allowed to die in June 1981, some of its members issued dire forecasts about the future morals of Maryland and the nation (New York Times, June 29, 1981). Yet the censors themselves had apparently emerged unscathed. One of them stated that over the course of 21 years she had “looked at more naked bodies than 50,000 doctors,” but the effect of this experience was apparently more on her diet than on her morals. “I had to stop eating a lot of food because of what they do with it in these movies,” she is quoted as having told the Maryland Legislature.
I just love that story because it gets to the heart of what is so horribly elitist and ironic about censorship: No one ever thought to test how corrupted the censors themselves had become because they consumed all the same stuff they were censoring! If there was anything to the “monkey see, monkey do” theory of media effects (i.e., if you read, see, or hear bad things, then you will do bad things), then these censors should all be dope-smoking, axe-wielding sex addicts. But I bet most of them weren’t. Like Ms. Dalal, they were probably generally well-adjusted members of society. They probably learned how to properly process all that content, even as they had zero faith in the ability of their fellow citizens to do the same.
So, if you ever get a chance to meet an actual censor, make sure to ask them about all the fun stuff they’ve been consuming lately and why it hasn’t turned them into total freaks or madmen!







May 9, 2012
More on Net Neutrality, the Importance of Business Model Experimentation & Pricing Flexibility
I wanted to follow up on Eli Dourado’s excellent previous post (“Real Talk on Net Neutrality”) to reiterate the importance of a few points he made and add some additional thoughts about the issues raised in that New York Times article on net neutrality and forced access regulation that lots of people are talking about today.
What Eli’s post makes clear is that there are those of us who think about Net neutrality and infrastructure regulation in economic terms (a rapidly shrinking group, unfortunately) and those who think about it in quasi-religious terms. The problem with the latter ideology of neutrality uber alles, however, is that at some point it must confront real-world economics. This is Eli’s core point: Something must pay the bills. In this case, something must cover the significant fixed costs associated with broadband investments if you hope to sustain those networks. Unless you are ready to make the plunge and suggest that the government should cover those costs through massive infrastructure expenditures and even potential nationalization or municipalization of broadband networks — and some clearly would be — then you have to get serious about how those costs will be covered by private operators.
Thus, we come back to the importance of business model experimentation and pricing flexibility to this debate. I have been harping on this point for a long time now, going all the way back to this 2005 essay, “The Real Net Neutrality Debate: Pricing Flexibility Versus Pricing Regulation.” And there’s a litany of other things I’ve penned on the same point, many of which I have cited at the end of this essay.
Here are the core points I have tried to get across in those earlier essays:
For progress to occur in any economic system, firms must be able to freely set prices for goods and services without fear of government price controls or micromanagement of business models. Heavy-handed tech mandates — especially Internet price controls — could have a profoundly deleterious impact on investment, innovation, and competition. After all, there can be no innovation or investment without a company first turning a profit.
The Net neutrality debate is about whether the government will allow broadband services to be differentiated or specialized for unique needs. Differentiated and prioritized services and pricing are part of almost every industrial sector in a capitalistic economy. (ex: airlines, package shipping, hotels, amusement parks, grades of gasoline, etc.) Why should it be any different for broadband? Indeed, it is essential that such flexibility be allowed precisely because it is the key to making sure more populations get served with more diversified offerings. Of course, advocates of neutrality uber alles think this is heresy, even if it is based on sound and widely-accepted economics. They just figure you can ban all sorts of business practices without it having any consequences.
But, again, there is no such thing as a free lunch. Something has to pay for ongoing Internet investment. It doesn’t just fall like manna from heaven. Differentiated business services and pricing can help in this regard by allowing carriers to price more intensive or specialized users and uses to ensure that carriers don’t have to hit everyone – including average household users – with the same bill for service. Why should the government make that illegal through Net neutrality regulation?
Net neutrality can have, and already has had, unintended consequences. Consider bandwidth caps, which critics paint as some sort of nefarious, anti-consumer plot. In reality, they are just a tool to manage capacity; a tool that has been necessitated by Net neutrality regulation. When the law says you are not allowed to differentiate or specialize service offerings, you have to find other ways to manage capacity and make sure you can recoup fixed costs. In a world without the omnipresent threat of Net neutrality regulation, things might have played out quite differently. Broadband providers might have found creative ways to have other downstream providers help defray the costs of specialized services so that consumers weren’t stuck picking up the entire bill or being forced to deal with caps. For example, video game developers like Electronic Arts and Activision might be willing to help subsidize the costs associated with online gaming by picking up that expense and then amortizing it over a diverse universe of online gamers. Similarly, some content companies or video services could help cross-subsidize new online video ventures to ensure those costs do not have to be spread across all customers but instead only those who most demand those services. Again, this is the alternate universe that might have played out if not for the hyperventilating of vociferous regulatory advocates who worship at the altar of perfect “neutrality” in all things. To reiterate, this is not the way any other sector of our capitalist economy works. Service differentiation and price discrimination are not some sort of bizarre anomaly; they are the norm.
When it comes to industrial organization questions, infrastructure socialism simply isn’t a sustainable long-term alternative. Sharing is not competing. We’ve tried line-sharing and forced access regimes before and they didn’t end well. Creating networks built on paper is a dangerous endeavor. In the short-term, you can milk existing infrastructures for every drop of value they have left, but eventually the bills will come due and something must pay for sustained investment and upgrades. Facilities-based competition, not infrastructure sharing, is the path forward if we want truly robust and sustainable networks and markets.
Where will this debate turn next? As we saw in today’s New York Times piece, the regulatory proponents are turning up the heat and asking for more day-to-day Net neutrality controls, making it increasingly difficult for differentiated service offerings to develop. That leaves broadband providers in the unenviable position of telling their customers that they’ll either have to live with caps or some variant of metered pricing. But bandwidth caps are increasingly controversial and, quite honestly, completely unnecessary if the carriers are at liberty to freely price their offerings to account for traffic.
Thus, I’d be willing to bet that we’ll see more broadband providers gradually phase in metered or two-part pricing schemes. Pure metering is a harder sell since many consumers resent it and it also remains unclear how easy it is to meter bits and communicate usage patterns to consumers. This leaves two-part pricing and tiered pricing. Two-part pricing would involve a flat fee for service up to a certain level and then a per-unit / metered fee over that level. I don’t know where the demarcation should be in terms of where the flat rate ends and the metering begins; that’s for market experimentation to sort out. But the clear advantage of this solution is that it preserves flat-rate, all-you-can-eat pricing for casual to moderate bandwidth users and only resorts to less popular metered pricing when the usage is “excessive,” however that is defined. Or you can just go with tiers of service like wireless operators already have. Of course, if you have enough graduated tiers of service, it very quickly starts to resemble a metering scheme.
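A two-part tariff of the sort described above is simple to express in a few lines of code. The flat fee, usage cap, and overage rate below are invented purely for illustration; no carrier’s actual plan is being described:

```python
def monthly_bill(gb_used, flat_fee=50.0, included_gb=250, per_gb_over=1.50):
    """Two-part pricing sketch: a flat fee covers all usage up to a cap,
    and usage beyond the cap is metered per gigabyte.
    All numbers here are hypothetical illustrations, not real prices."""
    overage_gb = max(0, gb_used - included_gb)
    return flat_fee + overage_gb * per_gb_over

# A casual user stays on the all-you-can-eat flat rate;
# only the heavy user triggers the metered component.
print(monthly_bill(100))   # 50.0  -- under the cap, pure flat rate
print(monthly_bill(300))   # 125.0 -- 50 GB over the cap at $1.50/GB
```

Where the cap sits and how steep the overage rate is are exactly the parameters that market experimentation would have to sort out.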
In the end, there’s just no way of escaping basic economics. If the law doesn’t allow service providers to use creative schemes to more efficiently allocate fixed costs, the end user will have to pick up the full cost of service. The only interesting question left is whether Net neutrality regulation will make that illegal too.
Additional Reading:
Netflix Falls Prey to Marginal Cost Fallacy & Pleads for a Broadband Free Ride (July 8, 2011)
Smartphones & Usage-Based Pricing: Are Price Controls Coming? (July 12, 2011)
Why Congestion Pricing for the iPhone & Broadband Makes Sense (October 7, 2009)
The (Un)Free Press Calls for Internet Price Controls: “The Broadband Internet Fairness Act” (June 17, 2009)
Free Press Hypocrisy over Metering & Internet Price Controls (June 18, 2009)
Bandwidth Cap Hysteria & the Alternative (October 4, 2008)
Once Again, Why Not Meter Broadband Pipes? (September 7, 2007)
Why Not Meter? (March 12, 2007)
The Real Net Neutrality Debate: Pricing Flexibility Versus Pricing Regulation (October 27, 2005)







Real Talk on Net Neutrality
A lot of people are talking about this New York Times article on net neutrality, which highlights the effect on Netflix of Comcast launching its own video platform on the Xbox that is exempt from Comcast’s bandwidth limitations. While this policy may indeed result in more customers for Comcast’s video services and fewer for Netflix’s in the short run, I don’t think that critics are seriously thinking through the economics of Internet service before they speak.
Running a large ISP is a business of high fixed costs. When you introduce large fixed costs, a lot of consumers’ ordinary economic intuition becomes worse than useless. If Comcast incurs a lot of fixed costs from building a network, someone has to pay for it. Suppose that the fixed cost is currently divided between TV subscription and advertising revenue and Internet service revenue. If Comcast’s TV revenues collapse because everyone is switching to Netflix, where will Comcast get the revenue to pay its high fixed costs? You guessed it: they will have to raise the price of Internet service.
To give a dramatically oversimplified example, suppose that TV service and Internet service each cost $50/month and Comcast has $90/customer/month in fixed costs and $10/customer/month in TV content licensing costs. If all of Comcast’s customers drop TV service and switch to Netflix, which costs $8/month, Comcast loses its $10/month licensing expense but it still has $90/month in fixed costs for maintaining its network. It will have to raise the price of its Internet service to $90/month to recover those costs. Consumers will now pay $90/month to Comcast for Internet service and $8/month to Netflix for TV service, for a total of $98/month, which is $2 less than they were paying before.
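The arithmetic in that oversimplified example can be checked in a few lines. Every figure below is Eli’s illustrative number, not a real Comcast or Netflix price:

```python
# Eli's illustrative figures (per customer per month) -- not real prices.
fixed_cost = 90    # network upkeep Comcast must recover no matter what
licensing = 10     # TV content licensing, avoided if TV is dropped
netflix = 8        # Netflix subscription

# Before: the customer buys TV ($50) and Internet ($50) from Comcast.
before_total = 50 + 50

# After: everyone drops TV for Netflix. Comcast sheds the licensing cost,
# but it must still recover the full $90 fixed cost from Internet alone.
internet_after = fixed_cost
after_total = internet_after + netflix

print(before_total, after_total)    # 100 98
print(before_total - after_total)   # 2 -- consumers save only $2
```

The point of the exercise: cutting the cord doesn’t make the fixed cost disappear; it just gets reloaded onto the remaining service.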
However, Comcast’s “non-neutral” Xbox service could improve on this for some customers, assuming that customers are heterogeneous. Suppose that critics’ worst fear comes true and I am the only Comcast customer to switch from Comcast video to Netflix. Then Comcast’s pricing does not have to change, I pay $58/month, and other customers continue paying $100/month, just as they were before. This pricing policy is great for me, the most elastic customer. If you are a Netflix subscriber, therefore, you benefit from Comcast’s non-neutral Xbox service.
But what about the inelastic customers? They have to pay more. However, it is economically efficient—and this can be proven rigorously—for the less elastic customers to pay a higher share of the fixed cost. Given that we’re going to have a network with a large fixed cost, the question we should be asking is, “What is the most efficient way of paying that fixed cost?” And the answer is, in many cases, in a non-neutral way.
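The “can be proven rigorously” claim is a reference to Ramsey pricing, under which fixed costs are recovered from customer groups in inverse proportion to their demand elasticity. A stylized sketch of that allocation rule follows; the elasticity figures are invented for illustration, and this is a toy version of the idea, not a full Ramsey model:

```python
def inverse_elasticity_shares(elasticities):
    """Split a common fixed cost across customer groups in inverse
    proportion to each group's demand elasticity: the least
    price-sensitive groups bear the largest share. A stylized
    illustration of the Ramsey-pricing intuition, not a full model."""
    weights = {group: 1.0 / e for group, e in elasticities.items()}
    total = sum(weights.values())
    return {group: w / total for group, w in weights.items()}

# Hypothetical elasticities: inelastic households that keep the bundle
# no matter what, vs. elastic cord-cutters ready to defect to Netflix.
shares = inverse_elasticity_shares({"inelastic": 0.5, "elastic": 2.0})
print(shares)  # the inelastic group carries 80% of the fixed cost
```

This is precisely why, in Eli’s telling, the most elastic customer (the Netflix defector) ends up paying the smallest share, and why an efficient network is non-neutral to some extent.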
The bottom line is that there is a lot of wishful thinking when it comes to net neutrality. In many respects, it reminds me of the simpleton’s dream of à la carte cable, as if pricing of $0.50/channel in a bundle of 100 channels can be extended to customers buying only 5 channels. Fools! You must pay the fixed cost somehow. And the best, most efficient way of splitting up this fixed cost is not equally, and certainly not at taxpayer expense, which is completely unfair to taxpayers who do not value the service, but inversely with demand elasticity. This means the network should always be non-neutral to some extent, balanced of course against our willingness to pay more as consumers for a neutral Internet.







May 8, 2012
Jim Harper & Ryan Radia on cybersecurity legislation
On the podcast this week, Jim Harper, director of information policy studies at the Cato Institute, and Ryan Radia, associate director of technology studies at the Competitive Enterprise Institute, discuss Congress’s recent interest in cybersecurity. They begin by discussing why Congress wants to legislate cybersecurity and the potential threats that have Congress frightened. They then turn to the types of bills before Congress, which include information-sharing provisions that would promote cybersecurity intelligence but may have privacy implications, as well as mandates for a security infrastructure. The discussion then turns to the role of government in cybersecurity and whether the protection of online information and assets should be left to markets. The episode ends with Harper and Radia predicting the fate of the proposed bills.
Related Links
“Cybersecurity Bills? No, Thanks”, cato@liberty
“Government Bureaucrats Can’t Prevent Data Breaches”, CEI.org
“Cyberwar Is the New Yellowcake”, Wired
“Cybersecurity bill passes, Obama threatens veto”, CNN Money
To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







Surveillance Cuts Both Ways: How New Technology Helps Keep the Cops in Check
This seems like a logical follow-up to Berin Szoka’s previous post about technology, social activism, and government power. ReasonTV has produced this important short clip on “Cops Vs. Cameras: The Killing of Kelly Thomas & The Power of New Media.” It documents how the combined power of citizen journalism, social media, and surveillance video can ensure that our police authorities are held accountable for their actions. In this particular case, it can hopefully win some justice for Kelly Thomas, the homeless Fullerton, California man who was brutally beaten to death by police officers on the night of July 5, 2011.
There is live video from the horrific beating here, but I caution you it is not for the faint of heart. Watching the last moments of a man’s life slip away from repeated blows to the head while he begs for his life and calls out for his father is, well, stomach-turning. But imagine if this video and the other citizen videos that were taken that night had not existed. As the ReasonTV clip notes, the Fullerton police department basically ignored requests for more information about the case until Kelly’s father (who was a former police officer himself) took cell phone photos of his son’s beaten face in the hospital and released them to the public. Then the citizen videos of the beating were posted on YouTube and went viral. And then, finally, mainstream media started paying attention. And now the surveillance video from a nearby street camera has been released after citizens and activists demanded it.
While we spend a lot of time today worrying about the privacy implications of new technologies, especially surveillance technologies, episodes like these make it clear that there are also powerful benefits from these new surveillance tools. David Brin first pointed this out in his provocative 1997 book, The Transparent Society, in which he noted:
While new surveillance and data technologies pose vexing challenges, we may be wise to pause and recall what worked for us so far. Reciprocal accountability — a widely shared power to shine light, even on the mighty — is the unsung marvel of our age, empowering even eccentrics and minorities to enforce their own freedom. Shall we scrap civilization’s best tool – light — in favor of a fad of secrecy?
Of course, that doesn’t mean we shouldn’t take steps to limit the surveillance powers of our government over the citizenry. We absolutely must. But we must draw a distinction between the tools and their uses and make sure we do not go overboard with what Brin called the “fad of secrecy” such that new privacy rules limit the use and spread of these technologies.
For far too long governments have avoided accountability for their actions because of a lack of transparency. Nowhere has this been more dismaying than in matters of policing. While our law enforcement officers deserve respect for the hard job they do keeping the public safe, they also must account for their actions when they go too far precisely because we grant them coercive powers held by no other group in society. Luckily, new technologies can help us keep their power in check and hold them accountable. While some authorities are fighting back and trying to limit citizen efforts to record them and hold them accountable, the genie is already well out of the bottle. These surveillance tools are not going away and law enforcement authorities will now be forced to live under the gaze of an empowered citizenry. Hopefully that increases transparency and accountability in all policing activities going forward. Read Brin’s short 2011 essay “Sousveillance: A New Era for Police Accountability” for greater elaboration.







May 7, 2012
Toward a Greater Understanding of Internet Activism through Public Choice, Economics
In the lead essay for the “Cato Unbound” symposium this month, I analyze recent political movements that have been aided by Internet-based communication by positing a set of questions:
Activists played important roles in bringing down dictators in the Arab world, stopping the Stop Online Piracy Act (SOPA) in Congress and electing Barack Obama—just to name a few examples. But how much did the Internet matter in making these watershed events possible? How effective is it likely to be in the future? And how would we measure whether activism “works” for society—not just the activists?
I respond to the concerns raised by Evgeny Morozov in his iconoclastic 2010 book, The Net Delusion: The Dark Side of Internet Freedom (summarized in his short essay in TechFreedom’s free ebook The Next Digital Decade: Essays on the Future of the Internet). In general, I suggest that we simply do not yet understand the Internet’s effect on activism well enough to make strong normative judgments about it. But applying Public Choice theory can help us understand how developments in communication technologies are changing the relationship between an individual and the group in social movements. A few highlights:
Social media lower organizational costs, especially of recruiting members, but also noticeability: “members’ ability to notice each other’s actions.” Even in 2003, there was little way to tell whether your friends actually followed through when you asked them to help join a cause. But today, it’s easy to encourage them to re-share material on Facebook or Twitter—and to “notice” whether they’ve done so.
Social media allow members of large groups—think Twitter followers—to be continuously bombarded with propaganda about the worthiness of the cause, creating social pressures not entirely unlike those that can be generated in a face-to-face group.
The Internet empowers large, dispersed groups (like dedicated Internet users) to organize against small but concentrated interests. As anyone who works in technology policy in Washington can attest, SOPA’s implosion made Congress more cautious—at least about Internet regulation, where fear of a digital activist backlash is greatest.
Ultimately, the Internet does make coordination easier among like-minded people to provide reputational feedback about corporations and governments. However, we must still be vigilant—governments can and do manipulate the Internet in overt and covert ways to stifle their populations.
Activism works largely by imposing reputational costs on its targets. Online reputation markets deliver information much faster and more cheaply than ever before.
I conclude by saying: “The Internet may not necessarily make the world a better place in every way, but the more we understand how it changes our relationships with each other, the better equipped we will be to steer its evolution in more humane directions.”
In the coming days, Jason Benlevi, Rebecca MacKinnon and John O. McGinnis will all respond, leading to a spirited debate on the topic of Internet activism and to what degree technology really does enhance freedom.






