Adam Thierer's Blog
July 19, 2011
Hal Singer on wireless competition
On the podcast this week, Hal Singer, managing director at Navigant Economics and adjunct professor at Georgetown University's McDonough School of Business, discusses his new paper on wireless competition, co-authored by Gerald Faulhaber of the University of Pennsylvania and Bob Hahn of Oxford. The FCC produces a yearly report on the competitive landscape of the wireless market, which serves as an overview for policymakers and analysts. The report found the wireless market competitive in years past; in the last two years, however, the FCC has been less willing to interpret the market as competitive. According to Singer, the FCC is relying on indirect evidence, which looks at how concentrated the market is, rather than direct evidence, which looks at falling prices, to make its assessment. Singer argues that by failing to examine the direct evidence, the report reaches an erroneous conclusion about the real state of competition in wireless markets.
Related Links
Assessing Competition in U.S. Wireless Markets: Review of the FCC's Competition Reports, by Singer et al.
"FCC report dodges answers on wireless industry competition", Washington Post
"FCC Mobile Competition Report Is One Green Light for AT&T/T-Mobile Deal", Technology Liberation Front
To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







July 15, 2011
TechFreedom Event on 7/19 – Sorrell: The Supreme Court Confronts Free Speech, Marketing & Privacy
The Supreme Court's 6-3 decision in Sorrell v. IMS Health has been heralded as a major victory for commercial free speech rights and has raised serious questions about how to reconcile privacy regulations with the First Amendment. The Court struck down a Vermont law requiring that doctors opt in before drug companies could use data about their prescription patterns to market (generally name-brand) drugs to them. But what does the Court's decision really mean for the regulation of advertising, marketing, and data flows across the economy? Has free speech doctrine fundamentally changed? Will existing privacy laws be subject to new legal challenges? How might the decision affect the ongoing debate about privacy regulation in Congress and at the FTC?
These are some of the questions that will be addressed by leading thinkers on First Amendment law and privacy at an event hosted by TechFreedom, a new digital policy think tank, and the law firm of Hunton & Williams LLP. The event will take place on Tuesday, July 19 from 12 to 3 p.m. at Hunton & Williams's newly opened offices at 2200 Pennsylvania Ave NW, Washington DC. Complimentary lunch will be served.
The event will include two panels:
Panel 1: Sorrell: Towards Greater Commercial Free Speech Protections?
Moderator: Greg Stohr, Bloomberg
Tom Julin, Hunton & Williams LLP
Bob Corn-Revere, Davis Wright Tremaine LLP
Greg Beck, Public Citizen
Richard Ovelmen, Jorden Burt LLP
Prof. David Orentlicher, Indiana University School of Law
Panel 2: Privacy after Sorrell: Reconciling Data Restrictions & the First Amendment
Moderator: Jim Harper, Cato Institute
John Verdi, Electronic Privacy Information Center
Jonathan Emord, Emord & Associates P.C.
John Morris, Center for Democracy & Technology
Berin Szoka, TechFreedom
TechFreedom filed an amicus curiae brief with the Supreme Court in this case (our media statement), led by Richard Ovelmen, and previously joined with other free speech groups in an amicus brief before the Second Circuit.
To Register: Space is limited. To guarantee a seat, register online.







July 14, 2011
Copyright Erodes Property℠
Copyrights and patents differ from tangible property in fundamental ways. Economically speaking, copyrights and patents are not rivalrous in consumption; whereas all the world can sing the same beautiful song, for instance, only one person can swallow a cool gulp of iced tea. Legally speaking, copyrights and patents exist only thanks to the express terms of the U.S. Constitution and various statutory enactments. In contrast, we enjoy tangible property thanks to common law, customary practices, and nature itself. Even birds recognize property rights in nests. They do not, however, copyright their songs.
Those represent but some of the reasons I have argued that we should call copyright an intellectual privilege, reserving property for things that deserve the label. Another, related reason: Calling copyright property risks eroding that valuable service mark.
Property as a service mark, like FedEx or Hooters? Yes. Thanks to long use, property has come to represent a distinct set of legal relations, including hard and fast rules relating to exclusion, use, alienation, and so forth. Copyright embodies those characteristics imperfectly, if at all. To call it intellectual property risks confusing consumers of legal services—citizens, attorneys, academics, judges, and lawmakers—about the nature of copyright. Worse yet, it confuses them about the nature of property. The property service mark suffers not merely dilution from copyright's infringing use, but tarnishment, too.
As proof of how copyright threatens to erode property, consider Ben Depoorter, Fair Trespass, 111 Colum. L. Rev. 1090 (2011). From the abstract:
Trespass law is commonly presented as a relatively straightforward doctrine that protects landowners against intrusions by opportunistic trespassers. . . . This Essay . . . develops a new doctrinal framework for determining the limits of a property owner's right to exclude. Adopting the doctrine of fair use from copyright law, the Essay introduces the concept of "fair trespass" to property law doctrine. When deciding trespass disputes, courts should evaluate the following factors: (1) the nature and character of the trespass; (2) the nature of the protected property; (3) the amount and substantiality of the trespass; and (4) the impact of the trespass on the owner's property interest. . . . [T]his novel doctrine more carefully weighs the interests of society in access against the interests of property owners in exclusion.
Although I do not agree with every aspect of Prof. Depoorter's doctrinal analysis, he correctly observes that trespass law includes some fuzzy bits. Nor do I complain about his overall form of argument. It is not a tack I would take, but it was near-inevitable that some legal scholar would eventually argue back from copyright to claim that real property, too, should fall prey to a multi-factor, fact-intensive "fair use" defense. I merely take this opportunity to remind fellow friends of liberty that they can expect more of the same—and more erosion of the property service mark—if they fail to recognize copyrights and patents as no more than intellectual privileges.
[Crossposted at Agoraphilia, Technology Liberation Front, and Intellectual Privilege.]







July 12, 2011
FCC Mobile Competition Report Is One Green Light for AT&T/T-Mobile Deal
By Larry Downes & Geoffrey A. Manne
Published in BNA's Daily Report for Executives
The FCC published in June its annual report on the state of competition in the mobile services marketplace. Under ordinary circumstances, this 300-plus page tome would sit quietly on the shelf, since, like last year's report, it ''makes no formal finding as to whether there is, or is not, effective competition in the industry.''
But these are not ordinary circumstances. Thanks to innovations including new smartphones and tablet computers, application (app) stores and the mania for games such as ''Angry Birds,'' the mobile industry is perhaps the only sector of the economy where consumer demand is growing explosively.
Meanwhile, the pending merger between AT&T and T-Mobile USA, valued at more than $39 billion, has the potential to accelerate development of the mobile ecosystem. All eyes, including many in Congress, are on the FCC and the Department of Justice. Their review of the deal could take the rest of the year. So the FCC's refusal to make a definitive finding on the competitive state of the industry has left analysts poring through the report, reading the tea leaves for clues as to how the FCC will evaluate the proposed merger.
Make no mistake: this is some seriously expensive tea. If the deal is rejected, AT&T is reported to have agreed to pay T-Mobile $3 billion in cash for its troubles. Some competitors, notably Sprint, have declared full-scale war, marshaling an army of interest groups and friendly journalists.
But the deal makes good economic sense for consumers. Most important, T-Mobile's spectrum assets will allow AT&T to roll out a second national 4G LTE (long-term evolution) network to compete with Verizon's, and to expand service to rural customers. (Currently, only 38 percent of rural customers have three or more choices for mobile broadband.)
More to the point, the government has no legal basis for turning down the deal based on its antitrust review. Under the law, the FCC must approve AT&T's bid to buy T-Mobile USA unless the agency can prove the transaction is not ''in the public interest.'' While the FCC's public interest standard is famously undefined, the agency typically balances the benefits of the deal against potential harm to consumers. If the benefits outweigh the harms, the Commission must approve.
The benefits are there, and the harms are few. Though the FCC refuses to acknowledge it explicitly, the report's impressive detail amply supports what everyone already knows: falling prices, improved quality, dynamic competition and unflagging innovation have led to a golden age of mobile services. Indeed, the three main themes of the report all support AT&T's contention that competition will thrive and the public's interests will be well served by combining with T-Mobile.
1. Mobile Service: Rare Bright Spot in Recession
Demand for mobile services is soaring. The FCC reports 274 million mobile subscribers in 2009, up almost 5 percent from the previous year. The number of mobile internet subscribers, the fastest-growing category, doubled between 2008 and 2009. By late 2010, 41 percent of new mobile phone purchases were for smartphones. More than 9 billion apps had been downloaded by the end of 2010.
Despite poor economic conditions elsewhere, new infrastructure investment continues at a frenzied clip. Between 1999 and 2009, industrywide investment exceeded $213 billion. In 2009 alone, investments topped $20 billion—almost 15 percent of total industry revenue. Of the leading providers, only Sprint decreased its investments in recent years.
Yet unlike virtually every other commodity, prices for mobile services continue to decline across the board, hardly a sign of flagging competition. The price of mobile voice services, the FCC reports, has ''declined dramatically over the past 17 years,'' falling 9 percent from 2008-2009 alone. (The average price for a voice minute is now 4 cents in the U.S., compared with 16 cents in Western Europe.) Text prices fell 25 percent in 2009. The price per megabyte of data traffic fell sevenfold from 2008-2010, from $1.21 to 17 cents.
2. Mobile Competition Is Robust and Dynamic
The FCC, recognizing the dynamism of the mobile services industry, is moving away from simplistic tools the agency once used to evaluate industry competitiveness. The report repeatedly de-emphasizes the Herfindahl-Hirschman Index, or HHI concentration index, which tends to understate competition. The report also downplays the value of ''spectrum screens'' that once limited a single provider to one-third of the total spectrum in a given market.
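To see why a concentration index alone can mislead, here is a minimal sketch of the HHI calculation the report de-emphasizes. The market shares below are hypothetical, chosen only for illustration; they are not figures from the FCC report:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
# Shares used here are hypothetical, for illustration only.

def hhi(shares_pct):
    """Return the HHI for a list of market shares expressed in percent."""
    assert abs(sum(shares_pct) - 100) < 1e-6, "shares should sum to 100%"
    return sum(s * s for s in shares_pct)

# A four-firm market split 30/30/20/20 scores 2600 -- "highly concentrated"
# under the DOJ/FTC merger guidelines, yet the number says nothing about
# prices, quality, or innovation in that market.
print(hhi([30, 30, 20, 20]))
```

The point of the sketch is simply that the index is a static head count of firms weighted by size; falling prices and rapid entry, which the report documents directly, never enter the formula.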
Now, the commission says, its evaluation is based on real-world conditions, and looks at competition mostly at the local level. That makes sense. ''Consumers generally search for service providers in the local areas where they live, work, and travel,'' according to the report, ''and are unlikely to search for providers that do not serve their local areas.''
Looking at all 172 local markets individually, the FCC found ample evidence of vibrant competition. For mobile voice services, for example, nearly 90 percent of consumers have a choice of five or more providers. In 2010, almost 68 percent of U.S. consumers had four or more mobile broadband providers to choose from, a significant increase over 2009.
Competition between different kinds of wireless service (cellular, PCS, WiFi, and WiMax) is also increasing, and a wider range of the radio spectrum is now being included in the FCC's analysis. Competition between mobile and traditional wireline service is growing in significance. More and more consumers are even ''cutting the cord:'' By the beginning of 2010, 25 percent of all households had no wireline service, up from 2 percent in 2003.
And competition within the mobile services marketplace, the Commission recognizes, is increasingly being driven not by the carriers but by new devices, applications and services. From 2008-2009, the FCC found that 38 percent of those who had switched carriers did so because it was the only way to obtain the particular handset that they wanted.
There are dozens of handsets to choose from, and no dominant provider among smartphone operating systems or device manufacturers. New entrants can and do thrive: handsets running Google's Android operating system rose from 5 percent of the total market at the end of 2009 to almost 20 percent by mid-2010.
3. If There Is a Problem, It Is Government
As consumers continue to embrace new mobile technologies and services, pressure is building on existing networks and the limited radio spectrum available to them. The risk of future network overload is serious—the one dark cloud hanging over the mobile industry's abundant sunshine. According to the report, ''mobile broadband growth is likely to outpace the ability of technology and network improvements to keep up by an estimated factor of three.''
The FCC sees a ''spectrum deficit'' of 300 megahertz within five years. But the FCC and Congress have made little progress over the last two years to free up underutilized spectrum in both public and private hands. Auctions for available spectrum in the valuable 700 Mhz. band are tied up in political fights over a public safety network. Spectrum held by over-the-air television broadcasters is idling as Congress debates ''incentive'' auctions that would share proceeds between the broadcasters and the government.
Improving coverage by modifying or adding cell towers, the commission finds, is subject to considerable delay at the local level. Of 3,300 zoning applications for wireless facilities pending in 2009, nearly 25 percent had been idling for more than a year. Some had been languishing for more than three years, despite an FCC requirement that applications be decided within 150 days at the most.
Combining the spectrum assets of AT&T and T-Mobile would go a long way toward limiting the potentially catastrophic effect of ''spectrum deficit.'' AT&T plans to move T-Mobile 3G customers to its existing network and integrate T-Mobile's existing physical infrastructure, improving 3G service and freeing up valuable spectrum to launch a new nationwide 4G LTE network. As the report notes, T-Mobile had no plans to ever launch true 4G service and, given its limited spectrum holdings, probably never could.
As part of its public interest analysis, the FCC will have to take these and other regulatory constraints to heart.
To Reality . . . and Beyond!
Reading the entire report, it's clear that the FCC recognizes, as it must, that, even with the exit of T-Mobile from the U.S. market, mobile services would be anything but a ''duopoly''—either at the national level or at the local level, which is where it counts.
Competition is being driven by multiple local competitors, competing technologies, and handset and software providers. Federal, state and local governments all play an active role in overseeing the industry, which even the FCC now sees as the only serious constraint on future growth.
In Silicon Valley, if not inside the Beltway, consumers are understood to be the real drivers of the mobile services ecosystem—the true market-makers. Maybe that's why the report found that the vast majority of U.S. consumers report being ''very satisfied'' with their mobile service.
It is a relief to see the FCC looking carefully at real data and coming to realistic conclusions, as it does throughout the report. Let's hope reality continues its reign during the long AT&T/T-Mobile review and beyond, as this dynamic industry continues to evolve.
Reproduced with permission from Daily Report for Executives, July 11, 2011. Copyright 2011 The Bureau of National Affairs, Inc. (800-372-1033) www.bna.com.







Tim Harford on adapting and prospering in a complex world
On the podcast this week, Tim Harford, economist and senior columnist for the Financial Times, discusses his new book, Adapt: Why Success Starts With Failure. He argues that people and organizations have a poor record of getting things right the first time; the evolutionary process of trial and error is therefore a difficult yet necessary way to solve problems in our complex world. Harford emphasizes the importance of embracing failure in a society focused on perfection. According to Harford, one can put this process into practice by trying different things in small doses and developing the ability to distinguish successes from failures while experimenting. A design with failure in mind, he says, is a design capable of adaptation.
Related Links
Adapt: Why Success Starts With Failure
"Tim Harford on failure", Washington Post
"No, statistics are not silly, but their users . . .", By Harford
To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







Smartphones & Usage-Based Pricing: Are Price Controls Coming?
Two data points in the news over the past 24 hours to consider:
A new report on "Smartphone Adoption & Usage" by the Pew Internet Project finds that "one third of American adults – 35% – own smartphones" and that of that group "some 87% of smartphone owners access the Internet or email on their handheld" and "25% of smartphone owners say that they mostly go online using their phone, rather than with a computer."
According to the Wall Street Journal, the "Average iPhone Owner Will Download 83 Apps This Year." That's up from an average of 51 apps downloaded in 2010. (At first I was astonished when I read that, but then realized that I've probably downloaded an equal number of apps myself, albeit on an Android-based device.)
As I explain in my latest Forbes column, facts like these help us understand "How iPhones And Androids Ushered In A Smartphone Pricing Revolution." That is, major wireless carriers are in the process of migrating from flat-rate, "all-you-can-eat" wireless data plans to usage-based plans. The reason is simple economics: data demand is exploding faster than data supply can keep up.
"It's been four years since the introduction of the iPhone and rival devices that run Google's Android software," notes Cecilia Kang of The Washington Post. "In that time, the devices have turned much of America into an always-on, Internet-on-the-go society." Indeed, but it's not just the iPhone and Android smartphones. It's all those tablets that have just come online over the past year, too. We are witnessing a tectonic shift in how humans consume media and information, and we are witnessing this revolution unfold over a very short time frame.
Unsurprisingly, therefore, "unlimited" wireless data plans are probably on the way out since, as I observe in my Forbes piece:
That model created unsustainable network traffic burdens and it's surprising unlimited plans have lasted this long. With smartphone users increasingly using their mobile devices to access the Internet and consume more cloud-based services and mobile video than ever, the "all you can eat" data buffet eventually had to end.
But critics are far too quick to suggest this is some sort of nefarious, anti-consumer conspiracy. In reality, I argue:
Tiered and metered pricing schemes are a sensible way to price demand for bandwidth-intensive users and applications and, in the process, alleviate network congestion, encourage new investment, and ensure that average costs for consumers are more reasonable over time.
Using data provided by Nielsen, I document the dramatic traffic growth that carriers are struggling to handle, but I also show that most consumers will do better under the new tiered plans. That's because, even with a significant uptick in wireless data demand, the vast majority of users will not exceed the lowest tier of service (2 GB), which carriers are pricing at $20-$30. That's less than most of them pay today. Thus:
It's only the most rapacious mobile data consumers who'll pay the higher tier prices. Doesn't it make more sense that the most intensive network users pay more instead of raising average costs for all consumers? Why should minimal data users subsidize the big eaters?
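The arithmetic behind that claim can be sketched in a few lines. The plan prices below are assumptions for illustration only, not actual carrier rates; the point is simply that light users come out ahead under tiers while heavy users pay for what they consume:

```python
# Hypothetical comparison of flat-rate vs. tiered wireless data pricing.
# All prices are illustrative assumptions, not actual carrier rates.

FLAT_MONTHLY = 35.00                  # assumed legacy "unlimited" plan price
TIER_PRICE, TIER_CAP_GB = 25.00, 2.0  # assumed lowest usage tier: $25 for 2 GB
OVERAGE_PER_GB = 10.00                # assumed per-GB overage charge

def tiered_bill(usage_gb):
    """Monthly bill under the hypothetical tiered plan."""
    overage_gb = max(0.0, usage_gb - TIER_CAP_GB)
    return TIER_PRICE + OVERAGE_PER_GB * overage_gb

# Light users pay less than the flat rate; only heavy users pay more.
for usage in (0.5, 1.5, 2.0, 4.0):
    print(f"{usage:>4} GB/month: tiered ${tiered_bill(usage):.2f} vs flat ${FLAT_MONTHLY:.2f}")
```

Under these assumed numbers, anyone staying inside the 2 GB tier saves money relative to the flat plan, and the congestion cost of the heaviest users falls on them rather than being averaged across everyone.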
Instead of repeating it all here, I'd just encourage you to bounce over to Forbes to read my entire essay.
The interesting policy question raised by all this is whether critics and policymakers will give network operators the freedom to innovate and employ creative business models so market experimentation can determine which pricing schemes will best calibrate supply and demand while also ensuring optimal network investment. You may recall that usage-based pricing has already become a flashpoint in the Net neutrality wars, and just last Friday I wrote about Netflix's shameless attempt to get the feds to regulate usage-based pricing on the wireline front.
So, stay tuned. This fight could really heat up. Perhaps it's time to dust off the old books and papers about how to fight off government price controls!
Related Reading:
Netflix Falls Prey to Marginal Cost Fallacy & Pleads for a Broadband Free Ride (July 8, 2011)
Why Congestion Pricing for the iPhone & Broadband Makes Sense (October 7, 2009)
The (Un)Free Press Calls for Internet Price Controls: "The Broadband Internet Fairness Act" (June 17, 2009)
Free Press Hypocrisy over Metering & Internet Price Controls (June 18, 2009)
Once Again, Why Not Meter Broadband Pipes? (September 7, 2007)
Why Not Meter? (March 12, 2007)
The Real Net Neutrality Debate: Pricing Flexibility Versus Pricing Regulation (October 27, 2005)







July 9, 2011
A Response to Andrew McLaughlin on Net Neutrality & "Freedom"
________
Andrew… I'm happy, as always, to engage in friendly debate with you about this, although I suspect from the tone of some of the others here that nothing I will say will convince them that opposition to Net neutrality regulation can be based on anything other than pure corporate whoring!
I'm always mystified by the highly selective nature of this rhetorical device when employed by some on the Left against libertarians. After all, as Tim Lee already alluded to in his comments above, we never seem to hear our Lefty friends trot out those arguments when they agree with us. For example, Berin Szoka and I filed an amicus brief in the Supreme Court last year in the BROWN v. EMA video game case along with Lee Tien and Cindy Cohn of EFF. Why is it that I did not hear one peep from any Lefties about my obvious corporate whoring in that matter! I mean, clearly, there's no possible way that a libertarian could support First Amendment rights. I must have just been in it for video game industry money, right?
OK, I'm being snarky here. And I know this is not your position because I've known you a long time and know that you do not adopt such tactics even when we do, on occasion, disagree heatedly over a major policy issue. But, even if I am wasting my breath, let me just say this to others: We libertarians in the academic and think tank world aren't exactly living "Lifestyles of the Rich and Famous." If we were all just in it for the money, then I can tell you that we are doing a tremendously shitty job at it! (In fact, most libertarian think tanks or organizations have only something like 5 to 10% corporate funding. The organization I work for has even less.) Seriously, folks: we libertarians believe in our ideas and fight for them with the same passion that you fight for yours, out of a heartfelt belief in the inherent rightness of our core principles.
So, returning to Net neutrality regulation, I would hope that folks on the Left could entertain the possibility that libertarians have serious concerns about the wisdom of inviting government to establish a new regulatory regime for the Internet. If others can be open-minded enough to entertain that possibility, then I hope they will take seriously the three prongs of libertarian opposition to Net neutrality regulation. I suspect the first and second will be somewhat more compelling (or at least plausible) to the Left than the third.
1) First, government simply does not have a very good historical track record regulating network industries.
I view Net neutrality regulation as a combination of common carriage regulation and "public interest" regulation. We have roughly 100 years' worth of experience with these regimes in practice in various industries. And when we evaluate the success of those regimes in terms of improving economic efficiency, innovation, competitiveness, and consumer welfare, the results have been downright dismal.
Now, it is certainly true that common carriage rules and public interest mandates were well-intentioned. For example, who could possibly be against the idea of more diversity and "balance" in the reporting of news and opinion, as was generally mandated by the so-called Fairness Doctrine? And who could be against common carrier regs that mandated "just and reasonable" rates?
But all the noble intentions in the world don't matter a bit when stacked against the historical evidence of how well these rules and regulatory regimes worked in practice. Most of these efforts backfired miserably. The unintended consequences were myriad. Public interest regulation didn't give us more diversity, it gave us less. It limited the vibrancy of the speech marketplace in the process. Likewise, "just and reasonable" rate regulation gave us nothing of the sort. These rules benefited incumbents more than consumers or new competitors. They did not spur more innovation or entry. And regulatory capture was absolutely rampant across the board.
Thus, by my read of history, these regulatory regimes were viciously anti-consumer.
I suppose some might disagree with this history and suggest that things weren't as bad as I've made them out to be. I have very little tolerance for that suggestion because I do not believe there is any other way to read this history. I've spent the last 20 years attempting to document it in much of my work to remind others – especially policymakers – why we don't want to go down that path again. But you don't need to believe me. Read the works of Alfred Kahn (a lifelong liberal Democrat, I might note) or the countless others who have written histories of media and communications regulation. It's a truly miserable tale.
With all that in mind, can you start to see why the libertarian might be a tad suspicious of calls for Net neutrality regulation?
2) Second, libertarians tend to be far more optimistic about the possibility of markets, ongoing experimentation, spontaneous/unforeseen innovation, and creative destruction to improve matters long before government regs like Net neutrality get around to doing so.
Again, the FCC just isn't very good at regulating fast-moving industries and technologies, and its track record is particularly poor when it comes to incentivizing new things (remember Video Dialtone? Open Video System rules?). Also, flexibility is crucial for fast-moving technologies and networks, and we must be careful not to freeze systems and industries in stone.
While libertarians wouldn't sympathize with efforts by network intermediaries to "block" any sort of content or traffic, we'd also challenge others to provide serious examples of this being a problem. We don't think there is a problem here. And if there were such silly corporate efforts to meddle, we are far more optimistic about the power of market and social norms to handle it. Pressure from the press, scholars, engineers, and the general public can help curb the worst excesses. Moreover, corporate screw-ups serve as a good invitation for other innovators to take a stab at offering consumers a better deal.
There's also the omnipresent threat of the slippery slope of regulation. "Neutrality" mandates could gradually spread to other layers of the Net and cover content and applications. We need to be careful not to open the door to comprehensive government regulation of the Internet. The FCC, in particular, has shown itself to be an agency with a healthy appetite for mission creep. Libertarians are highly suspicious of giving a bunch of unelected bureaucrats the leeway to determine what a "neutral" Net looks like.
3) Finally, libertarians believe that our Constitution embodies a presumption of liberty. People — including corporations — should be free to pursue their interests so long as they do not violate the rights of others.
This is the "knee-jerk" aspect of libertarianism that alienates many progressives who believe in a different interpretation of rights and the Constitution. For that reason, I never lead with this argument when debating communications, media, or high-technology policy. Nonetheless, I would hope that you would appreciate why this construction of rights and constitutionally-guaranteed liberties leads the libertarian to resist regulatory regimes imposed from above.
Well, I've gone on far too long here. I know I have not convinced you to change your mind, Andrew. I understand your position and know how passionately you feel about it. I do hope, however, that you now better understand our position on Net neutrality and realize that it has nothing to do with protecting "big corporate interests," but rather, it's about understanding what REAL Internet freedom should be all about!
Alas, our competing conceptions of what "freedom" entails keep us from being allies on this particular issue. I look forward to continuing to work with you on the many other issues where our ideological traditions are in closer alignment.
Cheers – Adam Thierer







July 8, 2011
Can cyber-kamikazes cyberbombard our cyberdefenses?
In today's Washington Post, Senators Lieberman, Collins, and Carper published an op-ed calling for comprehensive cybersecurity legislation. If we don't pass such legislation soon, they say, "The alternative could be a digital Pearl Harbor — and another day of infamy."
Last time I checked, Pearl Harbor left over two thousand persons dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible. Yet as I write in TIME.com's Techland, war rhetoric allows government to pursue avenues that might otherwise be closed:
The problem with the war metaphor is that treating a cyber attack as an act of war, rather than a crime, invites a different governmental response. In the offline world, vandalism, theft, and even international espionage are treated as crimes. When you detect them, you call law enforcement, who investigate and prosecute and, most importantly, do so while respecting your civil liberties. In war, these niceties can go out the window.
War changes the options available to government, says noted security expert Bruce Schneier. Things you would never agree to in peacetime you agree to in wartime. Referring to the warrantless wiretapping of Americans that AT&T allowed the NSA to conduct after 9/11, Schneier has said, "In peacetime if the government goes to AT&T and says, 'Hey, we want to eavesdrop on everybody,' AT&T says, 'Stop, where's your warrant?' In wartime, AT&T says, 'Use that closet over there, lock the door, and just put a do not disturb sign on it.'"
Check out the whole article for more outrage.







Netflix Falls Prey to Marginal Cost Fallacy & Pleads for a Broadband Free Ride
Of all the shockingly naive and shamelessly self-serving editorials I've read by businesspeople in recent years, today's Wall Street Journal op-ed by Netflix general counsel David Hyman really takes the cake. It's an implicit plea to policymakers for broadband price controls. Hyman doesn't like the idea of broadband operators potentially pricing bandwidth according to usage and demand, and he wants action taken to stop it. Of course, why wouldn't he say that? It's in Netflix's best interest to ensure that somebody else besides them picks up the tab for increased broadband consumption!
But Hyman tries to pull a fast one on the reader and suggest that scarcity is an economic illusion and that any effort by broadband operators to migrate to usage-based pricing schemes is simply a nefarious, anti-consumer plot that must be foiled. "Consumers and regulators need to take heed of what is happening and avoid winding up like the proverbial frog in a pot of boiling water," Hyman warns. "It's time to jump before it's too late."
Rubbish! The only thing policymakers need to do is avoid myopic, misguided advice like Hyman's, which isn't based on one iota of economic theory or evidence.
Hyman's economic illiteracy is evident from the get-go. He tries to spook people with the headline, "Why Bandwidth Pricing Is Anti-Competitive." No it isn't. Usage-based pricing is used in countless economic sectors every day and it is overwhelmingly viewed by economists as a sensible way to calibrate supply and demand while ensuring costs are covered. But Hyman says the laws of economics don't apply to broadband! No seriously, he says:
Cable and telecom companies argue that bandwidth is a scarce resource and that imposing caps and overage fees will relieve pressure on high-speed networks. Families pay more when they use more electricity, these companies point out, so why shouldn't households pay more if they use more bandwidth? The analogy is a false one. Wireline bandwidth is an almost unlimited resource due to advances in Internet architecture. Adding more capacity is easy. The marginal cost of providing an extra gigabyte of data—enough to deliver one episode of "30 Rock" from Netflix—is less than one cent, and falling.
[...] Consumer access to unlimited bandwidth is good for society. It fosters innovation, drives commerce, and advances political and social discourse. Given that bandwidth is cheap and plentiful and will only grow more so with time, there is no good reason for bandwidth caps and fees to take root.
Oh my goodness. Really? Hyman appears to be suffering from a rather serious case of marginal cost fallacy: the belief that prices should, as a rule, equal marginal costs. The problem with such thinking is that it leaves zero room for investment, innovation, and other real-world dynamics that get conveniently forgotten as "fixed costs." Of course, if you begin with the truly outrageous claim that "bandwidth is an almost unlimited resource," and "bandwidth is cheap and plentiful and will only grow more so with time," then it's only logical that you'd fall prey to this fallacy!
Meanwhile, back in the real world, economists and financial analysts will explain to you that high fixed-cost goods like broadband networks don't just grow on trees or fall like manna from heaven. Yes, of course it is true that "consumer access to unlimited bandwidth is good for society." But the same is true of countless other goods that we'd all like to have access to at zero cost. But that doesn't invalidate the fundamental laws of economics. Someone financed and built those networks and someone has to keep building and improving them. You'd never get anything built if you adopted the view that scarcity was a myth and that prices must equal marginal cost.
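The arithmetic behind the marginal cost fallacy is worth making concrete. The following sketch uses entirely hypothetical figures (the build-out cost, traffic volume, and per-gigabyte prices are invented for illustration, not drawn from any actual broadband operator's books) to show why pricing at marginal cost can never recover a large fixed investment:

```python
# Toy illustration of the marginal cost fallacy (all figures hypothetical).
# A network costs a large fixed sum to build, but each extra gigabyte
# delivered costs almost nothing. Pricing at marginal cost never recoups
# the fixed investment, no matter how much traffic flows.

FIXED_COST = 1_000_000_000   # hypothetical network build-out cost ($)
MARGINAL_COST = 0.01         # hypothetical cost per extra GB delivered ($)

def profit(price_per_gb: float, gb_delivered: float) -> float:
    """Revenue minus total cost at a given per-GB price and traffic volume."""
    revenue = price_per_gb * gb_delivered
    total_cost = FIXED_COST + MARGINAL_COST * gb_delivered
    return revenue - total_cost

# Price exactly at marginal cost: the loss equals the fixed cost,
# regardless of how many gigabytes are sold.
print(profit(0.01, 10_000_000_000))   # -1000000000.0

# A modest markup over marginal cost recovers the investment at scale.
print(profit(0.12, 10_000_000_000))   # 100000000.0
```

However large the traffic volume grows, revenue at marginal-cost pricing only ever matches the variable cost, leaving the fixed cost permanently unfunded; some markup over marginal cost is what pays for building and upgrading the network.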
On that point, I was tickled to see in the online comments to Hyman's piece that one gentleman asked, "what happens when you allow unlimited access at.. marginal cost?" and for another to say in response, "Answer: You turn into Greece." Quite right. There is no free lunch. Something has to pay the bills, including the broadband bills. You can't just free-ride on the future forever by pretending that bandwidth is an abundant good and holding prices at or below marginal cost.
Once you understand these facts you can point out what's really wrong with Mr. Hyman's reasoning: He basically wants average costs for all consumers to go up so that his costs (or the costs of any high-bandwidth use or user) will never go up. Shameful! Indeed, let's just call Mr. Hyman's editorial what it is: a blatant attempt to get government to impose price controls on broadband providers to favor his company. End of story. He could have spared us all the sloppy economic sophistry and just told us that. It would have made it a bit easier to take him seriously.
P.S. Incidentally, Mr. Hyman's one and only suggestion for how to deal with network demand/congestion is this: "If Internet service providers really wanted to manage traffic efficiently, they would limit speeds at peak times." Interesting. I wonder how the Net neutrality crowd feels about Netflix's new-found love of broadband throttling!







Vivek Wadhwa on High-Tech's "Best Regulator"
Vivek Wadhwa, who is affiliated with Harvard Law School and is director of research at Duke University's Center for Entrepreneurship, has a terrific column in today's Washington Post warning of the dangers of government trying to micromanage high-tech innovation and the Digital Economy from above.
For reasons I have never been able to understand, the Washington Post uses different headlines for its online opeds versus its print edition. That's a shame, because while I like the online title of Wadhwa's essay, "Uncle Sam's Choke-Hold on Innovation," the title in the print edition is better: "Google, Twitter and the Best Regulator." By "best regulator" Wadhwa means the marketplace, and this is a point we have hammered on here at the TLF relentlessly: Contrary to what some critics suggest, the best regulator of "market power" is the market itself because of the way it punishes firms that get lethargic, anti-innovative, or just plain cocky. Wadhwa notes:
The technology sector moves so quickly that when a company becomes obsessed with defending and abusing its dominant market position, countervailing forces cause it to get left behind. Consider: The FTC spent years investigating IBM and Microsoft's anti-competitive practices, yet it wasn't government that saved the day; their monopolies became irrelevant because both companies could not keep pace with rapid changes in technology — changes the rest of the industry embraced. The personal-computer revolution did IBM in; Microsoft's Waterloo was the Internet. This — not punishment from Uncle Sam — is the real threat to Google and Twitter if they behave as IBM and Microsoft did in their heydays.
Quite right. I've discussed the Microsoft and IBM antitrust sagas many times here before. In particular, see my 2009 review of Gary Reback's book on antitrust and high-tech and my recent essay on "Libertarianism & Antitrust: A Brief Comment." I've also commented on the FTC's look at Twitter and Google in my recent essays, "Twitter, the Monopolist? Is this Tim Wu's "Threat Regime" In Action?" and "The Question of Remedies in a Google Antitrust Case."
The crucial point I have tried to get across in these essays, as well as in all my essays countering the modern cyber-progressives, is that high-tech market power concerns are ultimately better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting market or "code failure" comes down to the rapidity and nimbleness of those responses, especially in markets built upon bits instead of atoms.
That's why Wadhwa's insight — that "the technology sector moves so quickly that when a company becomes obsessed with defending and abusing its dominant market position, countervailing forces cause it to get left behind" — is so cogent. We're not talking about markets like steel and corn here. Things move much, much more quickly when bits and code are the foundations of what Tim Wu calls "information empires." There's no doubt that some companies will gain scale and even "power" quickly in our new Digital Economy, but they can also lose it in the blink of an eye.
The best modern example that I've documented here before is AOL. It's easy to forget now, but just a short decade ago, academics and regulators were in a tizzy over Big Bad AOL. And why not? After all, 25 million subscribers were willing to pay $20 per month to get a guided tour of AOL's walled garden version of the Internet. And then AOL and Time Warner announced a historic mega-merger that had some predicting the rise of "new totalitarianisms" and corporate "Big Brother."
But the deal quickly went off the rails. By April 2002, just two years after the deal was struck, AOL-Time Warner had already reported a staggering $54 billion loss. By January 2003, losses had grown to $99 billion. By September 2003, Time Warner decided to drop AOL from its name altogether and the deal continued to slowly unravel from there. In a 2006 interview with the Wall Street Journal, Time Warner President Jeffrey Bewkes famously declared the death of "synergy" and went so far as to call synergy "bullsh*t"! In early 2008, Time Warner decided to shed AOL's dial-up service and then to spin off AOL entirely. Looking back at the deal, Fortune magazine senior editor at large Allan Sloan called it the "turkey of the decade." The formal divorce between the two firms was completed in 2009. Further deconsolidation followed for Time Warner, which spun off its cable TV unit and various other properties.
Meanwhile, AOL has lost its old dial-up business and walled garden empire and is still struggling to reinvent itself as an advertising company. It's about the last company on anybody's lips when we talk about tech titans today. What an epic tale of creative destruction! That all happened in less than 10 years! And yet, again, a decade ago, tech pundits and cyberlaw intellectuals like Larry Lessig were penning entire books about the ominous threat posed by the AOL walled garden model of Internet governance.
Lessig's myopia was based on an inherent techno-pessimism I have discussed and critiqued in my Next Digital Decade book chapter, "The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters." Countless Ivory Tower cyber-academics today adopt a static view of markets and market problems. This "static snapshot" crowd gets so worked up about short-term spells of "market power" – which usually don't represent serious market power at all – that they call for the reordering of markets to suit their tastes. Sadly, they sometimes do this under the banner of "Internet freedom," claiming that technocratic elites can "free" consumers from the supposed tyranny of the marketplace.
In reality, that vision wraps markets in chains and ultimately leaves consumers worse off by stifling innovation and inviting in ham-handed regulatory edicts and bureaucracies to plan this fast-paced sector of our economy. Importantly, that vision ignores the deadweight losses associated with expanding government red tape and bureaucracy as well as the very real danger of "regulatory capture" that exists anytime Washington decides to get cozy with a major sector of the economy.
As Wadhwa correctly concludes, "Government has no place in this technology jungle." I wish other academics and tech pundits would heed that warning.






