Adam Thierer's Blog
November 20, 2012
Forget remedies – FairSearch doesn’t even have a valid statement of harm in its Google antitrust criticism
After more than a year of complaining about Google, and being met with responses from me (see also here, here, here, here, and here, among others) and many others pointing out that these complaints have yet to offer a rigorous theory of antitrust injury, let alone any evidence, FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.” Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.
FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.” To address this, it proposes prohibiting Google from preferencing its own content in search results, and it suggests, as additional “structural remedies,” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”
Tom Barnett, former AAG for antitrust, counsel to FairSearch member Expedia, and FairSearch’s de facto spokesman, should be ashamed to be associated with claims and proposals like these. He knows better than most that harm to competitors is not the issue under the US antitrust laws. Rather, US antitrust law requires a demonstration that consumers, not just rivals, will be harmed by a challenged practice. He also knows (as economists have long known) that favoring one’s own content, i.e., “vertically integrating” to produce both inputs and finished products, is generally procompetitive.
In fact, Barnett has said as much before:
Because a Section 2 violation hurts competitors, they are often the focus of section 2 remedial efforts. But competitor well-being, in itself, is not the purpose of our antitrust laws.
Access remedies also raise efficiency and innovation concerns. By forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.
Not only has FairSearch failed to demonstrate that Google has actually preferenced its own products, but the organization has also demonstrated neither harm to consumers arising from such conduct nor even antitrust-cognizable harm to competitors.
As an empirical study supported by the International Center for Law and Economics (itself, in turn, supported in part by Google, and of which I am the Executive Director) makes clear, search bias almost never occurs. And when it does occur, it is more often practiced by the non-dominant Bing, not Google. Moreover, and most important, the evidence marshaled in favor of the search bias claim (largely adduced by Harvard Business School professor Ben Edelman, whose work is supported by Microsoft) demonstrates that consumers do, indeed, have the ability to detect and counter allegedly biased results.
Recall what search bias means in this context. According to Edelman, looking at the top three search results, Google links to its own content (think Gmail, Google Maps, etc.) in the first search result about twice as often as Yahoo! and Bing link to Google content in that position. While the ICLE paper refutes even this finding, notice what it implies: “biased” search results merely reshuffle the top few results offered up; there is no evidence that Google simply drops users’ preferred results. While it is true that the difference in click-through rates between the top and second results can be significant, Edelman’s own findings demonstrate that consumers are capable of finding what they want when their preferred (more relevant) result appears in the second or third slot.
Edelman notes that Google ranks Gmail first and Yahoo! Mail second in his study, even though users seem to think Yahoo! Mail is the more relevant result: Gmail receives only 29% of clicks while Yahoo! Mail receives 54%. According to Edelman, this is proof that Google’s conduct forecloses access by competitors and harms consumers under the antitrust laws.
But is it? Note that users click on the second, apparently more-relevant result nearly twice as often as they click on the first. This demonstrates that Yahoo! is not competitively foreclosed from access to users, and that users are perfectly capable of identifying their preferred results, even when they appear lower in the results page. This is simply not foreclosure — in fact, if anything, it demonstrates the opposite.
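To see the arithmetic behind this point, here is a minimal sketch (using only the click shares quoted above from Edelman’s study) of why a lower ranking did not keep users from their preferred result:

```python
# A back-of-the-envelope check on the foreclosure claim, using only the
# click shares quoted above from Edelman's study.
click_share = {
    "Gmail (ranked #1 by Google)": 0.29,
    "Yahoo! Mail (ranked #2)": 0.54,
}

# How much more often did users click the lower-ranked result?
ratio = (click_share["Yahoo! Mail (ranked #2)"]
         / click_share["Gmail (ranked #1 by Google)"])
print(f"The second result drew {ratio:.1f}x the clicks of the first.")
# -> roughly 1.9x: users located their preferred result despite its lower
#    rank, which is the opposite of what a foreclosure story predicts.
```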
Among other things, foreclosure (limiting a competitor’s access to a necessary input) must, under the antitrust laws, be substantial enough to prevent a rival from reaching sufficient scale to compete effectively. It is no more “foreclosure” for Google to “impair” traffic to Kayak’s site by offering its own Flight Search than it is for Safeway to refuse to allow Kroger to sell Safeway’s house brand. Rather, actionable foreclosure requires a showing that a firm “impair[s] the ability of rivals to grow into effective competitors that erode the firm’s position.” Such quantifiable claims are noticeably absent from critics’ complaints against Google.
And what about those allegedly harmed competitors? How are they faring? As of September 2012, Google ranks 7th in visits among metasearch travel sites, with a paltry 1.4% of such visits. Residing at number one? FairSearch founding member Kayak, with a whopping 61% (up from 52% six months after Google entered the travel search business). Nextag.com, another vocal Google critic, has complained that Google’s conduct has forced it to shift its strategy from attracting traffic through Google’s organic search results to other sources, including paid ads on Google.com. And how has it fared? It has parlayed its experience with new data sources into a successful new business model, Wize Commerce, showing exactly the sort of “incentive to develop comparable assets of their own” that Barnett worries will be destroyed by aggressive antitrust enforcement. And Barnett’s own Expedia.com? Currently, it’s the largest travel company in the world, and it has only grown in recent years.
Meanwhile, consumers’ interests have been absent from critics’ complaints from the beginning. Not only do critics fail to demonstrate any connection between the claimed harms to competitors and harm to consumers, but they also ignore the harm to consumers that may result from restricting potentially efficient business conduct, like the integration of Google Maps and other products into Google’s search results. That Google not only produces search results but also owns some of the content that generates those results is not a problem cognizable by modern antitrust.
FairSearch and other Google critics have utterly failed to make a compelling case, and their proposed remedies would serve only to harm, not help, consumers.







James Miller on the economics of the singularity
James D. Miller, Associate Professor of Economics at Smith College and author of Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World, discusses the economics of the singularity: the point in time at which we will either have computers that are smarter than people or will have significantly increased human intelligence.
According to Miller, brains are essentially organic computers, and thus applying Moore’s law suggests that we are moving toward the singularity. Since economic output is a product of the human brain, increased brainpower, or the existence of computers smarter than humans, could produce outputs we cannot even imagine.
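For a sense of the compounding Miller invokes, here is a minimal sketch of the Moore’s-law extrapolation; the two-year doubling period and forty-year horizon are illustrative assumptions on my part, not figures from the book:

```python
# Moore's law as a simple doubling process. Both parameters below are
# hypothetical, chosen only to show how quickly the compounding runs.
DOUBLING_PERIOD_YEARS = 2.0   # assumed doubling time for compute
HORIZON_YEARS = 40.0          # assumed forecast horizon

growth = 2 ** (HORIZON_YEARS / DOUBLING_PERIOD_YEARS)
print(f"{growth:,.0f}x more computing power in {HORIZON_YEARS:.0f} years")
# -> 1,048,576x: if brains are organic computers, this is the scale of
#    change behind the claim that future outputs are hard to imagine.
```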
Miller goes on to outline what the singularity could look like and what could derail our progress towards it.
Related Links
Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World, by Miller
Merely Human? That’s So Yesterday, New York Times
2045: The Year Man Becomes Immortal, Time







November 19, 2012
Morozov’s Algorithmic Auditing Proposal: A Few Questions
In a New York Times op-ed this weekend entitled “You Can’t Say That on the Internet,” Evgeny Morozov, author of The Net Delusion, worries that Silicon Valley is imposing a “deeply conservative” “new prudishness” on modern society. The cause, he says, is “dour, one-dimensional algorithms, the mathematical constructs that automatically determine the limits of what is culturally acceptable.” He proposes that some form of external algorithmic auditing be undertaken to counter this supposed problem. Here’s how he puts it in the conclusion of his essay:
Quaint prudishness, excessive enforcement of copyright, unneeded damage to our reputations: algorithmic gatekeeping is exacting a high toll on our public life. Instead of treating algorithms as a natural, objective reflection of reality, we must take them apart and closely examine each line of code.
Can we do it without hurting Silicon Valley’s business model? The world of finance, facing a similar problem, offers a clue. After several disasters caused by algorithmic trading earlier this year, authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modifications of computer systems used in such trades. Why couldn’t auditors do the same to Google?
Silicon Valley wouldn’t have to disclose its proprietary algorithms, only share them with the auditors. A drastic measure? Perhaps. But it’s one that is proportional to the growing clout technology companies have in reshaping not only our economy but also our culture.
It should be noted that in a Slate essay this past January, Morozov had also proposed that steps be taken to root out lies, deceptions, and conspiracy theories on the Internet. Morozov was particularly worried about “denialists of global warming or benefits of vaccination,” but he also wondered how we might deal with 9/11 conspiracy theorists, the anti-Darwinian intelligent design movement, and those who refuse to accept the link between HIV and AIDS.
To deal with that supposed problem, he recommended that Google “come up with a database of disputed claims” to weed out such things. The other option, he suggested, “is to nudge search engines to take more responsibility for their index and exercise a heavier curatorial control in presenting search results for issues” that someone (he never says who) determines to be conspiratorial or anti-scientific in nature.
Taken together, these essays can be viewed as a preliminary sketch of what could become a comprehensive information control apparatus instituted at the code layer of the Internet. Morozov absolutely refuses to be nailed down on the details of that system, however. In a response to his earlier Slate essay, I argued that Morozov seemed to be advocating some sort of Ministry of Truth for online search, although he came up short on the details of who or what should play that role. But in both that piece and his New York Times essay this weekend, he implies that greater oversight and accountability are necessary. “Is it time for some kind of a quality control system [for the Internet]?” he asked in his Slate op-ed. Perhaps it would be the algorithmic auditors he suggests in his new essay. But who, exactly, are those auditors? What is the scope of their powers?
When I (and others) made inquiries via Twitter requesting greater elaboration on these questions, Morozov summarily dismissed any conversation on the point. Worse yet, he engaged in what is becoming a regular Morozov debating tactic on Twitter: nasty, sarcastic, dismissive responses that call into question the intellectual credentials of anyone who even dares to ask him a question about his proposals. Unless you happen to be Bruno Latour — the obtuse French sociologist and media theorist whom Morozov showers with boundless, adoring praise — you can usually count on Morozov to dismiss you and your questions or concerns in a fairly peremptory fashion.
I’m perplexed by what leads Morozov to behave so badly. When I first met him a couple of years ago, it was at a Georgetown University event he invited me to speak at. He seemed like an agreeable, even charming, fellow in person. But on Twitter, Morozov bares his fangs at every juncture and spits out venomous missives and retorts that I would call sophomoric, except that it would be an insult to sophomores everywhere. Morozov even accuses me of “trolling” him whenever I ask him questions on Twitter, even though I am doing nothing more than posing the same sort of hard questions to him that he regularly poses to others, albeit far more snarkily. He always seems eager to dish it out, but then throws a Twitter temper tantrum whenever the roles are reversed and the tough questions come his way. Perhaps Morozov is miffed by some of what I had to say in my mixed review of his first book, The Net Delusion, or my Forbes column that raised questions about his earlier proposal for an Internet “quality control” regime. But I invite others to closely read the tone of those two essays and tell me whether I said anything to warrant Morozov’s wrath. (In fact, I said some nice things about his book in that review and later named it the most important information technology policy book of the year.)
Regardless of what motivates his behavior, I do not think it is unreasonable to ask for more substantive responses from Morozov when he is making grand pronouncements and recommendations about how online culture and commerce should be governed. The best I could get him to say on Twitter is that he had only 1,200 words to play with in his latest Times op-ed and that more details about his proposal would be forthcoming. Well, in the spirit of getting that conversation going, allow me to outline a few questions:
1) What is the specific harm here that needs to be addressed?
Do you have evidence of systematic algorithmic manipulation or abuse by Google, Apple, or anyone else, for that matter? Or is this all just about a handful of anecdotes that seemed to be corrected fairly quickly?
2) What standard or metric should we use to determine the extent of this problem, to the extent we determine it is a problem at all?
To the extent autocomplete results are what troubles you, can you explain how individuals or entities are “harmed” by those results?
If this is about reputation, what is your theory of reputational harm, and when is it legally actionable?
If this is about informational quality or “truth,” can you explain what would constitute success?
Can you appreciate the concerns/values on the other side of this that might motivate some degree of algorithmic tailoring? For example, some digital intermediaries may seek to curb the use of a certain amount of vulgarity, hate speech, or other offensive content on their sites since they are broad-based platforms with diverse audiences. (That’s why most search providers default to “moderate” filtering for image searches, for example.) While I think we both favor maximizing free speech online, do you accept that some of this private speech and content balancing is entirely rational and has, to some extent, always gone on? Also, aren’t there plenty of other ways to find the content you’re looking for besides just Google, which you seem preoccupied with?
3) What is the proposed remedy and what are its potential costs and unintended consequences?
Can you explain the mechanism of control that you would like to see put in place to remedy this supposed problem? Would it be a formal regulatory regime?
Have you considered the costs and/or potentially unintended consequences associated with an algorithmic auditing regime if it takes on a regulatory character?
For example, if you are familiar with how long many regulatory proceedings can take to run their course, do you not fear the consequences of interminable delays and political gaming?
How often should the “auditing” you propose take place? Would it be a regular affair, or would it be driven by complaints?
4) Is this regime national in scope? Global? How will it be coordinated/administered?
In the United States, would the Federal Communications Commission or the Federal Trade Commission be granted new authority to carry out algorithmic audits, or would a new entity need to be created?
Is additional regulatory oversight necessary and, if so, how would it be coordinated nationally and globally?
5) Are there freedom of speech/censorship considerations that flow from (3) and (4)?
At least in the United States, algorithmic audits that had the force of law behind them could raise serious freedom of speech concerns (see Yoo’s paper on “architectural censorship” and the recent work of Volokh and Grimmelmann on search regulation). And long-settled First Amendment law (see, e.g., Tornillo) ensures that editorial discretion is housed in private hands. How would you propose we get around these legal obstacles?
6) Are there less-restrictive alternatives to administrative regulation?
Might we be able to devise various alternative dispute resolution techniques to flag problems and deal with them in a non-regulatory/non-litigious fashion?
Could voluntary industry best practices and/or codes of conduct be developed to assist these efforts?
Could an entity like the Broadband Internet Technical Advisory Group (BITAG) help sort out “neutrality” claims in this context, as it does in the broadband context?
Might it be the case that social norms and pressure can keep this problem in check? The very act of shining light on silly algorithmic screw-ups, much as you have in your recent op-eds, has a way of doing just that.
I hope that Morozov finds these questions to be reasonable. My skepticism of most Internet regulation is no secret, so I suppose that Morozov or others might attempt to dismiss some of these questions as the paranoid delusions of a wild-eyed libertarian. But I suspect that I’m not the only one who feels uneasy with Morozov’s proposals, since they could open the door for regulators across the globe to engage in “algorithmic auditing” on the flimsy assumption that some great harm flows from a few silly autocomplete suggestions or a couple of conspiratorial websites. We deserve answers to questions like these before we start calling in the Code Cops to assume greater control over online speech.







Forthcoming book on conservative and libertarian skepticism about our copyright system
As you likely know by now, the Republican Study Committee published a briefing paper critical of copyright, but later pulled it down, claiming the memo had not received adequate review. Some have suggested that IP-industry pressure may have led to the reversal. I hope we will find out in due time whether the paper was indeed reviewed and approved (as I suspect it was), and why it was removed. That said, I think what this takedown likely shows is a generational gap between the old, captured, pro-business parts of the Republican Party and its pro-market, pro-dynamism future.
I also hope that this dust-up sparks a debate within the “right” about our bloated copyright system, and so it’s propitious that in a couple of weeks the Mercatus Center will be publishing a new book I’ve edited making the case that libertarians and conservatives should be skeptical of our current copyright system. It’s called Copyright Unbalanced: From Incentive to Excess, and it is not a moral case for or against copyright; it is a pragmatic look at the excesses of the present copyright regime and of proposals to further expand it. The book features:
Yours truly making the Hayekian and public choice case for reform
Reihan Salam and Patrick Ruffini arguing that the GOP should take up the cause of reforming what is now a crony capitalist system
David Post explaining why SOPA was so dangerous
Tim Lee on the criminalization of copyright and the use of asset forfeiture in enforcing copyright
Christina Mulligan explaining that the DMCA harms competition and free expression
Eli Dourado calculating that the system we have today likely far exceeds what we need in order to offer authors an incentive to create
Tom Bell suggesting five reforms for copyright, including returning to the Founders’ vision of what copyright should be
Conservatives and libertarians, who are naturally suspicious of big government, should be skeptical of an ever-expanding copyright system. They should be skeptical of the recent trend toward criminal prosecution of even minor copyright infringements, of the growing use of civil asset forfeiture in copyright enforcement, and of attempts to regulate the Internet and electronics in the name of piracy eradication. I think our movement is very close to seeing that copyright reform is not just completely compatible with a respect for property rights, but is itself a limited-government project. We hope our book will help make the case.
Also, the Cato Institute will be hosting a lunchtime book forum on December 6. Tom Bell and I will present our views and Mitch Glazier of the Recording Industry Association of America will respond. Please RSVP to attend and tell your colleagues.







November 18, 2012
Latest WCIT Leak Makes Explicit Russian Desire to Overturn ICANN
On Friday evening, I posted on CNET a detailed analysis of the most recent proposal to surface from the secretive upcoming World Conference on International Telecommunications (WCIT-12). The conference will discuss updates to a 1988 UN treaty administered by the International Telecommunication Union, and throughout the year there have been reports that both governmental and non-governmental members of the ITU have been trying to use the rewrite to put the ITU squarely in the Internet business.
The Russian Federation’s proposal, which was submitted to the ITU on Nov. 13, would explicitly bring “IP-based Networks” under the auspices of the ITU, and would in particular substantially, if not completely, change the role of ICANN in overseeing domain names and IP addresses.
According to the proposal, “Member States shall have the sovereign right to manage the Internet within their national territory, as well as to manage national Internet domain names.” And a second revision, also aimed straight at the heart of today’s multi-stakeholder process, reads: “Member States shall have equal rights in the international allocation of Internet addressing and identification resources.”
Of course the Russian Federation, along with other repressive governments, uses every opportunity to gain control over the free flow of information, and sees the Internet as its most formidable enemy. Earlier this year, Prime Minister Vladimir Putin told ITU Secretary-General Hamadoun Touré that Russia was keen on the idea of “establishing international control over the Internet using the monitoring and supervisory capability of the International Telecommunications Union.”
As I point out in the CNET piece, the ITU’s claims that WCIT has nothing to do with Internet governance and that the agency itself has no stake in expanding its jurisdiction ring more hollow all the time. Days after receiving the Russian proposal, the ITU wrote in a post on its blog that, “There have not been any proposals calling for a change from the bottom-up multistakeholder model of Internet governance to an ITU-controlled model.”
This would appear to be an outright lie, and a contradiction of an earlier acknowledgment by Dr. Touré. In a September interview with Bloomberg BNA, Touré said that “Internet Governance as we know it today” concerns only “Domain Names and addresses. These are issues that we’re not talking about at all,” he said. “We’re not pushing that, we don’t need to.”
Touré, expanding on his emailed remarks, told BNA that the proposals that appear to involve the ITU in internet numbering and addressing were preliminary and subject to change.
‘These are preliminary proposals,’ he said, ‘and I suspect that someone else will bring another counterproposal to this, we will analyze it and say yes, this is going beyond, and we’ll stop it.’
Another tidbit from the BNA Interview that now seems ironic:
Touré disagreed with the suggestion that numerous proposals to add a new section 3.5 to the ITRs might have the effect of expanding the treaty to internet governance.
‘That is telecommunication numbering,’ he said, something that preceded the internet. Some people, Touré added, will hijack a country code and open a phone line for pornography. ‘These are the types of things we are talking about, and they came before the internet.’
I haven’t seen all of the proposals, of course, which are technically secret. But the Russian proposal’s most outrageous amendments are contained in a proposed new section 3A, which is titled, “IP-based Networks.”
There’s more on the ITU’s subterfuge in Friday’s CNET piece, as well as these earlier posts:
1. “Why is the UN Trying to Take Over the Internet?” Forbes.com, Aug 9, 2012.
2. “UN Agency Reassures: We Just Want to Break the Internet, Not Take it Over,” Forbes.com, Oct. 1, 2012.







The War on Vertical Integration in the Digital Economy [slideshow]
Here’s a presentation I delivered on “The War on Vertical Integration in the Digital Economy” at the latest meeting of the Southern Economic Association this weekend. It outlines concerns about vertical integration in the tech economy and specifically addresses regulatory proposals set forth by Tim Wu (arguing for a “separations principle” for the tech economy) & Jonathan Zittrain (arguing for “API neutrality” for social media and digital platforms). This presentation is based on two papers published by the Mercatus Center at George Mason University: “Uncreative Destruction: The Misguided War on Vertical Integration in the Information Economy” (with Brent Skorup) & “The Perils of Classifying Social Media Platforms as Public Utilities.”
The War on Vertical Integration in the Digital Economy from Adam Thierer







Cronyism: History, Costs, Case Studies and Solutions
Here’s a presentation I’ve been using lately for various audiences about “Cronyism: History, Costs, Case Studies and Solutions.” In the talk, I offer a definition of cronyism, explain its origins, discuss how various academics have traditionally thought about it, outline a variety of case studies, and then propose a range of solutions. Readers of this blog might be interested because I briefly mention the rise of cronyism in the high-tech sector. Brent Skorup and I have a huge paper in the works on that topic, which should be out early next year.
Cronyism: History, Costs, Case Studies & Solutions from Mercatus
Also, here’s a brief video of me discussing why corporate welfare doesn’t work, which was shot after I recently made this presentation at an event down in Florida.
[video courtesy of Rob Nikolewski, Capitol Report New Mexico.]







November 16, 2012
Congress Delays Requiring Cost-Benefit Analysis of Internet Regulation
By Berin Szoka and Ben Sperry

You’d think it would be harder for government to justify regulating the Internet than the offline world, right? Wrong—sadly. And Congress just missed a chance to fix that problem.

For decades, regulators have been required to conduct cost-benefit analysis when issuing new regulations. Some agencies are specifically required to do so by statute, but for most agencies, the requirement comes from executive orders issued by each new President—varying somewhat but each continuing the general principle that regulators bear the burden of showing that each regulation’s benefits outweigh its costs.
But the FCC, FTC and many other regulatory agencies aren’t required to do cost-benefit analysis at all. Because these are “independent agencies”—creatures of Congress rather than part of the Executive Branch (like the Department of Justice)—only Congress can impose cost-benefit analysis on agencies. A bipartisan bill, the Independent Agency Regulatory Analysis Act (S. 3486), would have allowed the President to impose the same kind of cost-benefit analysis on independent regulatory agencies as on Executive Branch agencies, including review by the Office of Information and Regulatory Affairs (OIRA) for “significant” rulemakings (those with $100 million or more in economic impact, that adversely affect sectors of the economy in a material way, or that create “serious inconsistency” with other agencies’ actions).
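The bill’s “significance” screen reduces to a simple any-of-three test. Here is a toy sketch of that logic; the field names are my own illustrative framing of the three triggers summarized above, not language from the bill:

```python
# A toy predicate for the bill's "significant rulemaking" screen,
# as summarized above. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Rulemaking:
    economic_impact_usd: float            # annual economic impact
    materially_harms_sector: bool         # adverse, material sector effect
    serious_interagency_inconsistency: bool

def is_significant(rule: Rulemaking) -> bool:
    """OIRA review would attach if any one trigger fires."""
    return (rule.economic_impact_usd >= 100_000_000
            or rule.materially_harms_sector
            or rule.serious_interagency_inconsistency)

print(is_significant(Rulemaking(150e6, False, False)))  # True
```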
Republican Senators Rob Portman and Susan Collins joined with Democrat Mark Warner in this important cause—yet the bill has apparently died during this lame-duck Congress. While some public interest groups have attempted to couch their objection in separation-of-powers terms, their ultimate objection seems to be with subjecting the regulatory state’s rulemaking process to systematic economic analysis—because, after all, rigor makes regulation harder. But what’s so wrong with cost-benefit analysis?
If there’s any agency that needs some analytical rigor in its work, it’s the FCC—whose underlying legal standard could scarcely be more vague: the “public interest.” As Randy May said in testimony supporting a similar bill in 2011:
The FCC has had a pronounced tendency over the years, and certainly this tendency was evident with respect to the adoption late last year of new net neutrality regulations, to adopt rules without engaging in the type of meaningful analysis required by [a cost-benefit standard]. Certainly, the requirement that the Commission analyze any claimed market failure and consumer harm before adopting new rules should force the FCC to engage in a more rigorous economic analysis than it often does when it simply relies on the indeterminate public interest standard as authority.
The bill would apply to any “rule” as defined by the Administrative Procedure Act, so it would require cost-benefit analysis for most of what the FCC does. But little of what the FTC does is actually “rulemaking” (its consumer protection and antitrust enforcement actions and merger reviews, for example, are not). The FTC does do APA rulemaking under statutes like the Children’s Online Privacy Protection Act, the Do Not Call Act, and the Fair Credit Reporting Act. So, for example, the FTC would face an additional hurdle in the current COPPA rulemaking—where we have raised our own concerns about costs. And if Congress ever does pass data security, data breach notification, or additional privacy legislation, any rulemakings under those statutes would be subject to this cost-benefit requirement. (Many pending privacy/security bills require the FTC to implement the statute with an initial rulemaking, just as the FTC did with COPPA and its other targeted statutes.)
Some privacy groups might experience a kneejerk reaction against anything that makes it harder for the FTC to protect consumers. As that great sage Homer Simpson put it: “If something’s hard to do, then it’s not worth doing.” Right? Wrong. The FTC already, in principle, is supposed to weigh costs against benefits whenever it applies its unfairness jurisdiction—according to the Unfairness Policy Statement the FTC itself developed in 1980, and which Congress codified in 1994.
Cost-benefit analysis is a tool, not an outcome. Broad application of such analysis by regulatory agencies should lead to better policy because it encourages thinking about trade-offs rather than the mythical belief in magic-bullet solutions. That’s why someone as thoughtful about regulation as Cass Sunstein, head of Obama’s Office of Information and Regulatory Affairs, demands applying such careful thinking to all agencies—including independent agencies like the FTC and FCC. Cost-benefit analysis is a formal process for implementing the kind of regulatory humility at the heart of our Declaration of Internet Freedom:
Humility. First, do no harm. No one can anticipate what the future holds and what tradeoffs will accompany it. Don’t meddle in what you don’t understand — and what you can all too easily break, without even seeing what’s been lost. Often, government’s best response is to do nothing. Competition, disruptive technological change, and criticism from civil society tend to resolve problems better, and faster, than government can.
Rule of Law. When you must intervene, start small. Regulation and legislation are broad, inflexible, and prone to capture by incumbent firms and entrenched interests. The best kind of “law” evolves one case at a time, based on simple, economic principles of consumer welfare.
Let’s hope Congress doesn’t drop the cost-benefit analysis issue. The more regulations we slather onto the Internet, the more important it will become to think them through carefully. If Congress can’t implement the kind of cost-benefit requirement that can attract bipartisan support, pressure will continue to build on the Right for even more draconian limits on regulatory agencies. The REINS Act, which passed the House by a 57-vote margin a year ago, would flip the presumption of the Congressional Review Act, passed in 1996 as a product of the Contract with America, such that “significant” rulemakings would actually require Congressional approval (whereas the CRA currently allows a legislative veto).







November 13, 2012
Six Principles for Successful Internet Gambling Regulation
Today the Reason Foundation publishes my policy brief on keys to successful state regulation of Internet gambling.
Thanks to the Department of Justice’s December 2011 memo on the parameters of the Wire Act, states can now license real-money intrastate online casino games. Earlier this year, Nevada became the first state to permit online wagering, and in August it granted the first online operating license to South Point Poker LLC, which was to have launched trials last month. Since the Reason report went to press, South Point has disclosed that its software is still undergoing independent testing, but it hopes to have its site up by the end of the year.
Elsewhere, Delaware has enacted legislation to authorize online gambling under the auspices of the state lottery commission, and Illinois has begun selling lottery tickets online.
It goes without saying that U.S. citizens should be free to gamble online, just as they legally can in casinos throughout the country. The appropriate degree of regulation is subject to debate, but regulation unfortunately remains a necessary element of policy. Lessons about taxation and regulation can be learned from experiences in Europe, as well as from the regulation of brick-and-mortar casinos in the U.S. With a better understanding of usage trends, consumer game choices, and operator cost models, legislators who want to offer constituents the freedom to play online can craft an environment that supports a robust online gaming climate, as opposed to one that drives legitimate operators away.
Regulation should derive from an enlightened approach that respects the responsibility and intelligence of citizens. Internet gambling can be a safe, secure pastime. Overall, the government’s only goal should be to protect users from theft or fraud. Gambling should not be approached as an activity that needs to be controlled or discouraged on the rationale that it is a “sin” (to moralists) or “destructive behavior” (to social utilitarians), and then, hypocritically, politically tolerated so it can be excessively taxed on those same rationales.
Although states will likely differ in the particulars of how they structure license and tax arrangements, a successful climate for legalized Internet gambling is likely to derive from the following fundamental principles:
Create a competitive environment
Consumers are best served when there is ample competition. The greater the competition, the more incentive competing companies have to offer better value—both to win new customers, and to keep existing ones loyal.
The state government itself should not compete for players
As a corollary to the competition guideline, states should not attempt to operate online casinos themselves. They should also be wary of giving incumbent lottery management companies a built-in advantage, such as an automatic license set-aside. Experiences in Europe, where some countries initially granted exclusive Internet poker and other gaming licenses to lottery operators, have shown that such ventures are rarely competitive, are inefficiently run, and do not draw players.
Recognize that intrastate online gambling has a different cost structure than brick-and-mortar casinos
States that do not account for the difference in cost models between brick-and-mortar casinos and their Internet counterparts are setting themselves up for failure. An Internet gaming site can be established with a capital investment that is a fraction of that required to build a land-based casino. But revenues scale down as well, which is one reason an online poker room can support penny-ante games. States must grasp the lower revenue and tax expectations and set up compatible tax and licensing structures.
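To make the scale difference concrete, here is a minimal, purely illustrative sketch of the revenue arithmetic; every parameter below (pot sizes, rake percentages, caps, hands per hour) is a hypothetical assumption of mine, not a figure from the Reason report:

```python
# Illustrative only: all parameters are hypothetical assumptions chosen
# to show why a penny-ante online table earns a small fraction of a
# live casino table's revenue.

def table_revenue_per_hour(avg_pot: float, rake_pct: float,
                           rake_cap: float, hands_per_hour: int) -> float:
    """Hourly rake for one table: a percentage of each pot, capped."""
    return min(avg_pot * rake_pct, rake_cap) * hands_per_hour

online_penny = table_revenue_per_hour(avg_pot=0.50, rake_pct=0.05,
                                      rake_cap=0.05, hands_per_hour=80)
live_casino = table_revenue_per_hour(avg_pot=100.0, rake_pct=0.05,
                                     rake_cap=4.00, hands_per_hour=30)
print(f"penny-ante online table: ${online_penny:.2f}/hour")  # ~$2.00
print(f"live mid-stakes table:   ${live_casino:.2f}/hour")   # ~$120.00
# A tax and licensing regime calibrated to the second number will
# crush businesses built on the first.
```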
Tax operators, not players
At the same time, states should avoid creative new tax structures justified purely by the opinion that gambling is a vice or sin. Players should not be taxed through levies on their accounts or through “hand charges” paid directly to the state, as some European countries have attempted (again without success; players migrated to Internet casinos in countries without such taxes). Winning players are already obliged under law to report their winnings (and are often held accountable through W-2Gs). Anything else is double taxation.
Do not attempt to “protect players from themselves.”
State legislatures tend to have a love/hate relationship with gambling. They covet the tax revenues, yet they believe that they are being “responsible” by imposing artificial restrictions, such as limiting casinos to “riverboats” or out-of-the-way locations, in the belief that this will somehow either mask or temper the popular appeal of gambling. The ineffectiveness of these measures is seen in how such conventions gradually fall by the wayside. Likewise, regulations that infantilize players, such as a since-revised Missouri rule that limited player chip purchases to $200 per hour, have proved ineffective and easy to defeat.
Don’t discount the market as an effective regulator
The Internet itself offers numerous resources in the form of information sites, message boards, and discussion groups where players can exchange information about the quality and reliability of particular sites, the general skill level of players, and any concerns about sites that might be cheating or too tolerant of collusion or poker bots. Independent game analysts have proved adept at identifying problem software and at posting their findings.
The return of Internet gambling is only a matter of time; the consumer demand is there, and the fiscal situation in many states makes the taxation opportunities attractive. While a number of states will resist, for most, the issue should lead to serious debate. The paper, in addition to making the principled case for legalized Internet gambling, recommends policy approaches aimed at creating win-win-win regulatory environments for consumers, game site operators, and state governments.
The full report can be downloaded here.







Matt Hindman on politics and the internet
In the wake of the election, Matt Hindman, author of The Myth of Digital Democracy, analyzes the effect of the internet on electoral politics.
According to Hindman, the internet had a large—but indirect—effect on the 2012 elections. Particularly important was microtargeting to identify supporters and get out the vote, says Hindman. Data and measurement—two things the GOP was once ahead on, but which it has ceded to the Democrats over the past eight years—played a key role in determining the winner of the presidential election, according to Hindman.
Hindman also takes a critical look at the blogosphere, comparing it to the traditional media that some argue it is superseding, and he delineates the respective roles played by Facebook and Twitter within the electoral framework.
Related Links
The Myth of Digital Democracy, by Hindman
Victory Lab, Slate
Data Drove Obama’s Ground Game, The Hill
Mitt Romney’s ORCA Program Couldn’t Stay Afloat, Politico







