Adam Thierer's Blog, page 93

May 31, 2012

Does the EFF favor government regulation of computer manufacturers?

You won’t find the words “government” or “regulation” in this post at EFF’s blog by Micah Lee and Peter Eckersley. They’re just appealing to Apple’s better angels to drop its closed ways. I’ve explained before why that’s a rational thing to do. But will the EFF assure supporters like me that it will never endorse government enforcement of a “bill of rights” like the one Lee and Eckersley propose today?



What I like about EFF is that it is a pro-liberty group, but I hope I’m not wrong in assuming that they view liberty as I do: as a negative concept. They never come out and say it, but it sure sounds like the authors believe that if Apple doesn’t come around to seeing the virtues of openness and provide an escape hatch, then maybe they should be forced to. I get that impression from passages like this:




When technology and phone companies defend the restrictions that they are imposing on their customers, the most frequent defense they offer is that it’s actually in their customers’ interest to be deprived of liberty: “If we let people do what they want with their pocket computers, they will do stupid things with them. You will be safer and happier in our walled compound than you would be outside.”




Imposing on their customers? Seems to me like the vast majority of Apple’s customers are choosing these restrictions. It’s not Apple that thinks its customers are stupid, and is therefore “imposing” a locked phone on them, it’s Lee and Eckersley who seem to have a low regard for customers’ preferences and want to impose an open device on them.



We can of course debate whether customers are being short-sighted in the choice they’re making, whether the benefits of closed platforms outweigh the costs, and whether we have the best of both worlds right now, but you can’t say that customers are being “deprived of their liberty.” What liberty are they being deprived of? Does the EFF believe there is a positive right to mobile computers that run arbitrary code?



I repeat my plea: Can EFF assure us that it will not support government regulation of computer manufacturers?




Published on May 31, 2012 13:04

On the Use of Orwell’s “1984” in Internet Policy Narratives



This Book is NOT about the Net




My latest Forbes column takes a look at Andrew Keen’s latest book, Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us. It’s an interesting book, and a much better one than his previous screed, Cult of the Amateur. Andrew raises valid concerns about the sheer volume of over-sharing taking place online today. As I note in my review:



Keen is on solid ground when outlining the many downsides of over-sharing, beginning with the privacy and reputational consequences for each of us. “Social media is the confessional novel that we are not only all writing but also collectively publishing for everyone else to read,” he says. That can be a problem because the Internet has a very long memory. A youngster’s silly pranks or soul-searching self-revelations may seem like a fun thing to upload when such juvenile antics or angst will win praise (and plenty of pageviews) from teen peers. Your 34-year-old self, however, will likely have a very different view of that same rant, picture, or video. Yet, that content will likely still be around for the world to see when you do reach adulthood.


And Keen offers many other reasons why we should be concerned about a world of over-sharing and “hypervisibility.” The problem is that Keen drowns out these valid concerns by assaulting the reader with layers of over-the-top pessimistic prognostications and apocalyptic rhetoric. In particular, again and again in the book he comes back to George Orwell and his dystopian novel, 1984. Keen insists that some sort of Orwellian catastrophe is set to befall humanity because of social media over-sharing. (See this other Forbes column on Keen’s book, “Why 1984 Is Upon Us,” for a sense of just how far this theme can be pushed.)



Interestingly, Keen is not the only person to raise the specter of Orwell’s “Big Brother” nightmare in an Internet policy tract. Allusions to Orwell’s 1984 and “Big Brother” are increasingly common in Net policy books, blogs, essays and even newspaper articles. Variants on the “Big Brother” theme include: “Corporate Big Brother,” “Big Brother Inc.,” and even “Big Browser.” Similarly, back in 2008, a Public Knowledge analyst likened Apple’s management of applications in its iPhone App Store to a “1984 kind of total control.”



Let’s put an end to this silliness. George Orwell’s 1984 is a book about coercive, totalitarian governmental control in which citizens are forcibly and relentlessly brainwashed by an all-encompassing tyrannical Big Brother. It is a world of propaganda, censorship, historical revisionism, mind control, top-down planning, and total war.



By contrast, the modern digital devices and social media services that Keen and others repeatedly decry as “Orwellian” are nothing of the sort. First, and most obviously, they are purely voluntary. No one forces us to use Apple, Google, or Microsoft devices or any other company’s digital technologies. Likewise, no one forces us to join Facebook, LinkedIn, Google+, Twitter, or other social media services. These companies and countless others compete for our allegiance and try to win our business. If we do choose to use their services, we are also free to later abandon them. These are not “crystal prisons,” as a recent EFF blog post suggested (at least of Apple). These companies can’t coercively keep us in their “walled gardens,” which really aren’t all that “walled” or “closed,” as I have argued here before. And while we are using those services, there is no effort by any of them to brainwash us or encourage us to take up arms against others. They are just looking to make money, not war.



To reiterate, I absolutely understand that there are very legitimate privacy and reputational concerns associated with excessive social media sharing and online living more generally. Again, Keen and others are on strong ground in raising some alarm about the perils of hypervisibility and over-sharing of personal information. But such concerns are in a totally different league than the sort of issues Orwell was raising in 1984. It’s hard for me to take seriously those Net policy authors and analysts who conflate the two.  I hope they stop.




Published on May 31, 2012 08:57

May 30, 2012

New Paper by Bruce Owen on Video Regulation & the Retrans Wars

I’m pleased to report that the Mercatus Center at George Mason University has just released a new white paper on video marketplace regulation and the ongoing “retrans” wars by one of America’s leading media economists, Bruce M. Owen. Owen’s new paper, “Consumer Welfare and TV Program Regulation,” examines the lamentable history of misguided federal interventions into America’s video marketplace. Owen also explores the possibility of deregulating this marketplace via the important new Scalise-DeMint bill, “The Next Generation Television Marketplace Act.” If you’re following these issues, Owen’s paper is must-reading. Here’s the abstract:



Getting rid of obsolete regulation of the broadcast and distribution of video programming is essential to the efficient operation of a market that has the potential to greatly increase the benefits to consumers. Services that increase video program distribution capacity have been delayed and suppressed for many years, and consumer benefits were lost as the Federal Communications Commission (FCC) pursued ill-defined and ephemeral “public interest” and “localism” objectives.

It is past time to stop extending interventions originally intended for old technology to a range of new competitive media. No longer is there any rational public policy basis for a government agency to dictate how much or what content the viewing public can see, any more than there ever has been for printed media. There is no market failure to which the current regulatory framework is responding and no longer any reason for FCC bureaucrats to decide how much of the spectrum should be used for each of many existing and future commercial services. Spectrum reform, along with the repeal of other broadcast programming restrictions contained in the proposed Scalise-DeMint Next Generation Television Marketplace Act, provide a roadmap for the necessary reform. With an adequate supply of tradable rights in spectrum, we will find out how much additional competition is possible among traditional wired and wireless, analog and digital, and fixed and mobile delivery services.


Read the entire thing here [PDF], and you might also be interested in this Forbes column (“Toward a True Free Market in Television Programming”) and these two blog posts of mine (1, 2) on the retrans wars.




Published on May 30, 2012 12:04

May 28, 2012

To Reach 98% Access for Mobile Broadband, Take the FCC Out of Equation

 (Adapted from Bloomberg BNA Daily Report for Executives, May 16th, 2012.)



Two years ago, the Federal Communications Commission’s National Broadband Plan raised alarms about the future of mobile broadband. Given unprecedented increases in consumer demand for new devices and new services, the agency said, network operators would need far more radio frequency assigned to them, and soon. Without additional spectrum, the report noted ominously, mobile networks could grind to a halt, hitting a wall as soon as 2015.



That’s one reason President Obama used last year’s State of the Union address to renew calls for the FCC and the National Telecommunications and Information Administration (NTIA) to take bold action, and to do so quickly. The White House, after all, had set an ambitious goal of making mobile broadband available to 98 percent of all Americans by 2016. To support that objective, the president told the agencies to identify quickly an additional 500 MHz of spectrum for mobile networks.



By auctioning that spectrum to network operators, the president noted, the deficit could be reduced by nearly $10 billion. That way, not only would the Internet economy be accelerated, but taxpayers would actually save money in the process.



A good plan. So how is it working out?



Unfortunately, the short answer is: not well. Speaking this week at the annual meeting of the mobile trade group CTIA, FCC Chairman Julius Genachowski had to acknowledge the sad truth: “the overall amount of spectrum available has not changed, except for steps we’re taking to add new spectrum on the market.”



The tortured grammar (how can “steps we’re taking to add new spectrum” constitute an exception to the statement that the amount of available spectrum “has not changed”?) betrays the reality here—all the FCC Chairman can do is promise more spectrum sometime in the vague future. For now, the FCC and the NTIA have put almost no new spectrum into actual use. Instead, the two agencies have piled up a depressing list of delays, scandals, and wasted opportunities. Consider just a few:




NTIA’s long-overdue report on freeing up government spectrum identified nearly 100 MHz of frequencies that could be reallocated for mobile broadband. But the 20 agencies involved in the study demanded 10 years and nearly $18 billion to vacate the spectrum—and insisted on moving to frequencies that are already assigned to other public or private license holders. Meanwhile, an available 20 MHz of unassigned frequency, left over from the 2009 conversion to digital TV, was actually added to the government’s supply when it was set aside this year for a dedicated public safety network.



After years of wrangling with Congress, the FCC finally won limited authority to hold “voluntary incentive auctions” for spectrum currently licensed to over-the-air television broadcasters. But those auctions will take years to complete, and a decided lack of enthusiasm among broadcasters doesn’t portend well for the outcome. As for reducing the deficit, the agency has reserved the right to disqualify bidders that it believes already hold more spectrum than the agency deems best for stimulating competition, even without any measurable signs of market failure. (Voice, data, and text prices continue to decline, according to the FCC’s own data.)



LightSquared’s efforts to reallocate satellite spectrum for use in a competitive new mobile broadband network were crippled—perhaps fatally—by concerns raised by the Department of Defense and others over potential interference with some global positioning system (GPS) devices. Initial permission to proceed was swiftly revoked—after the company had invested billions. The FCC’s procedural blunders in the LightSquared case ignited a political scandal that continues to distract the agency. A similar effort by Dish Network is now being put through the full set of administrative hurdles, delayed at least until after the election.



Transactions in the secondary spectrum markets—long the only real source of supply for mobile network operators—have received an increasingly frosty reception. Last year, AT&T’s planned merger with T-Mobile USA was scuttled on the basis of dubious antitrust concerns that the FCC backed up with data clumsily rigged by agency staff. Now, the agency has expanded its review of Verizon’s effort to buy spectrum from a consortium of cable companies—spectrum that currently isn’t being used for anything.



After the FCC mandated data roaming agreements even for carriers that hold spectrum in the same markets, Sprint announced it would stop serving customers with its own network in two metropolitan areas, piggybacking instead on AT&T’s brand-new LTE facilities. Sprint’s move underscores concerns that mandatory roaming will reduce carriers’ incentives to invest in infrastructure. According to the FCC, mobile industry investments have reached nearly 15 percent of total revenue in recent years. Of the leading providers, only Sprint decreased its investments during the recession.


Not an impressive showing, to say the least. Meanwhile, in the real world, demand for mobile broadband continues to mushroom. Network usage has increased as much as 8,000% since 2007, when Apple’s iPhone first hit the market. It was followed by an explosion of new devices, operating systems, and software apps from a cottage industry of developers large and small. This remarkable ecosystem is driving lightning-fast adoption of mobile services, especially bandwidth-intense video apps.



The mobile broadband ecosystem is one of the few bright spots in the sour economy, creating jobs and generating tax revenues. Makers of tablet computers, for example, expect to sell over 100 million units this year alone. Tablet users, by the way, already rely on the wildly popular devices for 15 percent of their TV viewing, raising the demand for high-bandwidth video services on existing mobile broadband networks.



Spectrum is the principal fuel of these fast-growing mobile applications. So FCC Chairman Julius Genachowski is right to repeatedly emphasize the catastrophic consequences of an imminent “spectrum crunch.”  The FCC is leading the chorus of doomsayers who believe that without more spectrum—and soon—our  mobile revolution will never reach its full economic, educational, and social potential.



But the government has done nothing to head off that disaster. Instead, the FCC, the NTIA, and the Obama administration continue to make policy choices that do little to get more spectrum into the system. If anything, we’re moving backwards.



Many of these decisions appear to be driven by short-term political imperatives, overriding the worthy goal of making mobile broadband available to all Americans as quickly as possible. The AT&T/T-Mobile deal, for example, was killed simply because the FCC didn’t like the idea of taking even a failing carrier out of the competitive equation. Yet AT&T had committed, had the deal been approved, to deploy mobile broadband to 95 percent of all Americans—nearly meeting the president’s goal in a single stroke.



This is nothing new. The FCC has a very long and very messy history of using its spectrum management powers to shape emerging markets, and to pick winners and losers among new technologies, applications, and providers. Its guiding principle for nearly 100 years has been the so-called “public interest” standard—an undefined and highly malleable policy tool the FCC employs like a bludgeon.



The era of micromanaging the airwaves by federal fiat must now end once and for all. For the first time in a century of federal stewardship, there is almost no inventory of usable spectrum. It has all been allocated to some 50,000 public and private license holders, each the one-time favorite of the FCC. Our spectrum frontier has closed. And it wouldn’t have closed so soon had the FCC not remained so determined to manage a 21st-century resource as if it were still the 19th century.



Technology may come to our rescue, at least in part. Hardware and software for sharing spectrum, switching frequencies, and maximizing the technical properties of different bandwidths continue to be part of the innovation agenda of the mobile industry. But it is unlikely these developments will be enough to keep spectrum supply even slightly ahead of unbridled consumer demand. Many of these technologies, in any case, still require FCC approval to be deployed. That means even more delays.



Saving the mobile ecosystem—and making way for the next generation of mobile innovation—demands a bold new strategy. For starters, it is time to stage an intervention for federal agencies hoarding spectrum. Private licensees who no longer need the spectrum they have must be able to sell their rights quickly in a working market, and be prodded when needed to do so. Buyers need the freedom to repurpose spectrum for new uses.



Also, we need to increase incentives for network operators to continue investing in better and more efficient infrastructure, not throw cold water on them in the name of a vague and largely undefined public interest. The number of competitors isn’t what matters. What matters is the ability of consumers to get what they want at prices that, at least up until now, continue to decline.



In short, we need to take the FCC out of the middle of every transaction and every innovation, where it slows Silicon Valley-paced markets down to Washington speed.



With the appetite of mobile consumers growing more voracious, it is long past time for Congress to take a cold, sober look at our obsolete system for spectrum management and the antiquated agency that can’t stop fussing over it. We need a new system, if not a new FCC. That’s the only way to keep the mobile frontier booming, let alone meet the admirable goal of providing a homestead there for every American.




Published on May 28, 2012 18:05

May 26, 2012

What Kinds of Content Would a Selfish, Unregulated Cable Monopolist Block?

Tim Lee and I are zeroing in on our core disagreement (or, at any rate, one of them) with respect to cable broadband regulation. I argued that certain unpopular price discrimination techniques, such as broadband caps, have efficiency rationales. After some apparent talking past each other, Tim has clarified that he agrees with my argument as far as it goes, but that his real concern is that cable companies will prevent new forms of content from emerging.



Internet video isn’t just a lower-cost source for the same kind of video content you can get from Comcast. Internet video has the potential to offer totally new kinds of video content that wouldn’t be available on Comcast at any price.


As Tim put it in a comment on my last post,



But the point is that the product Comcast delivers is not homogenous. The YouTube video “Charlie Bit My Finger” and the TV program “Mad Men” have both been watched by hundreds of millions of people, but the process of producing and distributing them is radically different. A vertically integrated video platform is unlikely to deliver “Charlie Bit My Finger” to users because the transaction costs of negotiating carriage exceeds the benefits the producers would get. Hence, if vertical integration had prevented the emergence of Internet video a decade ago, 400+ million users would have been deprived of the opportunity to watch it.



I don’t see any way your economic model can account for this kind of difference, which is qualitative rather than merely quantitative. Vertical integration doesn’t just affect “how much” output and who gets paid for it, it also has powerful effects on the kinds of content and services that get produced. And in my view, that’s more important in the long run. By itself, Charlie Bit My FInger isn’t so important, but YouTube the platform is tremendously important.



Let me state at the outset that I agree with Tim that the opportunity to discover new kinds of amateur and professional content, such as viral videos on YouTube, is important. However, the reason I did not include content diversity in my model is that it’s not immediately clear to me how it’s relevant.



Following Tim’s lead, I’m going to try to be as concrete as possible. Let’s suppose that Comcast were an unregulated monopoly with substantial market power in some areas. What kinds of content would it have an incentive to block? The most obvious answer is ESPN. Comcast owns NBC Sports Network, but ESPN is by far the dominant player in sports broadcasting. Comcast carries ESPN 1 through ocho, at considerable expense, because its customers demand ESPN. If Comcast really were an unconstrained monopolist, its incentive would be to drop all ESPN channels, block access to ESPN streaming, and promote its own sports channels as a substitute.



If NBC Sports Network were a good substitute for ESPN, which it clearly is not at the moment, this move would benefit Comcast. (Counterintuitively, if it were a perfect substitute for ESPN, it would necessarily also benefit Comcast customers, who would be spared a double monopoly problem.) Comcast would gain at least ESPN’s old monopoly rent, possibly more.



OK, all this arguably sounds bad, at least if NBC Sports Network is not a really good substitute for ESPN. But this clearly is not Comcast’s situation: without any regulatory interference, the company recently expanded consumer access to ESPN on the iPad, which I enjoy watching from time to time. The bottom line is that even though Comcast might have an incentive, under some circumstances, to block certain kinds of content for which it produces a close substitute, I think Tim would agree that this is not the kind of scenario that is truly worrisome.



What worries Tim is Comcast blocking YouTube or newer, heretofore undiscovered forms of content. But here’s the rub: Comcast benefits from the existence of YouTube. If users get consumer surplus from watching YouTube videos, that’s great because Comcast can raise its price to try to capture some of that consumer surplus. YouTube increases the demand for what Comcast already owns, its cable infrastructure that users must go through to get online. Similarly, if some nascent form of online content starts taking off, that will also benefit Comcast. And if the new form of content is bandwidth-intensive, Comcast will have an incentive to revisit its price discrimination scheme in order to ensure that it is capturing as much of the surplus as possible. There is no realistic scenario I am aware of in which Comcast has an incentive to block or inhibit altogether new types of content.



For all the worry about “gatekeepers” on the Internet, gatekeepers do not behave malevolently or randomly. As I try to underscore in all my posts on broadband, digital economics are frequently counterintuitive, so maybe it’s not surprising that some on the Internet perceive gatekeepers as evil or unpredictable. Furthermore, I have no doubt that there are some instances where the effect of gatekeepers is negative. But I think that if we temper our assessment with realistic assumptions about incentives, the case for regulating the cable industry on content diversity grounds is pretty weak.




Published on May 26, 2012 10:40

May 25, 2012

Lafayette Muni: Phone, Cable and Internet at Only $45,000 a Day



Lafayette, La., like a number of U.S. municipalities, is facing a recession-driven budget crunch, largely due to health care and retirement costs. Unlike most municipalities, however, Lafayette faces a $140-million reckoning in the form of a municipal fiber-to-the-home (FTTH) system.



After an auditor’s report raised flags about the extent to which the city has been dipping into its reserve savings, Lorrie Toups, Lafayette Consolidated Government’s chief financial officer, said $5 million in reductions might be needed to maintain a status quo budget.



This might not be so bad save for the loans that start to come due in 2013 on Lafayette’s municipal FTTH system, which, according to the auditor’s report, is costing the city $45,000 a day. Thus far, the city has issued the full $125 million in bonds authorized for construction and operation of LUS Fiber. In addition, LUS Fiber has borrowed $15 million from its parent, Lafayette Utilities System, the city’s municipally owned water and power utility. (One reason LUS is so flush is that in 2009 it received $11.6 million as part of the Obama stimulus, ostensibly to fund a smart grid electricity system.)



Andrew Moylan at the National Taxpayers Union picked up the item from the Lafayette Advertiser:


In sum, LUS Fiber is losing boatloads of money and exacerbating an already-difficult budget situation in the area. There is a silver lining though! According to LUS’s own numbers, the project might break even by the time 2014 or 2015 roll around. Or maybe not…you know, whatever.



The Advertiser reports that Toups defended LUS Fiber as a start-up enterprise that “budgeted for losses and expected to incur them in its early years.”



That’s true—to an extent. LUS launched in 2009 and is only halfway through its fourth year of operation. The original feasibility report on the Lafayette FTTH system, produced by CCG Consulting in 2004, projected net losses of $7.1 million and $4.9 million in years two and three of operation. The recent audit, however, showed LUS Fiber ended the 2010 fiscal year, its second year of operation, with a net loss of $12.3 million. In 2011, its third year, LUS Fiber reported a loss of $16.5 million—more than three times the deficit projected in the business plan.
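Those figures are easy to check with some back-of-the-envelope arithmetic. The sketch below uses only the numbers cited above; tying the auditor's $45,000-a-day figure to the 2011 annual loss is my own arithmetic, offered as a plausibility check rather than the auditor's actual methodology.

```python
# Back-of-the-envelope check on the LUS Fiber figures cited above.
# Dollar amounts are in millions unless noted otherwise.

projected_loss_year3 = 4.9    # CCG Consulting's 2004 plan, year three
actual_loss_year3 = 16.5      # audited net loss for 2011, year three

# "more than three times the deficit projected in the business plan"
multiple = actual_loss_year3 / projected_loss_year3
print(f"Year-three loss vs. projection: {multiple:.1f}x")   # ~3.4x

# The $45,000-a-day figure is roughly the 2011 annual loss
# spread across a calendar year.
daily_cost = actual_loss_year3 * 1_000_000 / 365
print(f"Implied daily cost: ${daily_cost:,.0f}")            # ~$45,205
```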



Of course, this is exactly what I warned about when I analyzed the CCG plan back in 2005.



Lafayette is now in the running to be the country’s biggest municipal broadband failure—a fact made worse by its diving in headfirst despite the documented financial messes of the cities that went before it.

 




Published on May 25, 2012 12:17

May 24, 2012

Follow-up Post in Symposium on “Competition in Online Search”

Boy, the symposium on “Competition in Online Search” that Daniel Sokol threw together this week over at the Antitrust & Competition Policy Blog could not have been better timed! As most of you know, the European Commission stepped up its attack on Google this week and all signs are that a lot more antitrust activity is on the way on this front.



Anyway, all the entries in the symposium are in and a few rebuttals have followed, including one by me. In my response, I took on Frank Pasquale and Eric Clemons, who were the most aggressive in their calls for search regulation. I thought I would just re-post it here to complement my earlier entry in the symposium on Monday.



 _______________



I enjoyed the entries in this symposium and learned something from each of them. I have a few things to say in response to both Frank Pasquale and Eric Clemons and their sweeping indictments of not just Google but seemingly the entire modern information economy.



Everywhere they look, it seems, Pasquale and Clemons see villainy. Someone completely alien to the modern online ecosystem would read Pasquale’s description of it — “digital feudalism,” “absolute sovereignty,” “opaque technologies,” “leaving users in the dark,” etc., etc. — and likely conclude that a catastrophe had befallen modern man. Of course, Pasquale’s narrative is missing any reference to the unparalleled expansion in the stock of knowledge and human choices that has been made possible by Google and the other companies he castigates (Apple, Facebook, Twitter, and Amazon). Meanwhile, Clemons wants to group Google in with supposed Wall Street robber barons as well as characters from Sinclair’s “The Jungle.” It’s all a bit much.



Regardless, what about those high-tech feudal lords, especially Google? Can we keep their market power in check without extreme steps? It goes without saying that neither Pasquale nor Clemons places much faith in the sort of dynamic, disruptive competition and creative destruction (which I documented in my entry in the symposium) as an effective check on market behavior. But their skepticism goes well beyond that and transcends traditional antitrust analysis. They seem to assert that we just can’t trust large digital intermediaries at all, primarily because they are profit-maximizers. Clemons suggests that paid search shouldn’t even be permitted, which is a bit like saying ad-supported, for-profit newspapers should have been forbidden or regulated long ago.



Their skepticism about concentrated power fades quickly, however, when it’s the concentrated power of government that will be calling the shots in the digital economy. Regulators, Pasquale says, will be able to devise forms of redress that “help[] us confront issues of discrimination, malfeasance, nonfeasance, and technological due process in a rapidly changing online environment.” He suggests that transparency mandates, external regulatory oversight, and something akin to a mandatory right of reply for search results are all needed. Meanwhile, Clemons wants full-blown structural separation of Google into three or four different firms.



Pasquale and Clemons don’t bother addressing the trade-offs associated with their proposals. They apparently want us to imagine that these proposed remedies are innocuous and costless. They also don’t seem to give much weight to the critiques set forth by Marvin Ammori, James Grimmelmann, or Dan Crane regarding the incoherent and potentially counter-productive nature of “search neutrality” remedies. Clemons also doesn’t seem at all worried about the forgone benefits of vertical integration, even though those benefits can be substantial in the field of search. The rich content and specialized integrated services that Google has been able to freely offer consumers deserve greater consideration before imposing the nuclear option of structural separation.



That last point is essential. We can’t divorce this discussion from the real-world evidence of just how well consumers have been served by the search market today. That begins with the fact that consumers don’t pay a penny for the cornucopia of content or expanding universe of constantly innovating services that they enjoy currently. So, to repeat what I said in my initial entry, the traditional goals of public utility regulation — universal service, price competition, and quality service — are already being achieved quite nicely without intervention. That makes the case for search regulation even harder to sustain.



Finally, let’s just talk about the practicality of all the regulation they advocate. Pasquale asks: “Is it too much to ask for some entity outside Google to be able to ‘look under the hood’ and understand what is going on in plausibly contested scenarios?” Well, perhaps it is! The respected blog SearchEngineLand has estimated that approximately 34,000 searches are conducted per second (or 2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month). That’s a lot of activity for regulators to keep tabs on. And Google’s search algorithm is constantly being tweaked—more than 500 changes each year—to offer websurfers improved results and enhanced security against spammers and other malicious activity. Having regulators constantly “looking under the hood” and trying to adjust those results via a political process would likely slow innovation to a crawl. It would also open up the process to a great deal of gaming by other parties — including spammers and scammers. Moreover, the dangers of political gaming of search should not be discounted. Once policymakers have the sort of authority over search that Pasquale and Clemons recommend, the danger of political influence and regulatory shenanigans both grow exponentially.
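For what it’s worth, those volume figures hold together. A quick sanity check of the unit conversions, using SearchEngineLand’s per-second estimate as the base (the small mismatches against the quoted numbers are just rounding):

```python
# Scale the ~34,000 searches/second estimate up to the larger
# intervals quoted in the text.
per_second = 34_000
per_minute = per_second * 60     # 2,040,000 (~2 million)
per_hour   = per_minute * 60     # 122,400,000 (quoted as 121 million)
per_day    = per_hour * 24       # ~2.9 billion (quoted as 3 billion)
per_month  = per_day * 30        # ~88 billion

for label, n in [("minute", per_minute), ("hour", per_hour),
                 ("day", per_day), ("month", per_month)]:
    print(f"Searches per {label}: {n:,}")
```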



In the end, I believe the combination of public pressure, social norms, and, most importantly, ongoing innovation and creative destruction can do a better job of protecting consumer welfare than the sort of sweeping regulatory interventions that Pasquale and Clemons advocate. We should be patient and see how this marketplace develops instead of engaging in rash interventions.




Published on May 24, 2012 06:33

May 23, 2012

The ACLU of Washington State is Looking

May 2012

TECHNOLOGY AND LIBERTY DIRECTOR

(Full-time)



The ACLU of Washington (ACLU-WA) seeks a self-motivated public policy advocate to lead its work to protect civil liberties in the face of society’s increasingly advanced technologies. The ACLU-WA’s staff of 30 employees and numerous volunteers work in a fast-paced, friendly and professional office in downtown Seattle.



Using strategies of education, policy analysis, legislative advocacy, coalition building, and legal efforts, the Technology and Liberty Director advances a civil liberties perspective on such issues as data aggregation, surveillance technologies, and online free speech. The Technology and Liberty Director works closely and collaboratively with senior ACLU staff, and has significant interaction with the national ACLU Speech, Privacy and Technology Project. The position reports to the Executive Director through the Deputy Director.



Responsibilities: Regular duties will include the following work:



Engage in both technical and policy research to analyze technology-related programs and proposals by government and industry. In collaboration with senior staff, develop positions and strategies to respond to civil liberties and technology issues.

Provide expertise to policymakers, the press, and coalition partners.

Forge relationships with technology experts, public interest groups, government officials, community stakeholders, and academics to engage them in our work.

In collaboration with the Legislative Director, advocate on selected technology issues before the state legislature, state or local agencies, and other policymakers.

Engage in outreach and educational activities through written materials, speaking engagements, media, and visits with ACLU supporters.

Maintain positive working relationships with relevant national ACLU staff, and collaborate on selected efforts.

Recruit and supervise interns and volunteers working on technology policy.

Assist in other activities as assigned. Help maintain a positive, respectful, welcoming, and professional work environment for employees, interns, and volunteers.

Qualifications:



A law degree or another relevant advanced degree.

Experience in legislative advocacy and policy analysis in the areas of privacy, technology, or other related fields.

Demonstrated skills as an articulate, effective public advocate.

Excellent analysis, writing, and research skills; prior experience simplifying and communicating technical issues to non-technical audiences.

Strong project management, organization, and collaboration skills, with attention to detail and the ability to meet deadlines.

Strong commitment to and understanding of civil liberties and civil rights.

Ability to work cooperatively on a variety of projects with a broad range of individuals and community organizations.

Ability to work independently and under pressure, to attend occasional evening meetings, and sometimes to work long or irregular hours.

A commitment to diversity; a personal approach that values the individual and respects differences of race, ethnicity, age, gender identity and expression, sexual orientation, religion, ability, and socio-economic circumstance.

Compensation:



Salary is based on experience and qualifications. Benefits include three weeks of vacation to start, medical and disability insurance, matching 401(k) plan and bus pass.



Application procedure:
To apply, email a letter of application and resume to Jobs@aclu-wa.org and include in the subject line of the email your last name and “Technology & Liberty Director.” In your letter, please indicate where you learned of the posting. Applications will be accepted until the position is filled, at which time the posting will be removed from our website at http://www.aclu-wa.org/jobs-internships.



The ACLU is an affirmative action/equal opportunity employer and encourages qualified individuals of every race, creed, ethnicity, disability, sexual orientation, and gender identity and expression to apply.




Published on May 23, 2012 09:54

May 22, 2012

Michael Burstein on information exchange and IP law


On the podcast this week, Michael Burstein, assistant professor of law at the Benjamin N. Cardozo School of Law, discusses his paper, “Exchanging Information Without Intellectual Property.” Burstein begins by discussing the theories behind IP law and why it exists. According to Burstein, IP law incentivizes the creation of intellectual works because it protects the creator’s investment by preventing others from copying the work and obtaining a benefit without any effort. He then goes on to discuss the critiques of these theories, the costs involved in protecting intellectual works, and the effect IP law has on innovation. Burstein then discusses practical examples from the pharmaceutical and biotech industries in which actors structure the flow of information in a way that is reciprocal but requires only a small role for IP law. According to Burstein, norms protect these intellectual works. He believes these norms allow disclosure of intellectual works in stages and facilitate a trusting relationship between two firms. Burstein ends the discussion by addressing policy conclusions surrounding IP law and what role it should play in information exchange.





Related Links

“Exchanging Information Without Intellectual Property,” by Burstein

“Emerging Markets for High Tech Ideas,” Small Business Trends

“Frischmann Predicts Prometheus,” Concurring Opinions

To keep the conversation around this episode in one place, we’d like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on May 22, 2012 11:00

May 21, 2012

Entry for Antitrust Policy Blog Symposium on “Competition in Online Search”

It’s my great pleasure this week to be participating in a two-day symposium on “Competition in Online Search” being hosted by the Antitrust & Competition Policy Blog. Daniel Sokol, Associate Professor of Law at the University of Florida Levin College of Law, was kind enough to invite me to join the fun. Professor Sokol is the editor of the Antitrust & Competition Policy Blog. Others participating in this symposium include James Grimmelmann (New York Law School); Eugene Volokh (UCLA); Marvin Ammori (Stanford Law); Mark Jamison (Univ. of Florida); Eric Clemons (Wharton School); Dan Crane (Michigan Law); and Marina Lao and Frank Pasquale (both of Seton Hall); among others.



My entry is now live. In it, I focus on how dynamically competitive and innovative the digital economy has been over the past 15 years and question the need for intervention at this time, especially of the “public utility” variety. I’ve re-posted my entry below, but make sure to head over to the Antitrust & Competition Policy Blog to read all the contributions to this excellent symposium.



_______________



If you blink your eyes in the Information Age you can miss revolutions. Let’s take a quick walk back through our turbulent recent history:




Just five years ago, MySpace dominated social networking and had The Guardian wondering, “Will MySpace Ever Lose Its Monopoly?” A short time later, MySpace lost its early lead and became a major liability for owner Rupert Murdoch. Murdoch paid $580 million for MySpace in 2005 only to sell it for $35 million in June 2011.
Just six to eight years ago, the mobile landscape was ruled by Palm, BlackBerry, Nokia, and Motorola. Palm is now all but dead and BlackBerry is trying to stay afloat while Nokia and Motorola had to cut deals with Microsoft and Google respectively in order to survive.
Just 10 years ago, AOL’s hegemony in online services was thought to be unassailable, especially after its merger with Time Warner. But the merger quickly went off the rails and AOL’s online “dominance” quickly evaporated. Losses grew to over $100 billion and the entire deal unraveled within just a few years as AOL’s old dial-up, walled-garden business model had been completely superseded by broadband and the new Web 2.0 world.
Just 12 years ago, Yahoo! and AltaVista were the go-to companies for online search. No one turns to them first today when they go looking for information online.
And just 15 years ago, Microsoft was on everyone’s mind. Today, the firm is struggling to remain part of cocktail party chatter when the topic of modern Tech Titans is discussed. For example, a recent Fast Company cover story on “The Great Tech War of 2012” only mentioned Microsoft in passing. The rise of search, social media, and cloud computing represented disruptive shifts that Microsoft wasn’t prepared for.


The graveyard of tech titans is littered with the names of many other once-mighty giants. Schumpeter’s “gales of creative destruction” have rarely blown harder through any sector of our modern economy. And so now we come to the question of Google’s dominance in the field of search. Should we be worried? Some say yes, and the rhetoric of public utilities and essential facilities is increasingly creeping into policy discussions about the Internet, including the search layer. A growing cabal of cyberlaw experts — Tim Wu, Dawn Nunziato, and Frank Pasquale, among many others — argue that some sort of regulation is needed.



But the recent history I recounted above makes it clear that patience and humility are the more sensible policy prescriptions. Calls for regulation or public utility classification are particularly premature and problematic. As I argued in my recent white paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” search and social media platforms do not resemble traditional public utilities and there are good reasons why policymakers should avoid a rush to regulate them as such.



First, there has not been any serious showing of monopoly power in the search or social media sectors in which Google operates. It’s also impossible to find any way in which consumer welfare is currently being harmed by Google. All of its products are free and constantly evolving. New technologies and rivals continue to emerge. DuckDuckGo, for example, differentiates itself in search by stressing privacy above all else. Meanwhile, the contours of these markets are constantly evolving in a dynamic way, making market definition challenging. Is Facebook a search company? Signs are good that it could soon become a formidable one.



These market-definition considerations are especially important because of how long it takes to formulate regulations or impose antitrust remedies. In a market that changes this rapidly, taking several months or even years to complete rulemakings or litigate remedies will almost certainly mean that most rules will be out of date by the time they are implemented. And once implemented, there will be little incentive to rework them as rapidly as the market contours change. Regulation could retard innovation in search and social media markets by denying firms the ability to evolve or innovate across pre-established, artificial market boundaries.

Second, treating these digital services as regulated utilities would harm consumer welfare because public utility regulation has traditionally been the archenemy of innovation and competition. Public utility regulation has a long, lamentable history that has been well-documented by economists and political scientists. That’s why it is usually considered the last resort, not the first option. Moreover, the traditional goals of public utility regulation — universal service, price competition, and quality service — are already being achieved without intervention. And as Marvin Ammori and Luke Pelican outline in a new study, all the proposed antitrust remedies for dealing with Google in particular also have serious downsides. Almost all the cures would be worse than whatever disease critics hope to cure with antitrust intervention.



Third, treating today’s leading search and social media providers as digital essential facilities threatens to convert “natural monopoly” or “essential facility” claims into self-fulfilling prophecies. The very act of imposing utility obligations on a particular platform or company tends to lock it in as the preferred or only choice in its sector. Public utility regulation also shelters a utility from competition once it is enshrined as such. Also, by forcing standardization or a common platform, regulation can erect de jure or de facto barriers to entry that restrict beneficial innovation and the disruption of market leaders.



Fourth, because social media are fundamentally tied up with the production and dissemination of speech and expression, First Amendment values are at stake, warranting heightened constitutional scrutiny of proposals for regulation. As Eugene Volokh noted in a recent white paper, social media providers should possess the editorial discretion to determine how their platforms are configured and what can appear on them.



Will Google meet the same fate as earlier Tech Titans? It’s impossible to know. But with the wrecking ball of creative digital destruction doing such a fine job of keeping competition and innovation thriving, we’d be smart to reject heavy-handed, top-down regulation of such a dynamic segment of our economy at this time.




Published on May 21, 2012 11:54
