Adam Thierer's Blog, page 105

January 17, 2012

We Need More "Big Money" in Politics

As if my earlier essay on why "We Need More Attack Ads in Political Campaigns" wasn't incendiary enough, allow me to heap praise on this outstanding new op-ed by Washington Post columnist Richard Cohen, "In Defense of Big Money in Politics." Few things get me more steamed than when Democrats and Republicans decry "big money" in politics and claim we need to aggressively clamp down on it. What's even more insulting is when they say this is a smart way to encourage 3rd party political candidates and movements.



What's really going on here is simple protectionism. The reason that today's politicians want to regulate cash in campaigns is that the two parties already own the system. Believe me, as someone who has NEVER voted for a politician from either of the two leading parties, I would love for it to be the case that clamping down on campaign spending actually helped 3rd party candidates. Richard Cohen's column explains why that certainly wasn't the case with Eugene McCarthy's historic 1968 challenge to Lyndon Johnson, which was fueled by "big money" contributions from a handful of major donors. Here's how Cohen begins his piece:



Sheldon Adelson is supposedly a bad man. The gambling mogul gave $5 million to a Newt Gingrich-loving super PAC and this enabled Gingrich to maul Mitt Romney — a touch of opinion here — who had it coming anyway. Adelson is a good friend of Gingrich and a major player in Israeli politics. He owns a newspaper in Israel and supports politicians so far to the right I have to wonder if they are even Jewish. This is Sheldon Adelson, supposedly a bad man. But what about Howard Stein?

The late chairman of the Dreyfus Corp. was a wealthy man but, unlike Adelson, a liberal Democrat. Stein joined with some other rich men — including Martin Peretz, the one-time publisher of the New Republic; Stewart Mott, a GM heir; and Arnold Hiatt of Stride Rite Shoes — to provide about $1.5 million for Eugene McCarthy's 1968 challenge to Lyndon Johnson. Stein and his colleagues did not raise this money in itsy-bitsy donations but by chipping in large amounts themselves. Peretz told me he kicked in $30,000. That was a huge amount of money at the time.


As Cohen points out, while many campaign regulation fans today "pooh-pooh the argument that money is speech, they cannot deny that when McCarthy talked — when he had the cash for TV time or to set up storefront headquarters — that was political speech at the highest decibel." Amen, brother. McCarthy changed history. His was easily the most important 3rd party run of the past half century, and one of the most important in American history.



If we want more serious 3rd party candidates, then we need more cash in politics. Lots more. Unlimited, direct-to-candidate contributions. Let's have Robert Redford and his Hollywood buddies open their checkbooks and fund a serious run by the Green Party, or let wealthy industrialists fund a Libertarian Party candidate. Or whatever else.



But won't that be "corrupting," the skeptics ask? We can handle that: Just demand transparency. Force them to tell us where the money is coming from. We already have laws that do that.



In the meantime, I would really appreciate it if all those politicians and academics who say they are just trying to help out independents like me would just quit it. They are not helping us get more 3rd party voices into the American political system; they're making it harder than ever for them to even exist.




Published on January 17, 2012 16:17

Michael Weinberg on 3D Printing

http://surprisinglyfree.com/wp-content/uploads/mike.jpg

On the podcast this week, Michael Weinberg, staff attorney with Public Knowledge, discusses his white paper entitled, It Will Be Awesome If They Don't Screw This Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology. The discussion begins with Weinberg describing 3D printing: the process of printing three-dimensional objects layer-by-layer from a digital file on a computer. According to Weinberg, designs for printing can be created in programs like AutoCAD or captured with 3D scanners that scan existing objects, making it possible to print a 3D replica. He goes on to explain why he thinks 3D printing, coupled with the Internet, is a disruptive technology. Finally, Weinberg discusses the thesis of his paper: he anticipates that industries facing potential disruption will not compete with or adapt to this technology, but rather will seek legal protection through IP law to preemptively regulate 3D printing.





Related Links

"It Will Be Awesome If They Don't Screw This Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology", by Weinberg
"Difference Engine: Making it", The Economist
Thingverse.com
"Anthony Atala: Printing a human kidney", TED

To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on January 17, 2012 10:00

Why Google's Biggest Problem with 'Search Plus Your World' Isn't Antitrust

Over at TIME.com, I write that while some claim that Google Search Plus Your World violates antitrust laws, it likely doesn't. But I note that Google does have a big problem on its hands: market reaction.




So if antitrust is not Google's main concern, what is? It's that user reaction to SPYW and other recent moves may invite the very switching and competitive entry that would have to be impossible for monopoly to hold. … Users, however, may not wait for the company to get it right. They can and will switch. And sensing a weakness, new competitors may well enter the search space. The market, therefore, will discipline Google faster than any antitrust action could.




Read the whole thing here.




Published on January 17, 2012 06:03

January 16, 2012

On Incentive Auctions, the FCC Reaps what it Sowed

After three years of politicking, it now looks like Congress may actually give the FCC authority to conduct incentive auctions for mobile spectrum, and soon.  That, at least, is what the FCC seems to think.



At CES last week, FCC Chairman Julius Genachowski largely repeated the speech he has now given three years in a row.  But there was a subtle twist this time, one echoed by comments from Wireless Bureau Chief Rick Kaplan at a separate panel.



Instead of simply warning of a spectrum crunch and touting the benefits of the incentive auction idea, the Chairman took aim at a House Republican bill that would authorize the auctions but limit the agency's "flexibility" in designing and conducting them. "My message on incentive auctions today is simple," he said, "we need to get it done now, and we need to get it done right."



By "done right," Genachowski means without meaningful limits on how the agency constructs or oversees the auctions. The Chairman's attitude now seems to be that if the FCC can't have complete freedom, it would rather not have incentive auctions at all. That's a strange stance given the energy the FCC has expended making the case that such auctions are critical for mobile broadband users.



What's the fight about?  The House bill would prohibit the agency from introducing bidder qualifications based on external factors, such as current spectrum holdings.  The FCC could not, in other words, directly or indirectly exclude carriers who already have significant spectrum licenses.  The agency would also be limited in its ability to attach special conditions to new licenses issued as part of particular auctions.  An amendment by Rep. Marsha Blackburn (R-Tenn.) that was approved last month would specifically forbid special net neutrality conditions.



This may sound like an inside-the-beltway spat, but the stakes are in fact quite high, going right to the core of what role the FCC should play in 21st century communications.  For the Chairman, these limits rise to the level of an existential crisis, casting doubt on the agency's very nature as an expert regulator.  Congress should, he argued, authorize the auctions and let the agency's staff of legal, economic and technical experts decide how best to organize them.  Tying the FCC's hands by statute, he said, is "a mistake":



because it preempts an expert agency process that's fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders on an open record. The proposals on the table to restrict the FCC's flexibility in its area of technical expertise would be a significant departure from precedent.



 Spectrum- and auction-related issues pose hard questions.  I believe they should be answered based on the evidence, on an open record, as close as possible to the time when they need to be made.



House leaders see it very differently. They see an agency that badly bungled the recent 700 MHz auctions—the last major auctions the FCC has conducted. As a pre-condition to bidding, for example, Google demanded "open access" conditions, which the FCC belatedly agreed to add. Instead of answering "hard" questions based on "facts" and "data" in an open record, the agency simply gave in to pressure from a late and well-connected bidder.



There was no expertise applied here. And the result, as I've noted elsewhere, was that bids for the C block (where the open access conditions were applied) were discounted to the tune of billions of dollars that would otherwise have gone to the Treasury.



Verizon won the auction, but now faces uncertain application of the conditions, which differ materially from the open Internet rules the agency passed last year in the net neutrality rulemaking.  Meanwhile, the mobile marketplace is a very different place than it was when Google first stepped in, dominated by device  and operating system providers and proprietary app stores that didn't even exist in 2008.



Larger bidders, meanwhile, wary of the vaguely-defined new conditions, shifted to the A and B blocks, pushing out smaller carriers. That was precisely the opposite of what the agency intended in designing the auctions in the first place.



Politically-driven choices on how the D block should be licensed for public safety turned out even worse.  That auction could not find a bidder willing to live with the FCC's conditions.  The spectrum sits unused, even as public safety still has no interoperable network more than a decade after 9/11.



If that's what an "expert" agency does with its "flexibility," then it's no wonder House leaders are skeptical. "Flexibility" should mean maximizing revenues and ensuring that limited and critical spectrum assets are licensed to those who can put them to the best and highest use. Not trying to stack the deck in favor of some bidders–and still getting it wrong.



Nothing has changed.  The agency still seems determined to use its auction authority to shape mobile broadband competition in its own sclerotic image.  It wants to create a competitive market among carriers even as competition is increasingly driven by other players in the mobile ecosystem.  It wants a return to the failed practice of unbundling to create an abundance of phantom competitors who have no assets and no understanding of communications, created by financial engineers who recognize a good regulatory arbitrage when they see one.



Not so, says the Chairman. Our view of the market is deeply analytical, the result of thorough technical and economic analysis conducted by the bureaus. His evidence? The agency's annual competition reports. Or so he told CEA's Gary Shapiro following his speech, when asked for proof that the agency understands the markets with which it tinkers.



But the competition reports are hardly models of lucid analysis.  They are constrained by the bureaus' crabbed view of the market, a view required by the statutory requirements that generate the reports.  They continue to emphasize obsolete proxies for measuring competition, including HHIs and the spectrum screen, even as actual data on market conditions is relegated to the back of the report.  For the last two years, the mobile competition report pointedly refused to say whether the agency thought the market was competitive or not.
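
For readers who don't track antitrust arcana, the HHI (Herfindahl-Hirschman Index) is simply the sum of the squared market shares of every firm in a market; the figures below are a hypothetical illustration, not the FCC's own numbers:

\[ \mathrm{HHI} \;=\; \sum_{i=1}^{N} s_i^{2} \]

where s_i is firm i's market share in percentage points. A stylized market of four equal carriers, each holding 25 percent of subscribers, would score 4 × 25² = 2,500, roughly the level at which the federal merger guidelines begin to treat a market as "highly concentrated."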



Yet the agency deliberately forfeited even the limited value of the competition reports by rejecting out of hand the AT&T/T-Mobile USA deal. Rather than focusing on declining prices for voice, text, and data over the last ten years, or the regulatory constraints that make mergers necessary to expand coverage and service (both amply documented in the reports), the staff report on the T-Mobile deal largely swallowed the simplistic mantra of the deal's opponents that taking out one "national" carrier was per se anti-competitive. The report's principal objection seemed to be that any horizontal merger of two companies would result in one fewer competitor. True, but irrelevant.



There was no sign of an expert regulator at work here; nothing to suggest an analysis that was "fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders." The analysis started with a conclusion and worked backwards. And when even the old formulas didn't come out right, at least in the case of the spectrum screen, the books were simply cooked until they did.



Well, that's all water under the bridge in 2012.  "This is an incredibly fast-moving space," the Chairman said of the need for flexibility, "and any policy that pre-judges or predicts the future runs a great risk of unintended and unfortunate consequences."



That's a good point. But it's also a perfect description of last year's Net Neutrality rulemaking. During a year of proceedings, the FCC turned up next to no evidence of an actual problem, let alone a market failure. Still, the agency stuck doggedly to its first principles, insisting after-the-fact that "prophylactic" rules limiting network management technologies of the future were essential to maintaining a "level playing field." Never mind that the playing field showed no signs of imbalance, or that it continued to evolve dramatically (iPhone, iPad, Android and Verizon's LTE introduction, for starters) as deliberations dragged on in a regulatory vacuum.



One "unintended and unfortunate consequence" of that and similar missteps has already become clear—Congress doesn't trust the Chairman to follow the law.



Which is, I suspect, the main reason incentive auction authority hasn't yet passed, even though nearly everyone agrees it's the best short-term solution to a spectrum crisis of the government's own making.  And why, when it does come, there are likely to be plenty of strings attached.



Which is too bad.  Because, if the FCC really acted as the expert agency it is chartered to be, Genachowski would be right about the value of flexibility.




Published on January 16, 2012 17:35

January 12, 2012

The Kids Are Alright

My latest weekly Forbes column asks, "Why Do We Always Sell the Next Generation Short?" and it explores the dynamics that lead many parents and policymakers to perpetually write off younger generations.  As the late journalism professor Margaret A. Blanchard once observed: "[P]arents and grandparents who lead the efforts to cleanse today's society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation's adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children."



What explains this phenomenon? In my essay, I argue that it comes down to a combination of "juvenoia" and hyper-nostalgia. University of New Hampshire sociologist David Finkelhor defines juvenoia as "exaggerated anxiety about the influence of social change on children and youth." Once you combine such panicky juvenoia about new media and youth culture with a nostalgic view of the past that says the "good ol' days" are behind us, you get the common generational claim that the current good-for-nothing generation and their new-fangled gadgets and culture are steering us straight into the moral abyss.



Instead of panic and hyper-pessimism, I believe that the more sensible approach is patient parental engagement and mentoring. I argue that "quite often, the best approach to learning more about our children's culture is to immerse ourselves in it. Should we worry about the content found in some games, music, or videos? Perhaps we should. Sitting down and consuming that content with our kids and talking to them about it might be the best way to better understand their culture and then mentor them accordingly."



Anyway, read my entire essay over at Forbes. And, on a related note, I highly recommend this new piece by Perri Klass, M.D. in The New York Times: "Seeing Social Media as Adolescent Portal More Than Pitfall."  It adopts a similar approach.




Published on January 12, 2012 11:13

Feds Should Stay Out of Google/Twitter Social Search Spat

By Berin Szoka, Geoffrey Manne & Ryan Radia



As has become customary with just about every new product announcement by Google these days, the company's introduction on Tuesday of its new "Search, plus Your World" (SPYW) program, which aims to incorporate a user's Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should, however, be the two key prior questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?



The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it's not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it's hard to see what there is to argue about yet.



Let's consider both questions in turn.



Should Google Have Included Other Services Prior to SPYW's Launch?

Google says it's happy to add non-Google content to SPYW but, as Google fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:




Facebook and Twitter and other services, basically, their terms of service don't allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we'd look at designing things to see how it would work.




In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:




"To start with, we would have a conversation with them," Schmidt said, about settling any differences.


I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google's regular crawling, allowed by both Twitter and Facebook, was a form of "automated conversation" giving Google material it could use.


"Anything we do with companies like that, it's always better to have a conversation," Schmidt said.




MG Siegler calls this "doublespeak" and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn't need permission to include public data in SPYW:




Both Twitter and Facebook have data that is available to the public. It's data that Google crawls. It's data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.


It's not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the "People and Places" box, for example.




It's certainly true that if you search Google for "site:twitter.com" or "site:facebook.com," you'll get billions of search results from publicly-available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn't available in real-time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a "data firehose" so that Google could display tweets in real-time under the "personalized search" program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.



At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn't could Google have included this data in SPYW, but rather need they have? If Google's engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it's hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it's difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google's rivals, result in anticompetitive foreclosure, retard rivals' incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).



Further, it's easy to see why Google's lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for "scraping" a different type of social content, user reviews, most notably by Yelp's CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it's not obvious where to draw the line between content Google has a duty to include without "making excuses" about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of "damned if you do, damned if you don't." It seems only natural for Google to be gun-shy about "scraping" other services' public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a "conversation."



And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user's friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.



Now, if Twitter truly wants Google to feature tweets in Google's personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter's criticism of its SPYW launch last night with a short Google+ statement:




We are a bit surprised by Twitter's comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from "personalized search" results].




Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it's simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there's nothing illegal about that.



Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net-win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users), and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine "to rule them all"? Or would Facebook prefer to continue developing "social search" in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web's hottest property).



Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, stanching any potential defections from its network to Google+ by keeping users engaged with both, with a kind of "Facebook primacy" ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users' primary "home" as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.



Amidst all the antitrust hand-wringing over SPYW and Google's decision to "go it alone" for now, it's worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet's worth about the issue. It's simply not clear that Google's rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn't be Google. If this all sounds speculative, it is—and that's precisely the point. No one really knows. So, again, what's to argue about on Day 3 of the new social search paradigm?



The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a "conversation" about greater cooperation. Google hasn't made clear on what terms it would include content from other social media platforms. So it's at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:




1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.


Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the "essential facilities" doctrine, under which a firm controlling an essential facility has a duty to offer it to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is "at or near the outer boundary of § 2 liability." Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the "essential facility" in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely "profitable" is not the same thing as "best available use of a resource"), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom's free ebook, The Next Digital Decade).



So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would "reasonableness" be defined?



First, this issue parallels claims made more generally about Google's supposed "search bias." As Josh Wright has said about those claims, "[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google's rivals from achieving efficient scale, and (2) actually directed at Google's rivals." Supposing (for the moment) that the second point could be established, it's hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google's "organic" search results—would actually "render their efforts to compete for distribution uneconomical," which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.



Moreover, by these same measures, it's clear that Facebook and Twitter don't need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it's difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook's detriment? Yes. Just as Facebook's deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors' expense.



After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft's exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.



Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. "Reasonable" is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.



The Fragmentation Claim

For some, the problem isn't so much about antitrust but about the fragmentation of the web. John Battelle claims that tensions between search engines and social networking platforms threaten our culture, and we need a "public commons" for social data to set things right. In the abstract (and the real world is never "in the abstract"), the claim has appeal: the Web users of today might, in some sense, be better off if Facebook, Google, Twitter, and Bing could all just "get along" and share social content among themselves seamlessly so that users could find content from any major social media platform on Google (or Bing, for that matter). Instead of facing a choice among major search engines that each only offer a fragment of potentially relevant social networking content, users in this Social Commons Utopia would choose search engines based on the quality of the algorithm, or other features—not on which social networks the search engine indexes. Meanwhile, users active in multiple social networks would enjoy a one-stop shop for searching content shared by their friends.



That all sounds well and good, but it misses the forest for the trees. The question isn't simply about consumer welfare in a static snapshot of today's marketplace. From that myopic perspective, commoditizing search might make a lot of sense. But of course, what's ultimately important is that search keeps evolving to become more social and more … who knows what else the future will bring? Achieving a static "utopia" might end up killing the contentious rivalry that fuels the evolution of the market in ways that dramatically outweigh any short-term gains for consumers. Incorporating a realistic appreciation for that into a court-ordered "reasonable" deal is a Sisyphean task—yet another reason why courts are (and should be) likely to err on the side of extreme caution about meddling here.



To be sure, a "public commons" for social data is an interesting idea, and it may well make sense someday. But how would such a regime, if implemented tomorrow, affect social networking firms looking to grow and innovate? Unlike Microsoft and Google, both among the world's most profitable companies, Facebook and Twitter are still trying to figure out how to effectively monetize their massive user platforms. Inking creative deals to sell access to social data to search engines, or to other entities such as advertisers, is a logical way to generate the income that social networking companies need. This sort of arrangement may offend diehard believers in information commons, but it should seem perfectly natural to those who recognize that, to serve consumers, web companies need to innovate not just in new technologies but in strategies for monetizing those technologies.



Conclusion

Do we really want to live in a world where companies like Google have to wait to launch innovative new features until they've worked out how to ensure that their competitors get to participate—on their competitors' terms? This kind of "open access" requirement would be catastrophic for innovation. Even forcing companies to clearly define their terms of access on day one would essentially be equivalent to requiring them to file a rate tariff as if they were an old regulated utility—a recipe for stagnation, not innovation. Condemning Google to antitrust purgatory for failing to accept competitors' offers to participate when those offers don't even exist is nothing if not premature.




Published on January 12, 2012 07:50

January 11, 2012

Time for the Supreme Court to End FCC Indecency Censorship

[Cross posted from Huffington Post]



Does the First Amendment allow the FCC to censor "indecent" content like the occasional curse word or a brief glimpse of a bare butt on broadcast TV? The Supreme Court hears arguments on this question Tuesday in FCC v. Fox—the first time in more than 30 years the Court will squarely confront this constitutional question. The case stems from the use of "fleeting" expletives by Nicole Richie and Cher at the Billboard Music Awards Show nearly a decade ago, which prompted a draconian crackdown on broadcasters by the Bush FCC in 2004.



Our five organizations—which differ widely on many issues—have filed a joint amicus brief urging the Court to recognize that the Constitution demands an end to FCC censorship of television, given the fundamental transformation of the media landscape. In its 1978 FCC v. Pacifica decision, the Court gave broadcasting less protection than other media (like newspapers) because it was both "pervasive" in American culture and "invasive"—an "intruder" in the home from which parents were powerless to protect their children. But that rationale long ago disintegrated.



When a federal appellate court struck down the FCC's indecency rules last year, it hit the nail on the head: "we face a media landscape that would have been almost unrecognizable in 1978." Back then, nearly all Americans relied on broadcasting to deliver a limited range of video media to their homes. Today, only 8 to 15 percent of American households rely on over-the-air broadcasting, with the majority subscribing to cable or satellite service. More and more Americans are getting video content online from Netflix, Hulu, YouTube, and countless other sites. These services are not "intruders" in the home, but invited guests.



More importantly, a wide range of tools empower parents to decide what broadcast content their children can access. Since 2000, every television larger than 13 inches has come with the V-Chip. This free technology empowers parents to block content based on ratings that include age-based designations as well as several specific content descriptors (coarse language, sex, violence, etc.). A wide variety of other tools have empowered parents, such as DVD players, digital video recorders and video-on-demand services, which allow parents to build, and even pre-screen, libraries of preferred programming for their children. Similar tools are available for cable content, video games, movies, and the Internet.



Today's world of converged, customizable video media would have seemed like science fiction to the Pacifica court more than three decades ago. But it is precisely the kind of world the Supreme Court contemplated in a 2000 opinion, boldly declaring: "Technology expands the capacity to choose; and it denies the potential of this revolution if we assume the Government is best positioned to make these choices for us."



The last decade has vindicated this vision, with parental empowerment tools flourishing even as the media landscape changed dramatically. In a dynamic world, technological tools and parental control methods need not be perfect to be preferable to government regulation.



The Supreme Court has already decided as much for cable television: in 2000, the Court struck down a law that had caused cable operators to restrict adult content on subscription channels to between the hours of 10pm and 6am. While operators scrambled these channels for non-subscribers, Congress worried that children might still be able to see or hear something on these channels during the day. But the Court insisted that total preemption of adult content was excessive, because concerned parents could request targeted blocking of the adult channels:



"[I]t is no response that voluntary blocking requires a consumer to take action, or may be inconvenient, or may not go perfectly every time. A court should not assume a plausible, less restrictive alternative would be ineffective; and a court should not presume parents, given full information, will fail to act."


That's precisely the right standard for the digital "revolution." Anything less will allow the censorship of a bygone era to continue—and help validate censorship in countries like China, which is often justified as protecting children. We urge the Supreme Court to affirm that this standard applies just as much to broadcasting as to the Internet or newspapers. That means striking down Pacifica's double-standard.



Invalidating the FCC's indecency rules doesn't mean government can do nothing. It can still assist in improving parental controls, promote awareness of existing tools and methods, and punish companies that fail to live up to their voluntary content labels. But our Constitution requires that government focus on helping parents—rather than choosing for them.



Berin Szoka is President of TechFreedom. Ilya Shapiro is Senior Fellow in Constitutional Studies at the Cato Institute. Emma Llanso is a Policy Counsel at the Center for Democracy & Technology. Lee Tien is a Senior Staff Attorney at the Electronic Frontier Foundation. John Bergmayer is a Senior Staff Attorney at Public Knowledge. All five organizations are public interest non-profits with a focus in technology policy.




Published on January 11, 2012 15:02

January 10, 2012

Andrew McAfee on Digital Innovation, Employment and Productivity

http://surprisinglyfree.com/wp-content/uploads/Andrew-McAfee.jpg

On the podcast this week, Andrew McAfee, Principal Research Scientist at MIT's Center for Digital Business, discusses his new book, co-authored with Erik Brynjolfsson, entitled, "Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy." The book looks at the interplay between unemployment and fast-paced technological innovation. In the book, McAfee and Brynjolfsson propose that technology is outpacing humans, and they discuss whether humans can keep up. According to McAfee, technology is encroaching on skills that once belonged exclusively to humans. He believes that entrepreneurial thinking, different institutions, and new organizational structures can prevent humans from being left behind by the machines.





Related Links

"Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy", by McAfee & Brynjolfsson
"More Jobs Predicted for Machines, Not People", New York Times
"Race Against the Machine* and TGS, a comparison", Marginal Revolution
"Are You An Internet Optimist or Pessimist? The Great Debate over Technology's Impact on Society", Technology Liberation Front

To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on January 10, 2012 10:00

January 9, 2012

At the Top of Congress' New Year Agenda? Regulate the Net

Over at TIME.com, I recap the latest on SOPA and PIPA and look at what's ahead once Congress reconvenes. I also address the argument that the piracy bills don't amount to censorship since they're aimed at unprotected speech.




Both bills would likely affect non-infringing speech because they allow for entire sites to be blocked — even if they also include otherwise legal speech. Yet the Supreme Court has ruled, "Broad prophylactic rules in the area of free expression are suspect. Precision of regulation must be the touchstone in an area so closely touching our most precious freedoms." And you can add to that a troubling lack of due process that's a recipe for abuse.




Read the whole thing here.




Published on January 09, 2012 13:03

"If we can put a man on the moon…"

I enjoyed this new piece by Matt Welch over at Reason about the uses and abuses of the "if we can put a man on the moon" metaphor. "There's no escaping the moonshot in contemporary political discourse," Welch notes. Indeed, in the field of technology policy, we hear the old "if we can put a man on the moon, then we can [fill in the blank]… " line with increasing regularity.



For example, just a few years ago, in the midst of the social networking "predator panic," several state Attorneys General, led by Roy Cooper of North Carolina and Richard Blumenthal of Connecticut, pushed aggressively for a mandatory online age verification scheme. At several points during the debate, Blumenthal, now a U.S. Senator, claimed: "The technology is available. The solution is financially feasible, practically doable. If we can put a man on the moon, we can check ages of people on these Web sites." Of course, just saying so doesn't make it true. As I noted in a big paper on the issue, online age verification is extremely complicated, likely even impossible, and history has shown that no technological control is foolproof. Moreover, attempts to impose authentication and identification schemes would have numerous trade-offs and unintended consequences, especially for online anonymity, privacy, and free speech. A subsequent report by the Harvard-based blue-ribbon Internet Safety Technical Task Force (ISTTF) showed why that was the case.



We also increasingly hear "man on the moon" quips in the burgeoning field of cybersecurity, as we did last October when President Obama announced National Cybersecurity Awareness Month. We can expect plenty more of those in years to come.



The bottom line is that we shouldn't be basing public policy on grand "if we can put a man on the moon…" pronouncements or predictions. Every government action has costs and consequences that must be taken into account, no matter how noble the goal. And, unlike the actual case of sending a man to the moon, where almost everyone thought it was a good idea, many of us do not believe the sort of "man on the moon" proposals bandied about in the cyberlaw arena these days are even worth pursuing.



[Of course, we all know the moon landings were faked, so I'm not sure why we're even debating this!]




Published on January 09, 2012 12:23
