Adam Thierer's Blog, page 145
February 11, 2011
Goldsmith on Assange, WikiLeaks, the First Amendment & Press Freedoms
There's a sharp piece in today's Washington Post from Jack Goldsmith, currently with Harvard Law but formerly an assistant attorney general in the Bush administration, about "Why the U.S. Shouldn't Try Julian Assange." Goldsmith points to the sticky First Amendment / press freedom issues at stake should the U.S. try to go after Assange and WikiLeaks:
A conviction would also cause collateral damage to American media freedoms. It is difficult to distinguish Assange or WikiLeaks from The Washington Post. National security reporters for The Post solicit and receive classified information regularly. And The Post regularly publishes it. The Obama administration has suggested it can prosecute Assange without impinging on press freedoms by charging him not with publishing classified information but with conspiring with Bradley Manning, the alleged government leaker, to steal and share the information. News reports suggest that this theory is falling apart because the government cannot find evidence that Assange induced Bradley to leak. Even if it could, such evidence would not distinguish the many American journalists who actively aid leakers of classified information.
One reason journalists have never been prosecuted for soliciting and publishing classified information is that the First Amendment, to an uncertain degree never settled by courts, protects these activities. Convicting Assange would require courts to resolve this uncertainty in a way that narrows First Amendment protections. It would imply that the First Amendment does not prevent prosecution of American journalists who seek and publish classified information. At the very least it would render the First Amendment a less certain shield. This would – in contrast to WikiLeaks copycats outside our borders – chill the American press in its national security reporting.
Quite right, and it's a point bolstered by another editorial that appeared in the Post a few weeks ago, this one by Adam Penenberg of New York University, in which he made the case for treating Assange as a journalist. Penenberg asks: "What constitutes 'legitimate newsgathering activities'? How do you differentiate between what WikiLeaks does and what the New York Times does?"
Importantly, Goldsmith correctly notes that, practically speaking, a prosecution of Assange probably wouldn't do much to put the genie back in the bottle. "A successful prosecution, on the other hand, would not achieve the desired deterrent effect," Goldsmith says. "WikiLeaks copycats are quickly proliferating around the globe, beyond the U.S. government's effective reach. A conviction would make a martyr of Assange, embolden copycat efforts and illustrate the limits of American law to stop them." Again, quite right. It's a point I've stressed in my recent essays about the challenges faced by information control regimes.
Anyway, read Goldsmith's entire piece here.







February 10, 2011
Some Sense on Sexting
Bucking a trend seen in other states, Texas lawmakers are taking steps to separate teen "sexting," the sending and receiving of sexually explicit photos via cell phone or email, from child pornography.
A bill proposed by State Sen. Kirk Watson of Austin, and backed by Texas Attorney General Greg Abbott, would classify sexting as a Class C misdemeanor for first-time violators under 18. Under current law, sexting is prosecuted as a felony carrying penalties of two to 10 years in prison, a fine of up to $10,000 and lifelong registration as a sex offender.
The Lone Star State deserves credit for taking a sensible approach to addressing what is without doubt stupid behavior that comes with serious consequences, but is far from the predation that child pornography laws are intended to target.
As the Houston Chronicle reports, instead of sending young people to jail for sexting, the bill, SB 407, would authorize judges to sentence minors — and one of the minor's parents — to participate in an education program about sexting's consequences.
The new law also would allow teens to apply to the court to have the offense expunged from their records.
"This bill ensures that prosecutors — and, frankly, parents — will have a new, appropriate tool to address this issue," [Abbott] said. "It helps Texas laws keep up with technology and our teenagers." According to the Chronicle, Texas has never prosecuted a teen for sexting under child porn laws.
Texas joins Vermont, Illinois, Utah and Ohio among states seeking to decriminalize sexting. These states stand in stark contrast to others where attorneys general apparently want to use the threat of lifelong sex offender designation as a bludgeon.
In northeastern Pennsylvania, a prosecutor recently threatened to file child porn charges against three teenage girls who authorities say took racy cell-phone pictures that ended up on classmates' cell phones. In New Jersey, a 14-year-old girl was charged with distributing child pornography after she posted nude pictures of herself on MySpace. The charge brought criticism from Maureen Kanka, whose daughter Megan became the namesake of Megan's Law after she was raped and killed by a twice-convicted sex offender.
The teen needs help, not legal trouble, Kanka told the Associated Press. "This shouldn't fall under Megan's Law in any way, shape or form. She should have an intervention and counseling, because the only person she exploited was herself."
Finally, prosecuting sexting as child pornography creates problems in the long term because it defines predation down. We don't want to give truly dangerous child predators an opportunity to credibly dismiss their sex offender status as the result of poor teenage judgment about pressing the "send" button on a cell phone. Yet if overzealous prosecutors keep this up, the cynics will be predicting that, in the future, everyone will be a registered sex offender. That's not a very funny joke.







The Regulated Internet: How We Got Here
In the March issue of Reason, Peter Suderman takes us on a tour of the recent telecom and Internet regulatory scene as he looks at Federal Communications Commission Chairman (and Obama hoops buddy) Julius Genachowski and his push to regulate the Web.
The article, which recaps the five-year network neutrality battle that reached a watershed moment this December when Genachowski all but rammed through the new rules as the rest of D.C. was heading out for the holidays, punctures many of the myths behind the network neutrality rationale–including the notion that it is a small-site-vs.-large-site issue and that large ISPs were exploiting their bottleneck position.
Suderman succinctly shows how Genachowski, following the lead of groups like Free Press, framed what is essentially a geeky tug-of-war over network engineering concepts as wholesale market failure that demanded regulation, with himself as top Internet cop.
But the net neutrality debate doesn't really pit the Goliaths against the Davids. It's a battle between the edge of the Internet and the center, with application and content providers (the edge) fighting for control against infrastructure owners (the center). Large business interests dominate both sides of the debate. Google, for example, has long favored some form of net neutrality, as have Facebook, Amazon, Twitter, and a smattering of other big content providers, who prefer a Web in which the network acts essentially as a "dumb pipe" to carry their content. Mom-and-pop sites aren't the issue.
Google makes its support sound as simple and earnest as its corporate motto of "don't be evil." Much like Genachowski, it defines net neutrality as "the concept that the Internet should remain free and open to all comers." But the freedom and openness that Google claims to prize bear a distinct resemblance to regulatory protection. An Internet in which ISPs can freely discriminate between services, prioritizing some data in order to offer enhanced services to more customers, is an Internet in which content providers may have to pay more to reach their customers. Under Google and Genachowski's net neutrality regime, ISPs may own the network, but the FCC will have a say in how those networks are run, with a bias toward restrictions that favor content providers.
The entire article can be found here.







February 9, 2011
Tim Wu to the FTC: What does it mean?
As Adam notes, Columbia lawprof and holder of the dubious distinction of having originated the term and concept of Net Neutrality, Tim Wu, is headed to the FTC as a senior advisor.
Curiously, his guest stint runs for only about five and a half months. As the WSJ reports:
Mr. Wu, 38, will start his new position on Feb. 14 in the FTC's Office of Policy Planning, and will help the agency to develop policies that affect the Internet and the market for mobile communications and services. The FTC said Mr. Wu will work in the unit until July 31. Mr. Wu, who is taking a leave from Columbia, said that to work after that date he would have to request a further leave from the university.
Mr. Wu's claim that the source of the date constraint is Columbia doesn't pass the smell test. Now, it is possible that what he says is literally true–and therefore intentionally misleading. Perhaps he asked only for leave through the end of July and would indeed have to request further leave if he wanted it. But the implication that Columbia would have trouble granting further leave–especially during the summer!–and thus the short tenure seems very fishy to me.
So what else could be going on, while we're reading inscrutable tea leaves? Well, for one thing, it could be that Wu has already signed on for some not-yet-public role at Columbia that he prefers not to imperil. Maybe associate dean or something like that.
But I have another, completely unsupported speculation. I think the author of The Master Switch (commented on by Josh and me here) and one of the most capable (as far as that goes) proponents of Internet regulation in the land is being brought in to the FTC to help the agency gin up a case against Google.
I think with Google-ITA seemingly approaching its denouement, the FTC knows or believes that Google is either planning to abandon the merger or else enter into an (insufficiently-restrictive for the FTC) settlement with the DOJ. In either case, not a full-blown investigation and intervention into Google's business. So the FTC is preparing its own Section 5 (and Section 2, but who needs that piker when you have the real deal in Section 5?) (for previous TOTM takes on Section 5, see, e.g., here and here) case and has brought in Wu to help. Given the switching back and forth between the DOJ and FTC in reviewing Google mergers, it could very well be (I haven't kept close tabs on Google's proposed acquisitions) that there's even already another merger review in waiting at the FTC on which the agency is planning to build its case.
But the phase of the case requiring Wu's full attention–the conceptual early phase–should be completed by the end of July, so no need to detain him further.
More concretely, I would point out that it says a lot about the agency's mindset that it is bringing in the likes of Wu to help it with its ongoing forays into the regulation of Internet businesses. By comparison, Chairman Majoras' FTC brought in our own Josh Wright as the agency's first Scholar in Residence. Sends a very different signal, don't you think?







February 8, 2011
Congrats Tim Wu! But Please Don't Toss "The Regulatory Switch"
Congrats are due to Tim Wu, who's just been appointed as a senior advisor to the Federal Trade Commission (FTC). Tim is a brilliant and gracious guy; easily one of the most agreeable people I've ever had the pleasure of interacting with in my 20 years covering technology policy. He's a logical choice for such a position in a Democratic administration since he has been one of the leading lights on the Left on cyberlaw issues over the past decade.
That being said, Tim's ideas on tech policy trouble me deeply. I'll ignore the fact that he gave birth to the term "net neutrality" and that he chaired the radical regulatory activist group Free Press. Instead, I just want to remind folks of one very troubling recommendation for the information sector that he articulated in his new book, The Master Switch: The Rise and Fall of Information Empires. While his book was preoccupied with corporate power and the ability of media and communications companies to possess a supposed "master switch" over speech or culture, I'm more worried about the "regulatory switch" that Tim has said the government should toss.
Tim has suggested that a so-called "Separations Principle" govern our modern information economy. "A Separations Principle would mean the creation of a salutary distance between each of the major functions or layers in the information economy," he says. "It would mean that those who develop information, those who control the network infrastructure on which it travels, and those who control the tools or venues of access must be kept apart from one another." Tim calls this a "constitutional approach" because he models it on the separation of powers found in the U.S. Constitution.
I critiqued this concept in Part 6 of my ridiculously long multi-part review of his new book, and I discuss it further in a new Reason magazine article, which is due out shortly. As I note in my Reason essay, Tim's blueprint for "reforming" technology policy represents an audacious industrial policy for the Internet and America's information sectors. In concrete regulatory terms—and despite Tim's insistence to the contrary, his approach most assuredly would require regulation—the Separations Principle would segregate information providers into three buckets: creators, distributors, and hardware makers. Presumably these would become three of the new "titles" (or regulatory sections) of a forthcoming Information Economy Separations Act.
While conceptually neat, these classifications don't conform to our highly dynamic digital economy, whose parameters can change wildly within the scope of just a few years. For example, Google cut its teeth in the search and online advertising markets, but it now markets phones and computers. Verizon, once just a crusty wireline telephone company, now sells pay TV services and a variety of wireless devices. AOL reinvented itself as media company after its brief reign as the king of dial-up Internet access. Would firms that already possess integrated operations and investments (for instance, Microsoft or Apple) be forced to divest control of them to comply with the Separations Principle? If so, wouldn't that hinder technological development?
In his book, Tim shrugs off such concerns. "The Separations Principle accepts in advance that some of the benefits of concentration and unified action will be sacrificed," he writes, "even in ways that may seem painful or costly." Such a flippant attitude ignores not only the potential benefits of certain forms of integration but also the fact that his proposed information apartheid would upend the American economy as we know it (for instance, by forcing the breakup of dozens of leading technology companies as well as countless media and entertainment providers). He also ignores the litigation nightmare that would ensue once the government started forcing divestitures.
More shockingly, Tim never explains how the bureaucratic machinations and regulatory capture he decries throughout his book would be held in check under his proposed regime. He breezily writes that "the government [should] also keep its distance and not intervene in the market to favor any technology, network monopoly, or integration of the major functions of an information industry," but does not explain how this will be accomplished. Does he believe we can build a better breed of bureaucrat if we just try harder? (I suppose he does now that he's been appointed as one!)
Equally astonishing is Tim's assertion that "a Separations regime would take much of the guesswork and impressionism…out of the oversight of information industries." To the extent that his Separations Principle eliminates "guesswork" and creates more regulatory certainty, it would do so only by creating rigid artificial barriers to market entry and innovation across the information economy. That's the kind of "certainty" we can live without!
I can only hope that Prof. Wu leaves this particular idea back in the ivory tower as he makes his transition to policy advisor at the FTC. America's high-tech sector cannot survive the sort of regulatory wrecking ball approach to public policy that Tim has recommended.







Susan Maushart on pulling the plug
On the podcast this week, Susan Maushart, a columnist, author and social commentator, discusses her new book, The Winter of Our Disconnect. Maushart talks about her experience unplugging herself, and her three teenagers, from most screen-based technologies for six months. She discusses how she got her kids to go along with the plan, how she found support in Thoreau's Walden, what boredom is, and whether she found balance through the experience. Maushart also talks about the limits of allowing your children the luxury of choice, commenting on Amy Chua's Tiger Mother philosophy.
Related Links
"The winter of their disconnect – how Susan Maushart and her family lived without technology for six months", an edited excerpt from Maushart's book
"Information flatulence", Irish Times
"Why Chinese Mothers Are Superior", by Amy Chua
To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







February 7, 2011
Two Years on the Internet is an Eternity
Video is now available for all of the excellent programming at this year's State of the Net 2011 conference. (Programming will also be available over time on C-SPAN's video library.) The conference, organized by the Advisory Committee to the Congressional Internet Caucus, featured Members of Congress, leading academics, Administration, agency, and Congressional staff, and other provocateurs. Topics this year ranged from social networking, WikiLeaks, COICA, copyright, privacy, security, and broadband policy to, of course, the end-of-the-year vote by the FCC to approve new rules for network management by broadband providers, aka net neutrality.
(My article for CNET on the communications agenda for the new Congress is here.)
I was honored to sit on the net neutrality panel, joined by Colin Crowell, former legal advisor to FCC Chairman Julius Genachowski, Markham Erickson of the Open Internet Coalition, and Professor Christopher Yoo of the University of Pennsylvania Law School. The panel was ably moderated by Tim Lordan, Executive Director of the Advisory Committee.
My colleagues had clearly all read the Report and Order closely, and the discussion was lively and far-ranging.
But one exchange I had with Markham Erickson at the very end of the panel (roughly minute 55) has continued to haunt me. After listing some of the exceptions and caveats that the final rules included to account for the reality of non-neutral technologies and practices that have developed over the last ten years, I pointed out that the FCC has now put a stake in the ground, arbitrarily as of late 2010, to say these and no more–or at least no more without our permission. That, as I pointed out in an earlier post on the Report, is what makes the new rules so dangerous, in that they will effectively put screeching brakes on further advances in network management.
Not to worry, said Markham Erickson. "The FCC sort of understood that and accounted for that, and said they'd have a review of all these rules within two years."
To someone who still spends more time in Silicon Valley than in Washington, that reassurance hit like a bucket of cold water to the face. For it's true that in Washington time, rules that will be reviewed after two years are rules that will operate for a very short period of time indeed. But in Internet time, two years is a very long time. In two years, great information empires can rise and fall, thousands of exciting startups can be launched and shuttered, and new products embraced with wild enthusiasm can become tomorrow's e-waste.
If the rules do have the effect of skewing, stunting, or stalling new technologies to improve network performance, two years might as well be an eternity.
That's what's most worrisome, in the end, about the increasingly accident-prone intersection of information technology and policy. Information technology travels at the speed of Moore's Law, accelerating all the time. Government is by design slow, deliberate, and incremental, a design well-suited to everything but revolutionary change.
One kind of traffic enters the intersection at 5 miles per hour; the other at 100 miles per hour. The latter is so fast that the former doesn't even see it coming, and only knows it was there because of the calamitous effects of the crash.







February 6, 2011
Is a U.S. Company Assisting Egyptian Surveillance?
Boeing subsidiary Narus reports on its Web site that it "protects and manages" a number of worldwide networks, including that of Egypt Telecom. A recent IT World article entitled "Narus Develops a Scary Sleuth for Social Media" reported on a Narus product called Hone last year:
Hone will sift through millions of profiles searching for people with similar attributes — blogger profiles that share the same e-mail address, for example. It can look for statistically likely matches, by studying things like the gender, nationality, age, location, home and work addresses of people. Another component can trace the location of someone using a mobile device such as a laptop or phone.
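To make the profile-matching idea concrete, here is a minimal, purely illustrative sketch (in Python, using made-up data) of what attribute-based cross-referencing looks like in principle; it assumes nothing about how Hone actually works and simply groups profiles that share an e-mail address:

# Purely illustrative sketch: link hypothetical blogger profiles that share an
# e-mail address -- the kind of cross-referencing the article describes.
# None of this reflects Narus' actual product or data.
from collections import defaultdict

profiles = [  # hypothetical example records
    {"site": "blogspot", "handle": "cairo_voice", "email": "a@example.org"},
    {"site": "twitter", "handle": "tahrir_news", "email": "a@example.org"},
    {"site": "flickr", "handle": "nile_photos", "email": "b@example.org"},
]

by_email = defaultdict(list)
for profile in profiles:
    by_email[profile["email"]].append(profile)

# An e-mail address tied to more than one profile links those identities.
for email, linked in by_email.items():
    if len(linked) > 1:
        print(email, "->", [p["site"] + "/" + p["handle"] for p in linked])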
Media advocate Tim Karr reports that "Narus provides Egypt Telecom with Deep Packet Inspection equipment (DPI), a content-filtering technology that allows network managers to inspect, track and target content from users of the Internet and mobile phones, as it passes through routers on the information superhighway."
It's very hard to know how Narus' technology was used in Egypt before the country pulled the plug on its Internet connectivity, or how it's being used now. Narus is declining comment.
So what's to be done?
Narus and its parent, The Boeing Company, have no right to their business with the U.S. government. On our behalf, Congress is entitled to ask about Narus'/Boeing's assistance to the Mubarak regime in Egypt. Requiring contractors to refrain from assisting authoritarian governments' surveillance as a condition of doing business with the U.S. government seems like the most direct way to dissuade them from providing top-notch technology capabilities to regimes on the wrong side of history.
Of course, decades of U.S. entanglement in the Middle East have created the circumstance where an authoritarian government has been an official "friend." Until a few weeks ago, U.S. unity with the Mubarak regime probably had our government indulging Egypt's characterization of political opponents as "terrorists and criminals." It shouldn't be in retrospect that we learn how costly these entangling alliances really are.
Chris Preble made a similar point ably on the National Interest blog last week:
We should step back and consider that our close relationship with Mubarak over the years created a vicious cycle, one that inclined us to cling tighter and tighter to him as opposition to him grew. And as the relationship deepened, U.S. policy seems to have become nearly paralyzed by the fear that the building anger at Mubarak's regime would inevitably be directed at us.
We can't undo our past policies of cozying up to foreign autocrats (the problem extends well beyond Egypt) over the years. And we won't make things right by simply shifting — or doubling or tripling — U.S. foreign aid to a new leader. We should instead be open to the idea that an arms-length relationship might be the best one of all.







February 5, 2011
The Stagnation Conversation, continued
Another review of Tyler Cowen's The Great Stagnation, this one by Michael Mandel. More from Brink Lindsey.
And Nick Schulz's video interview of Cowen:







February 4, 2011
The Problem of Search Engines as Essential Facilities
For my contribution to Berin Szoka and Adam Marcus' (of TechFreedom fame) awesome Next Digital Decade book, I wrote about search engine "neutrality" and the implicit and explicit claims that search engines are "essential facilities." (Check out the other essays on this topic by Frank Pasquale, Eric Goldman and James Grimmelmann, linked to here, under Chapter 7).
The scare quotes around neutrality are there because the term is at best a misnomer as applied to search engines and at worst a baseless excuse for more regulation of the Internet. (The quotes around essential facilities are there because it is a term of art, but it is also scary.) The essay is an effort to inject some basic economic and legal reasoning into the overly-emotionalized (is that a word?) issue.
So, what is wrong with calls for search neutrality, especially those rooted in the notion of Internet search (or, more accurately, Google, the policy scolds' bête noire of the day) as an "essential facility," and necessitating government-mandated access? As others have noted, the basic concept of neutrality in search is, at root, farcical. The idea that a search engine, which offers its users edited access to the most relevant websites based on the search engine's assessment of the user's intent, should do so "neutrally" implies that the search engine's efforts to ensure relevance should be cabined by an almost-limitless range of ancillary concerns.

Nevertheless, proponents of this view have begun to adduce increasingly detail-laden and complex arguments in favor of their positions, and the European Commission has even opened a formal investigation into Google's practices, based largely on various claims that it has systematically denied access to its top search results (in some cases paid results, in others organic results) by competing services, especially vertical search engines. To my knowledge, no one has yet claimed that Google should offer up links to competing general search engines as a remedy for its perceived market foreclosure, but Microsoft's experience with the "Browser Choice Screen" it has now agreed to offer as a consequence of the European Commission's successful competition case against the company is not encouraging.

These more superficially sophisticated claims are rooted in the notion of Internet search as an "essential facility" – a bottleneck limiting effective competition. These claims, as well as the more fundamental harm-to-competitor claims, are difficult to sustain on any economically-reasonable grounds. To understand this requires some basic understanding of the economics of essential facilities, of Internet search, and of the relevant product markets in which Internet search operates.
The essay goes into much more detail, of course, but the basic point is that Google's search engine is not, in fact, "essential" in the economically-relevant sense. Rather, Google's competitors and other detractors have basically built precisely the most problematic sort of antitrust case, where success itself is penalized (in this case, Google is so good at what it does it just isn't fair to keep it all to itself!).
Search neutrality and forced access to Google's results pages are based on the proposition that—Google's users' interests be damned—if Google is the easiest way for competitors to get to potential users, Google must provide that access. The essential facilities doctrine, dealt a near-death blow by the Supreme Court in Trinko, has long been on the ropes. It should remain moribund here. On the one hand, Google does not preclude, nor does it have the power to preclude, users from accessing competitors' sites; all users need do is type "www.foundem.com" into their web browser—which works even if it's Google's own Chrome browser! On the other hand, to the extent that Google can and does limit competitors' access to its search results page, it is not controlling access to an "essential facility" in any sense beyond the way Wal-Mart controls access to its own stores. "Google search results generated by its proprietary algorithm and found on its own web pages" does not constitute a market to which access should be forcibly granted by the courts or legislature.
The set of claims that are adduced under the rubric of "search neutrality" or the "essential facilities doctrine" against Internet search engines in general and, as a practical matter, Google in particular, are deeply problematic. They risk encouraging courts and other decision makers to find antitrust violations where none actually exist, threatening to chill innovation and efficiency-enhancing conduct. In part for this reason, the essential facilities doctrine has been relegated by most antitrust experts to the dustbin of history.
The full text of my essay is below, but you can also find it at SSRN and the book's website.
The Problem of Search Engines as Essential Facilities (Geoffrey A. Manne)







