Adam Thierer's Blog, page 132

May 3, 2011

Jessica Litman on reclaiming copyright for readers


On this week's podcast, Jessica Litman, professor of law at the University of Michigan Law School and one of the country's foremost experts on copyright, discusses her new essay, Readers' Copyright. Litman talks about the origins of copyright protection and explains why readers', viewers', and listeners' interests have diminished in importance over time. She points out that copyright would be pointless without readers and claims that the system is not designed to serve creators or potential creators exclusively. Litman also discusses differences between public and private protections and talks about rights that should be made more explicit in copyright law.



Related Links


Readers' Copyright, by Litman
Jessica Litman on Copyright Liberties
"Users' Rights in Copyright: An Interview with Ray Patterson," American Library Association


To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on May 03, 2011 10:00

Doctorow's Definition of "Techno-Optimism" Is Full of Fear & False Choices

I've spent a great deal of time here defending "techno-optimism" or "Internet optimism" against various attacks through the years, so I was interested to see Cory Doctorow, a novelist and Net activist, take on the issue in a new essay at Locus Online. I summarized my own views on this issue in two recent book chapters. Both chapters appear in The Next Digital Decade and are labeled "The Case for Internet Optimism." Part 1 is subtitled "Saving the Net From Its Detractors" and Part 2 is called "Saving the Net From Its Supporters." More on my own thoughts in a moment. But let's begin with Doctorow's conception of the term.



Doctorow defines "techno-optimism" as follows:



In order to be an activist, you have to be… pessimistic enough to believe that things will get worse if left unchecked, optimistic enough to believe that if you take action, the worst can be prevented. [...]

Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.


What this definition suggests is that Doctorow has a very clear vision of what constitutes "good" vs. "bad" technology or technological developments. He turns to that dichotomy next as he seeks to essentially marry "techno-optimism" to a devotion to the free/open software movement and a rejection of "proprietary technology":



There are many motivations for contributing to free/open software, but the movement's roots are in this two-sided optimism/pessimism: pessimistic enough to believe that closed, proprietary technology will win the approval of users who don't appreciate the dangers down the line (such as lock-in, loss of privacy, and losing work when proprietary technologies are orphaned); optimistic enough to believe that a core of programmers and users can both create polished alternatives and win over support for them by demonstrating their superiority and by helping people understand the risks of closed systems.


In other words, recalling his definition of techno-optimism, Doctorow is basically saying that the way we "steer" technology to "make the world better" is by taking steps to foster or favor "open" technologies over "closed" ones:



It falls to techno-optimists to do two things: first, improve the alternatives; and second, to better articulate the risks of using unsuitable tools in hostile environments. … Herein lies the difference between a "technology activist" and "an activist who uses technology" — the former prioritizes tools that are safe for their users; the latter prioritizes tools that accomplish some activist goal. The trick for technology activists is to help activists who use technology to appreciate the hidden risks and help them find or make better tools. That is, to be pessimists and optimists: without expert collaboration, activists might put themselves at risk with poor technology choices; with collaboration, activists can use technology to outmaneuver autocrats, totalitarians, and thugs.


I have no problem with Doctorow issuing a clarion call to programmers to "find or make better tools." Power to him and the developers who take him up on the request. But I do have a problem with the 'you're-either-with-us-or-against-us' attitude Doctorow adopts here and in much of his past writing, which attempts to force a false choice upon us regarding "open" vs. "closed" digital technologies.



The irony of Doctorow's definition of "techno-optimism" is that, as he notes, it's actually rooted in the fairly pessimistic belief that unless we do something to tilt the balance between "open" and "closed" technology, "technology could be used to make the world worse." I think that view is myopic and misguided for several reasons.



First, I think it's a mistake to tether "techno-optimism" to overly binary conceptions of "good vs. bad" / "open vs. closed" technology. I spent a great deal of time in the second of my two "Case for Internet Optimism" chapters addressing the group of thinkers that I refer to as "Openness Evangelicals," or those who believe that "Openness" is almost always The Good; anything "closed" (restricted or proprietary) in nature is The Bad. In a sense, it's tantamount to picking (or at least favoring) technological winners and losers regardless of what others prefer and voluntarily choose to use because it gives them greater satisfaction.



Second, there are no clear definitions of "openness" or "closedness" (if that's even a word); both are matters of degree. You can call Apple and Facebook "closed" — and they certainly are in many senses of the term — but they are not nearly as "closed" or "proprietary" as the communications devices or platforms of the past. To put it in Zittrainian parlance, "generativity" continues to thrive even in environments or on platforms that are "closed" in some ways. Almost all modern digital devices and networks feature some generative and "non-generative" attributes. "No one has ever created, and no one will ever create, a system that allows any user to create anything he or she wants. Instead, every system designer makes innumerable tradeoffs and imposes countless constraints," note James Grimmelmann and Paul Ohm. "Every generative technology faces … tradeoffs. Good system designers always restrict generativity of some kinds in order to encourage generativity of other kinds. The trick is in striking the balance," they argue.



And most companies now have stronger incentives to strike a better balance between "open" and "closed." Attempting to completely lock down digital innovation or "generativity" on any platform these days would be a kiss of death. Netizens have come to expect a fair degree of freedom to tinker with and to configure digital technologies in unique ways. That's why the general progression of things is increasingly toward more "openness," even if it's not the perfect openness that Doctorow and others seem to demand.



In this regard, I find it interesting that Doctorow never mentions Twitter in his essay. After all, it's a somewhat closed system, and it seems to be growing more closed in some ways as it searches for a sustainable business model. And yet Twitter — which Doctorow himself uses aggressively — allows for an amazingly "open" channel of constant, instantaneous human communication. By most accounts, it has been a true "technology of freedom" and has helped advance important causes of various sorts.



Will Twitter's proprietary API make it easier for the company to eventually manipulate users, or for governments to co-opt the platform for their own nefarious ends? That seems to be the horror story the Openness Evangelicals want us to believe when they protest proprietary code or private systems. But such manipulation is much easier said than done. And when it is attempted, it is usually unearthed and made visible to us in fairly short order, which spawns the search for, and use of, alternative systems. People and platforms don't sit still long. Evolution continues at a breakneck pace in the digital arena.



Moreover, say what you will about "proprietary" or "closed" devices and platforms like Twitter, Facebook, Apple, Microsoft, and others, but the reality is this: Part of the reason they have been able to "scale up" and become major communications platforms in the first place is that they are focused on developing a sustainable business model. Yes, I know this will be absolute heresy to some of the Openness Evangelicals (how dare these companies seek to make money!), but the reality is that the reach of many platforms like these is fundamentally tied up with their success as good old-fashioned capitalist entrepreneurs. By contrast, the perfectly "free" and "open" technologies and platforms that Doctorow clearly favors have not been able to achieve similar scale. I suppose he would claim that's because proprietary technologies have crowded out his favored systems and platforms, or that consumers have been duped into making bad choices.



But this raises a third issue: Just how far should we go to advance Doctorow's vision and "steer" technology in a better direction? Again, I wholeheartedly applaud Doctorow's call to programmers to "find or make better tools," and I should make it clear that my strong preference is for many of the same tools that he tends to favor. I bet I hate Apple and Facebook even more than Doctorow does, for example. I don't own a single Apple device, and I only have a Facebook account as a cyber-traffic sign to direct people to find me elsewhere online. Meanwhile, I love hacking and cracking my devices until I have tweaked them to death — usually quite literally, since I end up "bricking" a lot of my devices. (My Dad is still pretty angry about the Commodore 128 computer that my brother and I hacked and destroyed in the mid-1980s!) So, at heart, I'm with Doctorow and the "openness-is-better" crowd.



But these are my personal choices. I don't attempt to impress my values upon others or suggest that there is only One True Way when it comes to digital technology. And I would never be so arrogant as to suggest that my preferred technologies were the "good" ones and those chosen by the cyber-hoi polloi were "bad," even if they were more "closed" or "proprietary."



Which raises my ultimate concern with the mindset of the Openness Evangelicals: If they are so wedded to bringing about the results they desire, then, ironically, it becomes significantly more likely that the "openness" they advocate will devolve into expanded government control of cyberspace and digital systems. If you run around all day lamenting that proprietary, unregulated systems will — as the Openness Evangelicals fear — become subject to "perfect control" by the private sector (as Lawrence Lessig claimed) or lead to a diminution of cyber-freedom (as Jonathan Zittrain and Tim Wu claim), then you shouldn't be at all surprised when the code cops come knocking and insisting that they're just there to help.



In closing, I remain perplexed that Doctorow and the Openness Evangelicals have so little faith in the "open" systems and technologies they trumpet. If such systems really are superior, shouldn't they win out in the end? Regardless, what separates them from me is that I'm far more willing to allow things to run their course within digital markets, even if that means some "closed" devices and platforms remain or even thrive at times.



Thus, when it comes to "techno-optimism," the better disposition is technological agnosticism and a real "openness" to technological evolution. Here's how I summarized it in my recent book chapter:



History counsels patience and humility in the face of radical uncertainty and unprecedented change. More generally, it counsels what we might call "technological agnosticism." We should avoid declaring "openness" a sacrosanct principle and making everything else subservient to it without regard to cost or consumer desires. As Chris Anderson has noted, "there are many Web triumphalists who still believe that there is only One True Way, and will fight to the death to preserve the open, searchable common platform that the Web represented for most of its first two decades (before Apple and Facebook, to name two, decided that there were Other Ways)." The better position is one based on a general agnosticism regarding the nature of technological platforms and change.  In this view, the spontaneous evolution of markets has value in its own right, and continued experimentation with new models—be they "open" or "closed," "generative" or "tethered"—should be permitted.


Moreover, the real "techno-optimist" doesn't express the sort of fear and loathing we see in Doctorow's essay or the work of other digital doomsayers like Wu, Lessig, or Zittrain. [See my critiques of all their works here.] Instead, the real "techno-optimist" embraces change, uncertainty, experimentation, and evolution, and does not automatically reject alternative conceptions of "good" technologies or platforms as determined by others who may not share our preferences.




Published on May 03, 2011 09:28

May 2, 2011

Langevin: Panetta is cyberdoom certified

Here's a doozy for the cyber-hype files. After it was announced that CIA Director Leon Panetta would take over at the Department of Defense, Rep. Jim Langevin, co-chair of the CSIS cybersecurity commission and author of comprehensive cybersecurity legislation, put out a statement that read in part:




"I am particularly pleased to know that Director Panetta will have a full appreciation for the increasing sense of urgency with which we must approach cybersecurity issues. Earlier this year, Panetta warned that 'the next Pearl Harbor could very well be a cyberattack."




That's from a statement Panetta made to a House intelligence panel in February, and it's an example of the unfortunate rhetoric that Tate Watkins and I cite in our new paper. Pearl Harbor left over two thousand people dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible.



What's especially unfortunate about that kind of alarmist rhetoric, apart from the fact that it unduly scares citizens, is that it is often made in support of comprehensive cybersecurity legislation, like that introduced by Rep. Langevin. That bill gives DHS the authority to issue standards for private owners of critical infrastructure and to audit them for compliance.



What qualifies as critical infrastructure? The bill has an expansive definition, so let's hope that the "computer experts" cited in this National Journal story on the Sony PlayStation breach are not the ones doing the interpreting:




While gaming and music networks may not be considered "critical infrastructure," the data that perpetrators accessed could be used to infiltrate other systems that are critical to people's financial security, according to some computer experts. Stolen passwords or profile information, especially codes that customers have used to register on other websites, can provide hackers with the tools needed to crack into corporate servers or open bank accounts.




It's not hard to imagine a logic by which everything comes to be considered "critical infrastructure" because, you know, everything's connected on the network. We need to be very careful about legislating great power on the basis of vague definitions, little evidence, and lots of fear.




Published on May 02, 2011 13:04

Lauren Weinstein on Privacy & "Do Not Track"

I've already Tweeted about it, but if you are following Internet privacy debates and have not yet had the chance to read Lauren Weinstein's new paper, "Do-Not-Track, Doctor Who, and a Constellation of Confusion," it is definitely worth a look. Weinstein, founder of the Privacy Forum, zeroes in on two related issues that I have made the focus of much of my work on this topic: (1) the fact that Do Not Track is seemingly viewed by some as a silver-bullet quick fix to online privacy concerns but will really be far more complicated in practice to enforce, and (2) that Do Not Track regulation will likely have many unintended consequences, most of which are going unexplored by proponents.
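On the enforcement point, it helps to remember how little the proposed mechanism itself does. Here is a minimal sketch (illustrative only; the URL is a placeholder and this is no browser's actual code) of the header-based approach then under discussion: the client attaches a one-bit "DNT: 1" signal to each request, and whether the server honors it is invisible to the user, which is why enforcement is a policy problem rather than a technical one.

```python
# A minimal sketch of the header-based Do Not Track proposal.
# "DNT: 1" is the proposed one-bit preference signal; example.com
# is a placeholder host.
import urllib.request

req = urllib.request.Request(
    "http://example.com/",
    headers={"DNT": "1"},  # "please do not track me"
)
with urllib.request.urlopen(req) as resp:
    # The response looks the same whether or not the server honors the
    # header; compliance (or non-compliance) happens server-side,
    # invisibly to the client.
    print(resp.status)
```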



For example, Weinstein says:



Do-not-track in actuality encompasses an immensely heterogeneous mosaic of issues and considerations, not appropriately subject to simplistic approaches or "quick fix" solutions.   Approaching this area without a realistic appreciation of such facts is fraught with risks and the potential for major undesirable collateral damages to businesses, organizations, and individuals. Attempts to portray these controversies as "black or white" topics subject to rapid or in some cases even unilaterally imposed resolutions may be politically expedient, but are ultimately both childish and dangerous. [...]

Above all, we should endeavor to remember that tracking issues both on and off the Internet are in reality part of a complicated whole, a multifaceted  set of problems — and very importantly — potentials as well. The decisions that we make now regarding these issues will likely have far-ranging implications and effects on the Internet for many years to come, perhaps for decades.


Absolutely correct. He also argues that:



Rather than view do-not-track and tracking in general as binary choices, or even as an overly simplistic one-dimensional continuum — with "no tracking" and "tracking" at the good and evil ends of the spectrum respectively — a multidimensional and so significantly more nuanced view would seem to make a great deal better logical sense. For each of us, our comfort levels with "tracking" as it may be most broadly defined — both in Internet and non-Internet contexts — will vary widely depending on specific details and circumstances.


Quite right. I made similar arguments in my February filing to the Federal Trade Commission as part of its Do Not Track proceeding.



Weinstein also asks an important question here:



Even while some divisions of government are proselytizing for the rapid adoption of risky and overly simplistic do-not-track mechanisms that are more akin to sledgehammers than balanced control methodologies, and aimed particularly at ad personalization networks — others in government are pushing hard for vast and comprehensive data retention laws that would require ISPs and Web services to record and maintain detailed records of virtually all Web browsing, email, and other activities. … Why is there such a focus on do-not-track in the relatively innocuous ad serving sector, but often so much hypocritical disregard of government's desire for encompassing tracking in other contexts that carry enormously larger potentials for abuses?


To be fair, however, I do think that many of the advocates of Do Not Track regulation are also focused on government access to data, but they sometimes fail to adequately distinguish between the "enormously larger potentials for abuses" associated with government data collection and what Weinstein rightly regards as the far less serious issue of "the relatively innocuous ad serving sector." There is a world of difference between what governments do with the private data they collect and what the private sector does with it. As I pointed out in my latest Forbes column this week, "Governments possess unique powers the private sector lacks, such as taxation, surveillance, fines, and imprisonment." By contrast, private companies mostly collect data to sell us a better mousetrap at a better price. It's hard to see how that is a "harm" in the same league with what government officials and agencies would like to do with data. In fact, it's really a benefit to consumers.



Anyway, make sure to read Weinstein's entire essay.  I have not yet seen any responses to it but I very much look forward to seeing what proponents of Do Not Track regulation have to say about his very sharp piece.




Published on May 02, 2011 12:16

The Case of the Non-Hacking Hacker

Wired reports that a recent federal court decision would make it possible for a private-sector employee to be found in violation of the Computer Fraud and Abuse Act simply for violating an employer's data policies, without any real "hacking" having occurred. This applies not only to data access, like grabbing data via a non-password-protected computer, but also to unauthorized use, such as emailing or copying data the employee might otherwise have permission to access.



On its face, this doesn't seem entirely unreasonable. Breaking and entering is a crime, but so is casually walking into a business or home and taking things that aren't yours, so it seems like data theft, even without any "hacking," should be a crime. For the law to be otherwise would create a "but he didn't log out" defense for would-be data thieves.



But what about unauthorized use? Is there a physical-property equivalent of this? Could I be criminally liable for using the corporate car to drag race against my neighbor, or would I only be fired and potentially sued in civil court? Does this new interpretation of the CFAA simply expand the scope of the law into realms already covered, perhaps more appropriately, by statutes that specifically address trade secrets or other sensitive information in a broader way that doesn't involve computing technology?



Judge Tena Campbell noted in the dissent that under the ruling, "any person who obtains information from any computer connected to the internet, in violation of her employer's computer-use restrictions, is guilty of a federal crime." So, perhaps this is a case of the court overreaching in an incredibly dramatic fashion.



I hope my lawyerly co-bloggers can weigh in on this issue.



HT: Ryan Lynch




Published on May 02, 2011 11:43

George Will & Jeff Jacoby on Internet Sales Taxes & "Tax Fairness"

I was pleased to see columnists George Will of The Washington Post and Jeff Jacoby of The Boston Globe take on the Internet sales tax issue in two smart recent essays. Will's Post column ("Working Up a Tax Storm in Illinois") and Jacoby's piece ("There's No Fairness in Taxing E-Sales") are both worth a read. They are very much in line with my recent Forbes column on the issue ("The Internet Tax Man Cometh") as well as a recent op-ed by CEI's Jessica Melugin, which Ryan Radia discussed here in his recent essay "A Smarter Way to Tax Internet Sales."



I was particularly pleased to see both Will and Jacoby take on bogus federalism arguments in favor of allowing States to form a multistate tax cartel to collect out-of-state sales taxes. Senators Dick Durbin (D-IL) and Mike Enzi (R-WY) will soon introduce the "Main Street Fairness Act," which would force all retailers to collect sales tax for states that join a formal compact. It's a novel—and regrettable—ploy to get around constitutional hurdles to taxing out-of-state vendors. Sadly, it is gaining support in some circles based on twisted theories of what federalism is all about. Real federalism is about tension between various levels of government and competition among the States, not a cozy tax cartel.



Will rightly notes that "Federalism — which serves the ability of businesses to move to greener pastures — puts state and local politicians under pressure, but that is where they should be, lest they treat businesses as hostages that can be abused." And Jacoby argues that an "origin-based" sales tax sourcing rule is the more sensible solution to leveling the tax playing field:



The current system is far fairer than the one [Senator] Durbin wants. Bricks-and-mortar merchants charge sales taxes based on their physical location. The same rule applies to online merchants. A Pennsylvania tobacco shop doesn't collect Ohio sales taxes whenever it sells a humidor to a visitor from Ohio. Amazon shouldn't have to, either.


Jacoby also addresses the "tax fairness" argument as follows:



All other things being equal, consumers no doubt prefer a tax-free shopping experience. But all other things are rarely equal. E-retailers (or mail-order catalogs) may have a price advantage, but well-run "Main Street" businesses have competitive advantages of their own. They attract customers with eye-catching window displays. They play up local ties and neighborhood loyalty. They give shoppers the chance to see, feel, or try on items before buying them. They enable the serendipitous joys of browsing. They don't charge for shipping. And they offer potential customers a degree of personal service and warmth that no website can match.


And Will says:



[Bricks-and-mortar] stores have the competitive advantage of local loyalties and customers being able to handle merchandise. Besides, Main Street stores pay sales taxes to support local police, fire and rescue, sewage, schools and other services. If Amazon's Seattle headquarters catches fire, will Champaign, Ill., firefighters extinguish it?


Anyway, read both columns and stay tuned: this fight is about to get hot once again.




Published on May 02, 2011 08:03

April 29, 2011

Mad About Bogus Takedowns? Blame Congress, Not Online Intermediaries

User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica's page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars's page yesterday evening, it's a safe bet that the takedown notice was without merit.



Understandably, Ars Technica wasn't exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.



Facebook took hours to respond to Ars's initial inquiry, and didn't provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is the latest in a series of websites to have suffered from having their Facebook pages wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.



It's totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook's flawed approach to dealing with takedown requests, however, Ars Technica doesn't do justice to the larger, more fundamental problem of bogus takedown notices.



As Mike Masnick explains on Techdirt, U.S. federal laws strongly discourage online intermediaries from trying to figure out if takedown notices are legitimate or not. If Facebook were to refuse to comply with a copyright takedown notice that subsequently turned out to be meritorious, it would lose its safe harbor provided for in 17 U.S.C. § 512(c). Should Facebook err in its judgment, therefore, it would potentially be on the hook for harsh copyright infringement penalties. In effect, the DMCA incentivizes what Masnick describes as "massive overreactions" by online intermediaries.



That's not to say that there aren't some simple steps Facebook could take to combat bogus takedown notices without exposing itself to additional liability, especially in "easy" cases, as Ars and others have argued. Verifying that takedown notices are associated with valid email addresses is one such step that Facebook apparently does not currently employ. Facebook could also be more responsive to users whose content has been disabled, at least when the content in question is highly visible.



Perhaps more importantly, Facebook should adopt a system for enabling users who believe their content has been wrongly disabled to file a counter notification. YouTube, for instance, has a slick online system that lets users challenge wrongful takedown requests. Under 17 U.S.C. § 512(g), an online service provider may restore previously disabled content between 10 and 14 business days after receipt of a valid counter notification if the content owner hasn't initiated legal proceedings. It's odd that Facebook hasn't adopted an online counter notification system, especially given that service providers are shielded from liability if they respond to counter notices in accordance with section 512(g).
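For a concrete sense of what that statutory window requires of a provider, here is a minimal sketch in Python; the function names are hypothetical, and the business-day arithmetic is deliberately simplified (weekends skipped, holidays ignored), so treat it as an illustration of the 512(g) timing rather than anything drawn from an actual counter-notification system.

```python
# A minimal sketch (illustrative names, not any provider's actual code)
# of the section 512(g) timing: restore no less than 10 and no more than
# 14 business days after a valid counter notice, unless the claimant has
# filed suit. Business days are simplified to Mon-Fri; holidays ignored.
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `start` by `days` business days, skipping weekends."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def may_restore(today: date, counter_notice_received: date,
                suit_filed: bool) -> bool:
    """True if restoring the content today stays inside the 512(g) window."""
    earliest = add_business_days(counter_notice_received, 10)
    latest = add_business_days(counter_notice_received, 14)
    return not suit_filed and earliest <= today <= latest

# Example: a counter notice received on Friday, April 29, 2011 —
# the 10th business day after it falls on May 13.
received = date(2011, 4, 29)
print(may_restore(date(2011, 5, 13), received, suit_filed=False))  # True
```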



While it would be great if Facebook were to manually and thoroughly screen all user complaints and requests, expecting online intermediaries to pay for a live human being — say, an intellectual property lawyer or a paralegal — to vet the legal merits of each takedown notice is simply unreasonable. Facebook has more than 600 million active users, but a mere 2,000 or so employees (although that number may soon grow substantially). That's over 300,000 users per employee!



And let's not forget that Facebook is a free service. The company generated a scant $4 of revenue per user in 2010. Facebook's going to have to do a much better job of monetizing its platform before we can reasonably expect it to vet legal requests on its users' behalf. Even Google — with a head count and revenue more than ten times Facebook's — is frequently chastised for not doing enough to identify bogus or otherwise invalid takedown notices. Based on some of the "horror stories" that have been reported recently, Ars Technica is lucky that Facebook restored its page within a day of its removal.



Even if Facebook improves its system, however, the underlying problem of bogus takedown notices is probably here to stay — that is, until Congress acts. Reopening the legislative debate over the DMCA is a risky gambit, but at least in theory, Congress could improve the statute by adopting some relatively minor tweaks.



First, the DMCA should do more to deter parties from filing invalid or bad-faith takedown notices. Courts rarely punish parties for filing illegitimate takedown notices, as it is very difficult in practice to show that a notice was filed in bad faith. All in all, the overwhelming majority of bogus takedown notices go unpunished, as I've discussed before on these pages.



Wendy Seltzer of Princeton's Center for Information Technology Policy chronicled the chilling effects of DMCA takedown abuses in a recent Harvard Journal of Law & Technology article. She suggests a few legislative fixes to 17 U.S.C. § 512(f) to better balance the interests of users and rightsholders:



The law should require greater diligence: declarations on penalty of perjury to match those required by the respondent, and perhaps even a bond against erroneous claims. . . . Strengthening the counter-suit provisions could encourage a plaintiffs' bar to take up these cases as private attorneys general. Stiffening the penalties against claimants who obtained takedowns through misrepresentation of infringement would encourage claimants to verify and support their claims of infringement or penalize them for failure to do so rather than allowing them to shift that burden to service providers and posters.


Congress should also create a safe harbor, notice-and-takedown system for online trademark infringement, as Elizabeth Levin has argued. While copyright takedown notices receive most of the attention in the IP debates, there's no DMCA-esque process established in statute to provide for online intermediaries to disable and repost allegedly trademark-infringing content.




Published on April 29, 2011 15:36

When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed

When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can't resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled "(Your) Information Wants to be Free." Adida's essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on "Privacy as an Information Control Regime: The Challenges Ahead," "And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars," and this recent FTC filing.]



In his essay, Adida observes that:



In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added "Information Replicates into the Cracks of Possibility." When this idea was applied to online music sharing, it was cool in a "fight the man!" kind of way. Unfortunately, information replication doesn't discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.


Quite right. We've been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I'll note below, that doesn't mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.



But Adida's more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can't crack down on information flows elsewhere. Indeed, some people can get downright religious about the whole "information-wants-to-be-free" thing in some cases and then, without missing a beat, turn around and talk like information totalitarians in the next breath.



I discussed this in relation to the privacy debates in my essays referenced above. I've noted how some "cyber-progressives" (or whatever you prefer to call tech thinkers and advocates on the Left) have been practically giddy with delight at the sight of copyright owners scrambling to find methods to protect their content from widespread distribution over distributed digital networks. Just about every information control effort attempted in the copyright arena — whether we are talking about efforts like DRM  & paywalls or even suing end-users — has failed to provide the degree of protection desired. The "darknet" critique remains fairly cogent. It doesn't mean I'm excusing copyright piracy as a normative matter; it's just to say that the cyber-progressives were certainly on to something as an empirical matter when they detailed the deficiencies of various IP control efforts.



But here's the interesting question: Why shouldn't we believe that the exact same critique applies to privacy and personal information flows? Again, it's not to say that, as a normative matter, privacy isn't important. And data security certainly is. It's just to say that, as an empirical matter, information control in this context is going to be every bit as difficult as information control in the copyright context. Yet, the same crowd of cyber-progressives who were all for information freedom in the copyright context are now hoping to crack down on personal information flows in the name of protecting privacy.



And it is not going to work.



Nor will it work well for those who are looking to crack down on the flow of bits that contain porn or violent content.



Nor will it work well for those "cyber-conservatives" who are looking to crack down on the flow of bits that contain state secrets or online gambling.



Nor will it work well for those who want to curb what they regard as "harassing" speech, "hate speech," or defamatory comments.



And so on. And so on.



I will be accused of being too much of a technological determinist, but I think there's a lot of evidence suggesting that at least "soft determinism" is the order of the day. In a brilliant and highly provocative new paper, "Hasta La Vista Privacy, or How Technology Terminated Privacy,"  Konstantinos K. Stylianou of the University of Pennsylvania Law School discusses varieties of technological determinism as it pertains to information control and notes:



In-between the two extremes (technology as the defining factor of change and technology as a mere tangent of change) and in a multitude of combinations falls the so called soft determinism; that is, variations of the combined effect of technology on one hand and human choices and actions on the other. (p. 46)


Unfortunately, Stylianou notes, "The scope of soft determinism is unfortunately so broad that it loses all normative value. Encapsulated in the axiom 'human beings do make their world, but they are also made by it,' soft determinism is reduced to the self-evident." Nonetheless, he argues, "a compromise can be reached by mixing soft and hard determinism in a blend that reserves for technology the predominant role only in limited cases," since he believes "there are indeed technologies so disruptive by their very nature they cause a certain change regardless of other factors." (p. 46) He concludes his essay by noting:



it seems reasonable to infer that the thrust behind technological progress is so powerful that it is almost impossible for traditional legislation to catch up. While designing flexible rules may be of help, it also appears that technology has already advanced to the degree that it is able to bypass or manipulate legislation. As a result, the cat-and-mouse chase game between the law and technology will probably always tip in favor of technology. It may thus be a wise choice for the law to stop underestimating the dynamics of technology, and instead adapt to embrace it. (p. 54)


That pretty much sums up where I'm at on most information control issues and explains why I sound so fatalistic at times, even if I do believe that law can have an impact at the margins. Such "soft determinism" will be hard for some to swallow. Many will simply refuse to accept it, especially when they hear statements like those Stylianou makes in the context of privacy, such as: "the advancement of digital technology is ineluctably bound to have a destructive impact on privacy" (p. 47), or "technology has made it indeed so easy to collect personal data that in many cases they have lost their individual value, and instead function merely as statistical or ancillary data" (p. 51), or "What technological determinism teaches us so far is that people will always react negatively to more intrusive technology, but in the end they will probably succumb." (p. 54)



One might cynically view this simply as a more eloquent restatement of Scott McNealy's famous quip: "privacy is dead, get over it." While that's a bit of an overstatement, it's nonetheless true that privacy is under enormous strain because of modern digital developments (summarized in Exhibit 3 below). But, again, everything is under enormous strain. Perhaps, therefore, we need a reformulation of McNealy's quip: "Information control is dead, get over it."



Anyway, going forward, we need a framework to think about information control efforts. I've been working with my Mercatus Center colleague Jerry Brito to develop just that in a forthcoming paper (current running title: "The Trouble with Information Control.")  To begin, we simplify matters by dividing information control efforts into four big buckets, as shown in Exhibit 1 below. (Note: With Jerry Brito's help, I have reworked these categories since first outlining them here):



Exhibit 1: RATIONALES FOR INFORMATION CONTROL



(1) Censorship / Speech Control




politically unpopular speech
porn
violent content
hate speech
cyberbullying


(2) Privacy




defamation
reputation


(3) Copyright & Trademark Protection



(4) Security




state secrets
national security
law enforcement
cybersecurity
online gambling


Next, we can consider various legal responses to these objects of information control, as detailed in Exhibit 2:



Exhibit 2: LEGAL & REGULATORY RESPONSES / APPROACHES TO INFORMATION CONTROL




Intermediary deputization / secondary liability
Individual prosecutions / fines
Controls on speech / expression
Controls on monetary flows
Other Regulation
Taxation / fines
Agency enforcement / adjudication


Finally, we need to consider how efforts to control information today are greatly complicated by problems or phenomena that are unique to the Internet or the Information Age, as outlined in Exhibit 3:



Exhibit 3: INFORMATION CONTROL CONSIDERATIONS / COMPLICATIONS




Media & Technological Convergence
Decentralized, Distributed Networking
Unprecedented Scale of Networked Communications
Explosion of the Overall Volume of Information
Unprecedented Individual Information Sharing Through User-Generation of Content and Self-Revelation of Data


In this upcoming paper, Jerry and I will provide case studies based on many of the issues outlined in Exhibit 1 and show how the information control methods shown in Exhibit 2 typically fail to slow or restrict information flows because of the factors outlined in Exhibit 3. Assuming we can prove our thesis — that soft determinism is the order of the day and information control efforts of all varieties are increasingly difficult (and often completely futile) — I fully expect that we will make just about everybody unhappy with us!



However, I want to conclude by noting that just because I am somewhat fatalistic or deterministic about the likely failure of most information control proposals or mechanisms, it doesn't mean I am willing to just throw my hands in the air and say there's absolutely nothing that can be done to address some of the concerns listed in Exhibit 1.  In my work on how to address online child safety issues, I tried to develop what I call a "3-E Solution" to address these concerns.  In my paper with Jerry, I'm hoping to use this as a framework for how to deal with all information control concerns going forward:




Education: Get more information out about the issue / concern.
Empowerment: Give consumers more and better tools to act on that information.
(Selective) Enforcement: Have law step in at the margins when it's appropriate and cost-efficient, and only after education and empowerment fail.


Of course, how much stress we place on each component of this toolbox will depend on the issue. I've already suggested that the last "E" of enforcement will be largely ineffective, especially when outright prohibition of particular information flows is the objective. But enforcement could be more effective in other contexts, such as holding companies accountable for the promises they make to consumers, policing industry self-regulatory schemes, or demanding more transparency and disclosure. Those enforcement practices have helped in the child safety and privacy contexts. In other contexts, the harm in question may be so severe — child pornography, for example — that we would bypass the education and empowerment steps altogether and go to much greater lengths to make the enforcement option work. Even then, we should keep our expectations in check and avoid a rush to extreme solutions.



There's much more to be explored here. Stay tuned.




Published on April 29, 2011 11:21

April 27, 2011

Overclassification stifles the cybersecurity conversation

Thanks to all of you who have sent your comments about the new cybersecurity paper Tate Watkins and I wrote. It's been getting a good reception.



James Fallows of The Atlantic, for example, noted yesterday that the paper "represents a significant libertarian-right voice of concern about this latest expansion of the permanent national-security surveillance state," and that while we shouldn't underestimate cyber risks, "the emphasis on proportionate response, and the need to guard other values, comes at the right time. We should debate these threats rather than continuing to cower."



Today I wanted to bend your ears (or eyes, I guess) with another excerpt. The subject today is the "if you only knew what we know" rationale for government action. I'm happy to see that Sen. Sheldon Whitehouse has a new bill getting right at the problem of overclassification, which allows leaders to get away with "just trust us" rhetoric. The excerpt is after the jump.

One of the most widely cited arguments for increased federal involvement in cybersecurity can be found in the report of the Commission on Cybersecurity for the 44th Presidency, which I've discussed here before.



The report makes assertions about the nature of the threat, such as, "America's failure to protect cyberspace is one of the most urgent national security problems facing the new administration that will take office in January 2009. It is . . . a battle fought mainly in the shadows. It is a battle we are losing." Unfortunately, the report provides little evidence to support such assertions. There is a brief recitation of various instances of cyber-espionage conducted against government computer systems. However, it does not put these cases in context, nor does it explain how these particular breaches demonstrate a national security crisis, or that "we are losing."



The report also notes that Department of Defense computers are "probed hundreds of thousands of times each day." This is a fact that proponents of increased federal involvement in cybersecurity often cite as evidence of a looming threat. However, probing and scanning networks are the digital equivalent of trying doorknobs to see if they are unlocked—a maneuver available to even the most unsophisticated would-be hackers. The number of times a computer network is probed is not evidence of an attack or a breach, or even of a problem.
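To see just how low the technical bar for "probing" is, consider this minimal sketch (the host and ports are placeholders for a machine you control; it is an illustration, not a tool): a basic TCP probe takes a few lines of standard-library Python, which is exactly why raw probe counts reveal little about attacker sophistication or actual risk.

```python
# A minimal sketch of TCP "doorknob trying" using only the standard
# library; host and ports below are placeholders.
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if `host` accepts a TCP connection on `port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Check a handful of well-known ports on the local machine.
for port in (22, 80, 443):
    print(port, probe("localhost", port))
```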



Nevertheless, the Commission report and the cybersecurity bills it has inspired prescribe regulation of the Internet. The report asserts plainly: "It is undeniable that an appropriate level of cybersecurity cannot be achieved without regulation, as market forces alone will never provide the level of security necessary to achieve national security objectives." But without any verifiable evidence of a threat, how is one to know what exactly is the "appropriate level of cybersecurity" and whether market forces are providing it? How is one to judge whether the recommendations that make up the bulk of the Commission's report are necessary or appropriate?



Although never clearly stated, the implication seems to be that the report's authors are working from classified sources, which might explain the dearth of verifiable evidence. To its credit, the Commission laments what it considers the "overclassification" of information related to cybersecurity. But this should not serve as an excuse. If our past experience with threat inflation teaches us anything, it is that we cannot accept the word of government officials with access to classified information as the sole source of evidence for the existence or scope of a threat. The watchword is "trust but verify." Until those who seek regulation can produce clear reviewable evidence of a threat, we should discount assertions such as "The evidence is both compelling and overwhelming," and, "This is a strategic issue on par with weapons of mass destruction and global jihad."




Published on April 27, 2011 14:15

The iPhone flap and the anatomy of a privacy panic

I've written a long article this morning for CNET (see "Privacy panic debate: Whose data is it?") on the discovery of the iPhone location-tracking file and the utterly predictable panic response that followed. Its life cycle follows precisely the crisis model Adam Thierer has so frequently and eloquently traced, most recently here on TLF.



In particular, the CNET article takes a close and serious look at Richard Thaler's column in Saturday's New York Times, "Show us the data. (It's ours, after all.)" Thaler uses the iPhone scare as an occasion to propose a regulatory fix to the "problem" of users being unable to access, in "computer-friendly form," copies of the information "collected on" them by merchants.



That information, Thaler assumes, is a discrete kind of property and must, since it refers to customer behavior, be the sole property of the customer, "lent" to the merchant and reclaimable at any time.



Information can certainly be treated as if it were property, and often is under law. Personally, I don't find the property metaphor to be the most useful in dealing with intangibles, but if you're going to go there you need to understand the economics of how information behaves in ways very different from physical property. (See my chapter on the subject in "The Next Digital Decade.")



Thaler's "proposed rule" is wrong on the facts (he doesn't seem to know how cell phone bills really look, and he certainly doesn't understand how supermarket club cards operate — and these are his leading examples of the "problem"), wrong on the law, and even wrong on the business and economics. (Other than that, it's a pretty good article!)



This kind of intellectual frivolity is par for the course with many academic economists. Thaler is at the University of Chicago's business school, and describes himself as an economist and behavioral scientist. That means instead of throwing around calculus all day, he devises toy experiments with a few subjects — or reads the findings of other behavioral scientists who have done the same.



Not only is the article bad privacy policy, it's bad economics. The latter is certainly the more serious concern. Nearly 70 years after Ronald Coase called on economists to put down their pencil-and-paper methods and do empirical research into how markets actually work, the profession has if anything become more insular. There are exceptions, of course, but they stand out in a field of mediocrity.



Which is too bad.  We need good economists now, more than ever.


Published on April 27, 2011 08:48
