Adam Thierer's Blog, page 98

March 27, 2012

Bruce Schneier on the importance of trust in society

http://surprisinglyfree.com/wp-content/uploads/SchneierSmile.jpg

On the podcast this week, Bruce Schneier, internationally renowned security expert and author, discusses his new book, "Liars & Outliers: Enabling the Trust That Society Needs To Thrive." Schneier starts the discussion by looking at society and trust and explains why he thinks the two are necessary for civilization. According to Schneier, two concepts contribute to a trustful society: first, humans are mostly moral; second, informal reputation systems incentivize trustworthy behavior. The discussion turns to technology and trust, and Schneier talks about how the information society yields greater consequences when trust is breached. He then describes how society deals with technology and trust and why he thinks the system is not perfect but working well overall.





Related Links

"Liars and Outliers: Enabling the Trust that Society Needs to Thrive", by Schneier"Why Doesn't Society Just Fall Apart?", Forbes.comBruce Schneier-CFP 2011-Keynote on Cyberwar Rhetoric, JerryBrito.org

To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on March 27, 2012 10:00

March 26, 2012

danah boyd's "Culture of Fear" Talk

I highly recommend that everyone watch this interesting new talk by danah boyd on "Culture of Fear + Attention Economy = ?!?!" In her talk, danah discusses "how fear gets people into a frenzy" or panic about new technologies and new forms of culture. "The culture of fear is the idea that fear can be employed by marketers, politicians, the media, and the public to really regulate the public… such that they can be controlled," she argues. "Fear isn't simply the product of natural forces. It can systematically be generated to entice, motivate, or suppress. It can be leveraged as a political tool and those in power have long used fear for precisely these goals." I discuss many of these issues in my new 80-page white paper, "Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle."





Webstock '12: danah boyd – Culture of Fear + Attention Economy = ?!?! from Webstock on Vimeo.



danah points out that new media is often leveraged to generate fear and so we should not be surprised when the Internet and digital technologies are used in much the same way. She also correctly notes that our cluttered, cacophonous information age might also be causing an escalation of fear-based tactics. "The more there are stimuli competing for your attention, the more likely it is that fear is going to be the thing that will drive your attention" to the things that some want you to notice or worry about.



I spent some time in my technopanics paper discussing this point in Section III.C ("Bad News Sells: The Role of the Media, Advocates, and the Listener.") Here's the relevant passage:



Fear mongering and prophecies of doom have always been with us, since they represent easy ways to attract attention and get heard. "Pessimism has always been big box office," notes [Matt] Ridley. This is even more true in the midst of the modern information age cacophony. Breaking through all the noise is hard when competition for our eyes and ears is so intense. It should not be surprising, therefore, that sensationalism and alarmism are used as media differentiation tactics. This is particularly true as it relates to kids and online safety. "Unbalanced headlines and confusion have contributed to the climate of anxiety that surrounds public discourse on children's use of new technology," argues Professor Sonia Livingstone of the London School of Economics. "Panic and fear often drown out evidence."

Sadly, most of us are eager listeners and lap up bad news, even when it is overhyped, exaggerated, or misreported. [Michael] Shermer notes that psychologists have identified this phenomenon as "negativity bias," or "the tendency to pay closer attention and give more weight to negative events, beliefs, and information than to positive." Negativity bias, which is closely related to the phenomenon of "pessimistic bias" … is frequently on display in debates over online child safety, digital privacy, and cybersecurity.


Unfortunately, as danah correctly notes in her remarks, "it's extremely difficult to combat fear [but] it's extremely easy to ramp it up." Worse yet, "it's impossible to combat fear with statistics." As I note in my paper, fear tactics are remarkably powerful rhetorical devices that can be enormously challenging to overcome. However, I remain a bit more optimistic than danah that facts and common sense can eventually prevail. After all, most panics don't last; they fizzle out after a time. I'd like to believe that part of the reason they do is that facts, education, awareness, and reasonable discussion all combine to debunk fears and help us cope with the realities of cultural or technological change. On the other hand, it may instead simply be the case that one panic crowds out an older one! As I explain in the paper (on pp. 42-43):



Perhaps it is the case that the unique factors that combine to create technopanics tend to dissipate more rapidly over time precisely because technological changes continue to unfold at such a rapid clip. Maybe there is something about human psychology that "crowds out" one panic as new fears arise. Perhaps the media and elites lose interest in the panic du jour and move on to other issues. Finally, people may simply learn to accommodate cultural and economic changes. Indeed, some of the things that evoke panic in one generation come to be worshiped (or at least respected) in another. As The Economist magazine recently noted, "There is a long tradition of dire warnings about new forms of media, from translations of the Bible into vernacular languages to cinema and rock music. But as time passes such novelties become uncontroversial, and eventually some of them are elevated into art forms." These topics and explanations are ripe for future study.


danah also notes that "one of the frustrating things about my job these days is that I'm dealing with the idea that 'protect the kids' becomes justification for regulating the Internet in any way you can possibly imagine." Of course, that's nothing new. "It's for the children!" is the mantra we hear regularly in media and Internet policy debates. [Some of you might find my mock testimony on this front to be humorous: "It's For the Children: A Template for Hill Testimony on Child Safety Issues."] In my paper, I devote a great deal of time to explaining how generational differences and fears about the impact of technology on society–especially the young–account for a large part of the pessimism at work in debates over these issues.



Anyway, please listen to danah's talk. It's well worth your time. And I hope some of you will read my paper as well.



Note: All my TLF essays on moral panics and technopanics can be found here.




Published on March 26, 2012 11:54

Initial Thoughts on FTC's Final Privacy Report

The Federal Trade Commission (FTC) has just released its final privacy framework proposal, "Protecting Consumer Privacy in an Era of Rapid Change." The agency released a draft report with the same title back in late 2010 and then asked for comments. [Here were my comments to the agency.] The FTC's final report comes just a month after the Obama Administration released its 50-page privacy framework, Consumer Data Privacy in a Networked World, which included a privacy "bill of rights." That report was primarily driven by the Department of Commerce. [I penned a Forbes column about that report the day it was released.]  The new FTC report is fairly consistent with the earlier Commerce Department report.  Here are some of the key themes or recommendations from the final FTC report:




rooted in a set of baseline privacy principles with a strong push for "privacy by design," more consumer choice, and better transparency.
along with the Department of Commerce, the agency will work with industry to develop privacy codes of conduct and then give them teeth with the possibility of FTC enforcement.
pushes for industry to pursue voluntary "Do Not Track" mechanism, which to the agency apparently means "do not collect" any info.
calls on Congress to pass data security legislation and legislation "to provide greater transparency for, and control over, the practices of information brokers." Also, "to further increase transparency, the Commission calls on data brokers that compile data for marketing purposes to explore creating a centralized website where data brokers could (1) identify themselves to consumers and describe how they collect and use consumer data and (2) detail the access rights and other choices they provide with respect to the consumer data they maintain."
the agency will host a workshop later this year to discuss privacy within "large platform providers." The report notes: "To the extent that large platforms, such as Internet Service Providers, operating systems, browsers, and social media, seek to comprehensively track consumers' online activities, it raises heightened privacy concerns."
the agency is also stepping up oversight on mobile privacy issues.
the agency says it "generally supports the exploration of efforts to develop additional mechanisms, such as the 'eraser button' for social media," but stops short of saying it should be mandated at this time.


Some of my initial random thoughts about the FTC report:



Not as bad as it could have been



Overall, the FTC's final privacy report is not as heavy-handed as it could have been. There's no sweeping, immediate effort to impose a top-down privacy regime or "Data Directive" that some of us feared would put the FTC in a position to become a full-blown Data Protection Agency and regulate every facet of the information economy.



… but "self-regulation" sure sounds a lot like European-style "co-regulation."



Nonetheless, that doesn't mean it can't happen. It is clear that the new FTC and Commerce privacy reports signal the next step in the Obama Administration's gradual move toward a "co-regulation" model for Internet governance on the privacy front. The Administration seems to favor a "government steers, industry rows" approach to privacy policy: federal regulators get a broad oversight role that lets them "nudge" the tech industry in certain directions, with a stern but amorphous "do this or else" sword of Damocles hanging over industry "self-regulatory" decisions on this front.



In his dissenting statement, Commissioner J. Thomas Rosch makes this point (on C-8):



The Report also acknowledges that it is intended to serve as a template for legislative recommendations. Moreover, to the extent that the Report's "best practices" mirror the Administration's privacy "Bill of Rights," the President has specifically asked either that the "Bill of Rights" be adopted by the Congress or that they be distilled into "enforceable codes of conduct." As I testified before the same subcommittee, this is a "tautology;" either these practices are to be adopted voluntarily by the firms involved or else there is a federal requirement that they be adopted, in which case there can be no pretense that they are "voluntary." It makes no difference whether the federal requirement is in the form of enforceable codes of conduct or in the form of an act of Congress. Indeed, it is arguable that neither is needed if these firms feel obliged to comply with the "best practices" or face the wrath of "the Commission" or its staff.


Columbia Law School professor and former FTC adviser Tim Wu refers to this as an "agency threats" model of governance. That's generally what the FTC is endorsing here. Intimidation is often a very effective regulatory tool. Thus, I hope we can dispense with this silly notion that this process represents truly voluntary self-regulation. Ask yourself this: If the FTC and Dept of Commerce had instead proposed this same framework for overseeing private media ratings or online speech "codes of conduct," would anyone seriously call it "voluntary" or "self-regulatory"? I don't think so. We'd understand that these implied threats constituted a form of indirect speech control. The only difference in this case–as I have noted here many times before–is that a bit of selective morality is in play when it comes to privacy policy; many of those who oppose regulation-via-intimidation in other contexts are, unfortunately, positively giddy when it comes to privacy! And so we have arrived at the point where these tactics have become favored information control mechanisms in some contexts but not others.



Trade-offs associated with regulation still must be considered.



If the Obama Administration's new co-regulatory model results in the sort of de facto regulatory regime that many wanted them to just impose forcefully right from the start, then we are right back at the same point we were before in terms of the trade-offs between information sharing and the largely unregulated economy of "free" online sites and services. As I noted in my filing to the FTC in this matter: "There is no free lunch. While well-intentioned, government regulation that attempts to create a cost-free opt-out for data collection and targeted online advertising will likely have damaging unintended consequences. In terms of direct costs to consumers, Do Not Track could result in higher prices for service as paywalls go up or, at a minimum, advertising will become less relevant to consumers and, therefore, more "intrusive" in other ways." To be clear, we could get this result even in the absence of a top-down regulatory regime if the FTC and Commerce are able to use threats to accomplish the same regulatory objectives.



"Harmonization" is overrated.



The final FTC report continues the Obama Administration's misguided obsession with "global harmonization," that is, with achieving more consistent international privacy norms and regulations. As I have noted before, this is an epic blunder. Some might point out that it is precisely because our norms aren't the same as Europe's, or the rest of the world's, that America's Internet sector is better positioned and more highly regarded than the rest of the planet's online sectors and operators! Even if you don't accept that premise, you should be skeptical of the wisdom of doing whatever it takes to make America's privacy policies more consistent with the regulatory models others follow. Sometimes when it comes to global standards and "harmonization," the better approach is to just go our own way.



The FTC has been doing plenty without additional regulatory authority.



Ironically, the report opens with two pages (pp. ii-iii) of "developments since issuance of the preliminary report," listing the many ways the FTC has been active on this front over the past year in the absence of expanded authority. That includes major actions against two tech titans, Google and Facebook, which included the FTC slapping 20-year privacy audits on them. The FTC also lists many other enforcement actions (via COPPA, FCRA, and general Sec. 5 authority) and other educational steps it has taken over the past year. All of which raises the question: Why, then, do we need expanded federal regulation and enhanced agency power over the information economy?



Does anyone still care about personal responsibility?



Sadly, the report doesn't have much to say about the role of personal responsibility in this context. It does note that "All stakeholders should expand their efforts to educate consumers about commercial data privacy practices." That's good. But had this been an agency report on child safety issues, I have to imagine that the agency would have pointed out that best practices begin at home. As I noted in my filing to the agency, "For some reason, when the topic of debate shifts from concerns about potentially objectionable content to the free movement of personal information, personal responsibility and self-regulation become the last option, not the first. . . . those who advocate personal responsibility and industry self-regulatory approaches to free-speech and child-protection issues should be advancing the same position with regards to privacy. . . . it is not unreasonable to expect privacy-sensitive consumers to exercise some degree of personal responsibility to avoid unwanted content or communications in this context, just as they must in the context of objectionable content or online child safety." Again, the Obama Administration doesn't seem very interested in pushing personal responsibility as the first order of business for online privacy the way it has for online safety issues. That's a real shame.



There is another way.



In closing, I continue to believe that privacy is best governed by a set of evolutionary norms, ongoing online marketplace interactions and experiments, contractual negotiations, public pressures, educational efforts, user empowerment, personal responsibility, and targeted legal enforcement and the use of state torts when true harms can be demonstrated. That's been the uniquely American approach to privacy protection and we should not abandon it lightly.



I'll try to update this post after I read through the report a second time but wanted to just get these initial thoughts out for now.



 



Additional Reading:




my big Mercatus Center filing to the FTC last year on privacy and Do Not Track regulation
my recent Forbes oped, "The Problem with Obama's "Let's Be More Like Europe" Privacy Plan"


other TLF essays…




Isn't "Do Not Track" Just a "Broadcast Flag" Mandate for Privacy?
Privacy as an Information Control Regime: The Challenges Ahead
Obama Admin's "Let's-Be-Europe" Approach to Privacy Will Undermine U.S. Competitiveness
Lessons from the Gmail Privacy Scare of 2004
When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed
And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars
Book Review: Solove's Understanding Privacy


 




Published on March 26, 2012 10:37

FTC Issues Groundhog Report on Privacy

The Federal Trade Commission issued a report today calling on companies "to adopt best privacy practices." In related news, most people support airline safety… The report also "recommends that Congress consider enacting general privacy legislation, data security and breach notification legislation, and data broker legislation."



This is regulatory cheerleading of the same kind our government's all-purpose trade regulator put out a dozen years ago. In May of 2000, the FTC issued a report finding "that legislation is necessary to ensure further implementation of fair information practices online" and recommending a framework for such legislation. Congress did not act on that, and things are humming along today without top-down regulation of information practices on the Internet.



By "humming along," I don't mean that all privacy problems have been solved. (And they certainly wouldn't have been solved if Congress had passed a law saying they should be.) "Humming along" means that ongoing push-and-pull among companies and consumers is defining the information practices that best serve consumers in all their needs, including privacy.



Congress won't be enacting legislation this year, and there doesn't seem to be any groundswell for new regulation in the next Congress, though President Obama's reelection would leave him unencumbered by future elections and so inclined to indulge the pro-regulatory fantasies of his supporters.



The folks who want regulation of the Internet in the name of privacy should explain how they will do better than Congress did with credit reporting. In forty years of regulating credit bureaus, Congress has not come up with a system that satisfies consumer advocates' demands. I detail that government failure in my recent Cato Policy Analysis, "Reputation under Regulation: The Fair Credit Reporting Act at 40 and Lessons for the Internet Privacy Debate."




Published on March 26, 2012 08:53

March 22, 2012

Video from Internet Tax Policy Event

On Monday it was my great pleasure to participate in a Cato Institute briefing on Capitol Hill about "Internet Taxation: Should States Be Allowed to Tax outside Their Borders?" Also speaking was my old friend Dan Mitchell, a senior fellow with Cato. From the event description: "State officials have spent the last 15 years attempting to devise a regime so they can force out-of-state vendors to collect sales taxes, but the Supreme Court has ruled that such a cartel is not permissible without congressional approval. Congress is currently considering the Main Street Fairness Act, a bill that would authorize a multistate tax compact and force many Internet retailers to collect sales taxes for the first time. Is this sensible? Are there alternative ways to address tax "fairness" concerns in this context?"



Watch the video for our answers. Also, here's the big paper that Veronique de Rugy and I penned for Cato on this back in 2003, and here's a shorter recent piece we did for Mercatus.






Published on March 22, 2012 13:45

Cybersecurity Threat Inflation Watch: Blood-Sucking Weapons!

In their paper, "Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy," my Mercatus Center colleagues Jerry Brito and Tate Watkins warned of the dangers of "threat inflation" in cybersecurity policy debates. In early 2011, Mercatus also published a paper by Sean Lawson, an assistant professor in the Department of Communication at the University of Utah, entitled "Beyond Cyber Doom" that documented how fear-based tactics and cyber-doom scenarios and rhetoric increasingly were on display in cybersecurity policy debates.  Finally, in my recent Mercatus Center working paper, "Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle," I extended their threat inflation analysis and developed a comprehensive framework offering additional examples of, and explanations for, threat inflation in technology policy debates.



These papers make it clear that a sort of hysteria has developed around cyberwar and cybersecurity issues. Frequent allusions are made in cybersecurity debates to the potential for a "Digital Pearl Harbor," a "cyber cold war," a "cyber Katrina," or even a "cyber 9/11." These analogies are made even though those historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to "cyber bombs" even though no one can be "bombed" with binary code. And new examples of such inflationary rhetoric seem to emerge each day. For example, today NPR's Morning Edition program featured a segment by Tom Gjelten entitled "Cybersecurity Bill: Vital Need Or Just More Rules?" that included comments from Michael McConnell, a former Director of National Intelligence. Here's what McConnell said about cyberwar at the 6:30 mark of the segment:



"this threat is so intrusive, it's so serious, it could literally suck the life's blood out of this country, and if we don't address it, it's going to be a severe impact and so I think we have no choice but to address it and some of that process will be regulatory."


Wow, who knew the blood could literally be drained from our bodies by cyberattacks! Have the Chinese or Iranians developed a cyber-superweapon that can reach through our screens and suck the life right out of us? (Like a cross between Videodrome and Halloween III: Season of the Witch!)



I'm being silly, of course. And some might dismiss such rhetorical flourishes or even defend them in the name of "doing whatever it takes" to raise awareness about an important concern. But these fear-based tactics are dangerous. As Brito and Watkins note, "when a threat is inflated, the marketplace of ideas on which a democracy relies to make sound judgments—in particular, the media and popular debate—can become overwhelmed by fallacious information." In my paper, I argue that technopanics and threat inflation can have many troubling ramifications. They can:




Foster animosities and suspicions among the citizenry;
Create distrust of many institutions, especially the press;
Often divert attention from actual, far more serious risks; and,
Lead to calls for information control.


But we shouldn't expect such rhetorical tactics to subside any time soon. After all, bombastic predictions of an impending cyber-apocalypse are nothing new, especially because they are such an effective way to grab attention, headlines, and funding.



Back in January 1996, the conservative Weekly Standard magazine ran a truly over-the-top cover story by Charles J. Dunlap entitled "How We Lost the High-Tech War of 2007." (The whole outlandish article is worth reading, for its comedic value if nothing else.) It included a dramatic Tom Clancy-esque cover illustration of the U.S. Capitol building smoldering in flames after an apparent cyber-attack of some sort. Of course, there was no High-Tech War of 2007. But talk is cheap and there are few downsides to using such alarmist tactics. Pessimistic critics who use threat inflation to advance their causes are rarely held accountable when their panicky predictions fail to come to pass. As journalist Matt Ridley correctly observes, "Pessimism has always been big box office." Bad news sells, and there are always plenty of buyers.



It's a shame that rational debate is increasingly impossible in this and other Internet policy arenas.




Published on March 22, 2012 13:15

LightSquared and Dish: What Would Coase Do?

On CNET today, I have a longish post on the FCC's continued machinations over LightSquared's and Dish Network's respective efforts to use existing satellite spectrum to build terrestrial mobile broadband networks. Both companies plan to build 4G LTE networks; LightSquared has already spent $4 billion building out its network, which it plans to offer wholesale.



After first granting and then, a year later, revoking LightSquared's waiver to repurpose its satellite spectrum, the agency has taken a more conservative (albeit slower) course with Dish. Yesterday, the agency initiated a Notice of Proposed Rulemaking that would, if adopted, assign flexible use rights to about 40 MHz of MSS spectrum licensed to Dish.



Current allocations of spectrum have little to do with the technical characteristics of different bands. That existing licenses limit Dish and LightSquared to satellite applications, for example, is simply an artifact of more-or-less random carve-outs in the absurdly complicated spectrum map managed by the agency since 1934. Advances in technology make it possible to use many different bands successfully for many different purposes.



But the legacy of the FCC's command-and-control model, which allocates spectrum to favor "new" services (new, that is, until they are made obsolete in later years or decades) and shapes competition to the agency's changing whims, is a confusing and unnecessary pile-up of limitations and conditions that severely and artificially limits the ways in which spectrum can be redeployed as technology and consumer demands change. Today, the FCC sits squarely in the middle of each of over 50,000 licenses, a huge bottleneck that is making the imminent spectrum crisis in mobile broadband even worse.



Even with the best of intentions, the agency can't possibly continue to micromanage the map. And, as the LightSquared and Dish stories demonstrate yet again, the risk of agency capture and political pressure often mean the agency doesn't do the right thing even when it does act.



Who would be the more efficient and neutral regulator? According to Nobel Prize-winning economist Ronald Coase's seminal 1959 article, "The Federal Communications Commission," the answer is the market. In his trademark straightforward, common-sense style, Coase elegantly dismantles the idea that scarce spectrum resources demand a non-market solution of government management.



For one thing, Coase demonstrates how screwed up the system already was over fifty years ago.  There's little doubt that the problems he describes have only gotten worse with time and increased demand on the airwaves by insatiable consumers.



Instead, Coase proposed to treat spectrum like any other industry input–as property.  The FCC, he said, should auction spectrum rights to the highest bidder, without licenses, conditions, or limitations on use, and then stand back.  (He acknowledged the risk of antitrust problems, but, as in any industry, such problems could be addressed by antitrust regulators and not the FCC.)  Spectrum rights would efficiently change hands when new applications and devices created higher-value uses.



Potential interference problems–such as those raised by GPS device manufacturers in the case of LightSquared–would be resolved precisely as they are in other property contexts. Without an FCC to run to, the parties would be forced to negotiate against a backdrop of established liability rules and a safety net of potential litigation. Indeed, LightSquared and GPS offer a classic example of Coase's later work demonstrating that, regardless of how property is initially allocated, liability rules ensure that parties will bargain to the most socially efficient solution to interference.
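To make that bargaining point concrete, here is a minimal numerical sketch in Python. The costs are entirely hypothetical (nothing here comes from the actual LightSquared proceeding); the only point is that, with low transaction costs, the parties end up funding the cheaper fix no matter which side starts out holding the right.

```python
# Hypothetical remedy costs, purely for illustration (not figures from the real dispute).
FILTER_COST = 300e6     # GPS makers add filters that reject the adjacent-band signal
REDESIGN_COST = 900e6   # LightSquared re-engineers its network to avoid the interference

def coasean_outcome(right_holder):
    """Return the remedy the parties bargain to, its cost, and who pays for it."""
    remedy = "GPS filters" if FILTER_COST < REDESIGN_COST else "network redesign"
    cost = min(FILTER_COST, REDESIGN_COST)
    if right_holder == "LightSquared":
        # GPS makers must accommodate the signal, so they pay for the cheaper remedy.
        payer = "GPS makers"
    else:
        # LightSquared must avoid interference; rather than spend $900M on a redesign,
        # it pays GPS makers to install the cheaper filters.
        payer = "LightSquared"
    return remedy, cost, payer

for holder in ("LightSquared", "GPS makers"):
    remedy, cost, payer = coasean_outcome(holder)
    print(f"Right held by {holder}: {remedy} (${cost/1e6:.0f}M), paid for by {payer}")
```

Under either allocation the interference is resolved the same, cheaper way; the initial assignment of the right only determines who writes the check, which is the invariance result being invoked here.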



Of course we'll never know if the socially-optimal solution here is for LightSquared to protect GPS devices from receiving its signal or for device manufacturers to change their designs to stay out of LightSquared's bands.  The heavy hand of the regulator has foreclosed a market solution, or even an attempt at negotiations.



Instead, we have the disaster of the FCC's decision in January 2011 to grant a conditional waiver to LightSquared and then, last month, to indefinitely revoke it. Meanwhile, LightSquared spent $4 billion on infrastructure it may never use and lost its CEO and key customers, including Sprint. No one is happy, and no one can reasonably argue this was an optimal outcome, or even close to one.



For Dish, the NPRM will ensure a more orderly process, but at the cost of months of delay, perhaps longer, before Dish can begin building its terrestrial network. And in the interim, all sorts of irrelevant issues may interfere with an orderly (and expensive) resolution.



When Coase proposed a property model for spectrum in 1959, the idea was considered too radical. Congress and the FCC have, slowly but surely, taken pieces of the proposal to heart, introducing auctions (but not property rights) in the 1990s. Yesterday's NPRM takes a small step toward more flexible use licenses, but this may be too little reform, too late. We have all the evidence we need that micromanagement of spectrum can't possibly keep up with the pace of innovation. Time to try a new, fifty-year-old approach.




Published on March 22, 2012 12:47

March 21, 2012

How Much is a Bad Blog Post Worth?

I was astounded to see the misstatements and misapplication of math in a recent Atlantic blog post called "How Much Is Your Data Worth? Mmm, Somewhere Between Half a Cent and $1,200."



For his back-of-envelope calculations about the value of personal data, Alexis Madrigal writes, "User profiles — slices of our digital selves — are sold in large chunks, i.e. at least 10,000 in a batch. On the high end, they go for $0.005 per profile, according to advertising-industry sources."



The dollar value isn't crazy—a CPM rate of about five cents is on the low end—but he gets the nature of the transaction precisely wrong. Advertisers place ads with content providers like Facebook and Google and with ad networks. Those providers and networks direct the ads to their visitors, trying to get each ad in front of the people the advertiser wants to reach. They do not sell the information they use to guess at what interests consumers—consumers' profiles, to whatever extent they exist.



If content providers sold data about their visitors to advertisers, this would undercut their own role in the advertising business. There wouldn't be a second sale to make. And doing so would require a radical re-engineering of targeted advertising, which is largely cookie-based. The purchaser of the profile wouldn't know how to find the subject of the profile in order to deliver an ad.
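Here is a toy sketch of that mechanic, with invented names and an in-memory "profile store" standing in for an ad network's systems (illustrative only, not any real network's API). The profile stays with the network, which resolves its own cookie ID at serving time; the advertiser receives an impression, not the data, and a detached copy of the profile would be useless without the cookie linking it to a browser.

```python
# Toy model of cookie-based ad serving; names and data are made up for illustration.

# Profiles live with the ad network, keyed by the network's own cookie ID.
PROFILES = {"cookie-123": {"interests": {"running", "travel"}}}

# Advertisers hand over campaigns (targeting criteria plus a creative), not payments for raw profiles.
CAMPAIGNS = [
    {"advertiser": "ShoeCo", "target_interest": "running", "creative": "shoe_banner"},
]

def serve_ad(cookie_id):
    """Match the visitor's locally held profile to a campaign at ad-serving time."""
    interests = PROFILES.get(cookie_id, {}).get("interests", set())
    for campaign in CAMPAIGNS:
        if campaign["target_interest"] in interests:
            return campaign["creative"]   # the advertiser gets an impression...
    return "generic_banner"               # ...never the underlying profile

print(serve_ad("cookie-123"))  # shoe_banner
print(serve_ad("cookie-999"))  # generic_banner: unknown visitor, and no profile ever leaves the network
```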



Madrigal repeats several times that "profiles" are "sold." It's a highly misleading characterization, creating the impression that dossiers of information about people are circulating around the Internet on a strange black market. On the contrary, profiles are held—not sold—by content providers and advertising networks. There are privacy concerns enough with that business model. We don't need it mis-described.



I probably would have let this pass. Madrigal isn't the first to get the advertising business model wrong. (And he hasn't repeated the error that I know of.) But then comes the bad math.



Writes Madrigal:



[L]et's not forget the rest of the Internet advertising ecosystem either, which the Internet Advertising Bureau says supported $300 billion in economic activity last year. That's more than $1,200 per Internet user and much of the online advertising industry's success is predicated on the use of this kind of targeting data.


Personal information is one input into part of the online advertising ecosystem. It makes no sense to assign all the value of the entire ecosystem to that one input. The auto industry is about a $400 billion industry, and there are about 250 million car tires sold in the U.S. each year. That does not mean each tire is worth $1,600.
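The arithmetic in both comparisons is the same move: divide an ecosystem-wide total by the unit count of a single input. A quick sketch using the post's round numbers (the U.S. internet-user count is my own rough assumption for 2012):

```python
# Round numbers from the post; the user count is a rough 2012 U.S. estimate (an assumption).
AD_ECOSYSTEM_ACTIVITY = 300e9   # IAB figure for ad-supported economic activity
US_INTERNET_USERS = 245e6       # assumed

AUTO_INDUSTRY_REVENUE = 400e9
TIRES_SOLD_PER_YEAR = 250e6

print(AD_ECOSYSTEM_ACTIVITY / US_INTERNET_USERS)    # ~1224: the "more than $1,200 per user" figure
print(AUTO_INDUSTRY_REVENUE / TIRES_SOLD_PER_YEAR)  # 1600: yet nobody thinks a tire sells for $1,600
```

Both divisions are mechanically correct and economically meaningless, because neither total is attributable to the single input in the denominator.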



The idea, evidently, is to make the case that consumers are losing a lot in the advertising ecosystem today. That may or may not be true. I'd like to see it shown in the success of a company like Personal or others in the Personal Data Ecosystem, which could re-jigger the personal-data-for-free-content bargain. But I don't think that misstating how advertising works and exploding the value of personal data is a good way to make the case for change.




Published on March 21, 2012 12:01

March 20, 2012

Jason Mazzone on copyright and the abuse of IP law

http://surprisinglyfree.com/wp-content/uploads/mazzone_jason.jpg

On the podcast this week, Jason Mazzone, professor of law at Brooklyn Law School, discusses his new book, Copyfraud and Other Abuses of Intellectual Property Law. Copyfraud, according to Mazzone, occurs when intellectual property law is used in an abusive or overreaching manner. Mazzone believes the problem arises when content owners make false or fraudulent claims of intellectual property rights that are not recognized by the law. The discussion turns to the scope of harm that results from copyfraud, and Mazzone proposes that the solution lies in legislative measures as well as education on the scope of intellectual property law.





Related Links

Copyfraud and Other Abuses of Intellectual Property Law, by Mazzone
Copyright Criminals, pbs.org
"IP Feudalism and the Shrinking of the Public Domain", Marginal Revolution

To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on March 20, 2012 10:00

March 19, 2012

new paper: The Perils of Classifying Social Media Platforms as Public Utilities

The Mercatus Center at George Mason University has just released my new white paper, "The Perils of Classifying Social Media Platforms as Public Utilities." [PDF] I first presented a draft of this paper last November at a Michigan State University conference on "The Governance of Social Media." [Video of my panel here.]



In this paper, I note that to the extent public utility-style regulation has been debated within the Internet policy arena over the past decade, the focus has been almost entirely on the physical layer of the Internet. The question has been whether Internet service providers should be considered "essential facilities" or "natural monopolies" and regulated as public utilities. The debate over "net neutrality" regulation has been animated by such concerns.



While that debate still rages, the rhetoric of public utilities and essential facilities is increasingly creeping into policy discussions about other layers of the Internet, such as the search layer. More recently, there have been rumblings within academic and public policy circles regarding whether social media platforms, especially social networking sites, might also possess public utility characteristics. Presumably, such a classification would entail greater regulation of those sites' structures and business practices.



Proponents of treating social media platforms as public utilities offer a variety of justifications for regulation. Amorphous "fairness" concerns animate many of these calls, but privacy and reputational concerns are also frequently mentioned as rationales for regulation. Proponents of regulation also sometimes invoke "social utility" or "social commons" arguments in defense of increased government oversight, even though these notions lack clear definition.



Social media platforms do not resemble traditional public utilities, however, and there are good reasons why policymakers should avoid a rush to regulate them as such. Treating these nascent digital services as regulated utilities would harm consumer welfare because public utility regulation has traditionally been the archenemy of innovation and competition. Furthermore, treating today's leading social media providers as digital essential facilities threatens to convert "natural monopoly" or "essential facility" claims into self-fulfilling prophecies. Related proposals to mandate "API neutrality" or enforce a "Separations Principle" on integrated information platforms would be particularly problematic. Such regulation also threatens innovation and investment. Marketplace experimentation in search of sustainable business models should not be made illegal.



Remedies less onerous than regulation are available. Transparency and data-portability policies would solve many of the problems that concern critics, and numerous private empowerment solutions exist for those users concerned about their privacy on social media sites.



Finally, because social media are fundamentally tied up with the production and dissemination of speech and expression, First Amendment values are at stake, warranting heightened constitutional scrutiny of proposals for regulation. Social media providers should possess the editorial discretion to determine how their platforms are configured and what can appear on them.



This 63-page paper can be found on the Mercatus site here, on SSRN, or on Scribd. I've also embedded it below in a Scribd reader. Eventually, a shorter version of this paper will appear as a chapter in an MIT Press book.





Social Networks as Public Utilities [Adam Thierer]






Published on March 19, 2012 11:25
