Adam Thierer's Blog, page 115

October 3, 2011

Transparency and Its Discontents

Remember when you had to wait until the end of the month to see your bank statement?



Last week, on the cusp of failing to pass any annual appropriations bills ahead of the October 1 start of the new fiscal year, congressional leaders came up with a short-term government funding bill (or "continuing resolution") that would fund the government until November 18th. For whatever reason, that deal (H.R. 2608) wasn't ready to go before the end of the week, so Congress passed an even shorter-term continuing resolution (H.R. 2017) that funds the government until tomorrow, October 4th.



Every weekend, I hunch over my computer and update key records in the database of WashingtonWatch.com, a government transparency website I run as a non-partisan, non-ideological resource. Then I put a summary of what's going on into an email like this one (subscribe!) that goes out to 7,000 or so of my closest friends.



Last weekend, the Library of Congress' THOMAS website, which is one of my resources, was down a good chunk of the time for maintenance. Even after it came up again, some materials such as bill text and committee reports weren't available. (They had come up by the wee hours of this morning.) Maintenance is necessary sometimes, though when the service provider I use for the WashingtonWatch.com email does maintenance, it's usually for an hour or so in the middle of a weekend night.



But when I went to update the database to reflect last week's passage of H.R. 2017, I could find no record of its public law number. When a bill becomes a law, it gets a public law number starting with the number of the Congress that passed it, followed by a sequential number, like Public Law No. 112-29. The Government Printing Office's FDsys system lets you browse public laws. At this writing, it isn't updated to reflect the passage of new laws last week. When THOMAS came back up, its public laws page also had no data to reflect the passage of that continuing resolution last week (and still doesn't, as of this writing).
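(For those who want to track this themselves, the citation format described above is simple enough to generate or check in a few lines of code. The sketch below is a minimal, hypothetical Python illustration; the function names are mine, and it does not query THOMAS or FDsys.)

```python
# Hypothetical sketch of the public law numbering scheme described above:
# "Public Law No. <Congress>-<sequence>", e.g. Public Law No. 112-29.
import re

def format_public_law(congress: int, sequence: int) -> str:
    """Build a citation from the Congress number and the sequential law number."""
    return f"Public Law No. {congress}-{sequence}"

def parse_public_law(citation: str) -> tuple[int, int]:
    """Split a citation like 'Public Law No. 112-29' back into its two parts."""
    match = re.search(r"(\d+)-(\d+)", citation)
    if not match:
        raise ValueError(f"not a public law citation: {citation!r}")
    return int(match.group(1)), int(match.group(2))

print(format_public_law(112, 29))                  # Public Law No. 112-29
print(parse_public_law("Public Law No. 112-29"))   # (112, 29)
```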



There is barely any news reporting on humdrum details about governing like the passage of a law expending $40 billion in taxpayer funds. (That's about what H.R. 2017 spends to operate the government four more days, roughly $400 per U.S. family.) Where can you confirm with an official source that this happened?
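(The per-family figure is simple arithmetic. Here is a rough sketch; the round numbers below are my own assumptions for illustration, not official figures.)

```python
# Back-of-the-envelope check of the "$400 per family" figure, using round
# numbers assumed for illustration only.
continuing_resolution_cost = 40_000_000_000   # ~$40 billion to fund four more days
us_families = 100_000_000                     # roughly 100 million U.S. families (assumed)

per_family = continuing_resolution_cost / us_families
print(f"${per_family:,.0f} per family")       # -> $400 per family
```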



The winning data resource this week, if only by default, is Whitehouse.gov, which has a page dedicated to laws the president has signed. That page says that President Obama signed four new laws on Friday (Sept. 30). When might FDsys or THOMAS reflect this information? It'll happen soon, and that data will start to propagate out to society.



But I think that's not soon enough. A couple of days' delay is a big deal.



If I were to take $400 in cash out of my bank account at an ATM, I could review that transaction from that instant forward on my bank's website. If I had a concern or even a passing interest, I could just go look. That is an utterly unremarkable service in this day and age.



But it's remarkable that such a service doesn't exist in systems that are as important as our bank accounts. When Congress and the president pass a bill to spend $40 billion, the fact of its passage is pretty much undocumented by any official source until enough Mon-Fri, 9-to-5 work hours have passed.



In my recently published paper, Publication Practices for Transparent Government, I go through the things the government should do to make itself more transparent (thus improving public oversight and producing lots of felicitous outcomes). A practice I cite is "real-time or near-real-time publication." Why? Because then any of the 300 million Americans who have an interest, real or passing, can see what is happening with their money as it happens, just like they can with their bank holdings. People like me (and many more) can propagate complete and timely information, making it that much more accessible.



When you're talking about a potential audience of 200 million people and $40 billion in expense (one of the tiniest spending bills—others are much larger), it is not too much to ask to have the data published in real time.



I don't expect a lot of people to join me at the barricades with pitchforks and torches on this one. Government transparency is an area ruled by implicit demand. People don't know what they are missing, so they don't know to suffer a sense of deprivation. I do that for them—all of them. (Heroic, idn't it?)



Before too long, though, the government's opacity will be recognized as a contributor to the public's general—and strong—distaste for all that goes on in Washington, D.C. The idea of spending $400 per U.S. family without documenting every detail of it on the Internet will seem as absurd as waiting until the end of the month to see what happened in your bank account.




Published on October 03, 2011 09:38

More on Jarvis, "Publicness" & Privacy Rights

In his latest weekly Wall Street Journal column, Gordon Crovitz has penned a review of the new Jeff Jarvis book, Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live. Gordon's review closely tracks my own thoughts on the book, which I laid out last week in my Forbes essay, "Is Privacy Overrated?" Gordon's essay is entitled "Are We Too Hung Up on Privacy?" and he finds, like I do, that Jarvis makes a compelling case for understanding the benefits of publicness as the flip-side of privacy. Instead of repeating all the arguments we make in our reviews here, I'll just ask people to go check out both of our essays if they are interested.



I did, however, want to elaborate on one thing I didn't have time to discuss in my review of the Jarvis book. While I like the approach he used in the book, I thought Jarvis could have spent a bit more time exploring some of the thorny legal issues in play when advocates of privacy regulation look to enshrine into law quite expansive views of privacy "rights."



One of the things that both Crovitz and I appreciated about the Jarvis book was the way he tries to get us to think about privacy in the context of ethics instead of law. "Privacy is an ethic governing the choices made by the recipient of someone else's information," Jarvis argues, while "publicness is an ethic governing the choices made by the creator of one's own information," he says. In my review, I explained why this was so important:



Jarvis' approach to thinking about privacy and publicness in terms of ethics is particularly smart precisely because privacy is such a subjective human condition—a "conceptual jungle" and a "concept in disarray," says law professor Daniel J. Solove, author of Understanding Privacy. Thus, a good case can be made for restraint when it comes to legislating to define and protect privacy. That doesn't mean privacy isn't important—it is. But how we go about "protecting" it needs to be balanced against other rights and responsibilities.

For example, we'd all agree with Thomas Jefferson and the Founders that we have a "right to pursue happiness," but a right to happiness would be a different matter altogether. Government can't guarantee happiness. It wouldn't even be able to define it. The same is largely true of privacy. We certainly have the right to pursue private lives and take steps to secure facts about ourselves. At the margins, law can sometimes help us do so—most often by safeguarding us against fraudulent activities. And there are plenty of tools on the market that can help people protect their personal data. By contrast, legalistic efforts to define privacy as a strict "right" leads us back into that "conceptual jungle," which is full of unintended consequences.


Let's unpack this a bit more because if one agrees with the argument that Jarvis makes–that privacy is better thought of as a matter of ethics and social norms–it has important ramifications for ongoing efforts to speak of privacy in legalistic ways. It's not that I'm against any sort of privacy "rights," but I do believe it is important to acknowledge that other important values are at stake here and we must appreciate how increased privacy controls could conflict with them. "Recognizing that we are legislating in the shadow of the First Amendment suggests a powerful guiding principle for framing privacy regulations," argues Kent Walker, a privacy expert who now serves as general counsel at Google. "Like any laws encroaching on the freedom of information, privacy regulations must be narrowly tailored and powerfully justified."



Ironically, many privacy advocates are strongly critical of copyright law and claim that, as currently structured, it represents an unjust or excessive information control regime. Yet, privacy regulation would constitute a stronger information control regime by creating the equivalent of copyright law for personal information, which would, in turn, conflict mightily with the First Amendment. [See my essays, "Two Paradoxes of Privacy Regulation" and "Privacy as an Information Control Regime: The Challenges Ahead." The rest of this essay borrows from those pieces as well as this big filing I submitted to the FTC in February.]



In his recent book Skating on Stilts, Stewart Baker reminds us that the famous 1890 Samuel Warren and Louis Brandeis Harvard Law Review essay on "The Right to Privacy"—which is tantamount to a sacred text for many modern privacy advocates—was heavily influenced by copyright law.  As Baker explains:



Brandeis wanted to extend common law copyright until it covered everything that can be recorded about an individual. The purpose was to protect the individual from all the new technologies and businesses that had suddenly made it easy to gather and disseminate personal information: "the too enterprising press, the photographer, or the possessor of any other modern device for recording or reproducing scenes or sounds."  [...] Brandeis thought that the way to ensure the strength of his new right to privacy was to enforce it just like state copyright law. If you don't like the way "your" private information is distributed, you can sue everyone who publishes it.


Incidentally, it is important to recall that their call for such a regime was essentially driven by a desire to censor the press. In their article, Warren and Brandeis argued that:



The press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. To occupy the indolent, column upon column is filled with idle gossip, which can only be procured by intrusion upon the domestic circle.


So angered were Warren and Brandeis by reports in daily papers of specifics from their own lives that they were led to conclude that:



man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.


It is unclear how one could have greater "pain and distress" inflicted by words than "by mere bodily injury," and yet the law review article that essentially gave birth to American privacy law articulated such a theory of harm.  And it only follows, then, that they would advocate fairly draconian controls on speech and press rights if they felt this strongly.



Taken to the extreme, however, giving such a notion the force of law would put privacy rights on a direct collision course with the First Amendment and freedom of speech.  As Eugene Volokh argued in a 2000 law review article entitled, "Freedom of Speech, Information Privacy, and the Troubling Implications of a Right to Stop People from Speaking about You":



The difficulty is that the right to information privacy—the right to control other people's communication of personally identifiable information about you—is a right to have the government stop people from speaking about you. And the First Amendment (which is already our basic code of "fair information practices") generally bars the government from "control[ling the communication] of information," either by direct regulation or through the authorization of private lawsuits.


This is what makes efforts to untether privacy regulation from a harms-based model or mode of analysis so troubling. For example, the Federal Trade Commission's recent privacy review says that "the FTC's harm-based approach also has limitations [because] it focuses on a narrow set of privacy-related harms—those that cause physical or economic injury or unwarranted intrusion into consumers' daily lives."  The Commission then suggests that "for some consumers, the actual range of privacy-related harms is much wider and includes reputational harm, as well as the fear of being monitored or simply having private information 'out there,'" and adds that "consumers may feel harmed when their personal information… is collected, used, or shared without their knowledge or consent or in a manner that is contrary to their expectations."



Not only does the Commission fail to offer any data on how this supposed harm manifests itself, how severe it is, or what trade-offs it presents to society, but it utterly fails to account for the dangerous slippery slope of speech control it puts us on. If appeals for regulation are based on emotion instead of concrete evidence of consumer harm, where will this take us next? If, for example, the Commission is to regulate based upon the fact that "consumers may feel harmed… when their personal information… in a manner that is contrary to their expectations," how long will it be before some suggest this standard should trump First Amendment rights in other contexts?



For example, this more emotional approach to privacy regulation brings us one step closer to a "right not to be offended" or a "right to be forgotten," as some in Europe favor. Here in the U.S., we see a similar effort underway with the so-called "Internet Eraser Button" idea, which has even been floated in federal legislation. How could a journalist even conduct their business in such a world? By their very nature, good reporters are nosy and, to some extent, disregard the privacy of the people and institutions they report on.



This is why privacy regulation must not be reduced to amorphous claims of "dignity" rights, where an assertion by a small handful that they "feel harmed" comes to replace a strict showing of actual harm to persons or property. To go down that path would have grave consequences for the future of freedom of speech, transparency, openness, and accountability.



Of course, there are many different types of privacy concerns, each of which demands its own analysis and legal consideration.  While I think most privacy concerns should be left to the realm of personal responsibility, user empowerment, and industry self-regulation, other privacy issues are more serious and should be elevated to the level of "rights." When we speak of government search and seizure or surveillance concerns, "rights" talk certainly makes more sense. Likewise, identity theft is more than just a violation of privacy, it's a violation of personal property rights.



With such notable exceptions, however, I prefer we speak of privacy in terms of ethics and norms. Legalistic, rights-based conceptions of privacy invite excessive government interventions with myriad unintended consequences.




Published on October 03, 2011 08:01

Would Top-Down Global Planning Have Created the Net?

Here's a sharp editorial from The Economist about Internet governance entitled "In Praise of Chaos: Governments' Attempts to Control the Internet Should be Resisted." In the wake of the recent Internet Governance Forum meeting, many folks are once again debating the question of who rules the Net. Along with Wayne Crews, I edited a huge collection of essays on that topic back in 2003 and it's a subject that continues to interest me greatly. As I noted here last week, many of those who desire greater centralization of control over Net governance decisions are using the fear that "fragmentation" will occur without some sort of greater plan for the Net's future. I believe these fears are greatly overstated and are being used to justify expanded government meddling with online culture and economics.



The new Economist piece nicely brings into focus the key question about who or what we should trust to guide the future of the Internet. It rightly notes that the current state of Net governance is, well, messy. But that's not such a bad thing when compared to the alternative:



the internet is shambolically governed. It is run by a hotch-potch of organisations with three- to five-letter acronyms. Many of their meetings, both online and offline, are open to the public. Some—like the Internet Governance Forum, which held its annual meeting in Nairobi this week—are just talking shops. Decision-making is slow and often unpredictable.


It is in short a bit chaotic. But sometimes chaos, even one that adherents like to claim somewhat disingenuously is a "multi-stakeholder" approach, is not disastrous: the internet mostly works. And the shambles is a lot better than the alternative—which nearly always in this case means governments bringing the internet under their control.


Quite right, and the editorial goes on to pose the crucial question about today's situation:



Imagine if the ITU, a classic example of a sluggish international bureaucracy with antiquated diplomatic rituals, or indeed any other inter-governmental organisation, had been put in charge of the nascent global network two decades ago. Would it have produced a world-changing fount of innovation? We think not.


Indeed, it would be hard to imagine that top-down design and central planning could have given rise to today's Internet. While very few global officials propose the wholesale government takeover of the Net today, we should nonetheless be skeptical about calls to have international bureaucracies exert greater authority over the Internet, regardless of the justification. Messy governance beats top-down planning.




Published on October 03, 2011 06:18

September 27, 2011

Sonia Arrison on technology and longevity

[Photo: Sonia Arrison (http://surprisinglyfree.com/wp-content/uploads/Sonia-Arrison.jpg)]

On the podcast this week, Sonia Arrison, writer, futurist, and senior fellow at the Pacific Research Institute, discusses her new book entitled 100+: How the Coming Age of Longevity Will Change Everything from Careers and Relationships to Family and Faith. The process of aging, according to Arrison, is not set in stone, and the way humans experience age can be changed as technology evolves. She discusses the different types of technology, including tissue engineering and gene therapy, which are poised to change numerous aspects of human life by improving health and increasing lifespan to 150 years and beyond. She also talks about how increased lifespans will affect institutions in society and addresses concerns, such as overpopulation and depletion of resources, raised by critics of this technology.





Related Links

100+, Sonia Arrison's webpage
"Living And Working To 100", fastcompany.com
"Living to 100 and Beyond", Wall Street Journal
"The World Will Be More Crowded — With Old People", foreignpolicy.com

To keep the conversation around this episode in one place, we'd like to ask you to comment at the webpage for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?




Published on September 27, 2011 10:00

Online 'Fragmentation' Fears & the Downside of a 'Globally Coherent Approach' to Internet Governance

In a speech today before the Internet Governance Forum entitled "Taking Care of the Internet," Neelie Kroes, Vice President of the European Commission responsible for the Digital Agenda for Europe, argued for "a globally coherent approach" to preserve "the global character of the Internet, and keep it from fragmenting." That sounds good in theory but, as always, the devil is in the details. No one wants to see a highly balkanized Internet with each country and continent becoming a digital island cut off from the rest of the Internet. On the other hand, if "a globally coherent approach" means layers of international red tape and bureaucracy, then fragmentation doesn't sound so bad by comparison. That's particularly true for those of us who live in countries that cherish principles of freedom of speech and free enterprise, as we do in the United States.



For example, to most of the rest of the planet, America's First Amendment is viewed as a pesky local ordinance that simply interferes with the ability of government to establish rules for acceptable speech and expression throughout society. What, then, does "a globally coherent approach" to Internet governance mean when America's values conflict with those of other countries and continents? Does it mean that the U.S. should conform to a global norm as established by a "consensus body"? Who would that be? The OECD? The United Nations? The International Telecommunication Union? If so, it is clear that protections for freedom of speech and expression would be sacrificed on the altar of "consensus" or a "coherent global approach" to Net governance.



The same holds true for commercial regulation. The U.S. leaves more breathing room for commercial experimentation and entrepreneurialism than most other governments across the globe. It is likely that a more "globally coherent approach" to Internet governance would lead to a ramping up of regulations governing commercial interactions online.



Thus, there should be some limits to how far we are willing to go in the name of avoiding "Internet fragmentation." America shouldn't be ashamed to boast of its superior "light-touch" framework for online policy, and it should defend it against efforts that would force a sort of global regulatory super-convergence and lead to a future that is less free for online denizens.




Published on September 27, 2011 07:59

September 26, 2011

ABA Roundtable Discussion Tomorrow on the AT&T/T-Mobile Merger

[Cross posted at Truthonthemarket]



As I have posted before, I was disappointed that the DOJ filed against AT&T in its bid to acquire T-Mobile.  The efficacious provision of mobile broadband service is a complicated business, and government meddling has made it even more so.  Responses like this merger are both inevitable and essential.  And Sprint and Cellular South piling on doesn't help — and, as Josh has pointed out, further suggests that the merger is actually pro-competitive.



Tomorrow, along with a great group of antitrust attorneys, I am going to pick up where I left off in that post during a roundtable discussion hosted by the American Bar Association.  If you are in the DC area you should attend in person, or you can call in to listen to the discussion–but either way, you will need to register here.  There should be a couple of people live tweeting the event, so keep up with the conversation by following #ABASAL.



Panelists:
Richard Brunell, Director of Legal Advocacy, American Antitrust Institute, Boston
Allen Grunes, Partner, Brownstein Hyatt Farber Schreck, Washington
Glenn Manishin, Partner, Duane Morris LLP, Washington
Geoffrey Manne, Lecturer in Law, Lewis & Clark Law School, Portland
Patrick Pascarella, Partner, Tucker Ellis & West, Cleveland



Location: 
Wilson Sonsini Goodrich & Rosati, P.C., 1700 K St. N.W., Fifth Floor, Washington, D.C. 20006



For more information, check out the flyer here.




Published on September 26, 2011 15:23

Net neutrality: Doing the Numbers

For Forbes this morning, I reflect on the publication late last week of the FCC's "Open Internet" or net neutrality rules and their impact on spectrum auctions past and future.  Hint:  not good.



An important study last year by Prof. Faulhaber and Prof. Farber, former chief economist and chief technologist, respectively, for the FCC, found that the last-minute imposition of net neutrality limits on the 700 MHz "C" block in the FCC's 2008 auction reduced the winning bid by 60%–a few billion dollars for the Treasury.
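(The arithmetic connecting a percentage reduction in the winning bid to dollars forgone is straightforward. The sketch below uses a purely hypothetical unencumbered bid, since the study's actual baseline isn't reproduced here.)

```python
# Hypothetical illustration: if conditions knock 60% off what a license would
# otherwise have fetched, the Treasury forgoes 1.5x the amount actually paid.
reduction = 0.60
hypothetical_unencumbered_bid = 10_000_000_000   # assumed $10 billion, for illustration only

actual_bid = hypothetical_unencumbered_bid * (1 - reduction)
forgone = hypothetical_unencumbered_bid - actual_bid
print(f"actual bid: ${actual_bid/1e9:.1f}B, forgone: ${forgone/1e9:.1f}B")
```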



Yet the FCC maintained in the December Report and Order approving similar rules for all broadband providers that the cost impact of these "prophylactic" rules would be minimal, because, after all, they simply endorse practices most providers already follow.  (And the need for the new rules, then, came from where?)



In response to oral and written questions directed at the agency by Congress over the course of the last ten months (while the White House mysteriously held up publication of the new rules), the agency maintained with a straight face that a detailed cost-benefit analysis of the new rules was part of the rulemaking.  But the Chairman seems unable to identify a single paragraph in the majority's 200-page report where that analysis can be found.



Well, but perhaps bidders in the 2008 auction misjudged the potential negative impact of the new rules on their ability to best utilize the C block.  Perhaps a 60% reduction in bid price was an overreaction to the neutrality limits.  Perhaps, but not likely.  Already, Verizon, which won the auction and is using the spectrum for its state-of-the-art 4G LTE service, has been hit with a truly frivolous complaint from Free Press regarding Google's refusal to allow software that tethers Android phones to other devices to share the network connection.



And there were rumblings earlier this year in WIRED that curated app stores would also violate the "no blocking" provision in the C block auction (a provision, recall, that was added at the request of Google as a condition of its participating in the auction).  If that were true, then Verizon could never offer an iPhone on the LTE network.  A definite and pointless limit to the value of the C block…for consumers most of all.



These seem like complaints unlikely to go anywhere, but then again, who knows?  Even prevailing in FCC adjudications takes time and money, and creates uncertainty.  Investors don't like that.  And the new net neutrality rules make complaining even easier, as I noted earlier this year.



So the impact of the net neutrality rules, should they survive Congressional and legal challenges, will be to reduce incentives for broadband carriers to continue investing in their networks.  It won't stop them, obviously.  But it will surely slow them down.  By how much?  Well, as much as 60%, apparently.  And given that the major facilities-based carriers spend around $20 billion a year in network investments, even a few percentage points of uncertainty translate into real losses.
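(To put rough numbers on that, here is an illustrative sketch; the percentage reductions are my own assumptions, not estimates from the Faulhaber and Farber study.)

```python
# Illustrative only: what "a few percentage points" of forgone investment
# looks like against roughly $20 billion a year in carrier network spending.
annual_network_investment = 20_000_000_000  # ~$20 billion per year (approximate)

for cut in (0.01, 0.03, 0.05):  # assumed 1%, 3%, 5% reductions
    forgone = annual_network_investment * cut
    print(f"{cut:.0%} less investment: about ${forgone / 1e9:.1f} billion per year forgone")
```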



Balanced out by which benefits, exactly?  Oh right–these are "prophylactic" rules.  So the benefits aren't knowable.  Until the future.  Maybe.



If reduced investment wasn't a bad enough result, there's a deeper and more disturbing lesson of last year's Net Neutrality free-for-all.  The FCC, an "expert agency," has become increasingly political.  Its experts are being run over by operatives inside and outside the agency with an agenda that lies outside the agency's expertise, trumping the traditional independent values of weighing costs and benefits and of applying scarce resources to their best and highest use.



That may be one reason Congress has yet to move forward with pending legislation granting the agency authority to conduct Voluntary Incentive Auctions, and why the draft legislation tries to curb the flexibility the agency has if it does get the new authority.



Flexibility, after all, cost the taxpayers a small fortune in the 2008 auction.  And it led to conditions being placed on the license that aren't helping anyone, and which may keep consumers from getting what all but a few loudmouths genuinely value.



A rulemaking whose goal was to "preserve" the Open Internet may wind up having the opposite result.  The joke, unfortunately, is on mobile users.



 




Published on September 26, 2011 12:18

September 23, 2011

Publication Practices for Transparent Government: Rating the Congress

The Cato Institute is doing a live-streamed Capitol Hill briefing this morning—start-time 9:00 a.m. Eastern—on congressional transparency.



You can see and download all the materials being released to Hill staff on a Cato@Liberty blog post summarizing where congressional transparency stands: "needs improvement."



You can watch the event live (or later on tape) and join the conversation at the Twitter hashtag #RateCongress.




Published on September 23, 2011 05:52

September 22, 2011

Three Provocations about Parental Controls, Online Safety & Kids' Privacy

On Wednesday afternoon, it was my great pleasure to make some introductory remarks at a Family Online Safety Institute (FOSI) event that was held at the Yahoo! campus in Sunnyvale, CA. FOSI CEO Stephen Balkam asked me to offer some thoughts on a topic I've spent a great deal of time thinking about in recent years: Who needs parental controls? More specifically, what role do parental control tools and methods play in the upbringing of our children? How should we define or classify parental control tools and methods? Which are most important / effective? Finally, what should the role of public policy be toward parental control technologies on both the online safety and privacy fronts?



In past years, I spent much time writing and updating a booklet on these issues called Parental Controls & Online Child Protection: A Survey of Tools & Methods. It was an enormous undertaking, however, and I have abandoned updating it after I hit version 4.0. But that doesn't mean I'm not still putting a lot of thought into these issues. My focus has shifted over the past year more toward the privacy-related concerns and away from the online safety issues. Of course, all these issues intersect and many people now (rightly) consider them to be largely the same debate.



Anyway, to kick off the FOSI event, I offered three provocations about parental control technologies and the state of the current debate over them. I buttressed some of my assertions with findings from a recent FOSI survey of parental attitudes about parental controls and online safety.



Provocation #1: While parental controls will continue to play an important role, it may be the case that many parents will not need parental controls technologies quite to the extent we once thought they did.



In one sense, usage of parental control tools is actually surprisingly high. The FOSI survey reported that 53% of parents say they have used parental control tools to assist them in monitoring their child's Internet usage. That's much higher than I would have expected.



Of course, that means that the other half of parents aren't using parental controls. Why aren't they? It can't be because parents aren't aware of the tools. Awareness of parental control tools is growing. According to the FOSI survey, 87% of parents report knowledge of at least one parental control technology.



Some critics claim it's because the tools are too complicated, but that's also hard to believe. The tools keep getting easier to use and cheaper—often being completely free of charge.



The better explanation lies in the fact that, first, talking to our kids continues to be the most important approach to mentoring youth and protecting them, just as it was for previous generations of parents. Almost all of the parents surveyed by FOSI (96%) said they have had a conversation with their child about what to do and not to do online.



Second, "household media rules" are the other unforgotten element here. These rules can be quite formal in the sense that parents make clear rules and enforce them routinely in the home over an extended period of time. Other media consumption rules can be fairly informal, however, and are enforced on a more selective basis. In my book on parental controls, I devised a taxonomy of household media rules and outlined four general categories: (1) "where" rules; (2) "when and how much" rules; (3) "under what conditions" rules; and, (4) "what" rules.



The FOSI survey reveals that such household media rules are widely utilized. Nearly all parents (93%) said they have set rules or limits to monitor their children's online usage. In particular:




79% of parents surveyed require their children to only use the computer in a certain area of the house. (This is an example of a "where" rule.)
75% of parents limit the amount of time a child can spend online. (This is a "how much" rule.)
74% set rules for the times of day a child can be online. (This is a "when" rule.)
59% established time limits for use of a child's cell phone. (This is another "how much" rule.)


Again, many pundits and policymakers routinely ignore the importance of such household media rules when talking about online child safety. They incorrectly assume that lower than expected usage of various parental control technologies means that those tools have failed or that kids are in great danger online. The reality is that most parents usually think of parental control technologies as a backup plan or complement to traditional parental mentoring and rule-setting responsibilities.



In fact, the FOSI survey revealed that, of those parents who have not used parental controls, 60% of them said it was because they already have rules and limits in place. Of course, none of this should be surprising. Most of us over 40 grew up without any parental control tools in our homes. Just like our parents before us, we devise strategies to mentor our youth and guide their development. Simple lessons and smart rules will, therefore, always be the first order of business. Technological controls will often only be used to supplement and better enforce those lessons and rules, if they are used at all.



In sum: parents are parenting!



Provocation #2: Kids are more resilient than we think.



Despite the panic we sometimes hear surrounding online safety and privacy, kids seem to be adapting to online environments and challenges quicker than parents (and policymakers) give them credit for. Without minimizing the seriousness of any particular concern, I think we need to step back and appreciate just how good of a job most kids have done adjusting to the modern Information Revolution.



There's a great deal of literature in the field of psychology and sociology dealing with resiliency theory. When we think about risk in this world, there exists a range of responses. Prohibition and anticipatory regulation are on one end of the spectrum. Resiliency and adaptation are on the other.



When highly disruptive information technologies come on the scene, the first reaction is often prohibition or anticipatory regulation. That's driven by fear of the new and unknown. Oftentimes, however, patience is the better disposition. Building resiliency and crafting adaptation strategies often makes more sense. Instilling principles and lessons to last a lifetime will ultimately do more to make our kids smart, savvy cyber-citizens and prepare them for the worst of what the world might throw their way.  It's like the old "teach a man to fish" approach, except in this case it's "teach a child to think."



In many ways, this is precisely what has been happening for the past decade. Both parents and kids have been "learning on the job" so to speak. They've been adapting to new online worlds and gradually assimilating them into their lives. In the process, they have learned important lessons and become more resilient.



Of course, some risks are serious enough that they demand a more anticipatory solution, perhaps even prohibition. Child porn and online child abuse of any sort are the primary examples. But for most other things, social adaptation and resiliency responses generally trump prohibition or anticipatory regulation as the smart solution.



Provocation #3: The most interesting and important public policy debate going forward—both for child safety and kids' privacy concerns—continues to be the vexing question of where to set defaults and who sets them.



This isn't the provocative part of this particular provocation. After all, we've always known that defaults matter. Psychologists speak of "status quo bias," or the general inclination for humans to stick with the choice they've initially been offered. Thus, default parental control and privacy-related settings are often quite "sticky." Where safety and privacy defaults are set out of the gate is usually where they stay for many people.



A lot of people would like to find a way to change that—potentially through regulation—because they do not approve of the initial defaults offered by various online sites, services, or devices.



Generally speaking, there are two sets of hard questions here. First, should we default to the most restrictive setting or the least restrictive, or should we force the consumer to make the choice before using the site, service, or device? Second, who makes that call? Private or public actors?



So, here's my real provocation: We are better served as a society when these defaults evolve organically and are not imposed from above. Trial-and-error experimentation with varying defaults helps us better understand the relative value of online safety and privacy to various users. That experimentation also sends important signals to other players in the marketplace and encourages them to offer innovative alternatives or approaches to these issues. [Here's a longer paper I penned on this issue explaining why mandatory and highly restrictive defaults usually aren't a good idea.]



The obvious objection to my position is that, if companies are the ones setting the defaults, then only their values get heard and their preferred defaults will always prevail. In reality, however, defaults often do evolve from where they are initially set. (Think of how browsers and social networking sites have added and changed privacy and security controls over just the past few years.) Press exposure and social pressure—especially from average parents and advocacy groups—typically help make sure service providers are responsive to the needs of their communities.



Importantly, just because the preferred defaults of some child safety or privacy advocates do not prevail, that doesn't constitute "market failure." There are many competing values at work here. First off, we must never forget that only 32% of all U.S. households have children present in them at any given time. And of that 32%, a small subset might need parental controls or enhanced privacy settings. Many others won't need any. We live in a diverse nation with a wide spectrum of values and approaches when it comes to rearing our children and protecting their safety and privacy. Some parents will never use any parental control or privacy tools. Others will layer them on. Others will use a mix of tools and strategies as outlined above.



In the end, we should expect that experimentation with varying defaults will continue and that there will always be some who are cranky about their preferred defaults not prevailing. But I think we are better off if we allow experiments to continue.





After I offered these initial provocations at the FOSI event, we had a terrific conversation among a diverse group of attendees. I took notes and tried to distill the key takeaways from the conversation, which was off the record. Here are 5 themes that I kept hearing coming up again and again from participants:




There is no single tool or silver-bullet solution that can solve all these problems; many tools and solutions are needed for the various concerns that are out there today
The term "parental controls" is too narrow since it just implies tools. We need a broader term or paradigm that incorporates education, awareness, empowerment, household media rules, etc.
Whether we are talking about tools or awareness efforts, there remains a trade-off between sophistication and usability.  Many people and policymakers say they want more sophisticated tools but then turn around and complain about the complexity of those solutions later. Stated differently, there will never be a "Goldilocks formula" that gets it just right, precisely because needs and values evolve.
Parents' concerns have shifted from the old days. In the early days of the Net, the concern tended to be focused more on content consumption (mostly adult material). Today, the concern seems to have shifted strongly toward content creation (e.g., user-generated content on social networking sites, Twitter, SMS, etc.)
Kids are getting online at a younger age despite regulatory prohibitions such as COPPA, and we're going to have to grapple with that reality and decide whether we'll allow it.



Published on September 22, 2011 18:14

September 20, 2011

Top 10 Antitrust Fallacies to Watch for at Today's Google Antitrust Hearing

by Berin Szoka & Geoffrey Manne



In advance of today's Senate Judiciary hearing, "The Power of Google: Serving Consumers or Threatening Competition?," we've assembled a list of fallacies you're likely to hear, either explicitly or implicitly:




Competitors, not Competition.  Antitrust protects consumer welfare: competition, not competitors.  Competitors complain because a practice hurts them, but antitrust asks only whether a practice actually hurts consumers. The two are rarely the same.
Big Is Bad. Being big ("success") isn't illegal.  Market share doesn't necessarily create market power.  And even where market power does exist, antitrust punishes only its abuse.
Burden-Shifting. Google, like any defendant, is presumed innocent until proven guilty.  So Google's critics bear the burden of proving both that Google has market power and that it has abused that power to the detriment of consumers.  Yet, ironically, it's Google at the table defending itself rather than the antitrust agencies explaining their concerns.
Ignoring Error Costs. The faster technology moves, the greater the risk of a "false positive" and the more likely "false negatives" are to be mooted by disruptive innovation that unseats incumbents.  Thus, error costs counsel caution.
Waving the Magic Wand.  Google's critics often blithely assume that Google is "smart enough to figure it out" when it comes to implementing, or coping with, a wide range of proposed remedies.  But antitrust remedies, like all regulation, must be grounded in technological reality, and we must be realistic about real-world trade-offs.
The Nirvana Fallacy. These two—ignoring error costs and ignoring the very real problems of fashioning effective remedies—create the Nirvana Fallacy: the belief that any problem must be fixed, without considering that the fix may be worse than the problem.  We learned this the hard way with the aborted, 13-year travesty of an antitrust case against IBM.
Overly-Narrow Market Definition. Shrinking the size of the market is the easiest way to exaggerate market power.  Google competes not just with Bing but with many other consumer research tools.  Some, like Yelp, share Google's focus on keyword searches, but others, like Facebook, offer wholly new paradigms for finding information.  Consumers generally want information, not URLs, and keyword search is one of many tools available to find useful information.
Leveraging Dominance. This oft-repeated phrase comes from EU competition law, which lacks a rigorous focus on consumer welfare.  It's frequently used to attack big companies for expanding into new markets—here, customer reviews, mobile handsets, operating systems, etc.  But there's essentially no empirical evidence (pdf) that this has ever been bad for consumers—or that efforts to thwart it have been beneficial.
Downplaying Business Model Innovation. New technologies are great, but most innovation involves a process of discovering better ways to offer existing technologies.  Search results have evolved from a list of URLs ("ten blue links") to a varied presentation of both information and links (from maps to reviews of local shops to flight information) not only because the technology evolved to enable it, but also because the business case was made to support it.
Stasis Mentality. Many assumed IBM and Microsoft would rule tech forever, or that a combined AOL and Time Warner would be unstoppable.  Google, for all its might, is already playing catch-up with Facebook.  Even seemingly simple products change rapidly, and competitors emerge from the most unexpected places.
Antitrust Isn't Regulation. Like any other form of government economic intervention, antitrust is prone to "government failures" born of regulators' limited knowledge, and those failures are often worse than the "market failures" they are intended to correct (see the Nirvana Fallacy, above).  If antitrust is superior to other forms of regulation, it's only because of the rigorous economic analysis of consumer welfare that has developed in recent decades to replace "Big is Bad" thinking.  Without that, antitrust can be much, much worse than regulation.  This thinking is especially common on the philosophical Right as a way to reconcile otherwise-healthy regulatory skepticism with antitrust activism.


(We just couldn't resist throwing in a bonus fallacy to round out the list.  Call it a baker's 10.)



We'll try to tweet (hashtag #GAntitrust; follow us at @Tech_Freedom) anytime someone falls into one of these fallacies at the hearing.  You can get your own Google Antitrust Fallacy Bingo card here.  The first two to tweet with a picture of a completed card (quotes or times scrawled in the margin of each square would be nice!) win a copy of The Next Digital Decade: Essays on the Future of the Internet, a unique and philosophically diverse collection of essays published earlier this year.



For Bingo purposes (to fill out the cards), also be on the lookout for participants using these ill-defined or otherwise problematic (yet cavalierly tossed-around) terms:




"Search Neutrality"
"Scraping"
"Black Box"
"Search Fairness"
"Disclosure"
"Transparency"
"Conflicts of Interest"
"Level Playing Field"
"Corporate Responsibility"
"Deceptive Practice"
"Federal Search Commission"
"Privacy Violation"
"Market Power"


We do not recommend turning this into a drinking game.



Berin Szoka is Founder and President and Geoffrey Manne is Senior Adjunct Fellow at TechFreedom, a non-profit, non-partisan technology policy think tank launched in 2011.




Published on September 20, 2011 22:50
