Adam Thierer's Blog
April 2, 2011
Revolving Door of Government & the RIAA
Early in President Obama's term it became clear that efforts to close the revolving door between industry and government weren't serious or, at the very least, weren't working. For a quick refresher on this, check out this ABC News story from August of 2009, which shows how Mr. Obama exempted several officials from rules he claimed would "close the revolving door that lets lobbyists come into government freely" and use their power and position "to promote their own interests over the interests of the American people whom they serve."
The latest example of this rapidly turning revolving door is covered expertly by Nate Anderson at Ars Technica:
Last week, Washington, DC federal judge Beryl Howell ruled on three mass file-sharing lawsuits. Judges in Texas, West Virginia, and Illinois had all ruled recently that such lawsuits were defective in various ways, but Howell gave her cases the green light; attorneys could use the federal courts to sue thousands of people at once and then issue mass subpoenas to Internet providers. Yes, issues of "joinder" and "jurisdiction" would no doubt arise later, but the initial mass unmasking of alleged file-swappers was legitimate.
Howell isn't the only judge to believe this, but her important ruling is especially interesting because of Howell's previous work: lobbying for the recording industry during the time period when the RIAA was engaged in its own campaign of mass lawsuits against individuals.
The bolding above is my own and is meant to underscore an overarching problem in government today, of which Judge Howell is just one example. In a government that is expected to regulate nearly every commercial activity imaginable, it should be no surprise that the prime recruiting grounds for experts on those subjects are the very industries being regulated.







March 31, 2011
Messing With Your Head: MSFT Costs World Economy $500 Billion
The English language is public domain (the language itself, not everything said with it). So it's worthless, right? No dollars change hands when people use it. Perhaps it could be made worth something if someone were to own it. The owner could charge a license fee to people who use English, making substantial revenue on this suddenly valuable language.
Congress can take works in the public domain and make intellectual property of them according to the Tenth Circuit Court of Appeals in a case that approved Congress "restoring" public domain works to copyrighted status. (The case is Golan v. Holder, and the Supreme Court has granted certiorari.)
But would we really be better off if the English language were given a dollar value through the mechanism of ownership and licensing? No. What is now a costless positive-externality machine would turn into a profit center for one lucky owner. Society would not be better off, just that owner. If we had to pay for a language, we would regard that as a cost.
In a similar vein, Mike Masnick at TechDirt indulges the somewhat tongue-in-cheek observation that Microsoft costs the world economy $500 billion by accumulating to itself revenue that would have gone to other things. It's a sort of Broken Window fallacy for intellectual property: the idea that creating ownership of intellectual goods creates value. What is not seen when intellectual property is withheld from the public domain are the unpaid uses that might have been made of it.
Now, Microsoft has reaped wonderful benefits from its intellectual creations because it has bestowed wonderful benefits on societies across the globe. But might it have provided all these benefits for slightly less reward, leaving more money with consumers for their preferred uses?
This is all a way of challenging the mental habit of assuming that dollars are equal to value. In the area of intellectual property (whether or not protected by federal statutes), things that have no effect on the economy (because they're in the public domain) may have huge value. Things privately owned because of intellectual property law may have less value than they should, even though their owners collect lots of money.







March 30, 2011
The troubled history of the Global Network Initiative
I've posted a long article on Forbes.com this morning on the Global Network Initiative. A non-profit group aimed at improving human rights through the agency of information technology companies, GNI has never really gotten off the ground.
Since its formal launch in 2008, following two years of negotiations among tech companies, human rights groups and academics, not a single company has agreed to join beyond the original members–Google, Yahoo and Microsoft.
This despite considerable pressure from supporters of GNI, including Senator Richard Durbin (D-IL), Chair of the Senate Judiciary's Subcommittee on Human Rights. Indeed, in the wake of uprisings in Tunisia, Egypt, Libya and elsewhere and the seminal role played by social media and other IT, a full-court press has been launched against Facebook and Twitter in particular for failing to sign up.
The tone of the criticism hardly seems designed to encourage new members to join. (In The Huffington Post, Amy Lee asks simply, "Why won't Twitter and Facebook sign on for free speech on the Internet?")
Why indeed.
The article reviews the troubled history of GNI and its complex, incomplete, and worrisome organizational structure, which gives considerable power to NGOs to shape the policies and practices of participating companies. (That feature is especially worrisome, as many of the NGOs are traditional human rights organizations with little or no experience dealing with IT.)
Participating companies, among other commitments, must submit to bi-annual "assessments" of their compliance with GNI principles, conducted by assessors certified by GNI's board.
Details aside, there is a more fundamental question worth asking here. Why are technology companies being asked to influence (one might say interfere with) public policy and local laws of other countries? GNI requires not only that participants resist efforts by repressive governments to censor content or to force disclosure of private information of their citizens, but also that they actively lobby these governments, to "engage government officials to promote the rule of law and the reform of laws, policies and practices that infringe on freedom of expression and privacy."
Freedom of expression and privacy are worthwhile goals, but isn't it the job of a country's own citizens to petition their governments for change? And if those citizens are suppressed, isn't it the job of the global community, operating through political and trade organizations such as the U.N. and the WTO, to lobby for change? Why is foreign policy being outsourced to Facebook and Twitter?
Perhaps it's because national governments won't do it. But tech companies' reluctance to take on the job is hardly a reason for Sen. Durbin to criticize and threaten them. If he's looking for someone to blame for the poor human rights record of some governments, perhaps he should look a little closer to home.







The FTC's Google Buzz Privacy Settlement
The FTC today announced it has reached a settlement with Google concerning privacy complaints about how the company launched its Buzz social networking service last year. The consent decree runs for a standard twenty-year term and provides that Google shall (i) follow certain privacy procedures in developing products involving user information, subject to regular auditing by an independent third party, and (ii) obtain opt-in consent before sharing certain personal information. Here's my initial media comment on this:
For years, many privacy advocates have insisted that only stringent new regulations can protect consumer privacy online. But today's settlement should remind us that the FTC already has sweeping powers to punish unfair or deceptive trade practices. The FTC can, and should, use its existing enforcement powers to build a common law of privacy focused on real problems, rather than phantom concerns. Such an evolving body of law is much more likely to keep up with technological change than legislation or prophylactic regulation would be, and is less likely to fall prey to regulatory capture by incumbents.
I've written in the past about how the FTC can develop such a common law. If the agency needs more resources to play this role effectively, that is what we should be talking about before we rush to the assumption that new regulation is necessary. Anyway, a few points about Part III of the consent decree, regarding the procedures the company has to follow:
The company has to assess privacy risks raised by new products as well as existing products, much like data security assessments currently work. The company would have to assess, document and address privacy risks—and then subject those records to inspection by the independent auditor, who would determine whether the company has adequately studied and dealt with privacy risks.
Google is agreeing to implement a version of Privacy by Design, in that the company will do even more to bake privacy features into its offerings.
This is intended to avoid instances where the company makes a privacy blunder because it lacked adequate internal processes to thoroughly vet new offerings or simply to avoid innocent mistakes—as with its inadvertent collection of content sent over unsecured Wi-Fi hotspots because the engineer designing its Wi-Fi mapping program mistakenly left that code in the system, even though it wasn't necessary for what Google was doing. I wrote more on that here.
As to Part II of the consent decree, express affirmative consent for changes in the sharing of "identified information": It's well worth reading Commissioner Rosch's concurring statement. I have my differences with him on some issues (like his sometimes overly zealous approach to antitrust), but I've found him to be a welcome voice of skepticism on the Commission. Here, he reiterates his concern, raised in his earlier concurring statement on the FTC's Preliminary Staff Privacy Report, that an opt-in, if mandated by law, might reduce competition. I appreciate his sensitivity to the danger of regulatory capture; regulators should be asking these questions a lot more than they do! But in this particular case, I'm not sure the opt-in for changes in sharing practices would really advantage Google over its rivals, as the Commissioner fears.
An opt-in for changes in sharing practices would seem to be most difficult for incumbents like Google, who have large installed user bases for products like Gmail that they try to adapt with add-ons like Buzz. Such adaptations often require changing what data is shared, and how, in order to roll out new tools that meet demands from users with evolving privacy expectations. Getting "express affirmative consent" will really slow down user adoption and prevent many of these new tools from reaching critical mass. Google Buzz has clearly failed to meet the hopes of those who thought it would be a Twitter-killer, illustrating just how hard it can be for even a giant like Google to make a new product succeed. By contrast, such an opt-in isn't a problem for a new company that enters a space with a wholly new model of dealing with user data, like Twitter or even Facebook before it. Wouldn't such companies thus have an advantage over Google, even if they all operated under the same opt-in rule regarding sharing changes? I'm sure there's more to the story here, but I'd be careful about leaping to assumptions that there's a dark cloud to this silver lining—as so many in the privacy advocacy community are prone to do.







March 29, 2011
Mark Stevenson on his tour of the future
On the podcast this week, Mark Stevenson, writer, comedian, and author of the new book An Optimist's Tour of the Future: One Curious Man Sets Out to Answer "What's Next?", discusses his book. Stevenson calls An Optimist's Tour of the Future a travelogue about science written for non-scientists, and he talks about why he traveled the world to try to draw conclusions about where human innovation is headed. He discusses his investigation of nanotechnology and the industrial revolution 2.0, transhumanism, information and communication technologies, and the ultimate frontier: space. Stevenson also discusses why he's hopeful about the future and why he wants to encourage others to have optimism about the future.
Related Readings
"An Optimist's Tour of the Future," Stevenson's blog about the book
"A Key Lesson of Adulthood: The Need to Unlearn," by Matt Ridley
"An Optimist's Tour of the Future by Mark Stevenson – review," The Guardian
"An Optimist's Tour of the Future," Financial Times
To keep the conversation around this episode in one place, we'd like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?







March 28, 2011
On Facebook "Normalizing Relations" with Washington
The New York Times reports that, "Facebook is hoping to do something better and faster than any other technology start-up-turned-Internet superpower. Befriend Washington. Facebook has layered its executive, legal, policy and communications ranks with high-powered politicos from both parties, beefing up its firepower for future battles in Washington and beyond." The article goes on to cite a variety of recent hires by Facebook, its new DC office, and its increased political giving.
This isn't at all surprising and, in one sense, it's almost impossible to argue with the logic of Facebook deciding to beef up its lobbying presence inside the Beltway. In fact, later in the Times story we hear the same two traditional arguments trotted out for why Facebook must do so: (1) Because everyone's doing it! and (2) You don't want to be Microsoft, do you? But I'm not so sure whether "normalizing relations" with Washington is such a good idea for Facebook or other major tech companies, and I'm certainly not persuaded by the logic of those two common refrains regarding why every tech company must rush to Washington.
In an essay I penned for the Cato Institute last November entitled "The Sad State of Cyber-Politics," I reiterated arguments made a decade earlier by two brilliant men: Cypress Semiconductor CEO T. J. Rodgers and the late great Milton Friedman. Rodgers penned a prescient manifesto for Cato in 2000 with the provocative title "Why Silicon Valley Should Not Normalize Relations with Washington, D.C.," in which he argued that "The political scene in Washington is antithetical to the core values that drive our success in the international marketplace and risks converting entrepreneurs into statist businessmen." A year earlier, Friedman penned another Cato essay called "The Business Community's Suicidal Impulse," in which he lamented the persistent propensity of companies to persecute their competitors using regulation or the threat thereof. What both men stressed was that coming to Washington has a tendency to change a company's focus and disposition, and not for the better — if you believe in real capitalism, that is, and not the abominable crony capitalism fostered by Washington.
But few in the high-tech world have listened to this logic, especially when seemingly everyone else was falling all over themselves to open a Washington, DC office, first in an effort to cover their butts from regulatory encroachments and then later to figure out how to wield the hammer of Big Government to their corporate advantage. I documented numerous examples of the latter in my Cato essay.
I'm not saying that the folks at Facebook are going to be looking to screw over their competitors right away. In fact, I can't currently think of any examples of how they might. The company is still firmly in that "cover your butt" period that is common when a hot new digital innovator first comes to DC. And I certainly can't blame them for wanting to push back against many misguided forms of Internet regulation, such as free speech controls or heavy-handed privacy regulation. But I fear there will come a day when they fall in line with many other high-tech companies and trade associations and seek to turn the regulatory state to their advantage. Only time will tell. And I certainly hope I am wrong.
Regardless, as the folks at Facebook and other high-tech firms ponder their future inside the Beltway, let me ask them to return to the two premises for "normalizing relations" that I cited above and explain why they are not exactly true:
Premise #1: Everyone's doing it! Most are, but not all. How active are Apple and Sony, to name just two companies without a major DC presence? Most days of the week, Steve Jobs seems to be giving DC a big middle finger. I'm the last guy in the world you'll ever hear giving Apple much credit, since I hate their products, but Jobs is about the closest thing you'll find to an Ayn Rand character in Silicon Valley these days. He seems to do exactly what he wants: build innovative products for consumers and, in the process, ignore all his critics, especially those in Washington. Of course, not everybody can be Steve Jobs in this regard, but I can't help but wonder: Why don't more of them try? What if high-tech entrepreneurs just told Washington to buzz off?
Premise #2: You don't want to be Microsoft, do you? The Times article says, "legal analysts say Facebook is hoping to avoid mistakes made by predecessors like Microsoft. And they say the company is becoming politically savvy earlier in its life than Google, whose connections were firmly established once Eric E. Schmidt, the chief executive, advised the Obama presidential campaign and the administration."
I've never really bought into this argument. I think it's pretty far-fetched to claim, as so many people in this field do, that if Microsoft had just had a small army of lobbyists on the ground back in the early 1990s, none of its antitrust problems would have popped up. And regarding Google coming to Washington in the hope of winning friends, well, how's that working out for them?! As I noted in my Cato essay:
Everybody — and I do mean everybody — wants Google dead, right now. Google currently serves as the Great Satan in this drama — taking over the role Microsoft filled a decade ago — as just about everyone views it with a combination of envy and enmity.
Indeed, no one could be happier about Facebook coming to town at this moment than Google! They get to hand the "Great Satan" baton off to Facebook and wish them the best! Of course, Google's problems with Washington aren't over by a long shot, but I'm quite sure they're relieved to see Facebook getting grilled more at hearings and events around town these days.
Anyway, in all seriousness, I'll say the same thing to the fine folks in the Facebook DC office — several of whom I know well — that I've said to countless other tech companies here in the Beltway through the years: Stay true to the same principles that made your company so great to begin with. It wasn't Washington that built Facebook, or Google, or Microsoft, or any other high-tech innovators; it was entrepreneurial capitalism that did. Free minds and free markets made the high-tech sector what it is today, not handouts and special favors from Washington. Stick to real capitalism; avoid the crony variety.







The Problem with Paul Ohm's Suggestion to Regulate Inferences to Protect Privacy
Here's an interesting SmartPlanet interview with Paul Ohm, associate professor of law at the University of Colorado Law School, in which he discusses his concerns about "reidentification" as it relates to privacy issues. "Reidentification" and "de-anonymization" fears have been set forth by Ohm and other computer scientists and privacy theorists, who suggest that because a slim possibility exists that some individuals in certain data sets could be re-identified even after their data is anonymized, that fear should trump all other considerations and public policy should be adjusted accordingly (specifically, in the direction of stricter privacy regulation and tighter information controls).
I won't spend any time here on that particular issue since I am still waiting for Ohm and other "reidentification" theorists to address the cogent critique offered up by Jane Yakowitz in an important new study that I discussed here last week. Once they do, I might have more to say on that point. Instead, I just wanted to make some brief comments on one particular passage from the Ohm interview in which he outlines a bold new standard for privacy regulation:
We have 100 years of regulating privacy by focusing on the information a particular person has. But real privacy harm will come not from the information they have but the inferences they can draw from the data they have. No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences in a really different way; it seems strange to say you can have all this data but you can't take this next step. But I think that's what the law has to do.
This is a rather astonishing new legal standard and there are two simple reasons why, as Ohm suggests, "no law… regulates inferences" and why, in my opinion, no law should. First, every day in countless ways, other people (including many businesses) make inferences about us to satisfy a variety of needs. Consider a few examples based on my own personal experiences:
Example 1: Your local butcher may deduce from past purchases which types of meat you like and suggest new choices or cuts that are to your liking. This happened just this past weekend for me when a butcher at my local Balducci's grocer recommended I try a terrific cut of steak after years of watching what else I bought there. And because I am such a regular shopper at Balducci's, I also get special coupons and discounts offered to me all the time based on inferences drawn from past purchases. (I have a very similar experience at a local beer and wine store).
Example 2: Your mobile phone provider may draw inferences from past usage patterns to offer you a more sensible text or data plan. This happened to me last year when Verizon Wireless cold-called me and set up a much better plan for me.
Example 3: Your car or home insurance agent may use data about your past behavior to adjust premiums or offer better plans. When I was a teenage punk, my family's insurance company properly inferred that I was a bad risk to them (and others on the road!) because of multiple speeding tickets. I paid higher premiums as a result all the way through my 20s. But, as I aged and got fewer tickets, they inferred I was a better bet and gave me a lower premium.
I could go on and cite a litany of other examples, but you get the point: Personal information and inferences based upon that information are a natural part of any society and economy. As my local butcher example illustrates, inferences have always been part of our economy, but such inferences drive an increasing portion of our Information Age economy these days. Thus, practically speaking, it would be quite difficult to devise a clear legal standard that specified what sort of inferences were allowed versus those that would be regarded as verboten.
But there's a far more profound problem with Ohm's suggestion that "in the future we may regulate inferences in a really different way." Simply stated, at least here in the United States, it could conflict rather radically with our strong First Amendment traditions. Eugene Volokh of UCLA law school summarized this general problem for much of privacy law in his seminal 2000 law review article, "Freedom of Speech, Information Privacy, and the Troubling Implications of a Right to Stop People from Speaking About You." As he observed there:
The difficulty is that the right to information privacy — the right to control other people's communication of personally identifiable information about you — is a right to have the government stop people from speaking about you. And the First Amendment (which is already our basic code of "fair information practices") generally bars the government from "control[ling the communication] of information," either by direct regulation or through the authorization of private lawsuits.
Now, I understand that there are times when the First Amendment will need to give way to accommodate certain privacy concerns, although my list would be a short one (mostly extremely sensitive forms of personal information). But the problem with Ohm's paradigm of regulating inferences is that it puts privacy regulation on an epic collision course with the First Amendment, since it would require the repression of large amounts of inferential data. This could have a profound chilling effect on speech, journalism, transparency efforts, and much more. For consumers it could mean fewer choices and higher prices. As noted above, using data to draw inferences is what facilitates a huge array of offers and special deals in our capitalist economy. Those offers and deals would dry up if those making them were suddenly denied the right to collect information about us and draw inferences from it.
I can imagine one response to my argument that goes something like this: "Well, we'll just have to separate 'good' inferences from 'bad' inferences and regulate accordingly!" Again, I suppose we can find a couple of buckets where special consideration — even rules — are needed, such as some health and financial information categories. But we already have laws on the books to deal with those issues. What Ohm is suggesting is that something more is needed, and making inferences the linchpin of his new paradigm raises serious questions about just how far the law can and should go to bottle up information and restrict human observation.
Additional Reading:
Two Paradoxes of Privacy Regulation
The Conflict Between a "Right to Be Forgotten" & Speech / Press Freedoms
Jane Yakowitz on How Privacy Regulation Threatens Research & Knowledge
Filing in FTC "Do Not Track" / Privacy Proceeding







March 27, 2011
Why My New Forbes Column is Called "Technologies of Freedom"
I'm very excited to announce that I now have a regular Forbes column that will fly under the banner, "Technologies of Freedom." My first essay for them is already live and it addresses a topic I've dealt with here extensively through the years: Irrational fears about tech monopolies and "information empires." Jump over to Forbes to read the whole thing.
Regular readers of this blog will understand why I chose "Technologies of Freedom" as the title for my column, but I thought it was worth reiterating. No book has had a more formative impact on my thinking about technology policy than Ithiel de Sola Pool's 1983 masterpiece, Technologies of Freedom: On Free Speech in an Electronic Age. As I noted in my short Amazon.com review, Pool's technological tour de force is simply breathtaking in its polemical power and predictive capabilities. Reading this book almost three decades after it was published, one comes to believe that Pool must have possessed a crystal ball or had a Nostradamus-like ability to foresee the future.
For example, long before anyone else had envisioned what we now refer to as "cyberspace," Pool was describing it in this book. "Networked computers will be the printing presses of the twenty-first century," he argued in his remarkably prescient chapter on electronic publishing. "Soon most published information will be disseminated electronically," and "there will be networks on networks on networks," he predicted. "A panoply of electronic devices puts at everyone's hands capacities far beyond anything that the printing press could offer." Few probably believed his prophecies in 1983, but no one doubts him now!
Far more importantly, Pool did all this while also providing a passionate defense of technological freedom and freedom of speech in the electronic age. In his closing chapter on "Policies for Freedom," Pool discussed possible futures for the emerging world of electronic communications and noted that:
Technology will not be to blame if Americans fail to encompass this system within the political tradition of free speech. On the contrary, electronic technology is conducive to freedom. The degree of diversity and plenitude of access that mature electronic technology allows far exceed what is enjoyed today. Computerized information networks of the twenty-first century need not be any less free for all to use without hindrance than was the printing press. Only political errors might make them so. (p. 231)
Pool went on to outline his "Guidelines for Freedom." #1 was that "the First Amendment applies fully to all media" and #2 was that "anyone may publish at will." Regarding economic regulation of tech markets, Pool stressed in principles #3 and #4 that "enforcement must be after the fact, not by prior restraint" and that "regulation is a last recourse. In a free society, the burden of proof is for the least possible regulation of communication."
This framework for freedom and innovation has governed everything I have done over my first two decades in the field of technology policy and it will shape everything I pen for Forbes, much like it has here at the TLF through the years. I can't pretend to possess Pool's predictive powers, but I can and will commit myself to espousing and defending his beautiful vision of technological freedom and progress.
This is what I wake up and go to work for each day. The fight for technological freedom!







March 25, 2011
Senators Seek to Censor Mobile App Stores, Disregarding Public Safety and the Constitution
In the latest example of big government run amok, several politicians think they ought to be in charge of which applications you should be able to install on your smartphone.
On March 22, four U.S. Senators sent a letter to Apple, Google, and Research in Motion urging the companies to disable access to mobile device applications that enable users to locate DUI checkpoints in real time. Unsurprisingly, in their zeal to score political points, the Senators—Harry Reid, Chuck Schumer, Frank Lautenberg, and Tom Udall—got it dead wrong.
Had the Senators done some basic fact-checking before firing off their missive, they would have realized that the apps they targeted actually enhance the effectiveness of DUI checkpoints while reducing their intrusiveness. And had the Senators glanced at the Constitution – you know, that document they swore an oath to support and defend – they would have seen that sobriety checkpoint apps are almost certainly protected by the First Amendment.
While Apple has stayed mum on the issue so far, Research in Motion quickly yanked the apps in question. This is understandable; perhaps RIM doesn't wish to incur the wrath of powerful politicians who are notorious for making a public spectacle of going after companies that have the temerity to stand up for what is right.
Google has refused to pull the DUI checkpoint finder apps from the Android app store, reports Digital Trends. Google's steadfastness on this matter reflects well on its stated commitment to free expression and openness. Not that Google's track record is perfect on this front – it's made mistakes from time to time – but it's certainly a cut above several of its competitors when it comes to defending Internet freedom.
Advance Publicity & DUI Checkpoints
Trying to keep the locations of DUI checkpoints secret is bad public policy. Contrary to the Senators' assertion that "applications that alert users to DUI checkpoints" are "harmful to public safety," there is zero evidence that publicizing sobriety checkpoints contributes to drunk driving accidents.
If anything, advance publicity actually saves lives. DUI checkpoints aren't primarily about catching drunk drivers, but about deterring drunk driving in the first place. When drivers know that police have set up checkpoints nearby, they're likely to think twice about getting behind the wheel. Instead, they might hail a cab or catch a ride from a sober friend.
The California Supreme Court recognized in Ingersoll v. Palmer that DUI checkpoints are designed to deter drunk driving:
The stated goals of several law enforcement agencies explicitly point to deterrence as a primary objective of the checkpoint program. The Burlingame manual described the objectives of its program, noting the historical use of roving patrols as the principal law enforcement response to the drunk driving problem… Two major goals of the checkpoint as stated in the manual were to increase public awareness of the seriousness of the problem and to increase the perceived risk of apprehension.
The Ingersoll court further stated with regard to the checkpoints that, "advance publicity is important to the maintenance of a constitutionally permissible sobriety checkpoint. Publicity both reduces the intrusiveness of the stop and increases the deterrent effect of the roadblock."
California is not alone in focusing on the deterrent effect of DUI checkpoints. In 1990, shortly after the U.S. Supreme Court upheld the constitutionality of certain kinds of DUI checkpoints in Michigan Department of State Police v. Sitz, the National Highway Traffic Safety Administration (NHTSA) published a document (PDF) laying out guidelines for police in conducting sobriety checkpoints. NHTSA's model sobriety checkpoint guidelines include the following section:
C. ADVANCE NOTIFICATION
1. For the purpose of public information and education, this agency will announce to the media that checkpoints will be conducted.
2. This agency will encourage media interest in the sobriety checkpoint program to enhance public perception of aggressive enforcement, to heighten the deterrent effect and to assure protection of constitutional rights.
Indeed, police departments routinely publicize information about DUI checkpoints in local newspapers and other media outlets. Many police officers think such publicity is beneficial to law enforcement. Take Indiana State Police Sgt. Dave Burstein, who brushed off the Senators' concerns about DUI checkpoint apps, saying to local news affiliate WXIN-TV, "Let everybody know they're there because the whole idea is to get voluntary compliance."
Regulation Through Intimidation
The Senators' letter isn't just uninformed and irresponsible, it's also arrogant – a prime example of regulation through intimidation. When politicians want to dictate behavior but know they cannot lawfully legislate or regulate it, a widely favored tactic is to demonize the target by sending a threatening letter accompanied by a vitriolic press release. When that doesn't get the job done, politicians hold congressional hearings to publicly rake the alleged wrongdoers over the coals. This reprehensible strategy has long been used to suppress constitutionally protected speech in ways that, if legislated, would almost certainly be overturned by courts on First Amendment grounds. As former U.S. Senator Paul Simon warned in 2003:
I have no problem with holding hearings and putting on pressure. But the problem with holding hearings and putting on pressure is that most of the members have no sensitivity on the First Amendment…The only oath we take says that we promise to support and defend the Constitution of the United States against all enemies, foreign and domestic. The domestic enemies of the Constitution are often on the floor of the House and the Senate.
In a free society, it is unacceptable for a handful of Senators to attempt to dictate mobile app store decisions without a floor vote or any judicial oversight. Lawmakers' function is to make laws, not exploit their bully pulpit to try to coerce private businesses into doing their bidding. If voters let these politicians get away with going after DUI checkpoint apps, which politically unpopular apps will be next? A ban on apps that locate abortion clinics? A ban on apps that locate handgun dealers?
If Reid, Schumer, Lautenberg, and Udall want to examine a serious threat to public safety, they should look in the mirror. Meanwhile, they should leave mobile app stores alone. The Washington Times nailed it in a recent editorial:
Real drunk drivers deserve severe punishment, but the best way to catch them is to respect the Fourth Amendment. Instead of having cops stand around behind barricades interrogating soccer moms, have them patrol the streets looking for evidence of impaired driving. It works. In the meantime, high-tech companies ought to email these senators a free Constitution app for their smart phones.
Amen.







Not-So-Fast Do-Not-Track
FTC Commissioner J. Thomas Rosch puts the brakes on some of the Do-Not-Track excitement that has been bubbling up in this (wouldn't you know it) Advertising Age piece.
The concept of do not track has not been endorsed by the commission or, in my judgment, even properly vetted yet. In actuality, in a preliminary staff report issued in December 2010, the FTC proposed a new privacy framework and suggested the implementation of do not track. The commission voted to issue the preliminary FTC staff report for the sole purpose of soliciting public comment on these proposals. Indeed, far from endorsing the staff's do-not-track proposal, one other commissioner has called it premature.
Do-Not-Track does need more vetting and consideration. Don't get your hopes up about being free of tracking anytime soon. (Do you even know what "tracking" is?)
If Do-Not-Track does go forward, don't get your hopes up about being free of tracking either. When can you rightly anticipate being free of unwanted tracking? When you take control of what your browser sends out over the Internet.
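Since so much of the confusion here is about what "tracking" and "Do-Not-Track" technically are, here is a minimal sketch of the mechanism: a Do-Not-Track signal is just one more advisory header that the browser volunteers with each request, which the receiving site remains free to ignore. (The Python requests library and the httpbin.org echo service used below are my own illustrative choices for showing what a client transmits; they are not part of the FTC proposal or anything Commissioner Rosch describes.)

import requests  # third-party HTTP client, assumed to be installed

# A request with the Do-Not-Track preference expressed as a header.
# httpbin.org/headers simply echoes back whatever headers the client sent.
with_dnt = requests.get("https://httpbin.org/headers", headers={"DNT": "1"})
print(with_dnt.json()["headers"])

# The identical request without the header. The only difference between the two
# is what the client chose to send; nothing obliges the server to behave differently.
without_dnt = requests.get("https://httpbin.org/headers")
print(without_dnt.json()["headers"])

The header is purely a request. Whether any tracking actually stops depends entirely on the site receiving it, which is why controlling what your browser actually transmits (cookies, referers, and the like) matters far more than the label.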







