Adam Thierer's Blog, page 73
February 26, 2013
Legislative Data and Wikipedia Workshop—March 14th and 15th
In my Cato paper, “Publication Practices for Transparent Government,” I talked about the data practices that will produce more transparent government. The government can and should improve the way it provides information about its deliberations, management, and results.
“But transparency is not an automatic or instant result of following these good practices,” I wrote, “and it is not just the form and formats of data.”
It turns on the capacity of the society to interact with the data and make use of it. American society will take some time to make use of more transparent data once better practices are in place. There are already thriving communities of researchers, journalists, and software developers using unofficial repositories of government data. If they can do good work with incomplete and imperfect data, they will do even better work with rich, complete data issued promptly by authoritative sources.
We’re not just sitting around waiting for that to happen.
Based on the data modeling reported in “Grading the Government’s Data Publication Practices,” and with software we acquired and modified for the purpose, we’ve been marking up the bills introduced in the current Congress with “enhanced” XML that allows computers to automatically gather more of the meaning found in legislation. (Unfamiliar with XML? Several folks have complimented the explanation of it and “Cato XML” in our draft guide.)
No, we are not going to replace the lawyers and lobbyists in Washington, D.C., quite yet, but our work will make a great deal more information about bills available automatically.
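To make the idea concrete, here is a minimal sketch of what "enhanced" markup can do for a computer reading a bill. The bill fragment and the element names (`term`, `cite`) are hypothetical illustrations, not the actual Cato XML vocabulary, which is described in the draft guide mentioned above:

```python
# Hypothetical enhanced-XML bill fragment. Because citations and defined
# terms are explicit elements rather than plain prose, a program can
# collect them without having to parse legal language.
import xml.etree.ElementTree as ET

bill_xml = """
<bill congress="113" number="HR-123">
  <section id="s2">
    <heading>Definitions</heading>
    <text>The term <term>covered agency</term> means an agency
    listed in <cite ref="usc/5/101">5 U.S.C. 101</cite>.</text>
  </section>
</bill>
"""

root = ET.fromstring(bill_xml)

# Gather every cross-reference and every defined term automatically.
citations = [c.get("ref") for c in root.iter("cite")]
terms = [t.text for t in root.iter("term")]

print(citations)  # ['usc/5/101']
print(terms)      # ['covered agency']
```

With markup like this, a downstream consumer (a Wikipedia article, say) can pull out what a bill cites and defines without any human reading it first.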
And to build society’s capacity “to interact with the data and make use of it,” we’re hoping to work with the best outlet for public information we know, Wikipedia, making data about bills a resource for the many Wikipedia articles on legislation and newly passed laws.
Wikipedia is a unique project, both technically and culturally, so we’re convening a workshop on March 14th and 15th to engage Wikipedians and bring them together with data transparency folks, hopefully to craft a path forward that informs the public better about what happens in Washington, D.C. We’ve enlisted Pete Forsyth of Wiki Strategies to help assemble and moderate the discussion. Pete was a key designer of the Wikimedia Foundation’s U.S. Public Policy Initiative—a pilot program that guided professors and students in making substantive contributions to Wikipedia, and that led to the establishment of the Foundation’s Global Education Program.
The Thursday afternoon session is an open event, a Wikipedia tutorial for the many inexperienced editors among us. It’s followed by a Sunshine Week reception open to all who are interested in transparency.
On Friday, we’ll roll up our sleeves for an all-day session in which we hope Wikipedians and experienced government data folks will compare notes and produce some plans and projects for improving public access to information.
You can view a Cato event page about the workshop here. To sign up, go here, selecting which parts of the event you’d like to attend. (Friday attendance requires a short application.)
For some Wikipedians in particular, this may be their first direct experience with the Cato Institute. We are known, of course, for policy positions that contest the current size and scope of government, but our transparency work, and our hope of getting data onto Wikipedia, is meant to provide the public with neutral information tools that all communities can use to oversee the government and advocate for what they want.
From Cato’s first event on transparency, and again in “Publication Practices,” I’ve emphasized that transparency is a sort of win-win bet.
Government transparency is a widely agreed-upon value, but it is agreed upon as a means toward various ends. Libertarians and conservatives support transparency because of their belief that it will expose waste and bloat in government. If the public understands the workings and failings of government better, the demand for government solutions will fall and democracy will produce more libertarian outcomes. American liberals and progressives support transparency because they believe it will validate and strengthen government programs. Transparency will root out corruption and produce better outcomes, winning the public’s affection and support for government.
Though the goals may differ, pan-ideological agreement on transparency can remain. Libertarians should not prefer large government programs that are failing. If transparency makes government work better, that is preferable to government working poorly. If the libertarian vision prevails, on the other hand, and transparency produces demand for less government and greater private authority, that will be a result of democratic decisionmaking that all should respect and honor.
…
By putting out data that is “liquid” and “pure,” governments can meet their responsibility to be transparent, and they can foster this evolution toward a body politic that better consumes data. Transparency is likely to produce a virtuous cycle in which public oversight of government is easier, in which the public has better access to factual information, in which people have less need to rely on ideology, and in which artifice and spin have less effectiveness. The use of good data in some areas will draw demands for more good data in other areas, and many elements of governance and public debate will improve.
Hope to see you March 14th and 15th.







Do we need a special government program to help cybersecurity supply meet demand?
Today, the House Science Committee is holding a hearing on “Cyber R&D Challenges and Solutions.” Under consideration is a bill reintroduced by Rep. Mike McCaul that takes numerous steps intended to expand the network security workforce. The bill passed overwhelmingly last year.
I have no doubt that, as we move more of our lives online, we need to draw more people into computer security. But just as we need more network security professionals, we need more programmers, geneticists, biomedical engineers, statisticians, and countless other professions. We will also continue to need some number of doctors, lawyers, mechanics, plumbers, and grocery clerks. Does it make sense to introduce legislation to fine-tune the number of practitioners of every trade?
Of course not. Which raises the question: what is so special about computer security? And the answer, I think, is “nothing is so special about computer security.” More people will get trained in computer security if the returns to doing so are higher, and fewer people will get trained in computer security if the returns to doing so are lower. Entry into the computer security business is simply a function of supply and demand.
The Washington Post reports, “The median salary for a graduate earning a degree in security was $55,000 in 2009, compared with $75,000 for computer engineering.” Is it any surprise, then, that more smart, tech-savvy students have pursued the latter route in recent years?
Intervening in a market that shows no signs of failing can have lots of unintended consequences. Most obviously, subsidies would run the serious risk of drawing too many workers into the computer security workforce. Those workers might find that they spent years investing in specialized skills without as much of a payoff as they expected. Tinkering could also affect the composition of people drawn into the field, with ill effect, for example by lowering the equilibrium salary and reducing the incentive for those with natural talent and without the need for training to work in security.
The bottom line is that a shortage of a particular kind of worker is a problem that solves itself. As salaries for security workers get bid up, more people will get training in security. The supply and demand dynamic is completely sufficient to get people into the correct professions in sufficient numbers.
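The self-correcting dynamic described above can be sketched as a toy linear supply-and-demand model. All of the numbers here are illustrative placeholders, not labor-market data:

```python
# Toy model: as security salaries rise, more workers train for the field
# (supply slopes up) while employers hire fewer (demand slopes down).
# The market clears where the two meet; no subsidy program required.
a, b = -500.0, 0.02   # supply:  q = a + b*w  (workers entering at wage w)
c, d = 2500.0, 0.02   # demand:  q = c - d*w  (jobs offered at wage w)

# Equilibrium wage where supply equals demand: a + b*w = c - d*w
w_star = (c - a) / (b + d)
q_star = a + b * w_star

print(w_star)  # roughly 75,000
print(q_star)  # roughly 1,000 workers
```

If demand for security workers rises (a larger `c`), the equilibrium wage and the number of trained workers both rise on their own, which is the adjustment the paragraph describes.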
The McCaul bill works through various subsidies and governmental reports to try to accomplish the same thing that the market would do if left to operate on its own. If the government wants to hire more computer security professionals, let them pay the money needed to draw people into this field. But let’s not jump through needless hoops to accomplish what should really be a straightforward task.







Joseph Reagle on the gender gap in geek culture
Is geek culture sexist? Joseph Reagle, Assistant Professor of Communications Studies at Northeastern University and author of a new paper entitled, “Free as in Sexist? Free culture and the gender gap,” returns to Surprisingly Free to address geek feminism and the technology gender gap.
According to Reagle, only 1% of the free software community and 9% of Wikipedia editors are female, which he sees as emblematic of structural problems in the geek community. While he does not believe that being a geek or a nerd is in any way synonymous with being a sexist, he concludes that three things that he otherwise loves—geekiness, openness, and the rhetoric and ideology of freedom—are part of the problem inasmuch as they allow informal cliques to arise, dominate the discussion, and squeeze out minority views. Reagle also comments on an unintentional androcentricity he has observed even amongst free software community heroes, highlighting the ways in which this behavior can alienate women and prevent geek culture from growing beyond its traditional base.
Reagle prescribes a three-step solution to sexism in geek culture: talking about gender; challenging and expanding what it means to be a geek; and not allowing the rhetoric of freedom to be used as an excuse for bad behavior.
Reagle further supports efforts to form female-only subcultures within the geek community, a practice that opponents argue goes against the free software value of openness. Instead of the balkanization of the movement that opponents fear, these closed-group discussions actually strengthen geek culture at large, according to Reagle.
Related Links
“Free as in Sexist? Free culture and the gender gap”, Reagle
Geek Feminism, blog
Finally, A Feminism 101 Blog, blog
Geek Feminism Wiki, Wikia







February 25, 2013
“Copyright Reform” at 2013 Public Knowledge Policy Forum
Want to hear the latest thinking on copyright reform? Come to the 2013 Public Knowledge Policy Forum tomorrow, February 26, at 1 pm, at the US Capitol Visitor Center, where I will discuss and debate the issue with these fellow copyright wonks:
Erik Martin, General Manager, Reddit
Pamela Samuelson, professor of law at Berkeley Law, University of California; Faculty Director, Berkeley Center for Law & Technology
Michael McGeary, Co-Founder, Engine Advocacy
Gigi B. Sohn, President & CEO, Public Knowledge, will moderate.
To catch the full roster, which includes some great panels, come at 10. Registration–and lunch!–is free. Details here.
Can’t make it? Here’s my presentation: PK_(C)_Reform.







WCITLeaks is Ready for WTPF-13
When Jerry and I started WCITLeaks, we didn’t know if our idea would gain traction. But it did. We made dozens of WCIT-related documents available to civil society and the general public—and in some cases, even to WCIT delegates themselves. We are happy to have played a constructive role, by fostering improved access to the information necessary for the media and global civil society to form opinions on such a vital issue as the future of the Internet. You can read my full retrospective account of WCITLeaks and the WCIT over at Ars Technica.
But now it’s time to look beyond the WCIT. The WCIT revealed substantial international disagreement over the future direction of Internet governance, particularly on the issues of whether the ITU is an appropriate forum to resolve Internet issues and whether Internet companies such as Google and Twitter should be subject to the provisions of ITU treaties. This disagreement led to a split in which 55 countries opted not to sign the revised ITRs, the treaty under negotiation.
Where does this divisive ITR revision leave us? It means that the next two years or so of ITU meetings have the potential to be extremely interesting. In particular, the World Telecommunication/Information and Communication Technology Policy Forum (WTPF) in May 2013 in Geneva and the ITU Plenipotentiary Conference (known as “Plenipot”) in October-November 2014 in Busan, South Korea, are worth watching closely.
Unlike the WCIT, the WTPF is not a treaty conference. It is a meeting that produces opinions and reports. Also unlike the WCIT, at WTPF the Internet is explicitly on the table in an up-front, honest way. The opinions and reports produced at WTPF about the Internet will be used as input documents into Plenipot, which is a full treaty conference. At Plenipot, the entire Constitution and Convention of the ITU is subject to revision, so it is extremely likely that the Internet will be considered. One contact of mine has called Plenipot “WCIT 2.”
There is some good news. So far, all WTPF preparatory documents have been 100% open to the public. WCITLeaks applauds the ITU for this policy. Transparency provided directly by the ITU is better than the transparency we have provided in the past, because the ITU’s public documents are often available in multiple languages, something that WCITLeaks does not have the resources to offer. For example, here is the fourth draft of the SG’s report from the Informal Experts Group for WTPF. Note that it is available in English, Arabic, Chinese, Spanish, French, and Russian. The multilingual availability of this document ensures that an even broader array of global civil society will be able to more closely follow WTPF preparations.
The bad news is that we do not yet know if WTPF documents beyond the preparatory phase will be publicly available. When those documents appear, they will be listed here, but it is possible that users who are not affiliated with Member States or Sector Members won’t have access. In addition, we do not yet know what the policy will be toward access to documents relating to Plenipot.
We hope that the ITU will continue to take these important steps toward greater transparency. At the same time, we are ready to reprise our WCIT role if necessary. To that end, we have reoriented the WCITLeaks site to focus on WTPF and future conferences. WCIT-related documents will continue to be available at wcitleaks.org/wcit. As always, you can stay up to date by following @WCITLeaks on Twitter. Happy leaking!







Let’s give the Copyright Alert System a chance
After several delays, it looks like the “six-strikes” Copyright Alert System is launching today. Over at Reason.com I write that instead of dismissing it out of hand, those of us skeptical of the current copyright regime should give it a chance:
While the Copyright Alert System is far from perfect, it succeeds in treating illegal file-sharing as an infraction more akin to speeding, and less like grand larceny the way courts and prosecutors do. And the private system has its own set of checks and balances absent from public enforcement: ISPs have a strong incentive to ensure that their customers are not harassed by false positives or overzealous enforcement. (Indeed, the agreement limits the number of notices copyright holders may send in a month.) This is why the temptation to codify such a “six-strike” system in law the way France and other countries have should be resisted.
In the long run, the new system is likely to be ineffective at stopping piracy. Determined pirates will be able to detect and evade monitoring, spoof their IP addresses, or simply switch to other methods of file-sharing not covered by the agreement, like streaming or using locker sites or Usenet. In the short run, however, copyright alerts will attempt to nudge public norms that have increasingly moved toward widespread acceptance of file-sharing. Evidence suggests, though, that it’s probably too late for that too.
Rather than dismiss the new system out of hand, those of us seeking a saner copyright regime should welcome this experiment while keeping a close eye on it. If nothing else, it’s preferable to have content owners make constructive use of their private rights rather than rely on the power of the state.
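The monthly notice cap mentioned in the excerpt is the kind of private check that is easy to make concrete. The sketch below is purely illustrative: the agreement's actual limits and bookkeeping are not public in this detail, and the class name and cap here are invented:

```python
# Illustrative per-month cap on copyright notices. The real Copyright
# Alert System's numbers and mechanics are assumptions, not modeled here.
from collections import defaultdict

class NoticeLimiter:
    """Track notices per (copyright holder, month) and enforce a cap."""

    def __init__(self, monthly_cap=100):
        self.monthly_cap = monthly_cap
        self.sent = defaultdict(int)  # (holder, month) -> notices sent

    def try_send(self, holder, month):
        """Record and allow the notice only if the holder is under the cap."""
        if self.sent[(holder, month)] >= self.monthly_cap:
            return False
        self.sent[(holder, month)] += 1
        return True

limiter = NoticeLimiter(monthly_cap=2)
print(limiter.try_send("holder-a", "2013-02"))  # True
print(limiter.try_send("holder-a", "2013-02"))  # True
print(limiter.try_send("holder-a", "2013-02"))  # False: cap reached
```

A cap like this is a check that public enforcement typically lacks: a prosecutor faces no built-in ceiling on filings, while the parties to a private agreement can negotiate one.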







Review of Copyright Unbalanced in the Weekly Standard
In the current issue of the Weekly Standard, Sonny Bunch has a very nice review of our book, Copyright Unbalanced: From Incentive to Excess:
Into the fray jumps this collection of essays, arguing that copyright is hopelessly broken. The libertarian right has grown increasingly skeptical of the institution, arguing that media corporations have perverted the Constitution’s Copyright Clause into a tool used not to “promote the Progress of Science and useful Arts” but to swell their coffers. Many libertarians see the endless extension of copyright terms, the retroactive granting of such extensions, and the increasing number of instruments that can be copyrighted as crony capitalism.
There is certainly a case to be made for copyright reform. Whereas the Copyright Act originally provided that copyrightable items—limited to books, maps, and charts—could be protected for one 14-year term, and extended for another 14-year term (if the author wished), we now have, in essence, unending, unlimited copyright: the life of the author, plus 70 years. Gone is the requirement that copyright holders actively pursue their copyright or its extensions. The effect is rather to grant copyright protection to everything created, in perpetuity. The public domain is no more.
Bunch does have one critique, however:
Were copyright protections simply a question of economic utility—a quest to discover which economic regime inspires content creators to make the most stuff—Copyright Unbalanced would be on more solid footing. But there is a moral dimension that must be accounted for. Libertarian opponents of copyright are not necessarily wrong to dodge the question; it has been a tricky one in American legal discourse. But the moral dimension of copyright has been a part of the general conversation since the days of the Founders—and before.
Guilty as charged. We did indeed dodge the moral question in the book, but that’s because we felt that there is so much patently bad policy in the current system—before even getting to the moral questions—that conservatives and libertarians should be able to agree needs reform. I believe one can take a pretty strict Randian or Lockean approach to copyright and still find lots of cronyist malfeasance in copyright.
That said, we won’t be dodging the moral question for long. Mercatus will later this year publish another book, Intellectual Privilege by Tom W. Bell, which in part refutes some of Adam Mossoff’s claims about the significance of Locke’s and Adam’s writings. While I don’t agree with everything Tom says in his forthcoming book, I think it will be an important contribution to conservative and libertarian thinking on what should be the proper bounds of a copyright system.







February 23, 2013
The cyberelephant in the room
Good question in The Economist from December of last year, before all the Mandiant madness:
As Mr Libicki asks, “what can we do back to a China that is stealing our data?” Espionage is carried out by both sides and is traditionally not regarded as an act of war. But the massive theft of data and the speed with which it can be exploited is something new. Responding with violence would be disproportionate, which leaves diplomacy and sanctions. But America and China have many other big items on their agenda, while trade is a very blunt instrument. It may be possible to identify products that China exports which compete only because of stolen data, but it would be hard and could risk a trade war that would damage both sides.
Given what China-U.S. relations are today, it’s not clear there are any good options. This situation reminds me of America’s early history of piracy. Until China is better integrated into the global order, the executive is going to have quite a challenge on his hands.







Smart technology’s seen and unseen
Evgeny Morozov in the WSJ is afraid that ‘smart technology’ might make us a bit unthinking:
The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.
It’s great when the things around us run smoothly, but it’s even better when they don’t do so by default. That, after all, is how we gain the space to make decisions—many of them undoubtedly wrongheaded—and, through trial and error, to mature into responsible adults, tolerant of compromise and complexity.
I think he overestimates how successfully engineers will eliminate friction. Even as new technologies solve some problems, they introduce new ones. He also overestimates how accepting people will be of these technologies. I know several people with Withings scales, but none of them tweet their weight. And Google Glass’s official unveiling has been met mostly with derision. I agree with Morozov that preserving experimentation and serendipity is important for human flourishing, but we should be careful not to forgo technologies that might unlock our curiosity and humanity in ways we can’t now predict.







February 22, 2013
With Obama cyber executive order, we don’t need new legislation
Politicians from both parties are now saying that although President Obama took comprehensive action on cybersecurity through executive order, we still need legislation. Over at TIME.com I write that no, we don’t.
Republicans want to protect businesses from suit for breach of contract or privacy statute violations in the name of information sharing, but there’s no good reason for such blanket immunity. Democrats would like to see mandated security standards, but top-down regulation is a bad idea, especially in such a fast-moving area. But as I write:
Yet guided by their worst impulses – to extend protections to business, or to exert bureaucratic control – members of Congress will insist that it is imperative they get in on the action.
If they do, they will undoubtedly be saddling us with a host of unintended consequences that we will come to regret later.
The executive order does most of what Congress failed to do in its last session. What Congress could add now is unnecessary and likely pernicious. The executive order should be given time to work. Only then will Congress know if and how it might need to be “strengthened.”






