Adam Thierer’s Blog
July 7, 2016
The Folly of Forecasting Future Tech Innovations and Professions
In a terrific little essay on “Local Economic Revival and The Unpredictability of Technological Innovation,” Michael Mandel, the chief economic strategist at the Progressive Policy Institute, makes several important points about the fundamental folly of trying to forecast future innovations. He notes, for example:
There are plenty of candidates for the “next big thing,” ranging from the Internet of Things to additive manufacturing to artificial organ factories to autonomous cars to space commerce to Elon Musk’s hyperloop. Each of these has the potential to revolutionize an industry, and to create many thousands or even millions of jobs in the process–not just for the highly-educated, but a whole range of workers.
Yet the problem–and the beauty–is that technological innovation is fundamentally unpredictable, even at close range. Consider this: The two most important innovations of the past decade, economically, have been the smartphone and fracking. The smartphone transformed the way that we communicate and hydraulic fracturing has driven down the price of energy, not to mention shifting the geopolitical balance of power.
But few saw the smartphone and fracking revolutions coming, he notes. The pundits and the press were too focused on technologies of the past.
In fact, as I noted in a case study in the 2nd edition of my book, Permissionless Innovation (p. 50-51), various pundits denigrated Apple and Google’s entry into the smartphone business because many industry analysts believed the market was mature. After all, in the early and mid-2000s, the big names of the mobile world were Palm, Blackberry, Motorola, and Nokia. And their market power seemed unassailable. Thus, as perplexing as it sounds now, the common wisdom in the mid-2000s regarding Apple and Google’s potential entry into the smartphone business was that, if they dared try, they were destined to fail miserably!
For example, in December 2006, Palm CEO Ed Colligan summarily dismissed the idea that a traditional personal computing company could compete in the smartphone business. “We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.” Similarly, in January 2007, Microsoft CEO Steve Ballmer laughed off the prospect of an expensive smartphone without a keyboard having a chance in the marketplace: “Five hundred dollars? Fully subsidized? With a plan? I said that’s the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good e-mail machine.” (Quotes from John Paczkowski, “Apple: How Do You Say ‘Eat My Dust’ in Finnish?” All Things D, November 11, 2009.)
Even better, in March 2007, computing industry pundit John C. Dvorak insisted that “Apple Should Pull the Plug on the iPhone,” because he believed the mobile handset business was already locked up by the era’s major players. “This is not an emerging business,” Dvorak said. “In fact it’s gone so far that it’s in the process of consolidation with probably two players dominating everything, Nokia Corp. and Motorola Inc.”
Just a few years later, of course, Nokia’s profits and market share plummeted, and Google had purchased the struggling Motorola. Meanwhile, Palm was dead, Blackberry was languishing, and Microsoft continued to struggle to win back market share lost to Apple and Google. So much for tech prophecy!
Tech pundits also usually fail when forecasting the professions of the future. As I noted in Chapter 5 of my Permissionless Innovation book (p. 101-2):
It’s also worth noting how difficult it is to predict future labor market trends. In early 2015, Glassdoor, an online jobs and recruiting site, published a report on the 25 highest paying jobs in demand today. Many of the job titles identified in the report probably weren’t considered a top priority 40 years ago, and some of these job descriptions wouldn’t even have made sense to an observer from the past. For example, some of those hotly demanded jobs on Glassdoor’s list include software architect (#3), software development manager (#4), solutions architect (#6), analytics manager (#8), IT manager (#9), data scientist (#15), security engineer (#16), quality assurance manager (#17), computer hardware engineer (#18), database administrator (#20), UX designer (#21), and software engineer (#23).
Looking back at reports from the 1970s and ’80s published by the US Bureau of Labor Statistics, the federal agency that monitors labor market trends, one finds no mention of these computing and information technology-related professions because they had not yet been created or even envisioned. So, what will the most important and well-paying jobs be 30 to 40 years from now? If history is any guide, we probably can’t even imagine many of them right now.
Indeed, as Mandel concludes in his new essay, history tells us that, “the next big job-creating innovation isn’t likely to announce itself in bold letters before it arrives. Just because the next big thing isn’t obvious today doesn’t mean it won’t be obvious a year from now.”
Today’s tech pundits would be wise to remember that insight when making forecasts about a future that is, as Mandel suggests, so “fundamentally unpredictable, even at close range.”
June 29, 2016
Clinton’s Tech and Telecom Agenda: Good News for Communications Act Reform?
Yesterday, Hillary Clinton’s campaign released a tech and innovation agenda. The document covers many tech subjects, including cybersecurity, copyright, and tech workforce investments, but I’ll narrow my comments to the areas I have the most expertise in: broadband infrastructure and Internet regulation. These roughly match up, respectively, to the second and fourth sections of the five-section document.
On the whole, the broadband infrastructure and Internet regulation sections list good, useful priorities. The biggest exception is Hillary’s strong endorsement of the Title II rules for the Internet, which, as I explained in the National Review last week, is a heavy-handed regulatory regime that is ripe for abuse and will be enforced by a politicized agency.
Her tech agenda doesn’t mention a Communications Act rewrite but I’d argue it’s implied in her proposed reforms. Further, her statements last year at an event suggest she supports significant telecom reforms. In early 2015, for instance, Clinton spoke to tech journalist Kara Swisher (HT Doug Brake) and it was pretty clear Clinton viewed Title II as an imperfect and likely temporary effort to enforce neutrality norms. In fact, Clinton said she prefers “a modern, 21st-century telecom technology act” to replace Title II and the rest of the 1934 Communications Act.
It’s refreshing to see that, regarding broadband and Internet policy, there’s significant bipartisan agreement that government’s role should be primarily to provide public goods, protect consumers, and lower regulatory barriers, not micromanage providers, deploy public networks, and shape social policy. (Niskanen Center’s Ryan Hagemann similarly agrees that, with the exception of Title II, there’s a lot to like in Clinton’s tech agenda.) In fact, 85% of the text in Clinton’s broadband infrastructure and Internet policy sections could be copied-and-pasted to a free-market Republican presidential candidate’s tech platform and it would be right at home.
It’s difficult to know what to make of her pledge to defend and enforce Title II. I suspect it represents a promise that she won’t reverse the FCC’s Title II determination, not that she’s particularly enamored with Title II. Clinton (like President Bill Clinton) seems to prefer a more hands-off approach to the Internet.
The Good
The document emphasizes that all types of broadband should be encouraged, including “fiber, wireless, satellite, and other technologies.” It’s nice to see this flexibility because many advocates are pushing a fiber-only agenda that is simply infeasible and tremendously expensive. (Professor Susan Crawford has said bluntly that governments should “refuse to fund last-mile solutions that aren’t primarily fiber.”) The reality, acknowledged by Google and others, is that fixed wireless and satellite broadband are needed to affordably connect households in rural and suburban areas for the foreseeable future. A fiber-only policy, because it’s impractically expensive, would have rather regressive effects, and Clinton’s all-of-the-above strategy is commendable.
There’s also a recognition in the document that broadband networks are not natural monopolies and can be competitive, especially if the federal government works to lower entry barriers. Government policy for several decades was that telephone and cable networks were natural monopolies. Increasingly, broadband is competitive, especially as consumers go wireless only, but we’re still living with the negative side effects of past policies. The Clinton document emphasizes the need to reduce local regulatory barriers, streamline permitting, and allow nondiscriminatory access to conduits, poles, and rights-of-way controlled by local governments.
Spectrum policy is critical to any technology agenda and it’s a priority for Clinton. She emphasizes the need to free up more spectrum and to identify and reclaim underutilized federal spectrum, a subject I’ve written about. The federal government uses spectrum worth hundreds of billions of dollars and pays very little for that asset, so there are significant consumer gains available.
Clinton’s call to reinvigorate antitrust enforcement in technology and telecommunications is also noteworthy. Though the DOJ and FTC can overreach, they are better equipped to handle broadband and tech competition issues than the FCC.
The Not So Good
In the “Close the Digital Divide” item, there are some problems. In short: the right goal with the wrong tools. The legacy broadband subsidy programs, which Clinton wishes to retain and expand, are fragmented and poorly designed. They essentially function as corporate welfare programs and should be eliminated in favor of consumer-focused subsidies.
One item says that by 2020 “100 percent of households in America will have the option of affordable broadband.” Literally connecting all American homes to the Internet is impossible today because millions of Americans simply don’t want the Internet. According to Pew, 70% of non-adopters are just not interested, and many would not subscribe no matter the price. (Relatedly, after more than a century of telephone service and tens of billions in federal universal service funding, US phone subscribership has hovered around 95% for 20 years.)
To accomplish the expansion of broadband access, Clinton promises to fund the FCC’s Connect America Fund (CAF), the Ag Department’s Rural Utilities Service Program (RUS), and the Broadband Technology Opportunities Program (BTOP). They differ somewhat in purpose and strategy but their major flaw is the same: they primarily fund and lend to broadband providers, not subscribers.
As I’ve noted before,
A direct subsidy plus a menu of options is a good way to expand access to low-income people (assuming there are effective anti-fraud procedures). A direct subsidy is more or less how the US and state governments help lower-income families afford products and services like energy, food, housing, and education. For energy bills there’s LIHEAP. For grocery bills there’s SNAP and WIC. For housing, there’s Section 8 vouchers. For higher education, there’s Pell grants.
By subsidizing providers, not consumers, there’s immense waste, corruption, and gold-plated service. For instance, last year, Tony Romm at Politico published an in-depth investigation of the stimulus-funded RUS broadband program. The waste in the program is appalling, and it will serve only a fraction of the subscribers that were promised. As one GAO researcher said about the program, “We are left with a program that spent $3 billion and we really don’t know what became of it.” “Even more troubling,” Romm explained, “RUS can’t tell which residents its stimulus dollars served.”
Similarly, Clinton cites E-rate as a model for connecting “anchor institutions” like libraries and schools. E-rate likewise primarily benefits telecom and tech companies, not the intended recipients. As OECD researchers have found regarding EdTech government investment,
The results…show no appreciable improvements in student achievement in reading, mathematics or science in the countries that had invested heavily in ICT for education.
Rather than the E-rate model, a smarter policy is to provide block grants to schools and institutions to give them more flexibility to optimize according to their own perceived technology and education needs. The federal government already started doing this to a limited extent with Title IV of the 2015 Every Student Succeeds Act, which allocates $1.6 billion annually in block grants to states for tech-focused education spending. Policymakers should eliminate the expensive, dysfunctional E-rate program, which is funded by regressive fees on telephone bills, and expand the block grants somewhat to make up the shortfall.
Altogether, there’s a lot to like in Clinton’s broadband infrastructure and Internet policy agenda. There are hiccups–namely Title II enforcement and retention of broken broadband and tech subsidy programs–and hopefully her advisors will reexamine those. Given Clinton’s past statements about the need for a modernized Communications Act in place of Title II, she and her advisors have developed a forward-looking telecom agenda.
June 17, 2016
New Law Review Article on 3D Printing & Public Policy
I’m pleased to announce the publication of my latest law review article, “Guns, Limbs, and Toys: What Future for 3D Printing?” The article, which appears in Vol. 17 of the Minnesota Journal of Law, Science & Technology, was co-authored with Adam Marcus. Here’s the abstract:
We stand on the cusp of the next great industrial revolution thanks to technological innovations and developments that could significantly enhance the welfare of people across the world. This article will focus on how one of those modern inventions–3D printing–could offer the public significant benefits, but not without some serious economic, social, and legal disruptions along the way. We begin by explaining what 3D printing is and how it works. We also discuss specific applications of this technology and its potential benefits. We then turn to the policy frameworks that could govern 3D printing technologies and itemize a few of the major public policy issues that are either already being discussed, or which could become pertinent in the future. We offer some general guidance for policymakers who might be pondering the governance of 3D printing technologies going forward. Unlike the many other articles and position papers that have already been penned about 3D printing policy, which only selectively defend permissionless innovation in narrow circumstances, we endorse it as the default rule across all categories of 3D printing applications.
More specifically, we do a deep dive into three primary public policy “fault lines” for 3D printing: firearms, medical devices, and intellectual property concerns. Read the whole thing for more details.
June 15, 2016
Elizabeth Warren on Regulatory Capture & Simple Rules
The folks over at RegBlog are running a series of essays on “Rooting Out Regulatory Capture,” a problem that I’ve spent a fair amount of time discussing here and elsewhere in the past. (See, most notably, my compendium on, “Regulatory Capture: What the Experts Have Found.”) The first major contribution in the RegBlog series is from Sen. Elizabeth Warren (D-MA) and it is entitled, “Corporate Capture of the Rulemaking Process.”
Sen. Warren makes many interesting points about the dangers of regulatory capture, but the heart of her argument about how to deal with the problem can basically be summarized as ‘Let’s Build a Better Breed of Bureaucrat and Give Them More Money.’ In her own words, she says we should “limit opportunities for ‘cultural’ capture” of government officials and also “give agencies the money that they need to do their jobs.”
It may sound good in theory, but I’m always a bit perplexed by that argument because the implicit claims here are that:
(a) the regulatory officials of the past were somehow less noble-minded and more open to corruption than some hypothetical better breed of bureaucrat that is out there waiting to be found and put into office; and
(b) the regulatory agencies of the past were somehow starved for resources and lacked “the money that they need to do their jobs.”
Neither of these assumptions is true and yet those arguments seem to animate most of the reform proposals set forth by progressive politicians and scholars for how to deal with the problem of capture.
I think it’s wishful thinking at best and willful ignorance of history at worst. First, people–including regulators–were no different in the past than they are today. We are not magically going to find a more noble lot who will walk into office and be immune from these pressures. If anything, you could make the argument that the regulators of the early Progressive Era were less susceptible to this sort of influence because they were riding a wave of impassioned regulatory zeal that accompanied that period. I don’t buy it, but it’s a more believable tale than the opposite story.
Second, if you think that the problem of regulatory capture is solved by simply giving agencies more money, you’ve got it exactly backwards. Regulated interests go to where the power and money is. They find it and influence it. You can deny it all you want, but that’s what history shows us. So long as we are delegating broad administrative powers to administrative agencies and then sending them big bags of enforcement money at the same time, special interests will seek and find ways to influence that process.
Is that too grim of a statement on the modern administrative state? No, it’s simply a perspective informed by history; a history that has best been told, incidentally, by progressive scholars and critics! And yet they all too often don’t seem willing to learn the lessons of that history.
The cycle of influence doesn’t end just because you try to erect more firewalls to keep the special interests out. Where power exists, they will always find a way to flex their muscle. The only real question is whether you want this activity to happen over or under the table. The whole “get-all-the-money-out-of-politics” fiction is, well, just that–a fiction. It’s a fine-sounding fairy tale that we continue to repeat again and again, and yet nothing much ever changes. And yet a whole hell of a lot of smart people continue to believe in that fairy tale, if for no other reason than that they can’t possibly live with the idea that perhaps the only way to get this problem under control is to limit the underlying discretion and power of regulatory agencies to begin with.
On a better, more optimistic note, I want to highlight one argument Sen. Warren made in her essay with which I find myself in wholehearted agreement: We need more simple rules. As she correctly notes:
Complex rules take longer to finalize, are harder for the public to understand, and inevitably contain more special interest carve-outs that favor big business interests over small businesses and individuals. Complex rules are also more reliant on industry itself to provide additional detail and expertise—and that means more opportunities for capture. Simple works better.
Amen to all that! This is an issue I address in Chapter 6 of my recent book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. In subsection F, beginning on p. 140, I explain why policymakers should “Rely on ‘Simple Rules for a Complex World’ When Regulation Is Needed.” I build that section around the insights of Philip K. Howard and Richard Epstein. Howard, who is chair of Common Good and the author of The Rule of Nobody, notes:
Too much law . . . can have similar effects as too little law. People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error. Modern America is the land of too much law. Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp. It’s degenerative. Law is denser now than it was 10 years ago, and will be denser still in the next decade. This growing legal burden impedes economic growth.
That’s exactly why we need, to borrow the title of Richard Epstein’s 1995 book of the same name, “simple rules for a complex world.” As I argue in my book:
This is why flexible, bottom-up approaches to solving complex problems. . . are almost always superior to top-down laws and regulations. For example, we have already identified how social norms and pressure from the public, media, or activist groups can “regulate” behavior and curb potential abuses. And we have seen how education, awareness-building, transparency, and empowerment-based efforts can often help alleviate the problems associated with new forms of technological change.
But there are other useful approaches that can be tapped to address or alleviate concerns or harms associated with new innovations. To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micromanaged regulatory regimes. Ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. Prospective regulation based on hypothesizing about future harms that may never materialize is likely to come at the expense of innovation and growth opportunities. To the extent that any corrective action is needed to address harms, ex post measures, especially via the common law, are typically superior.
I itemized those “simple rules” and solutions in another recent piece (“What 20 Years of Internet Law Teaches Us about Innovation Policy”). They include both formal mechanisms (property and contract law, torts, class action activity, and other common law tools) and informal strategies (ongoing voluntary negotiations, multistakeholder agreements, industry self-regulatory best practices and codes of conduct, education and transparency efforts, and so on). We should exhaust those sorts of solutions first before turning to administrative regulation. And then we should subject such regulatory proposals to a strict benefit-cost analysis (BCA). As I note in my Permissionless Innovation book,
All new proposed regulatory enactments should be subjected to strict BCA and, if they are formally enacted, they should also be retroactively reviewed to gauge their cost-effectiveness. Better yet, the sunsetting guidelines recommended above should be applied to make sure outdated regulations are periodically removed from the books so that innovation is not discouraged.
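For readers who want to see what that sort of review involves in practice, here is a minimal sketch of the arithmetic at the core of any BCA: discount each year’s estimated costs and benefits back to the present and compare. The function name, the 7 percent discount rate, and the dollar figures below are purely hypothetical placeholders, not values drawn from any actual rulemaking.

```python
# Minimal benefit-cost analysis (BCA) sketch. The discount rate, time horizon,
# and dollar amounts below are hypothetical placeholders, not real estimates.

def net_present_value(annual_net_flows, discount_rate=0.07):
    """Discount a list of annual net flows (benefits minus costs) to present value."""
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in enumerate(annual_net_flows))

# Year 0: up-front compliance costs; years 1-10: estimated annual net benefits.
flows = [-50_000_000] + [8_000_000] * 10

npv = net_present_value(flows)
print(f"Net present value: ${npv:,.0f}")
# A rule with a negative NPV fails the test; the retrospective review described
# above would redo the calculation with observed, rather than projected, figures.
```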
If Sen. Warren is serious about crafting more sensible “simple” rules and working to end the problem of regulatory capture, this is a better approach than simply trying, yet again, to build a better breed of bureaucrat.
June 8, 2016
New Article at Harvard JLPP: The FCC’s Transaction Reviews May Violate the First Amendment
The FCC’s transaction reviews have received substantial scholarly criticism lately. The FCC has increasingly used its license transaction reviews as an opportunity to engage in ad hoc merger reviews that substitute for formal rulemaking. FCC transaction conditions since 2000 have ranged from requiring AOL-Time Warner to make future instant messaging services interoperable, to imposing price controls on broadband for low-income families, to mandating that merging parties donate $1 million to public safety initiatives.
In the last few months alone,
Randy May and Seth Cooper of the Free State Foundation wrote a piece arguing that the transaction reviews contravene rule of law norms.
T. Randolph Beard et al. at the Phoenix Center published a research paper about how the FCC’s informal bargaining during mergers has become much more active and politically motivated in recent years.
Derek Bambauer, law professor at the University of Arizona, published a law review article that criticized the use of informal agency actions to pressure companies to act in certain ways. These secretive pressures “cloak what is in reality state action in the guise of public choice.”
This week, in the Harvard Journal of Law and Public Policy, my colleague Christopher Koopman and I added to this recent scholarship on the FCC’s controversial transaction reviews.
We echo the argument that the FCC’s merger policies undermine the rule of law. Firms have no idea which policies they’ll need to comply with to receive transaction approval. We also note that the FCC is motivated to shift from formal regulation, which is time consuming and subject to judicial review, to “regulation by transaction,” which has fewer restraints on agency action. The FCC and the courts have put few meaningful limits on what can be coerced from merging firms. Many concessions from merging firms are policies that the FCC is simply unwilling to pursue via formal rulemaking or, sometimes, is outright prohibited by law from regulating. Since a firm’s concessions in this coercive process are nominally voluntary, firms typically can’t sue.
We point out, further, that the FCC has a potentially damaging legal issue on its hands. Since the agency is now extracting concessions related to content distribution and TV and radio programming, its transaction review authority may be presumptively unconstitutional and subject to facial First Amendment challenges. That means many parties can challenge the law, not simply the ones burdened by conditions (who fear FCC retaliation).
Content-neutral licensing laws, like the FCC’s transaction review authority, are presumptively unconstitutional when there’s a risk that public officials will intimidate speakers about content. We cite for this proposition the Supreme Court’s decision in City of Lakewood v. Plain Dealer Publishing Co., a 1988 case striking down as unconstitutional a city requirement that newspapers seek a public interest determination from public officials before installing newsracks. As the Court said, for rules with a “nexus to expression,”
a facial [First Amendment] challenge lies whenever a licensing law gives a government official or agency substantial power to discriminate based on the content or viewpoint of speech by suppressing disfavored speech or disliked speakers.
The public officials in City of Lakewood hadn’t even pressured newspapers about content; the mere potential for intimidation was a constitutional violation. If the agency’s authority were challenged, the FCC would be in worse shape than the public officials in City of Lakewood. Unlike those local officials, the FCC has used licensing to pressure firms to add certain types of programming. So the law certainly has the nexus to expression that the Supreme Court requires for a facial challenge.
We highlight, for instance, the many concessions related to content in the 2010 Comcast-NBCU merger. Comcast-NBCU agreed to create children’s, public interest, and Spanish-language TV and video-on-demand programming, to relinquish editorial control over Hulu programming, and to spend millions of dollars on digital literacy and FDA nutritional TV public service announcements. In that merger and many others, the FCC conditioned approval on compliance with open access and net neutrality policies. As I and others have pointed out, net neutrality rules also threaten free speech rights.
We conclude with some policy recommendations to avoid a constitutional problem for the FCC, including congressional repeal of the FCC’s transaction review authority. We point out that the FCC actually has Clayton Act authority to review common carrier mergers, but the FCC refuses to use it, likely because the agency views traditional competition analysis as too constraining. In our view, unless or until the FCC promulgates predictable guidelines about what is relevant in a transaction review and stays away from content distribution issues, the FCC’s transaction review authority is vulnerable to legal challenge.
May 9, 2016
FDA, Biohacking & the “Right to Try” for Families
In theory, the Food & Drug Administration (FDA) exists to save lives and improve health outcomes. All too often, however, that goal is hindered by the agency’s highly bureaucratic, top-down, command-and-control orientation toward drug and medical device approval.
Today’s case in point involves families of children with diabetes, many of whom are increasingly frustrated with the FDA’s foot-dragging when it comes to approval of medical devices that could help their kids. Writing today in The Wall Street Journal, Kate Linebaugh discusses how “Tech-Savvy Families Use Home-Built Diabetes Device” when FDA regulations limit the availability of commercial options. She documents how families of diabetic children are taking matters into their own hands and creating their own home-crafted insulin pumps, which can automatically dose the proper amount of the hormone in response to their child’s blood-sugar levels. Families are building, calibrating, and troubleshooting these devices on their own. And the movement is growing. Linebaugh reports that:
More than 50 people have soldered, tinkered and written software to make such devices for themselves or their children. The systems—known in the industry as artificial pancreases or closed loop systems—have been studied for decades, but improvements to sensor technology for real-time glucose monitoring have made them possible.
The Food and Drug Administration has made approving such devices a priority and several companies are working on them. But the yearslong process of commercial development and regulatory approval is longer than many patients want, and some are technologically savvy enough to do it on their own.
Linebaugh notes that this particular home-built medical project (known as OpenAPS) was created by Dana Lewis, a 27-year-old with Type 1 diabetes in Seattle. Linebaugh says that:
Ms. Lewis began using the system in December 2014 as a sort of self-experiment. After months of tweeting about it, she attracted others who wanted what she had. The only restriction of the project is users have to put the system together on their own. Ms. Lewis and other users offer advice, but it is each one’s responsibility to know how to troubleshoot. A Bay Area cardiologist is teaching himself software programming to build one for his 1-year-old daughter who was diagnosed in March.
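To make the “closed loop” idea concrete, here is a deliberately toy sketch of the feedback step such systems perform: read a glucose value, compare it to a target, and nudge insulin delivery accordingly. Every name, number, and the simple proportional rule below is a hypothetical illustration only; real OpenAPS-style controllers use far more sophisticated, safety-bounded logic (insulin on board, meal absorption, sensor error), and nothing here is dosing guidance.

```python
# Toy illustration of a closed-loop ("artificial pancreas") feedback step.
# All values and the proportional rule are hypothetical; this is not dosing logic.

def suggest_adjustment(glucose_mg_dl, target_mg_dl=110.0, sensitivity=50.0):
    """Return a hypothetical temporary change to insulin delivery (units/hour)."""
    error = glucose_mg_dl - target_mg_dl
    # Proportional response: deliver a bit more insulin above target, less below.
    # Real controllers also track insulin already on board, meals, sensor noise,
    # and enforce hard safety limits before changing anything.
    return error / sensitivity

for reading in (90, 110, 180):
    print(reading, "mg/dL ->", round(suggest_adjustment(reading), 2), "U/hr adjustment")
```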
In essence, these individuals and families are engaging in a variant of the sort of decentralized “biohacking” that is becoming increasingly prevalent in society today. As I discussed in a recent law review article, biohacking refers to the efforts of average citizens (often working together in a decentralized fashion) to enhance various human capabilities. This can include implanting things inside one’s body or using external devices to supplement one’s abilities or to address health-related issues.
I documented other examples of this trend in my essays on average citizens making 3D-printed prosthetics (“The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA”) as well as retainers (“In a World Where Kids Can 3D-Print Their Own Retainers, What Should Regulators Do?”). As “software eats the world” and allows for this sort of democratized medical self-experimentation, more and more citizens are likely going to be engaging in biohacking. In the process, they will often be doing an end run around the FDA and its complex maze of regulatory restrictions on health innovation.
Stated more provocatively, thanks to new technological capabilities and networking platforms, the public may increasingly enjoy a de facto “right to try” for many new medical devices and treatments. Technological innovation will decentralize and democratize medical decisions even when the legal status of such actions is unclear or even flatly illegal.
But is a world of increasingly decentralized, democratized, and highly personalized medicine actually safe? Well, all risk is relative and, as I discussed extensively in my recent book and other work on innovation policy, sometimes the greatest risk of all is the refusal to take any risk to begin with. If you disallow or limit certain risky endeavors, you could ultimately end up doing more harm, because there can be no reward without a corresponding amount of risk-taking. It is only through constant trial and error experimentation that we find new and better ways of doing things. That is particularly true as it pertains to life-enriching or even life-saving medical treatments. While the FDA likes to think that its hyper-cautious approach to medical drug and device approval ultimately saves lives, in the aggregate, we have no idea how many lives are actually being lost (or how much pain and suffering is occurring) due to FDA prohibitions on our freedom to experiment with new products and services.
One of the parents Linebaugh interviewed for her story made the following remark: “Diabetes is dangerous anyway. Insulin is dangerous. I think what we are doing is actually improving that and lowering the risk.” That is exactly right. This father understands the reality of risk trade-offs. There are certainly risks associated with what these families are doing for their children. But these families also have a very palpable sense of the opposite problem: There is a profound and immediate risk of doing nothing and waiting for the FDA to finally get around to approving the devices that their children need right now.
All this raises another interesting policy question: Why is it legal for these parents to engage in this sort of medical self-experimentation–experimentation on their children, no less!–while it remains flatly illegal for any commercial operator to offer similar products that could help these families? Many modern regulatory regimes accord differential treatment to commercial activities. Non-commercial versions of some activities are left alone, but as soon as commercial opportunities arise, policymakers seek to apply regulation.
Does this sort of commercial vs. non-commercial regulatory asymmetry make any sense? As far as I can tell, this regulatory distinction is mostly rooted in the fact that deep-pocketed commercial operators make easier targets for regulators to go after than average citizens. Going after average citizens would be bad PR and a serious legal hassle as well, because issues pertaining to personal autonomy or parental rights would likely be raised both in the court of public opinion and in courts of law.
Regardless, let’s not kid ourselves into thinking that this regulatory distinction is rooted in safety considerations. After all, commercial medical innovators are almost certainly building safer products, made by medical professionals with years of experience. Moreover, commercial operators are more likely to carry insurance to address any problems that may develop, and they possess strong reputational incentives to be good market actors. Commercial operators have to maintain brand loyalty to earn new or repeat business, or perhaps just to avoid stiff legal liability that non-commercial operators might not face.
In any event, one thing should be abundantly clear: If the FDA doesn’t change its ways, we can expect an increasing number of citizens to begin pursuing medical treatments outside the boundaries of the law (and potentially outside the realm of common sense). Many people want a right to try new devices and therapies, and in our modern networked world, they are increasingly going to get it whether regulators like it or not.
Lawmakers in Congress need to exercise better oversight of rogue agencies like the FDA, which face no serious penalties for the sort of endless regulatory foot-dragging that threatens public welfare. If the agency was required by Congress to improve its drug and device approval process, then perhaps fewer Americans would be forced to take matters into their own hands to begin with. Down below, I’ve included a few reports suggesting how we might get this much-needed reform process started.
_________________________
Additional reading from Mercatus Center scholars:
white paper: “US Medical Devices: Choices and Consequences,” by Richard Williams, Robert Graboyes & Adam Thierer
white paper: “The Proper Role of the FDA for the 21st Century,” by Joseph V. Gulfo, Jason Briggeman, Ethan Roberts
white paper: “How Productive Is the FDA’s Devices Program?” by Richard Williams, Jason Briggeman, Ethan Roberts
special report: “From Fortress to Frontier: How Innovation Can Save Health Care,” by Robert Graboyes
blog post: “The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA,” by Adam Thierer
blog post: “In a World Where Kids Can 3D-Print Their Own Retainers, What Should Regulators Do?” by Adam Thierer
April 20, 2016
Wendell Wallach on the Challenge of Engineering Better Technology Ethics
On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.
Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.
Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.
Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.
Many Questions, Few Clear Answers
Wallach does a particularly good job framing the major questions about emerging technologies and their effect on society. “Navigating the future of technological possibilities is a hazardous venture,” he observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.” (p. 7) Wallach then embarks on a 260+ page inquiry that bombards the reader with an astonishing litany of questions about the wisdom of various forms of technological innovation—both large and small. While I wasn’t about to start an exact count, I would say that the number of questions Wallach poses in the book runs well into the hundreds. In fact, many paragraphs of the book are nothing but an endless string of questions.
Thus, if there is a primary weakness with A Dangerous Master, it’s that Wallach spends so much time formulating such a long list of smart and nuanced questions that some readers may come away disappointed when they do not find equally satisfying answers. On the other hand, the lack of clear answers is also completely understandable because, as Wallach notes, there really are no simple answers to most of these questions.
Just Slow Down!
Moving on to substance, let me make clear where Wallach and I generally see eye-to-eye and where we part ways.
Generally speaking, we agree about the need to come up with better “soft governance” systems for emerging technologies, which might include multistakeholder processes, developer codes of conduct, sectoral self-regulation, sensible liability rules, and so on. (More on those strategies in a moment.)
But while we both believe it is wise to consider how we might “bake in” better ethics and norms into the process of technological development, Wallach seems much more inclined than I am to expect that we can pre-ordain (or potentially require?) all of this before much of the experimentation and innovation actually moves forward. Wallach opens by asking:
Determining when to bow to the judgment of experts and whether to intervene in the deployment of a new technology is certainly not easy. How can government leaders or informed citizens effectively discern which fields of research are truly promising and which pose serious risks? Do we have the intelligence and means to mitigate the serious risks that can be anticipated? How should we prepare for unanticipated risks? (p. 6)
Again, many good questions here! But this really gets to the primary difference between Wallach’s preferred approach and my own: I tend to believe that many of these things can only be worked out through ongoing trial and error, the constant reformulation of the various norms that govern the process of innovation, and the development of sensible ex post solutions to some of the most difficult problems posed by turbulent technological change.
By contrast, Wallach’s general attitude toward technological evolution is probably best summarized by the phrases “Slow down!” and “Let’s have a conversation about it first!” As he puts it in his own words: “Slowing down the accelerating adoption of technology should be done as a responsible means to ensure basic human safety and to support broadly shared values.” (p. 13)
But I tend to believe that it’s not always possible to preemptively determine which innovations to slow down, or even how to determine what those “shared values” are that will help us make this determination. More importantly, I worry that there are very serious potential risks and unintended consequences associated with slowing down many forms of technological innovation, which could improve human welfare in important ways. There can be no prosperity, after all, without a certain degree of risk-taking and disruption.
Getting Out Ahead of the Pacing Problem
It’s not that Wallach is completely hostile to new forms of technological innovation or blind to the many ways those innovations might improve our lives. To the contrary, he does a nice job throughout the book highlighting the many benefits associated with various new technologies, or he is at least willing to acknowledge that there can be many downsides associated with efforts aimed at limiting research and experimentation with new technological capabilities.
Yet, what concerns Wallach most is the much-discussed issue from the field of the philosophy of technology, the so-called “pacing problem.” Wallach concisely defines the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” (p. 251) “There has always been a pacing problem,” he notes, but he is concerned that technological innovation—especially highly disruptive and potentially uncontrollable forms of innovation—is now accelerating at an absolutely unprecedented pace.
(Just as an aside for all the philosophy nerds out there… Such a rigid belief in the “pacing problem” represents a techno-deterministic viewpoint that is, ironically, sometimes shared by technological skeptics like Wallach as well as technological optimists like Larry Downes and even many in the middle of this debate, like Vivek Wadhwa. See, for example, The Laws of Disruption by Downes and “Laws and Ethics Can’t Keep Pace with Technology” by Wadhwa. Although these scholars approach technology ethics and politics quite differently, they all seem to believe that the pace of modern technological change is so relentless as to almost be an unstoppable force of nature. I guess the moral of the story is that, to some extent, we’re all technological determinists now!)
Despite his repeated assertions that modern technologies are accelerating at such a potentially uncontrollable pace, Wallach nonetheless hopes we can achieve some semblance of control over emerging technologies before they reach a critical “inflection point.” In the study of history and science, an inflection point generally represents a moment when a situation and trend suddenly changes in a significant way and things begin moving rapidly in a new direction. These inflection points can sometimes develop quite abruptly, ushering in major changes by creating new social, economic, or political paradigms. As it relates to technology in particular, inflection points can refer to the moment when a particular technology achieves critical mass in terms of adoption or, more generally, to the time when that technology begins to profoundly transform the way individuals and institutions act.
Another related concept that Wallach discusses is the so-called “Collingridge dilemma,” which refers to the notion that it is difficult to put the genie back in the bottle once a given technology has reached a critical mass of public adoption or acceptance. The concept is named after David Collingridge, who wrote about this in his 1980 book, The Social Control of Technology. “The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult.”
On “Having a Discussion” & Coming Up with “a Broad Plan”
These related concepts of inflection points and the Collingridge dilemma constitute the operational baseline of Wallach’s worldview. “In weighing speedy development against long-term risks, speedy development wins,” he worries. “This is particularly true when the risks are uncertain and the perceived benefits great.” (p. 85)
Consequently, throughout his book, Wallach pleads with us to take what I will call Technological Time Outs. He says we need to pause at times so that we can have “a full public discussion” (p. 13) and make sure there is a “broad plan in place to manage our deployment of new technologies” (p. 19) so that innovation happens only at “a humanly manageable pace” (p. 261) and “to fortify the safety of people affected by unpredictable disruptions.” (p. 262) Wallach’s call for Technological Time Outs is rooted in his belief that “the accelerating pace [of modern technological innovation] undermines the quality of each of our lives.” (p. 263)
That is Wallach’s weakest assertion in the book, and he doesn’t really offer much evidence to prove that the velocity of modern technological change is hurting us rather than helping us, as many of us believe. Rather, he treats it as a widely accepted truism that necessitates some sort of collective effort to slow things down if the proverbial genie is about to exit the bottle, or to make sure those genies don’t get out of their bottles without a lot of preemptive planning regarding how they are to be released into the world. In the following passage on p. 72, Wallach very succinctly summarizes the approach he recommends throughout A Dangerous Master:
this book will champion the need for more upstream governance: more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong. Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks? (p. 72)
Those who have read my Permissionless Innovation book will recall that I open by framing innovation policy debates in almost exactly the same way as Wallach suggests in that last line above. I argue in the first lines of my book that:
The central fault line in innovation policy debates today can be thought of as ‘the permission question.’ The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions and risk-taking, more generally. Two conflicting attitudes are evident.
One disposition is known as the ‘precautionary principle.’ Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.
The other vision can be labeled ‘permissionless innovation.’ It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.
So, by contrasting these passages, you can see what I am setting up here is a clash of visions between what appears to be Wallach’s precautionary principle-based approach versus my own permissionless innovation-focused worldview.
How Much Formal Precaution?
But that would be a tad too simplistic because, just a few paragraphs after making the statement quoted above about “upstream management” being superior to ex post solutions formulated “after a technology is deeply entrenched,” Wallach begins slowly backing away from an overly rigid approach to precautionary principle-based governance of technological processes and systems.
He admits, for example, that “precautionary measures in the form of regulations and governmental oversight can slow the development of research whose overall societal impact will be beneficial,” (p. 26) and that they can “be costly” and “slow innovation.” For countries, Wallach admits, this can have real consequences because “Countries with more stringent precautionary policies are at a competitive disadvantage to being the first to introduce a new tool or process.” (p. 74)
So, he’s willing to admit that what we might call a hard precautionary principle usually won’t be sensible or effective in practice, but he is far more open to soft precaution. But this is where real problems begin to develop with Wallach’s approach, and it presents us with a chance to turn the tables on him a bit and begin posing some serious questions about his vision for governing technology.
Much of what follows below are my miscellaneous ramblings about the current state of the intellectual dialogue about tech ethics and technological control efforts. I have discussed these issues at greater length in my new book as well as in a series of essays here in past years, most notably: “On the Line between Technology Ethics vs. Technology Policy”; “What Does It Mean to ‘Have a Conversation’ about a New Technology?”; and “Making Sure the ‘Trolley Problem’ Doesn’t Derail Life-Saving Innovation.”
As I’ve argued in those and other essays, my biggest problem with modern technological criticism is that specifics are in scandalously short supply in this field! Indeed, I often find the lack of details in this arena to be utterly exasperating. Most modern technological criticism follows a simple formula:
TECHNOLOGY –>> POTENTIAL PROBLEMS –>> DO SOMETHING!
But almost all of the details come in the discussion about the nature of the technology in question and the many apparent problems associated with it. Far, far less thought goes into the “DO SOMETHING!” part of the critics’ work. One reason for that is probably self-evident: There are no easy solutions. Wallach admits as much at many junctures throughout the book. But that doesn’t excuse critics from giving us a more concrete blueprint for identifying and then potentially rectifying the supposed problems.
Of course, the other reason that many critics are short on specifics is that when they quip that we need to “have a conversation” about a new disruptive technology, what they really mean is that we need to have a conversation about stopping that technology.
Where Shall We Draw the Line between Hard and Soft Law?
But this is what I found most peculiar about Wallach’s book: He never really gives us a good standard by which to determine when we should look to hard governance (traditional top-down regulation) versus soft governance (more informal, bottom-up and non-regulatory approaches).
On one hand, he very much wants society to exercise great restraint and precaution when it comes to many of the technologies he and others worry about today. Again, he’s particularly concerned about the potential runaway development and use of drones, genetic editing, nanotech, robotics, and artificial intelligence. For at least one class of robotics—autonomous military robots—Wallach does call for immediate policy action in the form of an Executive Order to ban “killer” autonomous systems. (Incidentally, there’s also a major effort underway called the “Campaign to Stop Killer Robots” that aims to make such a ban part of international law through a multinational treaty.)
But Wallach also acknowledges the many trade-offs associated with efforts to preemptively impose controls on robotics and other technologies. Perhaps for that reason, Wallach doesn’t develop a clear test for when the Precautionary Principle should be applied to new forms of innovation.
Clearly there are times when it is appropriate, although I believe it is only in an extremely narrow subset of cases. In the 2nd edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., highly restrictive policy defaults such as operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. I do not want to interrupt the flow of this review of Wallach’s book too much, so I have decided to just cut-and-paste that portion of Chapter 3 of my book (“When Does Precaution Make Sense?”) down below as an appendix to this essay.
The key takeaway of that passage from my book is that all of us who study innovation policy and the philosophy of technology—Wallach, myself, the whole darn movement—have done a remarkably poor job being specific about precisely when formal policy precaution is warranted. What is the test? All too often, we get lazy and apply what we might call an “I-Know-It-When-I-See-It” standard. Consider the possession of bazookas, tanks, and uranium. Almost all of us would agree that citizens should not be allowed to possess or use such things. Why? Well, it seems obvious, right? They just shouldn’t! But what is the exact standard we use to make that determination?
In coming years, I plan on spending a lot more time articulating a better test by which Precautionary Principle-based policies could be reasonably applied. Those who know me may be taken aback by what I just said. After all, I’ve spent many years explaining why Precautionary Principle-based thinking threatens human prosperity and should be rejected in the vast majority of cases. But that doesn’t excuse the lack of a serious and detailed exploration of the exact standard by which we determine when we should impose some limits on technological innovation.
Generally speaking, while I strongly believe that “permissionless innovation” should remain the policy default for most technologies, there certainly exist some scenarios where the threat of harm associated with a new innovation might be highly probable, tangible, immediate, irreversible, and catastrophic in nature. If so, that could qualify it for at least a light version of the Precautionary Principle. In a future paper or book chapter I’m just now starting to research, I hope to more fully develop those qualifiers and formulate a more robust test around them.
I would have very much liked to see Wallach articulate and defend a test of his own for when formal precaution would make sense. And, by extension, to explain when we should default to soft precaution, or to soft law and informal governance mechanisms, for emerging technologies.
We turn to that issue next.
Toward Soft Governance & the Engineering of Better Technological Ethics
Even though Wallach doesn’t provide us with a test for determining when precaution makes sense or when we should instead default to soft governance, he does a much better job explaining the various models of soft law or informal governance that might help us deal with the potential negative ramifications of highly disruptive forms of technological change.
What Wallach proposes, in essence, is that we bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms. “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome,” he claims. “The upstream embedding of shared values during the design process can ease the need for major course adjustments when it’s often too late.” (p. 261)
Wallach’s favored instrument of soft governance is what he refers to as “Governance Coordinating Committees” (GCCs). These Committees would coordinate “the separate initiatives by the various government agencies, advocacy groups, and representatives of industry” who would serve as “issue managers for the comprehensive oversight of each field of research.” (p. 250) He elaborates and details the function of GCCs as follows:
These committees, led by accomplished elders who have already achieved wide respect, are meant to work together with all the interested stakeholders to monitor technological development and formulate solutions to perceived problems. Rather than overlap with or function as a regulatory body, the committee would work together with existing institutions. (p. 250-51)
Wallach discussed the GCC idea in much greater detail in a 2013 book chapter he penned with Gary E. Marchant for a collected volume of essays on Innovative Governance Models for Emerging Technologies. (I highly recommend you pick up that book if you can afford it! Many terrific essays in that book on these issues.) In their chapter, Marchant and Wallach specify some of the soft law mechanisms we might use to instill a bit of precaution preemptively. These mechanisms include: “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certification programs and private industry initiatives.”
If done properly, GCCs could provide exactly the sort of wise counsel and smart recommendations that Wallach desires. In my book and many law review articles on various disruptive technologies, I have endorsed many of the ideas and strategies Wallach identifies. I’ve also stressed the importance of many other mechanisms, such as education and empowerment-based strategies that could help the public learn to cope with new innovations or use them appropriately. In addition, I’ve highlighted the many flexible, adaptive ex post remedies that can help when things go wrong. Those mechanisms include common law remedies such as product defects law, various torts, contract law, property law, and even class action lawsuits. Finally, I have written extensively about the very active role played by the Federal Trade Commission (FTC) and other consumer protection agencies, which have broad discretion to police “unfair and deceptive practices” by innovators.
Moreover, we already have a quasi-GCC model developing today with the so-called “multistakeholder governance” model that is often used in both informal and formal ways to handle many emerging technology policy issues. The Department of Commerce (the National Telecommunications and Information Administration in particular) and the FTC have already developed many industry codes of conduct and best practices for technologies such as biometrics, big data, the Internet of Things, online advertising, and much more. Those agencies and others (such as the FDA and FAA) are continuing to investigate other codes or guidelines for things like advanced medical devices and drones, respectively. Meanwhile, I’ve heard other policymakers and academics float the idea of “digital ombudsmen,” “data ethicists,” and “private IRBs” (institutional review boards) as other potential soft law solutions that technology companies might consider. Perhaps going forward, many tech firms will have Chief Ethical Officers just as many of them today have Chief Privacy Officers or Chief Security Officers.
In other words, there’s already a lot of “soft law” activity going on in this space. And I haven’t even begun to inventory the many other bodies and groups in each sector that have already set forth their own industry self-regulatory codes; they exist in almost every field that Wallach worries about.
So, I’m not sure how much his GCC idea will add to this existing mix, but I would not be opposed to them playing the sort of coordinating “issue manager” role he describes. But I still have many questions about GCCs, including:
How many of them are needed, and how will we know which one is the definitive GCC for each sector or technology?
If they are overly formal in character and dominated by the most vociferous opponents of any particular technology, a real danger exists that a GCC could end up granting a small cabal a “heckler’s veto” over particular forms of innovation.
Alternatively, the possibility of “regulatory capture” could be a problem for some GCCs if incumbent companies come to dominate their membership.
Even if everything went fairly smoothly and the GCCs produced balanced reports and recommendations, future developers might wonder if and why they are to be bound by older guidelines.
And if those future developers choose not to play by the same set of guidelines, what’s the penalty for non-compliance?
And how are such guidelines enforced in a world where what I’ve called “global innovation arbitrage” is an increasing reality?
Challenging Questions for Both Hard and Soft Law
To summarize, whether we are speaking of “hard” or “soft” law approaches to technological governance, I am just not nearly as optimistic as Wallach seems to be that we will be able to find consensus on these three things:
(1) what constitutes “harm” in many of these circumstances;
(2) which “shared values” should prevail when “society” debates the shaping of ethics or guiding norms for emerging technologies but has highly contradictory opinions about those values (consider online privacy as a good example, where many people enjoy hyper-sharing while others demand hyper-privacy); and,
(3) that we can create a legitimate “governing body” (or bodies) that will be responsible for formulating these guidelines in a fair way, without completely derailing the benefits of innovation in new fields, while also remaining relevant for very long.
Nonetheless, as he and others have suggested, the benefit of adopting a soft law/informal governance approach to these issues is that it at least seeks to address these questions in a more flexible and adaptive fashion. As I noted in my book, traditional regulatory systems “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)
So, despite the questions I have raised here, I welcome the more flexible soft law approach that Wallach sets forth in his book. I think it represents a far more constructive way forward when compared to the opposite “top-down” or “command-and-control” regulatory systems of the past. But I very much want to make sure that even these new and more flexible soft law approaches leave plenty of breathing room for ongoing trial-and-error experimentation with new technologies and systems.
Conclusion
In closing, I want to reiterate that not only did I appreciate the excellent questions raised by Wendell Wallach in A Dangerous Master, but I take them very seriously. When I sat down to revise and expand my Permissionless Innovation book last year, I decided to include this warning from Wallach in my revised preface: “The promoters of new technologies need to speak directly to the disquiet over the trajectory of emerging fields of research. They should not ignore, avoid, or superficially dampen criticism to protect scientific research.” (p. 28–9)
As I noted, in response to Wallach: “I take this charge seriously, as should others who herald the benefits of permissionless innovation as the optimal default for technology policy. We must be willing to take on the hard questions raised by critics and then also offer constructive strategies for dealing with a world of turbulent technological change.”
Serious questions deserve serious answers. Of course, sometimes those posing those questions fail to provide many answers of their own! Perhaps it is because they believe the questions answer themselves. Other times, it’s because they are willing to admit that easy answers to these questions typically prove quite elusive. In Wallach’s case, I believe it’s more the latter.
To wrap up, I’ll just reiterate that both Wallach and I share a common desire to find solutions to the hard questions about technological innovation. But the crucial question that probably separates his worldview and my own is this: Whether we are talking about hard or soft governance, how much faith should we place in preemptive planning vs. ongoing trial-and-error experimentation to solve technological challenges? Wallach is more inclined to believe we can divine these things with the sagacious foresight of “accomplished elders” and technocratic “issue managers,” who will help us slow things down until we figure out how to properly ease a new technology into society (if at all). But I believe that the only way we will find many of the answers we are searching for is by allowing still more experimentation with the very technologies whose development he and others seek to control. We humans are outstanding problem-solvers and have an uncanny ability, among all mammals, to adapt to changing circumstances. We roll with the punches, learn from them, and become more resilient in the process. As I noted in my 2014 essay, “Muddling Through: How We Learn to Cope with Technological Change”:
we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. [. . .] Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies.
Will the technologies that Wallach fears bring about a “techstorm” that overwhelms our culture, our economy, and even our very humanity? It’s certainly possible, and we should continue to seriously discuss the issues that he and other skeptics raise about our expanding technological capabilities and the potential for many of them to do great harm. Because some of them truly could.
But it is equally plausible—in fact, some of us would say, highly probable—that instead of overwhelming us, we learn how to bend these new technological capabilities to our will and make them work for our collective benefit. Instead of technology becoming “a dangerous master,” we will instead make it our helpful servant, just as we have so many times before.
APPENDIX: When Does Precaution Make Sense?
[excerpt from chapter 3 of Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Footnotes omitted. See book for all references.]
But aren’t there times when a certain degree of precautionary policymaking makes good sense? Indeed, there are, and it is important to not dismiss every argument in favor of precautionary principle–based policymaking, even though it should not be the default policy rule in debates over technological innovation.
The challenge of determining when precautionary policies make sense comes down to weighing the (often limited) evidence about any given technology and its impact and then deciding whether the potential downsides of unrestricted use are so potentially catastrophic that trial-and-error experimentation simply cannot be allowed to continue. There certainly are some circumstances when such a precautionary rule might make sense. Governments restrict the possession of uranium and bazookas, to name just two obvious examples.
Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied. In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.
Precaution might make sense when harm is … | Precaution generally doesn’t make sense for asserted harms that are …
Highly probable | Highly improbable
Tangible (physical) | Intangible (psychic)
Immediate | Distant / unclear timeline
Irreversible | Reversible / changeable
Catastrophic | Mundane / trivial
But most cases don’t fall into this category. Instead, we generally allow innovators and consumers to freely experiment with technologies, and even engage in risky behaviors, unless a compelling case can be made that precautionary regulation is absolutely necessary. How is the determination made regarding when precaution makes sense? This is where the role of benefit-cost analysis (BCA) and regulatory impact analysis is essential to getting policy right. BCA represents an effort to formally identify the tradeoffs associated with regulatory proposals and, to the maximum extent feasible, quantify those benefits and costs. BCA generally cautions against preemptive, precautionary regulation unless all other options have been exhausted—thus allowing trial-and-error experimentation and “learning by doing” to continue. (The mechanics of BCA are discussed in more detail in section VII.)
This is not the end of the evaluation, however. Policymakers also need to consider the complexities associated with traditional regulatory remedies in a world where technological control is increasingly challenging and quite costly. It is not feasible to throw unlimited resources at every problem, because society’s resources are finite. We must balance risk probabilities and carefully weigh the likelihood that any given intervention has a chance of creating positive change in a cost-effective fashion. And it is also essential to take into account the potential unintended consequences and long-term costs of any given solution because, as Harvard law professor Cass Sunstein notes, “it makes no sense to take steps to avert catastrophe if those very steps would create catastrophic risks of their own.” “The precautionary principle rests upon an illusion that actions have no consequences beyond their intended ends,” observes Frank B. Cross of the University of Texas. But “there is no such thing as a risk-free lunch. Efforts to eliminate any given risk will create some new risks,” he says.
Oftentimes, after working through all these considerations about whether to regulate new technologies or technological processes, the best solution will be to do nothing because, as noted throughout this book, we should never underestimate the amazing ingenuity and resiliency of humans to find creative solutions to the problems posed by technological change. (Section V discusses the importance of individual and social adaptation and resiliency in greater detail.) Other times we might find that, while some solutions are needed to address the potential risks associated with new technologies, nonregulatory alternatives are also available and should be given a chance before top-down precautionary regulations are imposed. (Section VII considers those alternative solutions in more detail.)
Finally, it is again essential to reiterate that we are talking here about the dangers of precautionary thinking as a public policy prerogative—that is, precautionary regulations that are mandated and enforced by government officials. By contrast, precautionary steps may be far more wise when undertaken in a more decentralized manner by individuals, families, businesses, groups, and other organizations. In other words, as I have noted elsewhere in much longer articles on the topic, “there is a different choice architecture at work when risk is managed in a localized manner as opposed to a society-wide fashion,” and risk-mitigation strategies that might make a great deal of sense for individuals, households, or organizations, might not be nearly as effective if imposed on the entire population as a legal or regulatory directive.
At times, more morally significant issues may exist that demand an even more exhaustive exploration of the impact of technological change on humanity. Perhaps the most notable examples arise in the field of advanced medical treatments and biotechnology. Genetic experimentation and human cloning, for example, raise profound questions about altering human nature or abilities as well as the relationship between generations.
The case for policy prudence in these matters is easier to make because we are quite literally talking about the future of what it means to be human. Controversies have raged for decades over the question of when life begins and how it should end. But these debates will be greatly magnified and extended in coming years to include equally thorny philosophical questions. Should parents be allowed to use advanced genetic technologies to select the specific attributes they desire in their children? Or should parents at least be able to take advantage of genetic screening and genome modification technologies that ensure their children won’t suffer from specific diseases or ailments once born?
Outside the realm of technologically enhanced procreation, profound questions are already being raised about the sort of technological enhancements adults might make to their own bodies. How much of the human body can be replaced with robotic or bionic technologies before we cease to be human and become cyborgs? As another example, “biohacking”—efforts by average citizens working together to enhance various human capabilities, typically by experimenting on their own bodies—could become more prevalent in coming years. Collaborative forums, such as Biohack.Me, already exist where individuals can share information and collaborate on various projects of this sort. Advocates of such amateur biohacking sometimes refer to themselves as “grinders,” which Ben Popper of The Verge defines as “homebrew biohackers [who are] obsessed with the idea of human enhancement [and] who are looking for new ways to put machines into their bodies.”
These technologies and capabilities will raise thorny ethical and legal issues as they advance. Ethically, they will raise questions of what it means to be human and the limits of what people should be allowed to do to their own bodies. In the field of law, they will challenge existing health and safety regulations imposed by the FDA and other government bodies.
Again, most innovation policy debates—including most of the technologies discussed throughout this book—do not involve such morally weighty questions. In the abstract, of course, philosophers might argue that every debate about technological innovation has an impact on the future of humanity and “what it means to be human.” But few have much of a direct influence on that question, and even fewer involve the sort of potentially immediate, irreversible, or catastrophic outcomes that should concern policymakers.
In most cases, therefore, we should let trial-and-error experimentation continue because “experimentation is part and parcel of innovation” and the key to social learning and economic prosperity. If we froze all forms of technological innovation in place while we sorted through every possible outcome, no progress would ever occur. “Experimentation matters,” notes Harvard Business School professor Stefan H. Thomke, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”
Of course, ongoing experimentation with new technologies always entails certain risks and potential downsides, but the central argument of this book is that (a) the upsides of technological innovation almost always outweigh those downsides and that (b) humans have proven remarkably resilient in the face of uncertain, ever-changing futures.
In sum, when it comes to managing or coping with the risks associated with technological change, flexibility and patience are essential. One size most certainly does not fit all. And one-size-fits-all approaches to regulating technological risk are particularly misguided when the benefits associated with technological change are so profound. Indeed, “[t]echnology is widely considered the main source of economic progress”; therefore, nothing could be more important for raising long-term living standards than creating a policy environment conducive to ongoing technological change and the freedom to innovate.
April 19, 2016
Permissionless Innovation: Book, Video, Slides, Podcast, Paper & More!
I am pleased to announce the release of the second edition of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. As with the first edition, the book represents a short manifesto that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. The book attempts to accomplish two major goals.
First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.
One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.
The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.
I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.
The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.
I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head-off these concerns, the consequences will be profoundly deleterious.
My central thesis is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.
Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:
education and empowerment efforts (including media literacy and digital citizenship efforts);
social pressure from activists, academics, the press, and the public more generally;
voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and,
increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.
Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they often tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.
To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to play a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has already brought and settled many cases involving its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.
In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.
If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available.
The Mercatus Center also recently hosted a book launch party for the release of the 2nd edition. The event was very well-attended and many of those present asked me to forward along specific slides or the entire deck. So, for those who asked, or others who may be interested in seeing the slides, here ya go!
And here’s the video from the event, which also incorporates these slides:
Also, back in September 2015, Sonal Chokshi was kind enough to invite me on the a16z podcast and we discussed, “Making the Case for Permissionless Innovation.” You can listen to that conversation here:
Finally, I put together a paper summarizing the major policy recommendations contained in the book. It’s entitled, “Permissionless Innovation and Public Policy: A 10-Point Blueprint.” And then, along with Michael Wilt, I published a condensed version of the paper as an essay over at Medium.
Materials mentioned in this post related to Permissionless Innovation project:
BOOK: Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom
SLIDES: ‘Permissionless Innovation’ & the Clash of Visions over Emerging Technologies
VIDEO: Permissionless Innovation & the Clash of Visions over Emerging Technologies
PAPER: “Permissionless Innovation and Public Policy: A 10-Point Blueprint.”
SUMMARY ESSAY: “Permissionless Innovation: A 10-Part Policy Checklist”
WEBSITE: PermissionlessInnovation.org
Related Essays:
Embracing a Culture of Permissionless Innovation
How Attitudes about Risk & Failure Affect Innovation on Either Side of the Atlantic
Tech Policy Threat Matrix [see image down below]
The Innovator’s Defense Fund: What it is and why it’s needed
Thinking about Innovation Policy Debates: 4 Related Paradigms
Muddling Through: How We Learn to Cope with Technological Change
A Section 230 for the “Makers” Movement
FTC’s Ohlhausen on Innovation, Prosperity, “Rational Optimism” & Wise Tech Policy
A Nonpartisan Policy Vision for the Internet of Things
What Cory Booker Gets about Innovation Policy
CFTC’s Giancarlo on Permissionless Innovation for the Blockchain
Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission
Global Innovation Arbitrage: Genetic Testing Edition
UK Competition & Markets Authority on Online Platform Regulation
What Should the FTC Do about State & Local Barriers to Sharing Economy Innovation?
Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation
On the Line between Technology Ethics vs. Technology Policy
What Does It Mean to “Have a Conversation” about a New Technology?
Defining “Technology”
Don Boudreaux on What Fueled the “Orgy of Innovation”
Journal articles and book chapters:
“US Medical Devices: Choices and Consequences,” (with Richard Williams and Robert Graboyes), October 21, 2015.
“How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem,’” (with Christopher Koopman, Anne Hobson, and Chris Kuiper), May 26, 2015.
“The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change,” (with Christopher Koopman and Matthew Mitchell), The Journal of Business, Entrepreneurship & the Law 8, no. 2 (2015).
“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” Richmond Journal of Law and Technology 21, no. 6 (2015).
“Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” (with Ryan Hagemann), Wake Forest Journal of Law & Policy 5, no. 2 (2015): 339–91.
“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14 (2013): 309–86.
“Privacy Law’s Precautionary Principle Problem,” Maine Law Review 66, no. 2 (2014).
“The Pursuit of Privacy in a World Where Information Control Is Failing,” Harvard Journal of Law & Public Policy 36 (2013): 409–55.
“A Framework for Benefit-Cost Analysis in Digital Privacy Debates,” George Mason University Law Review 20, no. 4 (Summer 2013): 1055–105.
“The Case for Internet Optimism, Part 1: Saving the Net from Its Detractors,” in The Next Digital Decade: Essays on the Future of the Internet, ed. Berin Szoka and Adam Marcus (Washington, DC: Tech Freedom, 2010), 57–87.
April 15, 2016
Cable set top boxes are a distraction. The FCC is regulating apps.
For decades Congress has gradually deregulated communications and media. This poses a significant threat to the FCC’s jurisdiction because it is the primary regulator of communications and media. The current FCC, exhibiting alarming mission creep, has started importing its legacy regulations to the online world, like Title II common carrier regulations for Internet providers. The FCC’s recent proposal to “open up” TV set top boxes is consistent with the FCC’s reinvention as the US Internet regulator, and now the White House has supported that push.
There are a lot of issues with the set top box proposal but I’ll highlight a few. I really don’t even like referring to it as “the set top box proposal” because the proposal is really aimed at the future of TV–video viewing via apps and connected devices. STBs are a sideshow and mostly just provide the FCC a statutory hook to regulate TV apps. Even that “hook” is dubious–the FCC arbitrarily classifies apps and software as “navigation devices” but concludes that actual TV devices like Chromecast, Roku, smartphones, and tablets aren’t navigation devices. And, despite what activists say, this isn’t about “cable” either but all TV distributors (“MVPDs”) like satellite and telephone companies and Google Fiber, most of whom are small TV players.
First, the entire push for the proposal is based on the baseless notion that “charging monthly STB fees reveals that cable companies are abusing their market power.” I say baseless because cable companies have lost 14 million TV subscribers since 2002 to phone and satellite companies’ TV offerings (Verizon FiOS TV, Dish, Google Fiber, etc.), which suggests cable doesn’t have market power to charge anticompetitive prices. This is bolstered by the fact that the rates cable companies charge are consistent with what their smaller phone and satellite competitors charge for STBs. In fact, the STB monthly rates cable companies charge are pretty much identical to what municipal-owned and -operated TV stations charge. Even competing STB companies like TiVo charge monthly fees.
Second, as I’ve written, the FCC’s plans simply won’t work. The FCC tried “opening up” cable boxes for years with CableCard. That debacle resulted in ten years of regulations and FCC-directed standards and had only a marginal effect on the STB market. In the end, under 5% of the STB market went to “competitive” STB makers like TiVo. This latest plan has an even smaller chance of success because the FCC is not simply regulating cable boxes, but also boxes from satellite TV and IPTV distributors and their apps. The FCC is telling these hundreds of companies using dozens of technologies, codecs, and standards to develop interoperable standards so that other companies can retransmit the TV programming the MVPDs have bundled. It’s impractical and likely to fail, as Larry Downes noted in Recode, which is why the FCC provides few details about how this will work.
Third, what little progress the FCC does make in forcing MVPDs to make their TV programming accessible to competitors’ video apps and devices will tend to make broadband and TV less competitive. What the FCC is trying to do is force, say, Comcast’s TV programming to be available to certain application makers who want to retransmit that programming. So whatever streams to the Comcast Xfinity app will need to also work on competing apps if a competitor wants to re-bundle that programming.
The problem is that TV packages are how these companies compete, and FCC rules will hinder that competitive process. TV distributors, including Netflix, purchase rights for sports and other programming to steal subscribers away from competitors. For instance, DirecTV attracts many customers solely because it has NFL Sunday Ticket, and Amazon and Netflix original programming is a huge draw for their video services. TV programming, and bundling that programming, drives the competitive process. The Google Fiber folks likewise discovered the importance of TV programming to compete. They originally planned to offer only broadband but found there was little market for a broadband-only provider. Most people want TV packaged with broadband, and Google was compelled by market forces to go out and purchase TV programming to attract customers. (On the other hand, some cable companies like Cable One are getting out of the TV game because programmers have significant leverage.)
Even non-MVPDs like mobile carriers and tech companies, including Twitter, Yahoo, and Facebook, are using TV programming to compete, and they are investing heavily in video programming. Verizon Wireless has exclusive NFL programming, T-Mobile recently gave its subscribers a year of streaming access to most baseball games via an MLB.TV deal, and AT&T is giving mobile subscribers access to DirecTV programming. The point is, companies compete by experimenting with different service and program bundles. By forcing programming onto competing applications, devices, and platforms, the FCC short-circuits these competitive dynamics.
Fourth, speaking of purchasing rights, there is misinformation spreading about what TV access consumers are entitled to. For instance, there’s a recent Public Knowledge post that simply distorts the economics and law of TV licensing. Notably, the post says the FCC’s proposal “makes it easier for subscribers to control their own experience when accessing the programming that they…have paid for and to which they have lawful access.” This is simply false. Just because The Walking Dead has been licensed for viewing on your television does not mean it’s lawful (or beneficial) for a TV competitor to take that same programming and send it to you via its own app.
Copyright holders re-sell the same programming to different distributors, sometimes several times over. Programmers have exclusive licensing deals with various distributors and device makers, so just because your cable contract allows you to watch it on your TV does not mean you have lawful access anywhere. For instance, the NFL has licensed Thursday Night NFL games to CBS and NBC for broadcast TV viewing, to the NFL Network for cable TV viewing, to Verizon Wireless for smartphone viewing, and to Twitter for computer viewing. Same programming, four different distribution technologies and five different companies. When programming can be easily repurposed, as the FCC would like, that upends entire business models of hundreds of media companies and distributors.
Further, it injects the FCC into copyright licensing issues. Put aside for the moment the debates, which the Public Knowledge post touches on, over whether copyright holders are too restrictive. Whatever your views, reforming program licensing should come from Congress and the courts–not the FCC through this convoluted proposal. In fact, change via the courts is what Public Knowledge implicitly endorses. It was the courts–not the FCC–that made VCRs, DVRs, and DVR cloud storage legal in the face of copyright holder opposition. When the FCC last intervened in TV rights assignments in the 1960s and 1970s, the agency created broadcast retransmission rights, which have plagued communications and copyright law with complexity and lawsuits to this day.
Quite simply, the FCC is coercing companies to make their contracted-for TV content available to others who didn’t contract for it. This proposal will create a mess in television when implemented. It’s an unnecessary intervention into a marketplace–video programming–that is working. We are in what many media critics regard as the Golden Age of Television. That’s because there are so many TV distributors competing for programming. It’s a sellers’ market. The supposed problems here–high STB prices and copyright restrictiveness–are problems for competition agencies and the courts, respectively, not the FCC. The FCC wants to fix what’s not broken and start regulating apps and online video. It does nothing to improve the television market and simply makes more tech and media companies dependent on the FCC’s good graces for competitive survival.
April 4, 2016
In a World Where Kids Can 3D-Print Their Own Retainers, What Should Regulators Do?
As “software eats the world,” the reach of the Digital Revolution continues to expand to far-flung fields and sectors. The ramifications of this are tremendously exciting but at times can also be a little bit frightening.
Consider this recent Washington Post headline: “A College Kid Spends $60 to Straighten His Own Teeth. What Could Possibly Go Wrong?” Matt McFarland of the Post reports that, “A college student has received a wealth of interest in his dental work after publishing an account of straightening his own teeth for $60.” The student at the New Jersey Institute of Technology, “had no dentistry experience when he decided to create plastic aligners to improve his smile,” but was able to use a 3D printer and laser scanner on campus to accomplish the job. “After publishing before-and-after pictures of his teeth this month, [the student] has received hundreds of requests from strangers, asking him to straighten their teeth.”
McFarland cites many medical professionals who are horrified at the prospect of patients taking their health decisions into their own hands and engaging in practices that could be dangerous to themselves and others. Some of the licensed practitioners cited in the story come across as just being bitter losers as they face the potential for the widespread disintermediation of their profession. After all, they currently charge thousands of dollars for various dental procedures and equipment. Thanks to technological innovations, however, those costs could soon plummet, which could significantly undercut their healthy margins on dental services and equipment. On the other hand, these professionals have a fair point about untrained citizens doing their own dental work or giving others the ability to do so. Things certainly could go horribly wrong.
This is another interesting case study related to the subject of a forthcoming Mercatus paper of mine, as well as an upcoming law review article on 3D printing, both of which pose the following question: What happens when radically decentralized technological innovation (such as 3D printing) gives people a de facto “right to try” new medicines and medical devices? In one sense, decentralized, democratized innovation of this sort presents us with an exciting new world of possibilities. On the other hand, we know that when average citizens take their health into their own hands, the results could be disastrous. The question is, what do we want policymakers to do about it? Ban 3D printers? Restrict the distribution of 3D printed blueprints freely shared online? Try to license average users? Or regulate the materials used to make these medical devices?
For the reasons I suggest in my forthcoming paper, none of these options are likely to work very well in practice. It will prove too complex and costly to employ top-down, command-and-control regulation in a world of such decentralized innovation. Moreover, many people will find it highly offensive if the government takes steps to limit their personal autonomy and ability to treat themselves at a much lower cost than our current health care system typically demands for similar treatments. The example in McFarland’s story is quite powerful in that regard because, as it makes clear, even young kids could be engaging in this sort of innovation and self-experimentation, at greatly reduced cost to themselves or their families. Again, this is both wonderful and a little bit scary.
The best hope, I argue in my forthcoming papers, lies in improved risk education. The goal should be to help create a more fully-informed citizenry that is empowered with more and better information about relative risk trade-offs. The Food & Drug Administration already engages in various product labeling efforts as well as public education campaigns and strategies. But this has always been a secondary mission for the agency, which has instead focused on trying to preemptively guarantee the safety and efficacy of drugs and devices. And much of the “education” the FDA does is basically explaining to companies and the public how to comply with its voluminous body of regulation.
This is going to have to change, and change quickly. Going forward, the FDA will likely have to reorient its focus in this way to cope with the rapidly evolving universe of not just mobile medical apps and 3D-printed technologies, but also all the wearable technologies that are part of the larger Internet of Things. For example, the FDA recently released a guidance document for “Management of Cybersecurity in Medical Devices,” encouraging innovators and other stakeholders to address security vulnerabilities in a collaborative, flexible fashion. This same model could be applied to 3D printing and many other new technologies. As I continue on to note in my forthcoming paper:
Guidance documents should be crafted that suggest various best practices for developers as well as risk education and communication messaging for the general public. The downside of such guidance documents, however, is that they leave unanswered the question of exactly what regulatory authority the agency might bring to bear against companies who are found to violate the “voluntary” principles or best practices in the documents. On the other hand, those guidance documents are usually superior to the alternative path of overly-rigid, top-down, preemptive controls on innovation. Congress should monitor the FDA’s use of such guidance documents closely to ensure that the agency does not abuse its broad regulatory discretion through arbitrary guidance actions.
My forthcoming papers also suggest that other non-governmental bodies will need to play a more active role in this risk education process and help explain safe and sensible uses of new technologies to the public, especially kids. And product developers will need to step up their “safety-by-design” efforts to try to make sure that the products they release into the wild are as safe as possible. Of course, as with other general purpose technologies (like computers and smartphones), there is only so much that can be done preemptively to make sure devices like 3D printers are “safe and secure” out of the box. The reality is that, the more open and generative a new technology or platform, the harder it is to preemptively design it in such a way as to foresee and limit all its uses–for better or for worse.
We live in exciting times, but serious risks exist when radical technological decentralization places tools and capabilities in the hands of average citizens. The goal of public policy should not be to retard the development or distribution of all these wonderful new tools, but instead to redouble efforts to educate citizens about proper and improper uses of them.