Adam Thierer's Blog
July 22, 2022
My Forthcoming Book on Artificial Intelligence & Robotics Policy
I’m finishing up my next book, which is tentatively titled, “A Flexible Governance Framework for Artificial Intelligence.” I thought I’d offer a brief preview here in the hope of connecting with others who care about innovation in this space and are also interested in helping to address these policy issues going forward.
The goal of my book is to highlight the ways in which artificial intelligence (AI), machine learning (ML), robotics, and the power of computational science are set to transform the world—and the world of public policy—in profound ways. As with all my previous books and research, my goals here are both empirical and normative. The first objective is to highlight the tensions between emerging technologies and the public policies that govern them. The second is to offer a defense of a specific governance stance toward emerging technologies intended to ensure we can enjoy the fruits of algorithmic innovation.
AI is a transformational technology that is general-purpose and dual-use. AI and ML also build on top of other important technologies—computing, microprocessors, the internet, high-speed broadband networks, and data storage/processing systems—and they will become the building blocks for a great many other innovations going forward. This means that, eventually, all policy will involve AI policy and computational considerations at some level. It will become the most important technology policy issue here and abroad going forward.
The global race for AI supremacy has important implications for competitive advantage and other geopolitical issues. This is why nations are focusing increasing attention on what they need to do to ensure they are prepared for this next major technological revolution. Public policy attitudes and defaults toward innovative activities will have an important influence on these outcomes.
In my book, I argue that, if the United States hopes to maintain a global leadership position in AI, ML, and robotics, public policy should be guided by two objectives:
1. Maximize the potential for innovation, entrepreneurialism, investment, and worker opportunities by seeking to ensure that firms and other organizations are prepared to compete at a global scale for talent and capital and that the domestic workforce is properly prepared to meet the same global challenges.
2. Develop a flexible governance framework to address various ethical concerns about AI development or use to ensure these technologies benefit humanity, but work to accomplish this goal without undermining the first objective.
The book primarily addresses the second of these priorities because getting the governance framework for AI right significantly improves the chances of accomplishing the first goal: ensuring that the United States remains a leading global AI innovator.
I take a deep dive into the many governance challenges and policy proposals circulating today—both domestically and internationally. The most contentious of these issues involve the so-called “socio-algorithmic” concerns that are driving calls for comprehensive regulation: the safety, security, privacy, and discrimination risks that AI/ML technologies could pose for individuals and society.
These concerns deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.
Getting the balance right requires agile governance strategies and decentralized, polycentric approaches. There are many different values and complex trade-offs in play in these debates, all of which demand tailored responses. But those responses should not take the form of complicated, inflexible, time-consuming regulatory mandates that preemptively curtail or completely constrain innovation opportunities. There’s no point worrying about the future if we can’t even build it first. AI innovation must not be treated as guilty until proven innocent.
The more agile and adaptive governance approach I outline in my book builds on the core principles typically recommended by those favoring precautionary principle-based regulation. That is, it is similarly focused on (1) “baking in” best practices and aligning AI design with widely-shared goals and values; and, (2) keeping humans “in the loop” at critical stages of this process to ensure that they can continue to guide and occasionally realign those values and best practices as needed. However, a decentralized governance approach to AI focuses on accomplishing these objectives in a more flexible, evolutionary fashion without the costly baggage associated with precautionary principle-based regulatory regimes.
The key to the decentralized approach is a diverse toolkit of so-called soft law governance solutions. Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Precautionary regulatory restraints will be necessary in some limited circumstances—particularly for certain types of very serious existential risk—but most AI innovations should be treated as innocent until proven guilty.
When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less of it. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.
The book has six chapters currently, although I am toying with adding back in two other chapters (on labor market issues and industrial policy proposals) that I finished but then cut to keep the theme of the book more tightly focused on social and ethical considerations surrounding AI and robotics.
Here are the summaries of the current six chapters in the manuscript:
Chapter 1: Understanding AI & Its Potential Benefits – Defining the nature and scope of artificial intelligence and its many components and related subsectors is complicated, and this fact creates many governance challenges. But getting AI governance right is vital because these technologies offer individuals and society meaningful improvements in living standards across multiple dimensions.
Chapter 2: The Importance of Policy Defaults for Innovation Culture – Every technology policy debate involves a choice between two general defaults: the precautionary principle and the proactionary principle, or “permissionless innovation.” Setting the initial legal default for AI technologies closer to the green light of permissionless innovation will enable greater entrepreneurialism, investment, and global competitiveness.
Chapter 3: Decentralized Governance for AI: A Framework – The process of embedding ethics in AI design is an ongoing, iterative process influenced by many forces and factors. There will be much trial and error when devising ethical guidelines for AI and hammering out better ways of keeping these systems aligned with human values. A top-down, one-size-fits-all regulatory framework for AI is unwise. A more decentralized, polycentric governance approach is needed—nationally and globally. [This chapter is the meat of the book, and several derivative articles will be spun out of it, beginning with a report on algorithmic auditing and AI impact assessments.]
Chapter 4: The US Governance Model for AI So Far – U.S. digital technology and ecommerce sectors have enjoyed a generally “permissionless” policy environment since the early days of the Internet, and this has greatly benefited our innovation and global competitiveness. While AI has thus far been governed by a similar “light-touch” approach, many academics and policymakers are now calling for aggressive regulation of AI rooted in a precautionary principle-oriented mindset, which threatens to derail a great deal of AI innovation.
Chapter 5: The European Regulatory Model & the Costs of Precaution by Default – Over the past quarter century, the European Union has taken a more aggressive approach to digital technology and data regulation, and is now advancing several new comprehensive regulatory frameworks, including an AI Act. The E.U.’s heavy-handed regulatory regime, which is rooted in the precautionary principle, discouraged innovation and investment across the continent in the past and will continue to do so as it grows to encompass AI technologies. The U.S. should reject this model and welcome European innovators looking to escape it.
Chapter 6: Existential Risks & Global Governance Issues around AI & Robotics – AI and robotics could give rise to certain global risks that warrant greater attention and action. But policymakers must be careful to define existential risk properly and understand that the most important solution to such risks is often more technological innovation to overcome those problems. The greatest existential risk of all would be to block further technological innovation and scientific progress. Proposals to impose global bans or regulatory agencies are both unwise and unworkable. Other approaches, including soft law efforts, will continue to play a role in addressing global AI risks and concerns.
This book, which I hope to have out some time later this year, grows out of a large body of research I’ve done over the past decade. [Some of that work is listed down below.]
AI, ML, robotics, and algorithmic policy issues will dominate my research focus and outputs over the next few years.
I look forward to doing my small part to help ensure that America builds on the track record of success it has enjoyed with the Internet, ecommerce, and digital technologies. Again, that stunning success story was built on wise policy choices that promoted a culture of creativity and innovation and rejected calls to hold on to past technological, economic, or legal status quos.
Will America rise to the challenge once again by adopting wise policies to facilitate the next great technological revolution? I’m ready for that fight. I hope you are, too, because it will be the most important technology policy battle of our lifetimes.
___________
Recent Essays & Papers on AI & Robotics Policy
Adam Thierer, “Why is the US Following the EU’s Lead on Artificial Intelligence Regulation?” The Hill, July 21, 2022.
Adam Thierer, “Algorithmic Auditing and AI Impact Assessments: The Need for Balance,” Medium, July 13, 2022.
Adam Thierer, “What I Learned about the Power of AI at the Cleveland Clinic,” Medium, May 6, 2022.
Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022).
Adam Thierer, “A Global Clash of Visions: The Future of AI Policy,” The Hill, May 4, 2021.
Adam Thierer, “A Brief History of Soft Law in ICT Sectors: Four Case Studies,” Jurimetrics, Vol. 61 (Fall 2021): 79-119.
Adam Thierer, “U.S. Artificial Intelligence Governance in the Obama–Trump Years,” IEEE Transactions on Technology and Society, Vol. 2, Issue 4 (2021).
Adam Thierer, “The Worst Regulation Ever Proposed,” The Bridge, September 2019.
Ryan Hagemann, Jennifer Huddleston Skees & Adam Thierer, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado Technology Law Journal, Vol. 17 (2018).
Adam Thierer & Trace Mitchell, “No New Tech Bureaucracy,” Real Clear Policy, September 10, 2020.
Adam Thierer, “OMB’s AI Guidance Embodies Wise Tech Governance,” Mercatus Center Public Comment, March 13, 2020.
Adam Thierer, “Europe’s New AI Industrial Policy,” Medium, February 20, 2020.
Adam Thierer, “Trump’s AI Framework & the Future of Emerging Tech Governance,” Medium, January 8, 2020.
Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” Chapter 7 in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240.
Andrea O’Sullivan & Adam Thierer, “Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation,” Communications of the ACM, Vol. 61, Issue 12 (December 2018): 33-35.
Adam Thierer, Andrea O’Sullivan & Raymond Russell, “Artificial Intelligence and Public Policy,” Mercatus Research, Mercatus Center at George Mason University, Arlington, VA (2017).
Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017.
Adam Thierer, “The Growing AI Technopanic,” Medium, April 27, 2017.
Adam Thierer, “The Day the Machines Took Over,” Medium, May 11, 2017.
Adam Thierer, “When the Trial Lawyers Come for the Robot Cars,” Slate, June 10, 2016.
Adam Thierer, “Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission,” Medium, September 22, 2014.
Adam Thierer, “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
America Shouldn’t Follow EU’s Lead on AI Regulation
For my latest regular column in The Hill, I took a look at the trade-offs associated with the EU’s AI Act. This is derived from a much longer chapter on European AI policy in my forthcoming book, and I also plan on turning it into a free-standing paper at some point soon. My op-ed begins as follows:
In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime.
It would be imprudent for the U.S. to adopt Europe’s more top-down regulatory model, however, which already decimated digital technology innovation in the past and now will do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.
Jump over to The Hill to read the entire thing. My recent writing on AI and robotics is collected in the reading list above. This will be my primary research focus in coming years.
July 13, 2022
Event Video on Algorithmic Auditing and AI Impact Assessments
On July 12, I participated in a Bipartisan Policy Center event on “Civil Society Perspectives on Artificial Intelligence Impact Assessments.” It was an hour-long discussion moderated by Michele Nellenbach, Vice President of Strategic Initiatives at the Bipartisan Policy Center, and also featured Miriam Vogel, President and CEO of EqualAI. We discussed the ins and outs of algorithmic auditing and impact assessments for artificial intelligence. This is one of the hottest topics in the field of AI governance today, with proposals multiplying rapidly in academic and public policy circles. Several governments are already considering mandating AI auditing and impact assessments.
You can watch the entire discussion here, and down below I have included some of my key talking points from the session. I am currently finishing up my next book, which is on how to craft a flexible governance framework for AI and algorithmic technologies. It includes a lengthy chapter on this issue and I also plan on eventually publishing a stand-alone study on this topic.
Algorithmic auditing and AI impact assessments represent an important step toward the professionalization of AI ethics.
- Audits and impact assessments can help ensure organizations live up to their promises about “baking in” ethical best practices (on issues like safety, security, privacy, and non-discrimination).
- Audits and impact assessments are already utilized in other fields to address safety practices, financial accountability, labor practices and human rights issues, supply chain practices, and various environmental concerns.
- Internal auditing and Institute of Internal Auditors (IIA) efforts could expand to include AI risks. Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues and avoid liability, negative publicity, or other customer backlash.
- Build on the IAPP model to help “professionalize” AI ethics in data-driven organizations:
- The International Association of Privacy Professionals (IAPP) trains and certifies privacy professionals through formal credentialing programs, supplemented by regular meetings, annual awards, and a variety of outreach and educational initiatives.
- We should use a similar model for AI, starting by supplementing Chief Privacy Officers with Chief Ethical Officers.
- This is how we formalize the ethical frameworks and best practices that have been formulated by various professional associations such as IEEE, ISO, ACM, and others.
- The AI auditing and impact assessment process can be rooted in the voluntary risk assessment frameworks developed by the OECD and NIST:
- OECD — Framework for the Classification of AI Systems, with the twin goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”
- NIST — AI Risk Management Framework, intended “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
- These frameworks are being developed through a consensus-driven, open, transparent, and collaborative process, not through top-down regulation.
- Many AI developers and business groups have endorsed the use of such audits and assessments. BSA | The Software Alliance has said that, “By establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.”
- Developers can still be held accountable for violations of certain ethical norms and best practices, both through private actions and potentially even through formal sanctions by consumer protection agencies (the Federal Trade Commission, comparable state offices, or state AGs).
- Independent AI auditing bodies are already being formulated and could play an important role in helping to further professionalize AI ethics.
- EqualAI / WEF — “Badge Program for Responsible AI Governance”
- The field of algorithmic consulting continues to expand (ex: O’Neil Risk Consulting).

Downsides:

- Algorithmic audits and impact assessments are confronted with the same sort of definitional challenges that pervade AI more generally.
- What constitutes a harm or impact in any given context will often be a contentious matter.
- Auditing algorithms is nothing like auditing an accounting ledger, where the numbers either add up or they don’t.
- With algorithms, there are no binary metrics that can quantify the correct amount of privacy, safety, or security in any given system (see the sketch below).
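To make that last point concrete, here is a minimal, self-contained sketch in Python. The data, groups, and measures are all invented for illustration, not drawn from any real audit or system; it simply shows how two standard fairness measures can return conflicting verdicts on the same predictions.

```python
# Toy illustration (invented data; no real audit or system is depicted):
# two common fairness measures applied to the same predictions can point
# in different directions, so there is no single pass/fail number.

def rate(values):
    """Share of 1s in a list of 0/1 values."""
    return sum(values) / len(values)

# Hypothetical records of (actual_label, predicted_label) per applicant,
# split by demographic group. Chosen only to make the gaps diverge.
group_a = [(0, 1), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
group_b = [(0, 0), (0, 0), (1, 1), (0, 0), (1, 0), (0, 1)]

def selection_rate(records):
    """Demographic-parity view: how often the model says 'yes'."""
    return rate([pred for _, pred in records])

def false_positive_rate(records):
    """Error-rate view: 'yes' predictions among true negatives."""
    return rate([pred for actual, pred in records if actual == 0])

for name, fn in [("selection rate", selection_rate),
                 ("false positive rate", false_positive_rate)]:
    a, b = fn(group_a), fn(group_b)
    print(f"{name}: group A = {a:.2f}, group B = {b:.2f}, gap = {abs(a - b):.2f}")

# With this toy data, the selection-rate gap looks large (0.33) while the
# false-positive-rate gap looks small (0.08). Which gap an auditor
# privileges, and what threshold counts as "acceptable," is a value
# judgment, not arithmetic that simply adds up.
```

The same system can pass one audit and fail another depending on which measure and threshold the auditor privileges, which is why these reviews are closer to contested judgment calls than ledger checks.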
Audits and impact assessments should not become a formal regulatory process:

- The E.U. AI Act will be a disaster for AI innovation and investment.
- The proposed U.S. Algorithmic Accountability Act of 2022 would require developers to perform impact assessments and file them with the Federal Trade Commission. A new Bureau of Technology would be created inside the agency to oversee the process.
- If enforced through a rigid regulatory regime and another federal bureaucracy, compliance with algorithmic auditing mandates would likely become a convoluted, time-consuming bureaucratic process. That would likely slow the pace of AI development significantly.
- The academic literature on AI auditing and impact assessments largely ignores potential costs; mandatory auditing and assessments are treated as a sort of frictionless nirvana when we already know that such a process would entail significant costs.

The National Environmental Policy Act (NEPA) is a bad model for AI impact assessments:
- Some AI scholars suggest that NEPA should be the model for AI impact assessments and audits.
- NEPA assessments were initially quite short (sometimes less than 10 pages), but today the average length of these statements is more than 600 pages, plus appendices that average over 1,000 pages on top of that.
- NEPA assessments take an average of 4.5 years to complete, and between 2010 and 2017, four assessments took at least 17 years to complete.
- Many important public projects never get done, or take far too long to complete at considerably higher expenditure than originally predicted.
- Applying the NEPA model to algorithmic systems would mean that much AI innovation would grind to a halt in the face of lengthy delays, paperwork burdens, and considerable compliance costs.
- Such a regime would create a number of veto points that opponents of AI could use to stop much progress in the field. This is the “vetocracy” problem.
- We cannot wait years or even months for bureaucracies to get around to formally signing off on audits or assessments, many of which would be obsolete before they were even done.
- Many AI developers would likely look to innovate elsewhere if auditing or impact assessments became a bureaucratic and highly convoluted compliance nightmare like that.
- The “global innovation arbitrage” problem would kick in: Innovators and investors increasingly relocate to the jurisdictions where they are treated most hospitably.

Mandated algorithmic auditing could give rise to a final problem: political meddling.
- Both parties already accuse digital technology companies of manipulating their algorithms to censor their views.
- Whichever party is in power at any given time could use the process to politicize terms like “safety,” “security,” and “non-discrimination” to nudge or even force private AI developers to alter their algorithms to satisfy the desires of partisan politicians or bureaucrats.
- The FCC abused its ambiguous authority to regulate “in the public interest” to indirectly censor broadcasters through intimidation via jawboning tactics and other “agency threats,” or “regulation by raised eyebrow.”
- There are potentially profound First Amendment issues in play with the regulation of algorithms that have not been explored here but which could become a major part of AI regulatory efforts going forward.

Summary:

- Auditing and impact assessments can be a part of a more decentralized, polycentric governance framework.
- Even in the absence of any sort of hard law mandates, algorithmic auditing and impact reviews represent an important way to encourage responsible AI development.
- But we should be careful about mandating such things due to the many unanticipated costs and consequences of converting this into a top-down, bureaucratic regulatory regime.
- The process should evolve gradually and organically, as it has in many other fields and sectors.

July 5, 2022
Again, We Should Not Ban All Teens from Social Media
A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.
Earlier this year, The Wall Street Journal ran a response I wrote to a proposal set forth by columnist Peggy Noonan in which she proposed banning everyone under 18 from all social-media sites (“We Can Protect Children and Keep the Internet Free,” Apr. 15). I expanded upon that letter in an essay here entitled, “Should All Kids Under 18 Be Banned from Social Media?” National Review also recently published an article penned by Christine Rosen in which she also proposes to “Ban Kids from Social Media.” And just this week, Zach Whiting of the Texas Public Policy Foundation published an essay on “Why Texas Should Ban Social Media for Minors.”
I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:
While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances.
Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the next generation that is doomed!
Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.
I have a few more things to say beyond these brief comments.
First, as I alluded to in my short response to Rosen, we’ve heard similar “lost generation” stories before. Rosen might as well be channeling the ghost of Dr. Fredric Wertham (author of Seduction of the Innocent), who in the 1950s declared comic books a public health menace and lobbied lawmakers to restrict teen access to them, insisting such comics were “the cause of a psychological mutilation of children.” The same sort of “lost generation” predictions were commonplace in countless anti-video game screeds of the 1990s. Critics were writing books with titles like Stop Teaching Our Kids to Kill and referring to video games as “murder simulators.” Ironically, just as the video game panic was heating up, juvenile crime rates were plummeting. But that didn’t stop the pundits and policymakers from suggesting that an entire generation of so-called “vidiots” was headed for disaster. (See my 2019 short history: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”)
It is consistently astonishing to me how, as I noted in a 2012 essay, “We Always Sell the Next Generation Short.” There seems to be a never-ending cycle of generational mistrust. “There has probably never been a generation since the Paleolithic that did not deplore the fecklessness of the next and worship a golden memory of the past,” notes Matt Ridley, author of The Rational Optimist.
For example, in 1948, the poet T. S. Eliot declared: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.” We’ve heard parents (and policymakers) make similar claims about every generation since then.
What’s going on here? Why does this cycle of generational pessimism and mistrust persist? In a 1992 journal article, the late journalism professor Margaret A. Blanchard offered this explanation:
“[P]arents and grandparents who lead the efforts to cleanse today’s society seem to forget that they survived alleged attacks on their morals by different media when they were children. Each generation’s adults either lose faith in the ability of their young people to do the same or they become convinced that the dangers facing the new generation are much more substantial than the ones they faced as children.”
In a 2009 book on culture, my colleague Tyler Cowen also noted how, “Parents, who are entrusted with human lives of their own making, bring their dearest feelings, years of time, and many thousands of dollars to their childrearing efforts.” Unsurprisingly, therefore, “they will react with extreme vigor against forces that counteract such an important part of their life program.” This explains why “the very same individuals tend to adopt cultural optimism when they are young, and cultural pessimism once they have children,” Cowen says.
Building on Blanchard and Cowen’s observations, I have explained how the simplest explanation for this phenomenon is that many parents and cultural critics have passed through their “adventure window.” The willingness of humans to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually settle in our ways. As the English satirist Douglas Adams once humorously noted: “Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”
There is no doubt social media can create or exacerbate certain social pathologies among youth. But pro-censorship conservatives want to take the easy way out with a Big Government media ban for the ages.
Ultimately, it’s a solution that will not be effective. Raising children and mentoring youth is the hardest task we face as adults because simple solutions rarely exist for complex human challenges. The issues kids face are particularly hard for parents and other adults to grapple with: we often fail to fully understand the unique issues each generation faces, and we definitely fail to fully grasp the nature of each new medium that youth embrace. Simplistic solutions, even proposals for outright bans, will not work or solve serious problems.
An outright government ban on online platforms or digital devices is likely never going to happen due to First Amendment constraints, but even ignoring the jurisprudential barriers, bans won’t work for a reason that these conservatives never bother considering: Many parents will help their kids get access to those technologies and evade restrictions on their use. Countless parents already do so in violation of COPPA rules, and not just because they worry that their kid won’t have access to what some other kids have. Rather, many parents (like me) wanted both to make sure they could more easily communicate with their kids and to ensure that their kids could enjoy those technologies and use them to explore the world.
These conservatives might think parents like me are monsters for allowing our (now grown) children to get on social media when they were teens. I wasn’t blind to the challenges, but I recognized that sticking one’s head in the sand or hoping for divine intervention from the Nanny State was impractical and unwise. The hardest conversations I ever had with my kids were about the ugliness they sometimes experienced online, but those conversations were also countered by the many joys that I knew online interactions brought them. Shall I tell you about everything my son learned online before 13 about building model rockets or soapbox derby cars? Or the countless sites my daughter visited gathering ideas for her arts and crafts projects when, before the age of 13, she started hand-painting and selling jean jackets (eventually prompting her to pursue an art school degree)? Again, as I noted in my National Review response, Rosen’s prohibitionist proposal would deny teens these experiences and the countless other routine and entirely beneficial interactions that they have with their peers online every day.
There is simply no substitute for talking to your kids in the most open, understanding, and loving fashion possible. My #1 priority with my own children was not foreclosing all the new digital media platforms and devices at their disposal. That was going to be almost impossible. Other approaches are needed.
Yes, of course, the world can be an ugly place. I mean, have you ever watched the nightly news on television? It’s damn ugly. Shouldn’t we block youth access to it when scenes of war and violence are shown? Newspapers are full of ugliness, too. Should a kid be allowed to see the front page of the paper when it discusses or shows the aftermath of school shootings, acts of terrorism, or even just natural disasters? I could go on, but you get the point. And you could try to claim that somehow today’s social media environment is significantly worse for kids than the mass media of old, but you cannot prove it.
Of course you’ll have anecdotes, and many of them will again point to complex social pathologies. But I have entire shelves full of books on my office wall that made similar claims about the effects of books, the telephone, radio and television, comics, cable TV, every musical medium ever, video games, and advertising efforts across all these mediums. Hundreds upon hundreds of studies were done over the past half century about the effects of depictions of violence in movies, television, and video games. And endless court battles ensued.
In the end, nothing came out of it because the literature was inconclusive and frequently contradictory. After many years of panicking about youth and media violence, in 2020, the American Psychological Association issued a new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that, “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA now says: “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.”
This is exactly what we should expect to find true for youth and social media. Most of the serious scholars in the field already note that studies and findings about youth and social media must be carefully evaluated and that many other factors need to be considered whenever evaluating claims about complex social phenomena.
While Rosen belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to represent the best first-order response when compared to the repressive regulatory regime she would impose on society.
Finally, I want to reiterate what I said in my brief National Review response about the enormous challenges associated with the mass criminalization of speech platforms. Rosen seems to imagine that all the costs and controversies will lie on the supply side of social media: just call for a ban, and then magically all kids disappear from social media while the big evil tech capitalists eat all the costs and hassles. Nonsense. It’s the demand side of criminalization efforts where the most serious costs lie. What do you really think kids are going to do if Uncle Sam suddenly does ban everyone under 18 from going on a “social media site,” whatever that very broad term entails? This will become another sad chapter in the history of Big Government prohibitionist efforts that fail miserably, but not before declaring mass groups of people criminals (this time including everyone under 18) and then trying to throw the book at them when they seek to avoid those repressive controls. There are better ways to address these problems than with such extremist proposals.
________________________________________
Additional Reading from Adam Thierer on Media & Content Regulation:
“Should All Kids Under 18 Be Banned from Social Media?”
“Why Do We Always Sell the Next Generation Short?”
“The APA’s Welcome New Statement on Video Game Violence”
“Video Games and Moral Panic”
Parental Controls & Online Child Protection: A Survey of Tools & Methods, 4th Edition.
“Technopanics and the Great Social Networking Scare”
“More on Monkey See-Monkey Do Theories about Media Violence & Real-World Crime”
“Video Games, Media Violence & the Cathartic Effect Hypothesis”
“The Classical Liberal Approach to Digital Media Free Speech Issues”
“Left and right take aim at Big Tech — and the First Amendment”
“When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”
“FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”
“Conservatives & Common Carriage: Contradictions & Challenges”
“The Great Deplatforming of 2021”
“A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”
“Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”
“How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”
“Sen. Hawley’s Moral Panic Over Social Media”
“The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”
“The Not-So-SMART Act”
“The Surprising Ideological Origins of Trump’s Communications Collectivism”
“Why Regulate Broadcasting: Toward a Consistent First Amendment Standard for the Information Age,” Catholic University Law School, 15 CommLaw Conspectus (Summer 2007): 431-482.
Testimony at FCC’s Hearing on “Serving the Public Interest in the Digital Era,” March 3, 2010.
“FCC v. Fox and the Future of the First Amendment in the Information Age”
June 15, 2022
3 Questions about Progress: The Profectus Progress Roundtable
I was honored to be asked by Clay Routledge to contribute answers to the roundtable’s three questions about progress alongside others, including Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all their excellent responses. I’ve included my answers down below:
What is progress?
Progress is the advancement of human health, happiness, and general well-being. Measuring well-being can be challenging, however, so we should consider a broad range of metrics, including life expectancy, infant mortality, poverty measures, energy production/consumption, GDP, productivity, agricultural yields/nourishment, and access to various important goods, services, and conveniences. While each of these metrics may have limitations, taken together they stand for something meaningful that represents a rough proxy for progress.
But we should always remember what progress means at a deeper level for every individual. Innovation and economic growth are important because they allow us to live lives of our own choosing and enjoy the fruits of a prosperous, pluralistic society. Progress “is not just bigger piles of money,” as Hans Rosling once noted. “The ultimate goal is to have the freedom to do what we want.” Accordingly, we should aim to broaden the range of opportunities available to all people to help them flourish.
What are the most significant barriers holding back further progress?
The most significant threat to continued progress is the risk of stagnation accompanying efforts to protect the status quo. As Virginia Postrel taught us in her wonderful book The Future & Its Enemies, we should reject stasis-minded thinking and instead shoot for a world of dynamism, which cherishes and protects the freedom to think and act differently.
Progress hinges upon the growth of knowledge. Knowledge comes from experience, and the most important experiences involve trial-and-error learning. Public attitudes and policies that restrict people and ideas from intermingling freely are a recipe for intellectual, social, and economic stagnation. Accordingly, when we consider public policies toward progress, we should first seek to identify and remove legal and regulatory impediments that limit risk-taking, entrepreneurialism, and technological innovation. As science writer Matt Ridley provocatively puts it, to unlock more growth and prosperity, we must first remove obstacles to “ideas having sex.”
The free movement of people and capital is essential to this process. Openness to immigration is the easiest way for a nation to expand its potential for innovation and growth. But domestic labor skills and mobility are equally important. For entrepreneurs and workers, we need to reframe the battle for progress as “the freedom to innovate” and “the right to earn a living.”
Unfortunately, many barriers stand in the way of those goals: occupational licensing rules and permitting processes, cronyist industrial protectionism, inefficient tax schemes, and many other layers of regulatory red tape. Reforming or eliminating such rules is crucial for broadening opportunities.
Finally, we need to address cultural barriers to progress. Technology and entrepreneurs often get a bad rap in the media and popular culture. Fear and pessimism dominate their narratives. We must do a better job communicating the benefits of openness to change and give people more reasons to be optimistic about a dynamic future.
If those challenges can be overcome, what does the world look like in 50 years?
I agree with Yogi Berra that “It’s tough to make predictions, especially about the future.” Nonetheless, history shows we can achieve remarkable things when we get the prerequisites for progress right and let people tap into their inherent inquisitiveness and inventiveness. Moving the needle on innovation and growth even just a little will yield compounding returns to future generations. But we should dare to dream bigger and think what progress means for each person today and in the future.
A pro-progress agenda will help us lead longer lives and significantly expand our capabilities because that is what people have always desired most. Accordingly, I believe the most significant advance of the next 50 years will be a radical increase in life expectancy and dramatic improvements in our physical and mental capabilities while we are alive.
Today’s tech critics often claim that technological innovation somehow undermines our humanity. They couldn’t be more wrong. There are few things more human than acts of invention. When we take steps to address practical human needs and wants, we enrich our lives and the lives of countless others. The future will be wonderful, so long as we are free to make it so.
June 13, 2022
VIDEO: My London Talk about the Future of AI Governance
On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:
- What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
- Which AI sectors are witnessing the most exciting forms of innovation currently?
- What are the fundamental policy fault lines in the AI policy debates today?
- Will fears about disruption and automation lead to a new Luddite movement?
- How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
- How did automation affect traditional jobs and sectors?
- Will the European Union’s AI Act become a global model for regulation, and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
- How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
- Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
- What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!
Additional Reading:
Adam Thierer, “The Proper Governance Default for AI,” Medium, May 26, 2022, https://medium.com/@AdamThierer/the-p...
Adam Thierer, “What I Learned about the Power of AI at the Cleveland Clinic,” Medium, May 6, 2022, https://medium.com/@AdamThierer/what-i-learned-about-the-power-of-ai-at-the-cleveland-clinic-e5b7768d057d.
Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.
Adam Thierer, “A Global Clash of Visions: The Future of AI Policy,” The Hill, May 4, 2021, https://thehill.com/opinion/technology/551562-a-global-clash-of-visions-the-future-of-ai-policy.
Adam Thierer, “U.S. Artificial Intelligence Governance in the Obama–Trump Years,” IEEE Transactions on Technology and Society, Vol. 2, Issue 4 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4013880.
Ryan Hagemann, Jennifer Huddleston Skees & Adam Thierer, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado Technology Law Journal, Vol. 17 (2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3118539.
Adam Thierer, “OMB’s AI Guidance Embodies Wise Tech Governance,” Mercatus Center Public Comment, March 13, 2020, https://www.mercatus.org/publications/technology-and-innovation/ombs-ai-guidance-embodies-wise-tech-governance.
Adam Thierer, “Europe’s New AI Industrial Policy,” Medium, February 20, 2020, https://medium.com/@AdamThierer/europes-new-ai-industrial-policy-c5d945c5579f.
Adam Thierer, “Trump’s AI Framework & the Future of Emerging Tech Governance,” Medium, January 8, 2020, https://medium.com/@AdamThierer/trumps-ai-framework-the-future-of-emerging-tech-governance-e504943e07d4.
Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” Chapter 7 in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications/technology-and-innovation/soft-law-reconciliation-permissionless-responsible-innovation.
Andrea O’Sullivan & Adam Thierer, “Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation,” Communications of the ACM, Vol. 61, Issue 12 (December 2018): 33-35, https://doi.org/10.1145/3241035.
Adam Thierer, Andrea O’Sullivan & Raymond Russell, “Artificial Intelligence and Public Policy,” Mercatus Research, Mercatus Center at George Mason University, Arlington, VA (2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3021135.
Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.
Adam Thierer, “The Growing AI Technopanic,” Medium, April 27, 2017, https://aboveintelligent.com/the-grow...
Adam Thierer, “The Day the Machines Took Over,” Medium, May 11, 2017, https://medium.com/@AdamThierer/the-d...
Adam Thierer, “When the Trial Lawyers Come for the Robot Cars,” Slate, June 10, 2016, https://slate.com/technology/2016/06/...
Adam Thierer, “Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission,” Medium, September 22, 2014, https://medium.com/tech-liberation/problems-with-precautionary-principle-minded-tech-regulation-a-federal-robotics-commission-c71f6f20d8bd.
Adam Thierer, “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013, https://techliberation.com/2013/08/01/on-the-line-between-technology-ethics-vs-technology-policy.
May 26, 2022
The Proper Governance Default for AI
[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]
Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle. While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed in turn.
The precautionary principle holds that innovations are to be curtailed or potentially even disallowed until the creators of those new technologies can prove that they will not cause any theoretical harms. The classic formulation of the precautionary principle can be found in the “Wingspread Statement,” which was formulated at an academic conference that took place at the Wingspread Conference Center in Wisconsin in 1998. It read: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.” There have been many reformulations of the precautionary principle over time but, as legal scholar Cass Sunstein has noted, “in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.” Put simply, under almost all varieties of the precautionary principle, innovation is treated as “guilty until proven innocent.” We can also think of this as permissioned innovation.
The logic animating the precautionary principle reflects a well-intentioned desire to play it safe in the face of uncertainty. The problem lies in the way this instinct gets translated into law and regulation. Making the precautionary principle the public policy default for any given technology or sector has a strong bearing on how much innovation we can expect to flow from it. When trial-and-error experimentation is preemptively forbidden or discouraged by law, it can limit many of the positive outcomes that typically accompany efforts by people to be creative and entrepreneurial. This can, in turn, give rise to different risks for society in terms of forgone innovation, growth, and corresponding opportunities to improve human welfare in meaningful ways.
St. Thomas Aquinas once observed that if the sole goal of a captain were to preserve their ship, the captain would keep it in port forever. But that clearly is not the captain’s highest goal. Aquinas was making a simple but powerful point: There can be no reward without some effort and even some risk-taking. Ship captains brave the high seas because they are in search of a greater good, such as recognition, adventure, or income. Keeping ships in port forever would preserve their vessels, but at what cost?
Similarly, consider the wise words of Wilbur Wright, who pioneered human flight. Few people better understood the profound risks associated with entrepreneurial activities. After all, Wilbur and his brother were trying to figure out how to literally lift humans off the Earth. The dangers were real, but worth taking. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” Humans would never have taken to the skies if the Wright brothers had not gotten off the fence and taken the risks they did. Risk-taking drives innovation and, over the long haul, improves our well-being. Nothing ventured, nothing gained.
These lessons can be applied to public policy by considering what would happen if, in the name of safety, public officials told captains to never leave port or told aspiring pilots to never leave the ground. The opportunity cost of inaction can be hard to quantify, but it should be clear that if we organized our entire society around a rigid application of the precautionary principle, progress and prosperity would suffer.
Heavy-handed preemptive restraints on creative acts can have deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Thus, it is the unseen costs—primarily in the form of forgone innovation opportunities—that make the precautionary principle so problematic as a policy default. This is why scientist Martin Rees speaks of “the hidden cost of saying no” that is associated with the precautionary principle.
The precise way the precautionary principle leads to this result is that it derails the so-called learning curve by limiting opportunities to learn from trial-and-error experimentation with new and better ways of doing things. The learning curve refers to the way that individuals, organizations, or industries are able to learn from their mistakes, improve their designs, enhance productivity, lower costs, and then offer superior products based on the resulting knowledge. In his recent book, Where Is My Flying Car?, J. Storrs Hall documents how, over the last half century, “regulation clobbered the learning curve” for many important technologies in the U.S., especially nuclear, nanotech, and advanced aviation. Hall shows how society was denied many important innovations due to endless foot-dragging or outright opposition to change from special interests, anti-innovation activists, and over-zealous bureaucrats.
In many cases, innovators don’t even know what they are up against because, as many scholars have noted, “the precautionary principle, in all of its forms, is fraught with vagueness and ambiguity.” It creates confusion and fear about the wisdom of taking action in the face of uncertainty. Worst-case thinking paralyzes regulators who aim to “play it safe” at all costs. The result is an endless snafu of red tape as layer upon layer of mandates builds up and blocks progress. Many scholars now decry this as a culture of “vetocracy,” a term describing the many veto points within modern political systems that hold back innovation, development, and economic opportunity. This endless accumulation of veto points in the policy process, in the form of mandates and restrictions, can greatly curtail innovation opportunities. “Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp,” says Philip K. Howard, chair of Common Good. “Too much law,” he argues, “can have similar effects as too little law,” because:
People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.
This is exactly why it is important that policymakers not get too caught up in attempts to preemptively resolve every hypothetical worst-case scenario associated with AI technologies. The problem with that approach was succinctly summarized by the political scientist Aaron Wildavsky when he noted, “If you can do nothing without knowing first how it will turn out, you cannot do anything at all.” Or, as I have stated in a book on this topic, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”
This does not mean society should dismiss all concerns about the risks surrounding AI. Some technological risks do necessitate a degree of precautionary policy, but proportionality is crucial, notes Gabrielle Bauer, a Toronto-based medical writer. “Used too liberally,” she argues, “the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.” It is not enough to simply hypothesize that certain AI innovations might entail some risk. The critics need to prove it using risk analysis techniques that properly weigh both the potential costs and benefits. Moreover, when conducting such analyses, the full range of trade-offs associated with preemptive regulation must be evaluated. Again, where precautionary constraints might deny society life-enriching devices or services, those costs must be acknowledged.
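To illustrate the kind of weighing involved, consider a deliberately simple expected-value sketch; every number and name below is a hypothetical placeholder, chosen only to show how forgone benefits and compliance costs enter the same ledger as the harms a rule might prevent:

```python
# Deliberately simple expected-value sketch of regulatory trade-offs.
# Every figure is a hypothetical placeholder, not a real estimate.

def expected_value(probability: float, magnitude: float) -> float:
    """Expected cost or benefit: probability of the outcome times its size."""
    return probability * magnitude

# Hypothetical inputs for a proposed preemptive restriction on an AI tool:
harm_avoided = expected_value(0.05, 10_000_000)       # harm the rule might prevent
innovation_forgone = expected_value(0.60, 2_000_000)  # benefits lost if the tool is blocked
compliance_cost = 500_000                             # direct burden on innovators

net_effect = harm_avoided - (innovation_forgone + compliance_cost)
print(f"Net expected effect of the restriction: {net_effect:+,.0f}")
# A negative result means the "hidden cost of saying no" exceeds the expected
# harm avoided; a serious analysis would also test how sensitive that sign is
# to the assumed probabilities.
```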
Generally speaking, the most extreme precautionary controls should be imposed only when the potential harms in question are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion. In the context of AI and ML systems, such a test may already be satisfied for law enforcement use of certain algorithmic profiling techniques. And the test is clearly satisfied for so-called “killer robots,” or autonomous military technology. These are often described as “existential risks.” The precautionary principle is the right default in these cases because it is abundantly clear how unrestricted use would have catastrophic consequences. For similar reasons, governments have long imposed comprehensive restrictions on certain types of weapons. And although nuclear and chemical technologies have many important applications, their use must also be limited to some degree even outside military applications because they can pose grave danger if misused.
But the vast majority of AI-enabled technologies are not like this. Most innovations should not be treated the same as a hand grenade or a ticking time bomb. In reality, most algorithmic failures will be more mundane and difficult to foresee. By their very nature, algorithms are constantly evolving because programs and systems are endlessly tweaked by designers to improve them. In his books on the evolution of engineering and systems design, Henry Petroski has noted that “the shortcomings of things are what drive their evolution.” The normal state of things is “ubiquitous imperfection,” he notes, and it is precisely that reality that drives efforts to continuously innovate and iterate.
Regulations rooted in the precautionary principle hope to preemptively find and address product imperfections before any harm comes from them. In reality, and as explained more below, it is only through ongoing experimentation that we discover both the nature of failures and the knowledge needed to correct them. As Petroski observes, “the history of engineering in general, may be told in its failures as well as in its triumphs. Success may be grand, but disappointment can often teach us more.” This is particularly true for complex algorithmic systems, where rapid-fire innovation and incessant iteration are the norm.
Importantly, the problem with precautionary regulation for AI is not just that it might be over-inclusive in seeking to regulate hypothetical problems that never develop. Precautionary regulation can also be under-inclusive by missing problematic behavior or harms that no one anticipated before the fact. Only experience and experimentation reveal certain problems.
In sum, we should not presume that there is a clear preemptive regulatory solution to every problem some people raise about AI, nor should we presume we can even accurately identify all such problems that might come about in the future. Moreover, some risks will never be eliminated entirely, meaning that risk mitigation is the wiser approach. This is why a more flexible bottom-up governance strategy focused on responsiveness and resiliency makes more sense than heavy-handed, top-down strategies that would only avoid risks by making future innovations extremely difficult if not impossible.
The “Proactionary Principle” Is the Better Default for AI Policy

The previous section made it clear why the precautionary principle should generally not be used as our policy default if we hope to encourage the development of AI applications and services. What we need is a policy approach that:
1. objectively evaluates the concerns raised about AI systems and applications;
2. considers whether more flexible governance approaches might be available to address them; and
3. does so without resorting to the precautionary principle as a first-order response.

The proactionary principle is the better general policy default for AI because it satisfies these three objectives. Philosopher Max More defines the proactionary principle as the idea that policymakers should “[p]rotect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.” There are different names for this same concept, including the innovation principle, which Daniel Castro and Michael McLaughlin of the Information Technology and Innovation Foundation say represents the belief that “the vast majority of new innovations are beneficial and pose little risk, so government should encourage them.” Permissionless innovation is another name for the same idea: it refers to the notion that experimentation with new technologies and business models should generally be permitted by default.
What binds these concepts together is the belief that innovation should generally be treated as innocent until proven guilty. There will be risks and failures, of course, but the permissionless innovation mindset views them as important learning experiences. These experiences are chances for individuals, organizations, and all of society to make constant improvements through incessant experimentation with new and better ways of doing things. As Virginia Postrel argued in her 1998 book, The Future and Its Enemies, progress demands “a decentralized, evolutionary process” and mindset in which mistakes are not viewed as permanent disasters but instead as “the correctable by-products of experimentation.” “No one wants to learn by mistakes,” Petroski once noted, “but we cannot learn enough from successes to go beyond the state of the art.” Instead we must realize, as other scholars have observed, that “[s]uccess is the culmination of many failures” and understand “failure as the natural consequence of risk and complexity.”
This is why the default for public policy for AI innovation should, whenever possible, be more green lights than red ones to allow for the maximum amount of trial-and-error experimentation, which encourages ongoing learning. “Experimentation matters,” observes Stefan H. Thomke of the Harvard Business School, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”
Obviously, risks and mistakes are “the very things regulators inherently want to avoid,” but “if innovators fear they will be punished for every mistake,” Daniel Castro and Alan McQuinn argue, “then they will be much less assertive in trying to develop the next new thing.” And for all the reasons already stated, that would represent the end of progress because it would foreclose the learning process that allows society to discover new, better, and safer ways of doing things. Technology author Kevin Kelly puts it this way:
technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.
In other words, the proactionary principle appreciates the benefits that flow from learning by doing. The goal is to continuously assess and prioritize risks from natural and human-made systems alike, and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available. This should make it clear that the proactionary approach is not synonymous with anarchy. Various laws, government bodies, and especially the courts play an important role in protecting rights, health, and order. But policies need to be formulated such that innovators and innovation are given the benefit of the doubt and risks are analyzed and addressed in a more flexible fashion.
Some of the most effective ways to address potential AI risks already exist in the form of “soft law” and decentralized governance solutions, which will be discussed at greater length below. Existing legal remedies include various common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies. Ex post remedies are generally superior to ex ante prior restraints if we hope to maximize innovation opportunities. Ex ante regulatory defaults are too often set closer to the red light of the precautionary principle and then enforced through volumes of convoluted red tape.
This is what the World Economic Forum has referred to as a “regulate-and-forget” system of governance, or what others call a “build-and-freeze” model of regulation. In such technological governance regimes, older rules are almost never revisited, even after new social, economic, and technical realities render them obsolete or ineffective. A 2017 Deloitte analysis of the U.S. Code of Federal Regulations revealed that 68 percent of federal regulations have never been updated and that 17 percent have been updated only once. Public policies for complex and fast-moving technologies like AI cannot be set in stone and forgotten like that if America hopes to remain on the cutting edge of this sector.
Advocates of the proactionary principle look to counter this problem not by eliminating all laws or agencies, but by bringing them in line with flexible governance principles rooted in more decentralized approaches to policy concerns. As many regulatory advocates suggest, it is important to embed or “bake in” various ethical best practices into AI systems to ensure that they benefit humanity. But this, too, is a process of ongoing learning and there are many ways to accomplish such goals without derailing important technological advances. What is often referred to as “value alignment” or “ethically-aligned design” is challenged by the fact that humans regularly disagree profoundly about many moral issues. “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” says Harvard University psychologist Joshua D. Greene.
The “Three Laws of Robotics” famously formulated decades ago by Isaac Asimov in his science fiction stories continue to be widely discussed today as a guide to embedding ethics into machines. They read:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

What is usually forgotten about these principles, as AI expert Melanie Mitchell reminds us, is the way Asimov “often focused on the unintended consequences of programming ethical rules into robots,” and how he made it clear that, if applied too literally, “such a set of rules would inevitably fail.”
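Mitchell’s point can be made concrete with a toy sketch; this is entirely hypothetical, deliberately cruder than Asimov’s formulation (it omits his precedence ordering), and meant only to show how a literal-minded evaluator can be left with no permissible option when two sensible rules collide:

```python
# Toy illustration: two individually sensible rules, applied literally and
# rigidly, can forbid every available option. Hypothetical throughout; this
# is not Asimov's scheme, just the failure mode it hints at.

RULES = {
    "do_not_harm_humans": lambda choice: not choice["harms_human"],
    "obey_human_orders": lambda choice: choice["follows_order"],
}

def literal_evaluator(choice: dict) -> bool:
    """Approve a choice only if it satisfies every rule, with no judgment
    about which rule should yield to which."""
    return all(rule(choice) for rule in RULES.values())

# A human orders an action that would cause harm. The system's options:
act = {"harms_human": True, "follows_order": True}       # obey, cause harm
refuse = {"harms_human": False, "follows_order": False}  # disobey, avoid harm

print(literal_evaluator(act))     # False: violates "do_not_harm_humans"
print(literal_evaluator(refuse))  # False: violates "obey_human_orders"
# Every option is forbidden, so a rigid rule set "inevitably fails" here,
# absent the precedence and judgment Asimov's stories dramatize.
```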
This is why flexibility and humility are essential virtues when thinking about AI policy. The optimal governance regime for AI can be shaped by responsible innovation practices and can embed important ethical principles by design without immediately defaulting to a rigid application of the precautionary principle. In other words, an innovation policy regime rooted in the proactionary principle can be infused with the same values that animate a precautionary principle-based system. The difference is that the proactionary approach will look to achieve these goals in a more flexible fashion, using a variety of experimental governance approaches and ex post legal enforcement options, while also encouraging still more innovation to solve problems past innovations may have caused.
To reiterate, not every AI risk is foreseeable, and many risks and harms are more amorphous or uncertain. In this sense, the wisest governance approach for AI was recently outlined by the National Institute of Standards and Technology (NIST) in its initial draft AI Risk Management Framework, which is a multistakeholder effort “to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.” NIST notes that the goal of the Framework is:
to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm.
This is a sensible framework for how to address AI risks because it makes it clear that it will be difficult to preemptively identify and address all potential AI risks. At the same time, there will be a continuing need to advance AI innovation while addressing AI-related harms. The key to striking that balance will be decentralized governance approaches and soft law techniques described below.
[Note: The subsequent sections of the study will detail how decentralized governance approaches and soft law techniques already are helping to address concerns about AI risks.]
Endnotes:
Adam Thierer, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2nd ed. (Arlington, VA: Mercatus Center at George Mason University, 2016): 1-6, 23-38; Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 48-54.
“Wingspread Statement on the Precautionary Principle,” January 1998, https://www.gdrc.org/u-gov/precaution....
Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (Cambridge, UK: Cambridge University Press, 2005). (“The Precautionary Principle takes many forms. But in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”)
Henk van den Belt, “Debating the Precautionary Principle: ‘Guilty until Proven Innocent’ or ‘Innocent until Proven Guilty’?” Plant Physiology 132 (2003): 1124.
H.W. Lewis, Technological Risk (New York: W.W. Norton & Co., 1990): x. (“The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement.”)
Martin Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018): 136.
Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.
Adam Thierer, “How to Get the Future We Were Promised,” Discourse, January 18, 2022, https://www.discoursemagazine.com/cul....
J. Storrs Hall, Where Is My Flying Car? (San Francisco: Stripe Press, 2021).
Derek Turner and Lauren Hartzell Nichols, “The Lack of Clarity in the Precautionary Principle,” Environmental Values, Vol. 13, No. 4 (2004): 449.
William Rinehart, “Vetocracy, the Costs of Vetos and Inaction,” Center for Growth & Opportunity at Utah State University, March 24, 2022, https://www.thecgo.org/benchmark/vetocracy-the-costs-of-vetos-and-inaction; Adam Thierer, “Red Tape Reform is the Key to Building Again,” The Hill, April 28, 2022, https://thehill.com/opinion/finance/3....
Philip K. Howard, “Radically Simplify Law,” Cato Institute, Cato Online Forum, http://www.cato.org/publications/cato....
Ibid.
Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Publishers, 1989): 38.
Thierer, Permissionless Innovation, at 2.
Gabrielle Bauer, “Danger: Caution Ahead,” The New Atlantis, February 4, 2022, https://www.thenewatlantis.com/publications/danger-caution-ahead.
Richard B. Belzer, “Risk Assessment, Safety Assessment, and the Estimation of Regulatory Benefits” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, 2012), 5, http://mercatus.org/publication/risk-assessment-safety-assessment-and-estimation-regulatory-benefits; John D. Graham and Jonathan Baert Wiener, eds. Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, (Cambridge, MA: Harvard University Press, 1995).
Thierer, Permissionless Innovation, at 33-8.
Adam Satariano, Nick Cumming-Bruce and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/wo....
Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications....
Henry Petroski, The Evolution of Useful Things (New York: Vintage Books, 1994): 34.
Ibid., 27.
Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 9.
James Lawson, These Are the Droids You’re Looking For: An Optimistic Vision for Artificial Intelligence, Automation and the Future of Work (London: Adam Smith Institute, 2020): 86, https://www.adamsmith.org/research/th....
Max More, “The Proactionary Principle (March 2008),” Max More’s Strategic Philosophy, March 28, 2008, http://strategicphilosophy.blogspot.c....
Daniel Castro and Michael McLaughlin, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” Information Technology and Innovation Foundation, February 4, 2019, https://itif.org/publications/2019/02....
Thierer, Permissionless Innovation.
Thierer, “Failing Better.”
Virginia Postrel, The Future and Its Enemies (New York: The Free Press, 1998): xiv.
Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 62.
Kevin Ashton, How to Fly a Horse: The Secret History of Creation, Invention, and Discovery (New York: Doubleday, 2015): 67.
Megan McArdle, The Up Side of Down: Why Failing Well is the Key to Success (New York: Viking, 2014), 214.
F. A. Hayek, The Constitution of Liberty (London: Routledge, 1960, 1990): 81. (“Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”)
Stefan H. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business Review Press, 2003), 1.
Daniel Castro and Alan McQuinn, “How and When Regulators Should Intervene,” Information Technology and Innovation Foundation Reports (February 2015): 2, http://www.itif.org/publications/how-....
Ibid.
Kevin Kelly, “The Pro-Actionary Principle,” The Technium, November 11, 2008, https://kk.org/thetechnium/the-pro-ac....
World Economic Forum, Agile Regulation for the Fourth Industrial Revolution (Geneva, Switzerland: 2020): 4, https://www.weforum.org/projects/agil....
Jordan Reimschisel and Adam Thierer, “’Build & Freeze’ Regulation Versus Iterative Innovation,” Plain Text, November 1, 2017, https://readplaintext.com/build-freez....
Adam Thierer, “Spring Cleaning for the Regulatory State,” AIER, May 23, 2019, https://www.aier.org/article/spring-c....
Daniel Byler, Beth Flores & Jason Lewris, “Using Advanced Analytics to Drive Regulatory Reform: Understanding Presidential Orders on Regulation Reform,” Deloitte, 2017, https://www2.deloitte.com/us/en/pages/public-sector/articles/advanced-analytics-federal-regulatory-reform.html.
Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-kno....
Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W.W. Norton & Company, 2020).
Joshua D. Greene, “Our Driverless Dilemma,” Science (June 2016): 1515.
Susan Leigh Anderson, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics,” AI and Society, Vol. 22, No. 4, (2008): 477-493.
Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019): 126 [Kindle edition].
Thomas A. Hemphill, “The Innovation Governance Dilemma: Alternatives to the Precautionary Principle,” Technology in Society, Vol. 63 (2020): 6, https://ideas.repec.org/a/eee/teinso/....
Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12....
The National Institute of Standards and Technology, “AI Risk Management Framework: Initial Draft,” (March 17, 2022): 1, https://www.nist.gov/itl/ai-risk-mana....
Ibid., at 5.
May 25, 2022
Event Notice: “2022 Tech and Innovation Summit”
Just FYI, the James Madison Institute will be hosting its “2022 Tech and Innovation Summit” on Thursday, September 15 and Friday, September 16 in Coral Gables, Florida. I’m honored to be included among the roster of speakers announced so far, which includes:
Ajit Pai, Former Chairman of the Federal Communications Commission
Adam Thierer, the Mercatus Center at George Mason University
Will Duffield, Cato Institute
Utah State Representative Cory Maloy
Dane Ishihara, Director of Utah’s Office of Regulatory Relief

Registration info is here.
May 23, 2022
Podcast: Why Ban Direct Electric Vehicle Sales?
Why is it illegal in many states to purchase an electric vehicle directly from a manufacturer? In this new Federalist Society podcast, Univ. of Michigan law school professor Daniel Crane and I examine how state protectionist barriers block choice and innovation for no good reason whatsoever. The only group that benefits from these protectionist, anti-consumer direct-sales bans is local car dealers who don’t want the competition.
Additional Reading:
Daniel A. Crane, “Reforming Michigan Vehicle Direct Sales Laws,” Regulation, Summer 2021.
Adam Thierer, “Why make direct car-buying illegal?” Tribune News Service, April 18, 2022.
Adam Thierer & Christopher M. Kaiser, “The Contradictions and Confusion of Getting Americans To Buy Electric Cars,” Discourse, March 10, 2022.

May 10, 2022
Podcast: Remember FAANG?
Corbin Barthold invited me on TechFreedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including the DOJ’s mega cases against IBM and AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!
What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?
Corbin and I also talked about progress more generally and the troubling rise of Luddite thinking on both the left and the right. I encourage you to give it a listen.