Adam Thierer's Blog

June 16, 2023

New Report: Do We Need Global Government to Address AI Risk?

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pages). My report rejects extremist thinking about AI arms control and stresses that the “realpolitik” of international AI governance means these problems cannot, and must not, be addressed through silver-bullet gimmicks or grandiose global regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have begun to exert real-world influence, with extreme regulatory proposals now being floated. My report also takes a deep dive into the debate over a proposed global ban on “killer robots” and looks at how past treaties and arms control efforts might apply, and what they can teach us about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to other risks.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can also advance AI safety; the final third of my study is devoted to them. Continuous communication, coordination and cooperation among countries, developers, professional bodies and other stakeholders will be essential.

My new report concludes with a plea to reject fatalism and fanaticism when discussing global AI risks. It’s worth recalling what Bertrand Russell said in 1951 about how only global government could save humanity. He predicted “[t]he end of human life, perhaps of all life on our planet,” before the end of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” He was very wrong, of course, and thank God he did not get his wish, because an effort to unite the world under one global government would have entailed different existential risks that he never seriously considered. We need to reject extremist global government solutions as the basis for controlling technological risk.

Three quick notes.

First, this new report is the third in a trilogy of major R Street Institute studies on bottom-up, polycentric AI governance. If you only read one, make it this: “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” 

Second, I wrapped up this latest report a few months ago, before Microsoft and OpenAI floated new comprehensive AI regulatory proposals. So, for an important follow-up to this report, please read: “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.”

Finally, if you’d like to hear me discuss many of the findings from these new reports and essays at greater length, check out my recent appearance on TechFreedom’s “Tech Policy Podcast,” with Corbin K. Barthold. We do a deep dive on all these AI governance trends and regulatory proposals.

As always, all my writing on AI, ML and robotics can be found here, and my most recent pieces are listed below.

Additional Reading:

INTERVIEW: “5 Quick Questions for AI policy analyst Adam Thierer,” interview for the Faster, Please! newsletter with James Pethokoukis, June 12, 2024.
PODCAST: “Who’s Afraid of Artificial Intelligence?” TechFreedom Tech Policy Podcast, June 12, 2023.
FILING: Comments of Adam Thierer, R Street Institute, to the National Telecommunications and Information Administration (NTIA) on “AI Accountability Policy,” June 12, 2023.
PODCAST: Adam Thierer, “Artificial Intelligence For Dummies,” She Thinks (Independent Women’s Forum) podcast, June 9, 2023.
EVENT: “Does the US Need a New AI Regulator?” Center for Data Innovation & R Street Institute, June 6, 2023.
Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
PODCAST: Neil Chilson & Adam Thierer, “The Future of AI Regulation: Examining Risks and Rewards,” Federalist Society Regulatory Transparency Project podcast, May 26, 2023.
Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.
PODCAST: “Should We Regulate AI?” Adam Thierer and Matthew Lesh discuss artificial intelligence policy on the Institute of Economic Affairs podcast, May 6, 2023.
Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).
Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
Adam Thierer, Brent Orrell & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “Statement for the Record on ‘Artificial Intelligence: Risks and Opportunities,’” U.S. Senate Homeland Security and Governmental Affairs Committee, March 8, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
Adam Thierer, “We Really Need To ‘Have a Conversation’ About AI … or Do We?” Discourse, October 6, 2022.

June 12, 2023

Podcast: “Who’s Afraid of Artificial Intelligence?”

This week, I appeared on the TechFreedom Tech Policy Podcast to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI. Here’s a summary of what host Corbin Barthold and I discussed:

1. The “little miracles happening every day” thanks to AI

2. Is AI a “born free” technology?

3. Potential anti-competitive effects of AI regulation

4. The flurry of joint letters

5. The political realities of a new AI regulatory agency

6. The EU’s Precautionary Principle tech policy disaster

7. The looming “war on computation” & open source

8. The role of common law for AI

9. Is Sam Altman breaking the very laws he proposes?

10. Do we need an IAEA for AI or an “AI Island”?

11. Nick Bostrom’s global control & surveillance model

12. Why “doom porn” dominates in academic circles

13. Will AI take all the jobs?

14. Smart regulation of algorithmic technology

15. How the “pacing problem” is sometimes the “pacing benefit”

 


Podcast: “Artificial Intelligence for Dummies”

It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

Reminder: you can find all my relevant past work on these issues via my “Running List of My Research on AI, ML & Robotics Policy.”


June 7, 2023

Event Video: “Does the US Need a New AI Regulator?”

Here’s the video from the June 6 event, “Does the US Need a New AI Regulator?,” co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.

Additional Reading:

Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
PODCAST: Neil Chilson & Adam Thierer, “The Future of AI Regulation: Examining Risks and Rewards,” Federalist Society Regulatory Transparency Project podcast, May 26, 2023.
Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.
PODCAST: “Should We Regulate AI?” Adam Thierer and Matthew Lesh discuss artificial intelligence policy on the Institute of Economic Affairs podcast, May 6, 2023.
Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).
Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
Adam Thierer, Brent Orrell & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “Statement for the Record on ‘Artificial Intelligence: Risks and Opportunities,’” U.S. Senate Homeland Security and Governmental Affairs Committee, March 8, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.

May 8, 2023

Podcast: “Should We Regulate AI?”

It was my pleasure to recently join Matthew Lesh, Director of Public Policy and Communications for the London-based Institute of Economic Affairs (IEA), for the IEA podcast discussion, “Should We Regulate AI?” In our wide-ranging 30-minute conversation, we discuss how artificial intelligence policy is playing out across nations, and I explain why I feel the UK has positioned itself smartly relative to the US and EU on AI policy. I argue that the UK approach encourages a better ‘innovation culture’ than the new US model being formulated by the Biden administration.

We also went through some of the many concerns driving calls to regulate AI today, including fears about job dislocations, privacy and security issues, national security and existential risks, and much more.

Additional Reading:

Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).
Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
Adam Thierer, Brent Orrell & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
Adam Thierer, “We Really Need To ‘Have a Conversation’ About AI … or Do We?” Discourse, October 6, 2022.

May 7, 2023

Finally: Clearer FAA Guidance on State and Local Airspace Restrictions

I stumbled across a surprising drone policy update in the FAA’s Aeronautical Information Manual (Manual) last week. The Manual contains official guidance and best practices for US airspace users. (My friend Marc Scribner reminds me that the Manual is not formally regulatory, though it often restates or summarizes regulations.) The Manual has an apparently new section, “Airspace Access for UAS.” In the subsection “Airspace Restrictions To Flight” (11-4-6), it notes:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

When I shared this provision with aviation and drone experts, each agreed it was new and surprising policy guidance. The drone provisions appear to have been part of updates made on April 20, 2023. In my view, it’s very welcome guidance.

Some background: In 2015, the FAA released a helpful “fact sheet” for state and local officials about drone regulations, as state legislatures began regulating drone operations in earnest. The fact sheet identified several drone-related areas, including aviation safety, where federal aviation rules are extensive. The agency also noted:

Laws traditionally related to state and local police power – including land use, zoning, privacy,
trespass, and law enforcement operations – generally are not subject to federal regulation.

To ensure state and federal drone laws were not in conflict, the FAA recommended that state and local officials consult with the FAA before creating “operational UAS restrictions on flight altitude, flight paths; operational bans; any regulation of the navigable airspace.”

That guidance is still current and still useful. Around 2017, however, some within the FAA began publicly and privately taking a rather harder line on state and local rules about drone operations. I’ve heard, for instance, from several state aviation officials that the FAA employees they spoke to seemed to oppose any state and local rules affecting drone operations (like banning drones at low altitudes above prisons, schools and critical infrastructure). It didn’t help that in July 2018, someone at the FAA posted a confusing and brief new statement about state and local drone rules that is hard to reconcile with the 2015 guidance.

Others noticed the change at the FAA and reported to Congress on the legal uncertainty it created as companies sought to deploy and as states and cities sought reasonable operational rules to protect their residents. The USDOT Inspector General recently told Congress that, in 2018, a lead state participant in an FAA drone program requested clarification as to whether particular state drone laws conflicted with FAA rules. By the FAA’s own account, four years had passed and the “FAA has not yet provided an opinion in response to that request.” The GAO likewise told Congress a few years ago that an unsettled question has plagued the drone industry and state lawmakers for years: Can states enforce local restrictions on surface airspace? GAO reported that the federal government had not taken a formal position on whether local restrictions are enforceable.

Finally, the FAA has made it clear: yes, in some circumstances.

Unfortunately, the drone industry and aviation regulators nationwide have lost several years (and many companies) waiting for a clear federal position.

The answer was plain to legal scholars years ago. Around 2016, when I was new to drone and aviation policy, I set out to write a policy research paper on the need for the FAA to unilaterally create clear and uniform federal rules for the low-altitude airspace that small drones use (“surface airspace”). I ran into a problem with my thesis: surface airspace policy is not a straightforward exercise of federal regulation. Analysis by legal scholars like Prof. Troy Rule (ASU Law), Prof. Laura Donohue (Georgetown Law) and Prof. Henry Smith (Harvard Law) convinced me that any federal aviation rules purporting to authorize drone flights into surface airspace (say, below 200 feet altitude or so) would run into a buzzsaw of legal challenges from state governments and landowners concerning state authority, trespass and private property takings.

That’s because it is black-letter law that “real property” in the US has a three-dimensional aspect that includes surface airspace. Further, landowners’ property rights and entitlements are typically determined by common law and state law, not by federal aviation officials.

With my original thesis scrapped, my paper went in a new direction. My research on drone policy took me through the history of surface airspace propertization, back to 19th-century Anglo-American legal treatises and court decisions, which I explored in a working paper published by the Mercatus Center in 2020 (and edited and republished by the Akron Law Review). To accelerate commercial drone deployments nationwide, I proposed a “cooperative federalism” approach, not an FAA-only approach, to permitting drone operations in surface airspace.

Many drone advocates, even recently, assert that state and local regulators can’t restrict surface airspace. Some incorrectly claim, among other things, that only the FAA can regulate airspace and that state and local airspace rules are subject to “field preemption.” Courts have ruled against drone advocates in the three cases I’m aware of where field preemption was raised: Singer v. City of Newton, NPPA v. McCraw, and Xizmo v. New York City. As the court said in Singer:

the FAA explicitly contemplates state or local regulation of pilotless aircraft, defeating Singer’s argument that the whole field is exclusive to the federal government.

So: courts have been clear about this, legal scholars have been clear about this, and now, finally, the FAA has been clear about this in the updated Manual: “Some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.” 

With that long-awaited clear statement in April 2023, the major stakeholders (including the FAA, state aviation offices, the drone industry and local officials) can begin the hard work of building world-class commercial drone operations nationwide while protecting the property and privacy expectations of residents.


April 20, 2023

My Latest Study on AI Governance

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: Is it possible to address AI alignment without making the Precautionary Principle the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics by design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates that would hinder algorithmic innovation, especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated, as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can help formalize the twin objectives of “ethics by design” and keeping “humans in the loop,” the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments are done right, to ensure they do not become an overbearing, compliance-heavy and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that already govern artificial intelligence and robotics, as well as the wide variety of court-based common law remedies that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”

______________

Additional Reading:

Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
Adam Thierer, Brent Orrell & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
Adam Thierer, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead,” Medium, September 10, 2022.
Adam Thierer, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics,” Discourse, July 26, 2022.

April 7, 2023

On “Pausing” AI

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a six-month “pause” on the research and deployment of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough; he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any technopanic I’ve covered in my 31 years in the field of tech policy, and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from the Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.” They argue:

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors rightly continue:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In that new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential if we are to enjoy the many benefits that algorithmic systems offer while staying ahead in the global race for competitive advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into AI impact assessments and auditing (late spring/early summer).

Relatedly, on April 7, DeepLearning.AI held an event, “Why a 6-Month AI Pause is a Bad Idea,” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially one in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to halt the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
Adam Thierer, “We Really Need To ‘Have a Conversation’ About AI … or Do We?” Discourse, October 6, 2022.
Adam Thierer, “How the Embedding of AI Ethics Works in Practice & How It Can Be Improved,” Medium, September 22, 2022.
Adam Thierer, “No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help,” Medium, September 13, 2022.
Adam Thierer, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead,” Medium, September 10, 2022.
Adam Thierer, “‘Running Code and Rough Consensus’ for AI: Polycentric Governance in the Algorithmic Age,” Medium, September 1, 2022.
Adam Thierer, “AI Governance ‘on the Ground’ vs. ‘on the Books,’” Medium, August 19, 2022.
Adam Thierer, “Why the Future of AI Will Not Be Invented in Europe,” Technology Liberation Front, August 1, 2022.
Adam Thierer, “Existential Risks & Global Governance Issues around AI & Robotics” [draft chapter, July 2022].
Adam Thierer, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics,” Discourse, July 26, 2022.
Adam Thierer, “Why is the US Following the EU’s Lead on Artificial Intelligence Regulation?” The Hill, July 21, 2022.
Adam Thierer, “Algorithmic Auditing and AI Impact Assessments: The Need for Balance,” Medium, July 13, 2022.
Adam Thierer, “The Proper Governance Default for AI,” Medium, May 26, 2022.
Adam Thierer, “What I Learned about the Power of AI at the Cleveland Clinic,” Medium, May 6, 2022.

 

 


April 2, 2023

What Policy Vision for Artificial Intelligence?

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first in a trilogy of major reports on the policy vision and governance principles that should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent as we just made it through a week in which a major open letter was issued calling for a six-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if that risked an exchange of nuclear weapons! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits and, while the risks are real, we can find better ways of addressing them. As I summarize:

The danger exists that policy for algorithmic systems could be formulated in such a way that innovations are treated as guilty until proven innocent—i.e., a precautionary principle approach to policy—resulting in many important AI applications never getting off the drawing board. If regulatory impediments block or slow the creation of life-enriching, and even life-saving, AI innovations, that would leave society less well-off and give rise to different types of societal risks.

I argue that it is essential we not trap AI in an “innovation cage” by establishing the wrong policy default for algorithmic governance but instead work through challenges as they come at us. The right policy default for the internet and for AI continues to be “innovation allowed.” But AI risks do require serious governance steps. Luckily, many tools exist and others are being created. While my next major report (due out April 20th) offers far more detail, this paper sketches out some of those mechanisms. 

The goal of algorithmic policy should be for policymakers and innovators to work together to find flexible, iterative, agile, bottom-up governance solutions over time. We can promote a culture of responsibility among leading AI innovators and balance safety and innovation for complex, rapidly evolving computational and computing technologies like AI. This approach is buttressed by existing laws and regulations, as well as common law and the courts.

The new Biden administration “AI Bill of Rights” unfortunately represents a fear-based model of technology policymaking that breaks from the superior Clinton-era framework for the internet and digital technology. Our nation’s policy toward AI, robotics and algorithmic innovation should instead embrace a dynamic future and the enormous possibilities that await us.

Please check out my new paper for more details. Much more to come. You can also check out my running list of research on AI, ML & robotics policy.

[Figure: Spectrum of Technological Governance Options]

March 11, 2023

Why Isn’t Everyone Already Unemployed Due to Automation?

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:


To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbate these anxieties.


Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.


I do discuss some things that government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand the mistakes we’ve made with past retraining and reskilling efforts. The easiest way to help in the short term, I argue, is to clear away barriers to labor mobility and economic dynamism. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML & robotics policy.

