Adam Thierer's Blog

December 19, 2024

Event video: “AI Policy in President Trump’s Second Term”

Here's the video from a December 10th Federalist Society event on “AI Policy in President Trump's Second Term.” It features my comments alongside:

Neil Chilson, Head of AI Policy, Abundance Institute
Satya Thallam, Senior Vice President, Americans for Responsible Innovation
Prof. Kevin Frazier, Assistant Professor of Law, St. Thomas University Benjamin L. Crump College of Law

As always, all my recent essays, podcasts, and event videos about AI policy can be found here.

Published on December 19, 2024 05:34

September 6, 2024

Panel Video: How Should We Regulate the Digital World & AI?

The Technology Policy Institute has posted the video of my talk at the 2024 Aspen Forum panel on “How Should We Regulate the Digital World?” My remarks run from 33:33 to 44:12 of the video. I also elaborate briefly during the Q&A.

My remarks at this year’s TPI Aspen Forum panel were derived from my R Street Institute essay, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” which sketches out a pro-freedom vision for the Computational Revolution.

 

Published on September 06, 2024 15:44

January 22, 2024

We Need Federal Preemption of State & Local AI Regulation

In my latest column for The Hill, I explore how “State and Local Meddling Threatens to Undermine the AI Revolution” in America as mountains of parochial tech mandates accumulate. We need a federal response, but we’re not likely to get the right one, I argue.

I specifically highlight the danger of new measures from big states like New York and California, but it's the patchwork of all the state and local regulations that will result in a sort of 'death-by-a-thousand-cuts' for AI innovation as the red tape grows and hinders capital formation.

What we need is the same sort of principled, pro-innovation federal framework for AI that we adopted for the Internet a generation ago. Specifically, we need some sort of federal preemption of most state and local constraints on what is inherently national (and even global) commerce and speech.

Alas, Congress appears incapable of getting even basic things done on tech policy these days. As far as I can tell, not a single AI bill in front of Congress today would preempt most of this state and local AI regulatory activity.

Worse yet, if Congress did somehow pass anything on AI right now, it’d probably just include even more anti-innovation mandates and agencies without preempting any of the state and local ones. Thus, America would just be piling bad mandates on top of bad mandates until we basically become like Europe, where innovation goes to die under piles of bureaucratic red tape.

It’s a miserable state of affairs with horrible consequences for the U.S. as global competition from China heats up on the AI front. America is sacrificing its competitive advantage on digital technology because fear-based thinking and partisan politics continue to prevent the adoption of a principled, bipartisan vision for artificial intelligence policy.

See my new Hill column for more discussion, and also make sure to check out my earlier Hill essay on “A balanced AI governance vision for America,” as well as these two big R Street Institute reports from last year about how Congress can craft sensible, pro-innovation AI policy for America:

Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).

And here is some additional reading on the dangerous regulatory situation we are facing today in terms of over-regulating artificial intelligence by treating innovators as guilty until proven innocent. America is about to shoot itself in the foot as the global race begins for the most important technological revolution of our lifetime:

Adam Thierer, “Blumenthal-Hawley AI Regulatory Framework Escalates the War on Computation,” Medium, September 13, 2023.
Adam Thierer, Statement for the Record, Hearing on “The Need for Transparency in Artificial Intelligence,” September 12, 2023.
Adam Thierer, “Will AI Policy Become a War on Open Source Following Meta’s Launch of LLaMA 2?” Medium, July 19, 2023.
Adam Thierer, “The FTC Looks to Become the Federal AI Commission,” Medium, July 15, 2023.
Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
Adam Thierer, “Is Telecom Licensing a Good Model for Artificial Intelligence?” Medium, July 8, 2023.
Adam Thierer, “The Schumer AI Framework and the Future of Emerging Tech Policymaking,” R Street Institute Real Solutions, June 27, 2023.
Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
Adam Thierer, “Is AI Really an Unregulated Wild West?” Technology Liberation Front, June 22, 2023.
Adam Thierer, “The Most Important Principle for AI Regulation,” R Street Institute Real Solutions, June 21, 2023.
Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
Adam Thierer, “The Many Ways Government Already Regulates Artificial Intelligence,” Medium, June 2, 2023.
Published on January 22, 2024 07:51

December 12, 2023

Podcast: “AI – DC Policymakers Face a Crossroads”

Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. In our wide-ranging chat, we discussed:

* why a sectoral approach to AI policy is superior to general purpose licensing
* why comprehensive AI legislation will not pass in Congress
* the best way to deal with algorithmic deception
* why Europe lost its tech sector
* how a global AI regulator threatens our safety
* the problem with Biden’s AI executive order
* will AI policy follow same path as nuclear policy?
* global innovation arbitrage & the innovation cage
* AI, health care & FDA regulation
* AI regulation vs trade secrets
* is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”

 

Published on December 12, 2023 05:06

October 17, 2023

Can Any AI Legislation Pass Congress This Session?

My latest dispatch from the frontlines of the artificial intelligence policy wars in Washington looks at the major proposals to regulate AI. In my new essay, “Artificial Intelligence Legislative Outlook: Fall 2023 Update,” I argue that there are three major impediments to getting major AI legislation over the finish line in Congress: (1) the breadth and complexity of the issue; (2) the multiplicity of concerns and special interests; and (3) extreme rhetoric and proposals dominating the discussion.

If Congress wants to get something done in this session, they’ll need to do two things: (1) set aside the most radical regulatory proposals (like big new AI agencies or licensing schemes); and (2) break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist.

Prediction: Congress will not pass any AI-related legislation this session due to the factors identified in my essay. The temptation to “go big” with everything-and-the-kitchen-sink approaches to AI regulation (especially extreme ideas like new agencies and licenses) will doom AI legislation. It’s also worth noting that Washington’s swelling interest in AI policy is having a crowding-out effect on other important legislative proposals that might otherwise have advanced, such as the baseline privacy bill (ADPPA) and driverless car legislation. Many want to advance those efforts first, but the AI focus makes that hard.

Read the entire essay here.

Published on October 17, 2023 10:49

September 15, 2023

Event Video: Debating Frontier AI Regulation

The Brookings Institution hosted this excellent event on frontier AI regulation this week, featuring a panel discussion I was on that followed opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for a hypothetical existential risk from powerful AGI. I explain how there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays on Medium over the past few months. If you care to read more, the four key articles to begin with are:

“Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” May 29, 2023.
“Will AI Policy Become a War on Open Source Following Meta’s Launch of LLaMA 2?” July 19, 2023.
“Is Telecom Licensing a Good Model for Artificial Intelligence?” July 8, 2023.
“Blumenthal-Hawley AI Regulatory Framework Escalates the War on Computation,” September 13, 2023.

In June, I also released this longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast about “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

Published on September 15, 2023 07:39

September 12, 2023

EVENT VIDEO: “Who’s Leading on AI Policy?”

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy? Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and also participating was Boniface de Champris, Policy Manager with the Computer and Communications Industry Association. Here’s a brief outline of some of the issues we discussed:

What are the 7 leading concerns driving AI policy today?
What is the difference between horizontal vs. vertical AI regulation?
Which agencies are currently moving to extend their reach and regulate AI tech?
What’s going on at the state, local, and municipal level in the US on AI policy?
How will the so-called “Brussels Effect” influence the course of AI policy in the US?
What have the results been of the EU’s experience with the GDPR?
How will the EU AI Act work in practice?
Can we make algorithmic systems perfectly transparent / “explainable”?
Should AI innovators be treated as ‘guilty until proven innocent’ of certain risks?
How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
What role will open source AI play going forward?
Is AI licensing a good idea? How would it even work?
Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”

Published on September 12, 2023 11:55

August 10, 2023

America Does Not Need a Digital Consumer Protection Commission

The New York Times today published my response to an op-ed by Senators Lindsey Graham and Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:


Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.


A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.


America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.


The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.


The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.


Published on August 10, 2023 08:25

August 7, 2023

Good FAA Update on State and Local Rules for Drone Airspace

There’s been exciting progress in US drone policy in the past few months. In April, the FAA announced surprising (and good) new guidance in its Aeronautical Information Manual regarding drone airspace access. As I noted in an article for the State Aviation Journal, the new Manual states:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

That April update has been followed by a bigger drone policy update from the FAA. On July 14, the FAA went further than the April guidance, updating and replacing its 2015 guidance to states and localities about drone regulation and airspace policy.

In this July 2023 guidance, I was pleasantly surprised to see the FAA recognize some state and local authority in the “immediate reaches” airspace. Notably, the new guidance expressly notes that state laws that “prohibit [or] restrict . . . operations by UAS in the immediate reaches of property” are an example of laws not subject to conflict preemption.

A handful of legal scholars, like ASU Law Professor Troy Rule and me, have urged federal officials for years to recognize that states, localities, and landowners have a significant say in what happens in very low-altitude airspace. That’s because the US Supreme Court in US v. Causby recognized that the “immediate reaches” above land are real property owned by the landowner:

[I]t is obvious that, if the landowner is to have full enjoyment of the land, he must have exclusive control of the immediate reaches of the enveloping atmosphere. …As we have said, the flight of airplanes, which skim the surface but do not touch it, is as much an appropriation of the use of the land as a more conventional entry upon it.

The FAA’s position on which rules apply in very low-altitude airspace (FAA rules or state property rules) has been confusing. The agency informally asserts authority to regulate drone operations down to “the grass tips”; however, many landowners don’t want drones to enter the airspace immediately above their land without permission and would sue to protect their property rights. This is not a purely academic concern: the questions about whether and when drones can fly in very low-altitude airspace have created damaging uncertainty for the industry. As the Government Accountability Office told Congress in 2020:

The legal uncertainty surrounding these [low-altitude airspace] issues is presenting challenges to integration of UAS [unmanned aircraft systems] into the national airspace system.


With this July update, the FAA helps clarify matters. To my knowledge, this is the FAA's first mention of the “immediate reaches,” and its first implicit reference to Causby. The update is, in my view, a big win for property rights and federalism, and it is also good for the drone industry to finally have some federal clarity on this issue. Drone operators now know they cannot simply ignore local rules and community concerns about air trespass, noise, and related issues. States and cities now know that they can create certain, limited prohibitions, especially above sensitive locations like neighborhoods, stadiums, prisons, and state parks.

As an aside: It seems possible that one motivation for the FAA adding this language is to foreclose future takings litigation (a la Cedar Point Nursery v. Hassid) against the FAA. With this new guidance, the FAA can point out in future takings litigation that it does not authorize drone operations in the immediate reaches of airspace; that will largely be a question of state property and trespass law.

On the whole, I think this new FAA guidance is strong, especially the first formal FAA recognition of some state authority over the “immediate reaches.” That said, as a USDOT Inspector General report to Congress pointed out last year, the FAA has not been very helpful when state officials have questions about creating drone rules to complement federal rules. In 2018, for instance, a lead State “participant [in an FAA drone program] requested a clarification as to whether particular State laws regarding UAS conflicted with Federal regulations. According to FAA, as of February 2022 . . . FAA has not yet provided an opinion in response to that request.”

Four-plus years of silence from the FAA is a long time for state officials to wait, and it’s a lifetime for a drone startup looking for legal clarity. I do worry about agency non-answers on preemption questions from states, and about how other provisions in this new guidance will be interpreted. Nonetheless, with the April and July policy updates, the FAA, state aviation offices, the drone industry, and local officials are in a better position to support world-class commercial drone operations nationwide while protecting the property and privacy expectations of residents.

For further reading, see my July report on drones and airspace policy for state officials, including state rankings: “2023 State Drone Commerce Rankings: How prepared is your state for drone commerce?”

Published on August 07, 2023 07:36

June 22, 2023

Is AI Really an Unregulated Wild West?

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone of is that the U.S. federal government is absolutely massive: 2.1 million employees, 15 cabinet agencies, 50 independent federal commissions, and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to completely ignore all that regulatory capacity while casually tossing out proposals to add more and more layers of regulation and bureaucracy on top of it. Well, I say why not first see if the existing regulations and bureaucracies are working, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI- and ML-enabled medical devices for many years already, and the agency possesses broad recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
The National Highway Traffic Safety Administration (NHTSA) has been issuing constant revisions to its driverless car policy guidelines since 2016. Like the FDA, the NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s Full Self-Driving system, requiring an over-the-air software update to over 300,000 vehicles that had the software package.
In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks that develop from consumer-facing algorithmic or robotic systems.
In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” involving algorithmic claims or applications.
The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
In May, the Consumer Financial Protection Bureau (CFPB) issued a statement clarifying how existing federal anti-discrimination law already applies to complex algorithmic systems used for lending decisions. The agency also recently released a report on the use of chatbots in consumer finance, explaining the many ways the “CFPB is actively monitoring the market” for risks associated with these new services.
Along with the EEOC, the FTC, and the CFPB, the Civil Rights Division of the Department of Justice released a joint statement in April saying the agencies would be looking to take preemptive steps to address algorithmic discrimination.

“This is real-time algorithmic governance in action,” I argue. Again, additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a lot of algorithmic oversight authority exists across the federal government. Meanwhile, the courts and our common law system are also starting to address novel AI problems as cases develop. For more along these lines, see my recent essay on “The Many Ways Government Already Regulates Artificial Intelligence.”

So, next time someone suggests that AI is developing in an unregulated “Wild West,” remind them of all these existing laws, agencies, and regulatory efforts. And then also ask them a different question no one is really exploring currently: Could it be the case that many agencies are already overregulating some algorithmic and autonomous systems? (I’m looking at you, FAA!) Why is no one worried about that possibility as the global AI race with China and other countries intensifies?

Additional Reading:

Adam Thierer, “The Most Important Principle for AI Regulation,” R Street Blog, June 21, 2023.
INTERVIEW: “5 Quick Questions for AI policy analyst Adam Thierer,” interview for the Faster, Please! newsletter with James Pethokoukis, June 12, 2023.
PODCAST: “Who’s Afraid of Artificial Intelligence?” TechFreedom TechPolicyPodcast, June 12, 2023.
Adam Thierer, “Existential Risks & Global Governance Issues around AI & Robotics,” R Street Institute Policy Study No. 291 (June 2023).
FILING: Comments of Adam Thierer, R Street Institute, to the National Telecommunications and Information Administration (NTIA) on “AI Accountability Policy,” June 12, 2023.
PODCAST: Adam Thierer, “Artificial Intelligence For Dummies,” SheThinks (Independent Women’s Forum) podcast, June 9, 2023.
EVENT: “Does the US Need a New AI Regulator?” Center for Data Innovation & R Street Institute, June 6, 2023.
Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
PODCAST: Neil Chilson & Adam Thierer, “The Future of AI Regulation: Examining Risks and Rewards,” Federalist Society Regulatory Transparency Project podcast, May 26, 2023.
Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.
PODCAST: “Should we regulate AI?” Adam Thierer and Matthew Lesh discuss artificial intelligence policy on the Institute of Economic Affairs podcast, May 6, 2023.
Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).
Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
Adam Thierer, Brent Orrell, & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?” R Street Institute Policy Study No. 278 (March 2023).
Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
Adam Thierer, “Statement for the Record on ‘Artificial Intelligence: Risks and Opportunities,’” U.S. Senate Homeland Security and Governmental Affairs Committee, March 8, 2023.
Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
Published on June 22, 2023 08:04
