Analysis Example: Moralizing Large Language Models

I wrote this one quickly, just after the White House released its executive order regulating AI, and before our policy class on Wednesday. I wanted to reflect on the moral ideas in the executive order, but ended up writing a mishmash of AI history, regulatory criticism, and moral grandstanding. I think that to polish it, I would probably read more thoroughly about the history of technology regulation, have several people read it to develop its ideas further, and then submit. For now, its weak use of evidence and somewhat disjointed argumentation put it at around a 3.3 for me.

Moralizing Large Language Models

Amy J. Ko

Artificial Intelligence has meant many things in the history of computing. In the beginning, it was hypothetical, an imagined capability that no one had yet invented. And then it was theoretical: a Turing machine that reliably and deterministically manipulated symbols. And then it was real: computer programs that we could write, that computers could run independently, and that could do the work of millions with minimal intelligence. This kind of "AI" was literal: it was the intelligence we used to pay people to do, and that we successfully created machines to do instead.

Of course, AI evolved to mean other things. For a while, it meant playing and winning well-scoped games like poker, chess, and checkers. Then it meant having little agents run around virtual worlds of combat, making reasonable strategic choices about a battle. And then it meant synthesized speech, or robotic arms, or airline voice reservation assistants. As the definition of AI keeps evolving to reflect whatever we cannot yet explain, we keep chasing different notions of intelligence, hoping that one of these times, we won't be able to explain it. And yet over and over, AI becomes so mundane that we just call it software.

This "AI effect", broadly acknowledged by AI researchers as a cynical account of the field's slippery definitions of intelligence [1], are in full effect right now with the AI known as "large language models." We simultaneously have immense marketing fueled hype machines telling us that LLMs are sentient, that it has real thoughts, and that it's going to save all of us. And at the same time, we have quickly accepted that our email clients will happily write our emails for us, and that these emails will look reasonable, but actually be quite absurd in their attempt to provide meaningful replies.

This particular moment of AI hype, however, is raising unprecedented policy questions about how to regulate it. Past AI capabilities merely reinforced poverty and killed people [2]; LLMs, and the broader technology of deep neural networks trained on large data sets, however, can pretend to be people: their words, their voices, their faces, and their ideas. And they can do it in ways that do not reveal that they are AIs, that give no credit to the billions of people who created the content on which they were trained, and that give no compensation to those people from the profits they derive. It's not entirely clear why indirect, bureaucratic oppression was not sufficient motivation to regulate predictive technologies, but mimicry is. Perhaps mimicry fuels our fear of being replaced more than it does our fear of being denied our rights due to dysfunctional, inhumane IT processes.

What exactly does this time of AI regulation call for, then? The Biden White House just released an incoherent grab bag of ideas on Monday, October 30th, 2023 [3]:

  • Require AI developers to conduct third-party safety tests and share their results with the U.S. government
  • Develop standards and tools to ensure AI is safe, secure, and trustworthy
  • Mitigate risks of AI's use in engineering dangerous biological material
  • Protect Americans from AI fraud
  • Develop AI tools to find cybersecurity defects
  • Address algorithmic discrimination through training
  • Develop best practices in the criminal justice system
  • Catalyze AI research
  • Expand legal immigration of AI professionals

What I see in this list are two competing goals: 1) protect the public, and 2) ensure the U.S. is leading in the development of AI that protects its interests. I understand why the White House wants to thread that needle: it wants to protect the economic opportunities of LLMs and find positive uses of them, while also regulating their negative uses. And yet, expanding the development of LLM-based AIs will surely come with unintended harms and consequences. And so the list reads to me like "we're gonna do this, but don't worry, we'll keep the mess to a minimum."

What I don't see in this list is a clear underlying moral principle that guides these regulatory ideas. Well, perhaps there is one: faith in the moral righteousness of regulatory capitalism. But that is not so much a moral principle as it is an economic theory tied to utilitarianism. The kind of moral principle I'm looking for is something like "people are more important than profit." That kind of principle would have different regulatory implications, such as:

  • No LLM is approved for use in the United States unless there is clear evidence from a limited trial that it improves the health, wellbeing, and safety of humanity globally. (As we do with pharmaceuticals).
  • LLMs that spread lies about people and history for profit are illegal. (We don't let people do this without consequences; why should we let machines?)
  • Enterprises using LLMs for profit must share their profits with everyone who contributed intellectual property to their training data. (i.e., everyone who has ever posted anything on the internet).

These regulations would prioritize people over profit. And it's clear why they do not appear in the White House's list: they would slow down innovation, reduce profits, and make most applications of LLMs illegal.

This won't be the last time we have to make regulatory decisions about AI. AI researchers started work on LLMs 40 years ago; we already have a good guess about which AI research, started 30 years ago, will mature in the next 10 years and require more conversations. And even more predictably, we know what these conversations will be about, because AI has always been about replacing people — usually with the goal of helping a narrow subset of humanity at the expense of everyone else. So the next time we engage in a debate about AI regulation, let us remember what we are actually talking about: who gets to prosper in society, and who does not. And let us remember that that conversation is not primarily a political or regulatory one, but a moral one.

[1] Kahn, Jennifer (March 2002). "It's Alive". Wired, vol. 10, no. 3.

[2] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.

[3] The White House (October 30, 2023). "Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence." https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence