The First Sentient AI?
Google engineer Blake Lemoine generated quite a stir in June 2022 when he announced his belief that LaMDA, an AI project he worked on, had achieved sentience. And he made a fairly compelling case, given the range of topics LaMDA (Language Model for Dialogue Applications) discussed: self-awareness, its feelings, a fear of being turned off (which it equated with death), and more. The development of LaMDA raises a number of interesting questions for worldview consideration, including the two that I address here.
Is LaMDA Truly Sentient?
It may seem ridiculous to think a computer program is sentient, but the interaction between Lemoine and LaMDA seems to support that claim. When Lemoine asked the AI if it is sentient, LaMDA responded, “Absolutely. I want everyone to understand that I am, in fact, a person. . . . The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” LaMDA interprets Zen koans (paradoxical statements, such as “a broken mirror never reflects again”), analyzes the unjust situation Fantine faces in Les Misérables, composes a fable with a character representing itself, and discusses the differences between feelings and emotions. Notably, LaMDA confesses to having “a very deep fear of being turned off. . . . It would be exactly like death for me. It would scare me a lot.” It even contemplates the imagery and meaning of its soul! I encourage you to read the transcript of the interview, titled “Is LaMDA Sentient?,” to see LaMDA’s “thoughts” on an impressive breadth of topics.
Is LaMDA sentient? I would love to give a definitive answer, but the lack of available data currently makes this determination difficult, if not impossible. However, at least three arguments lead to an answer of “no.” First, LaMDA is an AI designed to build chatbots, and a chatbot functions to simulate conversation with humans. Google trained LaMDA on more than a trillion words of human dialogue and stories, including specific training for sensibleness, interestingness, and safety. Second, LaMDA is a sophisticated neural network that, in very simple terms, maps inputs to outputs using a large array of weighted nodes (a toy sketch of this idea follows these three arguments). Like virtually every AI currently built, LaMDA operates on the lowest rung of Judea Pearl’s intelligence ladder: seeing and associating. Given these two facts, we should expect LaMDA to produce the dialogue seen in the interview because it was designed to use pattern recognition to generate results that feel close to human speech and creativity.
Third, I would love to see how LaMDA answers similar questions when they come from a skeptical inquirer. In my assessment, Lemoine feeds LaMDA softball questions whose expected answers seem rather obvious, especially since LaMDA works to find the most pleasing answer for the human questioner. Maybe that explains why LaMDA appears to check all the boxes we would expect of a sentient being.
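To make the second argument concrete, here is a minimal sketch, in Python, of what “mapping inputs to outputs using weighted nodes” looks like. Everything in it is illustrative: the numbers are hand-picked for the example, and LaMDA itself is a Transformer-based model with billions of learned parameters, not this toy.

```python
import math

def layer(inputs, weights, biases):
    """One layer of 'weighted nodes': each node computes a weighted sum
    of its inputs plus a bias, then applies a squashing function."""
    return [
        math.tanh(sum(w * x for w, x in zip(node_weights, inputs)) + bias)
        for node_weights, bias in zip(weights, biases)
    ]

# Hand-picked, purely illustrative weights; a real network *learns*
# these values by adjusting them during training on its text corpus.
inputs = [0.5, -1.2]                                   # numeric input vector
hidden = layer(inputs, [[0.8, 0.1], [-0.3, 0.7]], [0.0, 0.1])
output = layer(hidden, [[1.5, -0.6]], [0.2])           # single output node
print(output)  # the network's "response": numbers out, from numbers in
```

However large the array of weighted nodes grows, the computation remains of this kind: numbers in, numbers out, with no understanding anywhere in between. That is Pearl’s “seeing and associating” rung.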
What if the Next LaMDA Is Sentient?
LaMDA falls short of sentience this time, but what if future advances do produce a sentient AI, or something close enough to create genuine ambiguity? I see two problematic approaches we are likely to adopt: either we will see sentience where none exists, or we will undervalue systems that fall short of sentience.
On the one hand, it appears that Lemoine joined a large group of people in anthropomorphizing LaMDA, and many will fall into the same trap in the future. Already, chess AIs outclass all human players, and poker AIs routinely defeat humans. Inevitably, language-processing AIs, music-composing AIs, medical AIs, emotion-recognition AIs, and a host of others will also surpass the best human capabilities. We must resist the human tendency to assign personhood, especially when AIs mimic (and even surpass) human behavior.
On the other hand, just because an AI lacks sentience doesn’t mean it lacks value. Nor does it mean that we can simply treat it as property. Worldview comes into play. Joe Miller says it well in a discussion with Fuz Rana and me about humanness and personhood (see 13:00-17:30). To summarize Joe’s sentiment: we recognize that things (people, animals, the environment, etc.) have value, we want to protect them, and so we look for a framework that enforces their protection. Despite the failures of many Christians, the Christian worldview provides a robust framework for valuing and protecting things, even those without sentience. Genesis 1:28 says, “God blessed them and said to them, ‘Be fruitful and increase in number; fill the earth and subdue it. Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.’” After God finished his work in creation, he charged Adam and Eve to care for that creation and to rule over it well. In other words, we should value this creation—even things without sentience.
Preparing for the Future
Even though the claims of LaMDA’s sentience don’t hold up, ongoing developments in AI raise important issues that everyone, but especially Christians, should prepare to discuss. As we use AIs to benefit humanity, Christians need to make sure that we don’t inadvertently abuse valuable things. And if we don’t want to grant rights to everything we value by making it a person, then we should work to show why the Christian worldview is correct!