Aliens, AI, and God: The Search for Superintelligence
Modern culture is enamored with the idea of an alien intelligence that exceeds human mental capacities the same way that a human exceeds the mental capacities of a gnat. Such an intelligence may be biological, having evolved in another galaxy far, far away, or it may be artificial, the product of human ingenuity in the field of artificial intelligence. But however it comes to be, superintelligence both excites and terrifies us.
Stephen Hawking warned us to stop sending signals out into space because any alien species that can reach us will be far more technologically advanced than we are, and, as our own history teaches us, such encounters rarely work out well for the natives.1 And in his book Superintelligence, University of Oxford philosopher Nick Bostrom warns that a superintelligent AI may annihilate humanity in the process of becoming the dominant form of life on earth.2 But why, with so much talk of superintelligent AIs and aliens, is the idea of a superintelligent God alien to the modern imagination? In the search for superintelligence, what if we’re looking in all the wrong places?
In this article, I discuss what superintelligence means, why there is so much buzz around it, whether the buzz is justified, and how superintelligence may open doors to share about a God whose ways are higher than our own, even as the heavens are higher than the earth (Isaiah 55:9).
What Is Superintelligence?
In his book, Bostrom defines three types of superintelligence—speed intelligence (the same quality as human intelligence but faster), collective intelligence (many minds working together—think the Borg from Star Trek), and quality intelligence (as humans are smarter than animals, so this intelligence is above that of humans). If we develop an AI as smart as humans, it is likely to gain an automatic speed advantage over us, given that an electrical signal in the brain moves at roughly 1/100,000th the speed of a signal in a silicon chip. At this point in history, however, even artificial general intelligence (AGI)—an AI with human-equivalent intelligence—is still science fiction. In fact, even with all of our advances in the field of AI, we still struggle to devise a test that determines whether a machine is truly as intelligent as a human.
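To make the speed advantage concrete, here is a back-of-the-envelope calculation. The 100,000x ratio is the figure cited above; the "subjective year" framing is merely an illustration:

```python
# Back-of-the-envelope: if an AI matched human-quality thought but ran
# 100,000x faster (the speed ratio cited above), how much wall-clock time
# would correspond to one "subjective year" of thinking?
speedup = 100_000
minutes_per_year = 365 * 24 * 60      # 525,600 minutes in a year
wall_clock = minutes_per_year / speedup
print(round(wall_clock, 2))           # about 5.26 minutes
```

In other words, under this (hypothetical) scenario, a speed superintelligence could do a year's worth of human-quality thinking in the time it takes us to brew a pot of coffee.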
The Turing test, in which a human tries to determine whether they're talking to a human or an AI, is insufficient according to philosopher John Searle. He proposed the Chinese room argument, in which a man appears to know Chinese because he always passes the correct Chinese characters out of a room to his interlocutor; in fact, the man is simply following the instructions of a computer program and does not understand a single character of Chinese. François Chollet, an AI researcher at Google, points out that the human mind is capable of what is called “extreme generalization,” which Chollet defines as “adaptation to unknown unknowns across an unknown range of tasks and domains.” Humans can figure out how to tie a knot, solve a sudoku, paint a work of art, navigate a relational issue, and then walk the dog.
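Searle's point can be made concrete with a toy program. The sketch below (with a made-up, two-entry rulebook) answers Chinese input by pure symbol lookup; it passes out the correct characters while understanding nothing:

```python
# A toy "Chinese room": replies are produced by mechanically matching
# symbols against a rulebook, with no understanding involved.
# The rulebook entries here are invented for illustration.
rulebook = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "当然会。",   # "can you speak Chinese?" -> "of course."
}

def chinese_room(symbols: str) -> str:
    # The "man in the room" just follows instructions; the meaning of
    # the characters never enters into it.
    return rulebook.get(symbols, "请再说一遍。")  # "please say that again"

print(chinese_room("你好"))  # prints "你好！"
```

The program convinces its interlocutor exactly as far as its instructions anticipate the conversation, which is Searle's worry about conflating convincing output with genuine understanding.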
By contrast, current AI systems are only good at solving a single task, such as recognizing or generating images, and no single AI can solve a broad set of tasks—especially unknown tasks. Unforeseen circumstances often cause current AI systems to break. So why are philosophers and influencers speculating about superintelligence when we still don’t have AGI or even understand how to evaluate AI for human-level intelligence?
Why All the Buzz?
To the modern secular mind the emergence of superintelligence is inevitable because evolution must progress. Accordingly, given that elephants and whales have more raw computing power than humans in their brains,3 surely human intelligence is merely a byproduct of our brain architecture and the evolutionary process. The idea that the human race, which has only been around for a tiny fraction of the 14 billion years the universe has existed, is the apex intelligence seems an absurd, antiquated, anthropocentric view. Combine this low view of humanity’s place in the overall history of the universe with the recent hype surrounding AI models that (1) beat the world’s best Go players (AlphaGo), (2) score better than most humans on standardized tests, (3) produce remarkably fluent text (ChatGPT), and (4) produce art that can win competitions against human artists (Midjourney), and it’s not hard to understand why people are waiting for sentient AI to pop into existence and even surpass human capabilities.
Tim Urban, well known for his blog Wait But Why, embodies this perspective in an article titled “The AI Revolution: The Road to Superintelligence.”4 Urban argues that most people do not understand how quickly AI will overtake humanity in terms of intelligence. But do scientists who are experts in AI agree with this view, or is popular culture’s take on AI merely science fiction?
Is the Buzz Justified?
Andrew Ng, cofounder of Google Brain and professor at Stanford University famous for his online AI classes, emphasized to the US Senate AI Insight Forum that AI doomsday scenarios are unlikely.5 To understand Ng’s position, consider the following words posted by Yann LeCun, one of the three “godfathers” of AI, on December 17, 2023, on LinkedIn:6
The emergence of superhuman AI will not be an event. Progress is going to be progressive.
It will start with systems that can learn how the world works, like baby animals. Then we’ll have machines that are objective driven and that satisfy guardrails. Then, we’ll have machines that can plan and reason to satisfy those objectives and guardrails. Then we’ll have machines that can plan hierarchically.
At first, those machines will be barely smarter than a mouse or a rat. Then we’ll scale up those machines to be as smart as a dog or a crow. Then, we’ll adjust the guardrails to make those systems controllable and safe as we scale them up. Then we’ll train them on a wide variety of environments and tasks. Then we’ll fine tune them on all the tasks that we want them to accomplish.
At some point, we will realize that the systems we’ve built are smarter than us in almost all domains. This doesn’t necessarily mean that these systems will have sentience or “consciousness” (whatever you mean by that). But they will be better than us at executing the tasks we set for them. They will be under our control.
Notice how LeCun changes the narrative around superintelligent AI. There is no singularity event where AI becomes sentient and suddenly decides to assault humanity. AI is a human-controlled system trained by humans to excel at a set of tasks. Humans may misuse AI, the same way that humans misuse any other technology to dominate others, but the chances of AI going rogue are slim.
Moreover, superintelligence will not come about explosively (as Urban suggests). Rather, each step toward superintelligence will be progressive. And even once AI is better than humans at a broad set of tasks, that doesn’t mean that AI will have godlike powers.
Superintelligence and God
The idea of a superintelligent God may be a comfort or a terror. For the person who trusts in the goodness, grace, and mercy of God, a comfort. For the person who either does not believe God is good or who intends to do no good themselves, a terror.
Likewise, the idea of a superintelligent God may grow or stymie the intellect. For the person who recognizes a superintelligent God as an opportunity to ask the most challenging questions of a Person who is unthreatened by, and the author of, our intellect, the mind will flourish. For the person who wants to use the idea of a superintelligent God to avoid facing the difficult questions, the intellect will be stymied.
Finally, the idea of a superintelligent God may open doors to a fun and thoughtful conversation. Whether or not humans ever create a superintelligent AI, as Christians we can use the idea of superintelligence as a thought-provoking conversation starter. What if God is superintelligent? What if his ways really are higher than our ways? What if a superintelligent God loves us so much that he came into this messy world and died a cruel death and rose again from the grave—all that we might love, worship, and dwell with him forever in paradise?
The apostle Paul puts it in perspective:
Oh, the depth of the riches of the wisdom and knowledge of God!
How unsearchable his judgments,
and his paths beyond tracing out!
“Who has known the mind of the Lord?
Or who has been his counselor?”
“Who has ever given to God,
that God should repay them?”
For from him and through him and for him are all things.
To him be the glory forever! Amen. (Romans 11:33–36)
Endnotes
1. Leo Hickman, “Stephen Hawking Takes a Hard Line on Aliens,” The Guardian, April 26, 2010.
2. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, England: Oxford University Press, 2014), 91.
3. Bostrom, Superintelligence, 69.
4. Tim Urban, “The AI Revolution: The Road to Superintelligence,” Wait But Why (blog), January 22, 2015.
5. Andrew Ng, The Batch, no. 227, December 13, 2023, https://www.deeplearning.ai/the-batch/issue-227/.
6. Yann LeCun, LinkedIn post, December 17, 2023, https://www.linkedin.com/posts/yann-lecun_the-emergence-of-superhuman-ai-will-not-be-activity-7142009252721655808-tVMa.