Can Artificial Intelligence Think Like a Human?

How close are we to developing machines that can understand and learn anything that humans can? Could such inventions eventually become self-conscious?

A great wealth of information exists regarding the pursuit of what scientists call artificial intelligence. Every now and again, I run across an idea that helps clarify a crucial issue surrounding the pursuit of a humanlike intelligence. Computer scientist Judea Pearl articulated one of those ideas in his book The Book of Why and titled it “the Ladder of Causation.” This three-level abstraction helps identify the key steps needed to move from artificial narrow intelligence (ANI) to artificial general intelligence (AGI), that is, an entity able to think like a human being.

Rung 1: Seeing/Observing (“Association”)

The first rung of the ladder entails the ability to see and connect inputs with outcomes. The inputs and outcomes can be complicated and the connections rather hidden, so getting computer programs to do this still represents quite an accomplishment. Everything currently termed artificial intelligence (Siri, Alexa, language translators, facial/voice recognition, even driverless cars) sits on this rung of the ladder. These examples (all ANIs) operate by using the available data to find correlations and then making decisions by following a predetermined algorithm.
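
To make this rung concrete, here is a minimal Python sketch of an association-level program: a toy nearest-neighbor classifier that maps new inputs to outcomes purely by similarity to past observations. The "weather" data and labels are invented for illustration; nothing in the code knows why the inputs and outcomes go together.

```python
# A minimal sketch of rung-1 "association": predict an outcome by finding
# the most similar past observation. The data below are invented.
import math

# Observed (input, outcome) pairs: (hours of cloud cover, humidity) -> label.
observations = [
    ((8.0, 0.90), "rain"),
    ((7.5, 0.85), "rain"),
    ((6.0, 0.70), "rain"),
    ((3.0, 0.45), "dry"),
    ((2.0, 0.40), "dry"),
    ((1.0, 0.30), "dry"),
]

def predict(features):
    """Return the outcome attached to the single most similar past input."""
    nearest = min(observations, key=lambda obs: math.dist(features, obs[0]))
    return nearest[1]

print(predict((7.0, 0.80)))  # "rain": similar inputs were seen before
print(predict((1.5, 0.35)))  # "dry"
```

Real ANIs are vastly more sophisticated than this, but they answer the same kind of question: which known pattern does this new input most resemble?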

Rung 2: Doing/Intervening (“Intervention”)

The next rung up the ladder of increasing sophistication adds the ability to intervene in an environment and respond appropriately. Pearl illustrates this change with two questions.

  • Rung 1: What is the likelihood that someone who bought toothpaste will also buy dental floss? Correlations in existing sales data will answer this question.
  • Rung 2: What will happen to floss sales if we double the price of toothpaste? Answering this question requires either intervening in the system to gather new data or building a model that extrapolates from known environments to this new one.

Scientists routinely exercise rung 2 skills. They ask a currently unanswered question about how the world works, perform experiments or observations to gather the appropriate data, and then build a model that answers the question.
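
As a rough illustration of the difference between the two rungs, here is a toy Python sketch. The market model, its numbers, and the hidden "dental care" trait are all invented; the point is only that answering the rung-2 question means actively setting the toothpaste price inside the system (here, a small simulator) and observing the effect on floss sales, rather than reading a correlation off logged data.

```python
# A toy sketch of a rung-2 "intervention": force the toothpaste price to a
# chosen value and watch what happens to floss sales. The market model and
# all of its numbers are invented for illustration.
import random

random.seed(0)

def simulate_customer(toothpaste_price):
    """One customer in a made-up market model."""
    # Hidden trait that drives both purchases (a confounder).
    dental_care = random.random()
    # Caring about teeth encourages buying toothpaste; a higher price discourages it.
    buys_toothpaste = dental_care > 0.3 and random.random() > (toothpaste_price - 1.0)
    # Floss mostly tracks the hidden trait, with a small boost if toothpaste was bought.
    buys_floss = random.random() < dental_care * (1.2 if buys_toothpaste else 1.0) - 0.3
    return buys_toothpaste, buys_floss

def floss_rate(price, n=100_000):
    """Intervene: set the price for every simulated customer, then measure floss sales."""
    return sum(simulate_customer(price)[1] for _ in range(n)) / n

print(f"floss sales at normal price : {floss_rate(1.0):.3f}")
print(f"floss sales at doubled price: {floss_rate(2.0):.3f}")
```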

Rung 3: Imagining/Understanding (“Counterfactuals”)

On this top rung, one has the capacity to understand environments that don’t exist. According to Pearl, the toothpaste question becomes: “What is the probability that a customer who bought toothpaste would still have bought it if we had doubled the price?” In other words, this rung requires the ability to imagine something different from the physical world that already exists.
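
As a rough sketch of what such a counterfactual computation could look like, the Python snippet below uses an invented structural model (a customer buys toothpaste exactly when an unobserved willingness to pay covers the price) and follows the abduction-action-prediction pattern Pearl associates with counterfactuals: first keep only the hidden states consistent with what actually happened, then impose the doubled price in those same states and see what would have followed.

```python
# A toy sketch of a rung-3 "counterfactual": given that a customer did buy
# toothpaste at the normal price, would the same customer still have bought
# it at double the price? The population and structural rule are invented.
import random

random.seed(0)

NORMAL_PRICE = 1.0
DOUBLED_PRICE = 2.0

# Unobserved background states: each customer's willingness to pay.
population = [random.uniform(0.0, 3.0) for _ in range(100_000)]

def buys(willingness, price):
    """Structural rule: the purchase happens iff willingness covers the price."""
    return willingness >= price

# Abduction: keep only the background states consistent with the observed fact
# that the customer bought toothpaste at the normal price.
consistent = [w for w in population if buys(w, NORMAL_PRICE)]

# Action and prediction: in those same states, impose the doubled price and
# ask how many of the purchases would still have happened.
still_buy = [w for w in consistent if buys(w, DOUBLED_PRICE)]

print(f"P(would still have bought at double price | bought at normal price) "
      f"= {len(still_buy) / len(consistent):.2f}")
```

The imaginative step is the middle one: the program has to reason about a price that was never actually charged in the world it observed.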

Humans consistently and effortlessly operate on this third rung. We routinely think about how things would be different if we had chosen the “other” option. The theological importance of this level is that humans recognize our place in this physical universe as well as the existence of a reality completely separate from it. All the evidence to date indicates that only humanity operates on this intellectual plane. This evidence aligns well with the biblical idea that humanity alone was created in the image of God.

Not only does Pearl’s Ladder of Causation provide a great image of the challenges that lie ahead in the quest for true artificial intelligence, it also highlights humanity’s unique understanding of our place in the cosmos. And that fact affirms the validity of Christianity.