Is Artificial Intelligence a Misnomer?

You have undoubtedly heard about the amazing things that artificial intelligence (AI) can do, but have you ever pondered the following questions: How does AI work? Who invented AI? Is AI replacing humans? 

Having devoted my entire career to AI, starting as a graduate student in the 1980s, I am encouraged and exhilarated by what AI can do today. It has achieved widespread success in multiple applications, ranging from social marketing and face recognition to language translation and autonomous driving. AI has performed well beyond predictions made by many AI researchers as recently as one or two decades ago. As the research continues, AI will undoubtedly continue to make our lives better through technological breakthroughs that researchers have been dreaming of for decades.

However, some of the recent speculations and predictions about AI are puzzling, especially those that appear to raise theological questions. Does AI disprove the existence and necessity of an intelligent God? This short article answers this important question by focusing on a fundamental question: Is there any intelligence in AI? Perhaps a historical roadmap on AI will shed some light.

Births of Artificial Intelligence and Neural Networks

Over a half-century ago, two major academic disciplines emerged with the objective of making intelligent machines. These disciplines were artificial intelligence and neural networks (NN), and they took drastically different approaches. AI was spawned by the computer science community and focused on symbolic representations and processing. NN was proposed by engineering researchers and concentrated on numerical representations and computations. In 1969, Marvin Minsky and Seymour Papert, two of the early fathers of AI at the Massachusetts Institute of Technology (MIT), wrote a book carefully explaining what they saw as NN’s severe limitations.1 Consequently, research on NN stopped for about a decade.

Resurgence of Neural Networks

However, in the 1980s, there was a resurgence of interest in neural networks.2 A new mathematical formulation was proposed, leading to optimism that NN could solve many more problems than critics such as Minsky and Papert had thought possible. This new formulation reignited the longtime debate: which technology would be more powerful? The AI community argued that there was too much “black magic” inside NN. They said that the way NN solved problems resembled neither human intelligence nor artificial intelligence. The NN community, however, argued that the interconnections inside NN resembled, at least in appearance, the physiology of neurons in the human brain. They claimed that with sufficient learning examples, interconnections, and computer processing power, NN would “learn” to solve many problems. As we now know, this claim turned out to be correct.
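For readers curious what “learning” means here in concrete terms, the sketch below is my own minimal illustration (not drawn from the works cited in the endnotes): a tiny network whose “interconnections” are just a handful of numbers, nudged repeatedly to reduce error on four training examples of the XOR function, the very kind of problem Minsky and Papert showed a single-layer perceptron cannot compute. The network size, learning rate, and iteration count are arbitrary choices for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Four training examples for XOR, a function a single-layer
# perceptron cannot compute.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 4  # hidden units; the "interconnections" are just numeric weights
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # [w1, w2, bias]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # hidden weights + bias

def forward(x):
    """Pass an input through the network; return hidden activations and output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, y

# "Learning": repeatedly nudge every weight downhill on the squared error,
# the gradient-descent rule behind the 1980s resurgence.
lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1.0 - y)            # output-layer error signal
        for i in range(H):
            dh = dy * w_o[i] * h[i] * (1.0 - h[i])  # hidden-layer error signal
            w_h[i][0] -= lr * dh * x[0]
            w_h[i][1] -= lr * dh * x[1]
            w_h[i][2] -= lr * dh
            w_o[i] -= lr * dy * h[i]
        w_o[H] -= lr * dy

for x, t in data:
    print(x, "->", round(forward(x)[1]))
```

Nothing in this loop resembles reasoning; it is arithmetic applied to examples, which is precisely why the AI community of the time saw “black magic” rather than intelligence in it.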

Alien in Artificial Intelligence

I found myself feeling like an alien when I was a PhD student in Tech Square, a building full of world-renowned professors, scientists, researchers, and graduate and undergraduate students from the MIT Artificial Intelligence Laboratory and the MIT Laboratory for Computer Science (both laboratories have now merged into the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL). I was frustrated by AI approaches that required tremendous handcrafting of rules and heuristics. NN fascinated me because of its formal mathematical framework for solving problems. I became one of the first few researchers in the world to do a PhD thesis on how to use NN and pattern recognition techniques for speech recognition. At the time, I found only two kinds of people among my AI friends: (a) the majority, who considered NN a bag of tricks, and (b) the minority, who remained silent about NN.

Statistical Pattern Recognition

The debate between the AI and NN communities was not new. For decades, mathematicians and engineers had been working on pattern recognition (PR) to solve problems. A new statistical PR approach, called hidden Markov modeling (HMM), became mainstream technology for speech recognition.3 The PR community believed in the rigor of mathematical frameworks and argued that AI was too heuristic and labor-intensive. They also rejected NN because (a) it lacked a tractable mathematical formulation, and (b) researchers had no idea what was going on inside an NN.

The Melting Pot

Fast forward to the 1990s and the 2000s, and the three disciplines of AI, NN, and PR were starting to merge slowly. New editions of PR textbooks began to add new chapters on NN and HMM.4 Similarly, AI textbooks started to teach about NN, HMM, and other PR techniques (in contrast to earlier editions).5,6 Within the AI community today, many of these NN and PR techniques now reside under the umbrella of machine learning. From an academic perspective, it is awesome when researchers learn to reconcile their differences. Their cooperation allows the disciplines to merge, thereby pushing technology forward faster than ever before.

Many advances today, ranging from face recognition and natural language processing to autonomous machines and medical diagnosis, have been hailed as AI successes. However, thanks to today’s much faster computer hardware with significantly higher memory capacity, most, if not all, of these great successes are based on NN, whose fundamental concept has remained unchanged since its resurgence in the 1980s. For decades, the AI community did not consider NN to be anything intelligent, but NN has now become the keystone of AI.

Where Is the Intelligence?

This melting pot of AI, NN, and PR could have been named anything, although “artificial intelligence” garners attention. The field includes other sophisticated terms, such as deep learning and neural networks, but the name artificial intelligence captures the imagination of a continuum of communities: researchers, developers, managers, marketers, media, sponsors, fiction writers, and the public. If a different name had been chosen, this melting pot probably would not have drawn as much attention and controversy as it does today. Each group in this continuum seems to have a different perspective on AI. From my observations, the farther away a group is from the research, the more speculative (both optimistic and pessimistic) it becomes. But is “intelligence” a good descriptor for this melting pot? I don’t think so.

Technologists and engineers have developed numerous automatic machines over the past century. Automobiles run faster than humans. Computers add numbers faster than humans. Airplanes fly. These technologies make our lives better. Yet, they have no chance of replacing humanity. They are merely tools and do not cause any issues with the Christian faith other than how they are used.

Why would this AI melting pot be any different? Successful AI today is the culmination of decades of research in a vast spectrum of scientific, technological, engineering, and mathematical disciplines. And in my view, the NN in AI is neither artificial nor intelligent. It is real techno-engineering. Humans have designed, developed, and refined the technology at every step. As is the case with any technology, our creativity in this melting pot is a reflection of God’s image. The intelligence lies in the human agents who have been charged with using our minds and creativity to serve humanity.

Endnotes

  1. Marvin Minsky and Seymour A. Papert, Perceptrons: An Introduction to Computational Geometry, expanded ed. (Cambridge, MA: MIT Press, 2017).
  2. David E. Rumelhart, James L. McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Cambridge, MA: MIT Press, 1987).
  3. Lalit R. Bahl, Frederick Jelinek, and Robert L. Mercer, “A Maximum Likelihood Approach to Continuous Speech Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence 5, no. 2 (March 1983): 179–190, doi:10.1109/TPAMI.1983.4767370.
  4. Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, 2nd ed. (Hoboken, NJ: John Wiley & Sons, 2000).
  5. Patrick Henry Winston, Artificial Intelligence, 3rd ed. (Boston, MA: Addison-Wesley, 1992).
  6. Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, UK: Cambridge University Press, 2009).