
Fixing Bias in Artificial Intelligence Can’t Fix Human Bias

Some people have compared the artificial intelligence (AI) boom to the industrial revolution: the industrial revolution automated manual labor, and AI can help automate many common intellectual tasks. AI is undeniably improving our lives, from more capable digital assistants to self-driving vehicles. But while AI is helping us in many ways, people often overlook how it can also perpetuate human bias.

Algorithmic Bias

A classic example of AI bias is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a software support tool that assesses a defendant’s risk of recidivism, issuing a score between 1 and 10 to indicate how likely a prisoner is to be rearrested if released. It turns out that algorithms that try to predict recidivism can be heavily influenced by historical arrest rates. And because African Americans have been unfairly targeted for arrest in the past, these algorithms may unfairly target the same population today.
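
To make the mechanism concrete, here is a minimal toy simulation in Python. It is in no way a reconstruction of COMPAS (whose model is proprietary); it only illustrates how a score trained on rearrest records can inherit enforcement bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation (not COMPAS): two groups with the SAME underlying
# reoffense rate, but group B is policed twice as heavily, so its
# reoffenses are more likely to show up as rearrests in the data.
n = 100_000
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
reoffends = rng.random(n) < 0.30          # identical true rate for both groups
arrest_prob = np.where(group == 1, 0.60, 0.30)  # biased enforcement
rearrested = reoffends & (rng.random(n) < arrest_prob)

# A naive "risk score": the historical rearrest rate for a person's group.
# Trained on arrest data, it learns the enforcement bias, not true behavior.
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: true reoffense rate = {reoffends[group == g].mean():.2f}, "
          f"predicted risk = {rearrested[group == g].mean():.2f}")
```

Both groups reoffend at the same rate, yet the naive score rates the more heavily policed group as roughly twice as risky, because the training data records arrests, not behavior.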

But there are many less obvious examples. Until recently, for instance, Twitter’s image-cropping algorithm would center white male faces at the expense of minority individuals in the same image.1 These issues of bias in AI extend into many different domains, including healthcare (accurate diagnoses), employment (who gets the job), and finance (who receives a loan).

How AI Bias Works: A Simple Example

To understand how bias gets into AI systems, let’s consider a simple hypothetical example of how AI uses data. Suppose a restaurant chain owner (we’ll call him Joe) is opening a new restaurant in New York City. Most of his current locations are in rural Illinois. Because rent for the building in NYC is expensive, he decides to include the tip in a customer’s bill and wants to be fair about the size of these tips. (The data in this example is based on real tip data but is not representative of either New York City or Illinois.)

Joe decides to hire a data scientist to use tip data from his other restaurants to set the tip value. The data scientist has done this before for other clients in NYC and explains to Joe how it works. By fitting a line (y = mx + b) to tip data, where “y” is the tip amount and “x” is the meal cost, you can then predict future tips using that same line (“m” is the slope of the line and “b” is its y-intercept). This is called linear regression, and it is one of the simplest machine learning algorithms; when you hear about AI, you are usually hearing about machine learning, which is a subfield of AI. In this case, as shown in figure 1, the data scientist shows that a line fit to tip data collected from other restaurants in NYC would predict a tip of $5.10 for a meal that cost $40.00.
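
Here is a minimal sketch of that fit in Python, using NumPy’s least-squares line fit. The numbers are hypothetical stand-ins, not the actual data behind figure 1:

```python
import numpy as np

# Hypothetical NYC tip data: meal cost and tip, both in dollars.
# Illustrative numbers only, not the article's underlying dataset.
meal_cost = np.array([12.0, 18.5, 25.0, 32.0, 40.0, 47.5, 55.0, 62.0])
tip = np.array([1.60, 2.40, 3.10, 4.00, 5.15, 6.00, 7.10, 7.90])

# Fit the line y = m*x + b by least squares (linear regression).
m, b = np.polyfit(meal_cost, tip, deg=1)

# Use the fitted line to predict the tip for a $40.00 meal.
predicted = m * 40.00 + b
print(f"m = {m:.4f}, b = {b:.4f}")
print(f"predicted tip for a $40.00 meal: ${predicted:.2f}")  # about $5.10
```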

The data scientist then applies the same process to Joe’s tip data from rural Illinois. It turns out that folks didn’t tip as much in rural Illinois. If Joe uses the Illinois tip data, his servers will receive a $3.00 tip for a $40.00 meal rather than a $5.10 tip (see figure 2). These lower tips would be unfair wages for Joe’s servers in NYC, even though they were fair wages for his workers in rural Illinois.
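
Fitting the same model to equally hypothetical rural-Illinois numbers shows how the prediction shifts when only the data changes:

```python
import numpy as np

# Hypothetical rural-Illinois tip data (illustrative numbers only).
il_cost = np.array([10.0, 15.0, 22.0, 30.0, 38.0, 45.0, 52.0])
il_tip = np.array([0.80, 1.10, 1.65, 2.25, 2.85, 3.40, 3.90])

# Same algorithm as before: fit y = m*x + b by least squares.
m_il, b_il = np.polyfit(il_cost, il_tip, deg=1)

# The Illinois model predicts a much smaller tip for the same meal.
print(f"Illinois model tip for a $40.00 meal: ${m_il * 40.00 + b_il:.2f}")  # about $3.00
```

Nothing about the algorithm differs between the two fits; only the data does, and that alone moves the prediction from about $5.10 to about $3.00.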

Where did that “bias” against servers in NYC come from? It came from the data. All AI algorithms are built using data, so inaccurate or biased data leads to inaccurate or biased models. Or, as is often said, “garbage in, garbage out.” In this case, the tip data from Illinois should not be used to calculate tips in NYC. But the same principle applies to any other domain, whether facial recognition, criminal justice, recommender systems, or healthcare.

Figure 1: Example of Linear Regression on Hypothetical NYC Tip Data

Source: Sean Oesch

Figure 2: Difference between Model Based on Hypothetical Illinois Tip Data (Red) and NYC Tip Data (Blue)

Source: Sean Oesch

Fixing Bias in AI Is Nontrivial and Doesn’t Fix Human Bias

There are several reasons why fixing bias in AI is a difficult problem:

  1. Sometimes the data needed to create an unbiased system is simply not available, so researchers face an ethical dilemma: should we use a somewhat biased dataset or nothing at all?
  2. Bias in AI systems can be difficult to identify until a system is already in use, especially when algorithms are trained on datasets so large that no human can effectively evaluate them for bias. This problem is more pronounced in fields such as natural language processing, where massive bodies of text scraped from the Internet may be used to train models.
  3. A model that is less biased can still be misused by biased humans. For example, China may have used facial recognition technology to target and detain the Uighur population in internment camps. In such cases, better facial recognition models hardly benefit the minority.

These observations lead to two significant questions worth mulling over. First, is AI always the right solution? We assume we should fix bias in AI, but maybe we shouldn’t be collecting and harnessing massive amounts of data in the first place.2 AI is a powerful tool, and it is worth considering when and how it should or should not be used. Second, how do we address the issue of bias in the human heart? Olga Russakovsky, a professor of computer science at Princeton, notes that “debiasing humans is harder than debiasing AI systems.”3 And this is really the crux of the problem: fixing bias in an algorithm doesn’t correct bias in society.

Christ Offers a Path to Addressing Bias in the Human Heart

It would be simplistic to say that believing the Christian message, that all people are made in God’s image and equally in need of salvation, fixes bias in the human heart. Setting the issue of hypocrisy aside, even committed followers of Jesus struggle with bias. One way to combat such bias is to admit our faults and seek the humility to listen across gender, racial, political, and theological divides, allowing others to convict and correct us when we go astray. Christ offers us this path to address the bias in our own hearts. There is no quick fix for human bias, but Jesus gives us the power to love and learn from those who are different from us and the motivation to make it happen.

Endnotes

  1. Andrea Kulkarni, “Bias in AI and Machine Learning: Sources and Solutions,” Lexablog, Lexalytics, June 26, 2021, lexalytics.com/lexablog/bias-in-ai-machine-learning.
  2. Julia Powles, “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence,” Medium, OneZero, December 7, 2018, onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.
  3. Will Knight, “AI Is Biased. Here’s How Scientists Are Trying to Fix It,” Wired, December 19, 2019, wired.com/story/ai-biased-how-scientists-trying-fix/.