AI Promises to Help Make Lifesaving Drugs, but This Superpower Has a Dark Side

Artificial intelligence (AI) is fundamentally altering the ways that we interact with and experience the world, from AI-powered chatbots like ChatGPT and AI-generated art such as DALL-E to finance, linguistics, and governance. But is AI a superhero or a supervillain? Will AI corrupt the human heart like the "one ring to rule them all" in J. R. R. Tolkien's Lord of the Rings trilogy, or can humanity master its own creation?

In this article I focus on one application where AI can be used for either good or evil—drug discovery. I will discuss the groundbreaking success of DeepMind’s AlphaFold, which got the AI drug discovery ball rolling, the potential dark side of AI for drug discovery, and how we might address the ethical dilemma presented to us by AI drug discovery technologies.

Developing a drug that is safe and effective currently takes 11–16 years and costs 1–2 billion dollars,1 but AI promises to drastically reduce both the time and cost of making lifesaving drugs. One reason AI holds such promise in drug discovery is that it can help automate the process of enumerating the estimated 10⁶⁰ (that staggering number is a 1 followed by 60 zeros) small-molecule compounds that could exist, only a tiny fraction of which have been explored to understand their medicinal properties. Such astounding processing ability comes with a caveat.

In a recent article published in Nature,2 researchers found that AI models used to avoid toxic chemical compounds when creating drugs can, with minimal changes, also be used to produce toxic compounds. In other words, AI models intended to make drugs safe could, in theory, be used to create chemical weapons.

How AlphaFold Changed Drug Discovery
Accurate prediction of protein structures is crucial for understanding how proteins function, which in turn can help scientists design new drugs, understand the underlying causes of diseases, and develop treatments for a wide range of conditions. In 1994, scientists interested in protein folding formed CASP (Critical Assessment of protein Structure Prediction), a competition to predict the structures of proteins whose experimentally determined structures had not yet been released. AlphaFold, a machine learning system developed by DeepMind, is designed to predict the three-dimensional structure of proteins. AlphaFold shocked the world in 2018 and 2020 by significantly outperforming every other known method of identifying protein structure in the CASP competition.3

AlphaFold’s ability to accurately predict protein structures is particularly important for drug discovery because many drugs are designed to target specific proteins in the body. By accurately predicting the structure of these proteins, scientists can more easily design drugs that will bind to and inhibit the function of the target protein, which can be an effective way to treat a variety of diseases. In addition, AlphaFold’s ability to accurately predict the structure of proteins that are difficult to study experimentally can help scientists to better understand the underlying causes of diseases and identify new therapeutic targets.

The head of data science at The Institute of Cancer Research, Professor Bissan Al-Lazikani, who was part of the team that won CASP in 2000, said “If we can effectively harness DeepMind’s technology, we will gain a much better understanding of all the proteins and mutations that cause cancers. It will help us to accurately design and discover better, safer drugs that could successfully treat or cure countless people.”4

The Dark Side of AI for Drug Discovery
Cell biologist Fabio Urbina worked for Collaborations Pharmaceuticals, a company that focuses on treating rare infectious diseases and using AI for drug discovery and toxicology assessment. When he received an invitation from The Spiez Laboratory (the Swiss institute for protection against nuclear, biological, and chemical threats and dangers) to speak at their 2021 “convergence” conference on how AI technologies for drug discovery could be abused, the idea of misusing these technologies had never occurred to him. But as Dr. Urbina thought more about it, he quickly realized that these technologies could easily be modified to harm rather than help.

Urbina's company had an AI-based molecule generator called MegaSyn that normally penalized toxicity and rewarded bioactivity. But by rewarding toxicity instead of penalizing it, Urbina was able to use MegaSyn to generate 40,000 potentially deadly molecules, some of which were known chemical warfare agents. In Urbina's words,

“Our toxicity models were originally created for use in avoiding toxicity, enabling us to better virtually screen molecules (for pharmaceutical and consumer product applications) before ultimately confirming their toxicity through in vitro testing. The inverse, however, has always been true: the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules.”5
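The mechanism Urbina describes amounts to flipping the sign of one term in the generative model's scoring function. The sketch below is purely conceptual and is not MegaSyn's actual code; the function name, parameters, and numeric inputs are hypothetical stand-ins for real machine-learning property predictors.

```python
def score_molecule(bioactivity, toxicity, toxicity_weight=-1.0):
    """Score a candidate molecule for a hypothetical generative model.

    bioactivity and toxicity are predicted values in [0, 1] from
    (imagined) property-prediction models. With toxicity_weight=-1.0
    the generator is steered away from toxic chemistry; changing that
    single weight to +1.0 steers it toward toxic chemistry instead.
    """
    return bioactivity + toxicity_weight * toxicity

# A drug-discovery run penalizes predicted toxicity...
safe_score = score_molecule(bioactivity=0.75, toxicity=0.5)
# ...while the misuse scenario requires flipping only one sign.
misuse_score = score_molecule(bioactivity=0.75, toxicity=0.5,
                              toxicity_weight=1.0)
```

The point of the sketch is how small the change is: the expensive part (an accurate toxicity predictor) is identical in both cases, which is exactly the dual-use problem Urbina's quote describes.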

Preventing the Abuse of AI
Urbina suggests several safety measures to prevent the abuse of drug discovery AI technologies, pointing specifically to the Hague Ethical Guidelines6 and to safety measures put in place by AI chatbot technologies such as GPT (see this article for an explanation and examination of the ethical implications of AI-powered chatbots such as Google's LaMDA). His suggestions include the following:

  1. Restrict access to the software code and data used to create these algorithms by providing limited API access to technologies like MegaSyn. By limiting who can use these technologies and restricting access to the data/knowledge necessary to create them in the first place, we can limit misuse.
  2. Monitor usage of these technologies to detect efforts to abuse them.
  3. Set up a government hotline to report the potential abuse of these technologies to harm humans.
  4. Implement quality ethics training for university students who are learning to harness these technologies.

Viking Longships, AI, and the Human Heart
Which is more frightening: a Scandinavian longship or an AI drug discovery model? It depends. If the longship is transporting a raging Viking horde bent on destroying a peaceful village, that is quite scary. Likewise, if the AI drug discovery model is used to create a deadly neurotoxin, that is terrifying. Yet either technology could also be used for much less nefarious purposes, such as transporting goods down the coast to trade or creating lifesaving cancer treatments. Whether a technology is used for good or evil is ultimately up to the humans wielding it.

Artificial intelligence, like the Internet or smartphones, will significantly alter the way that humans work, learn, and interact. Widespread adoption will bring unintended consequences and the need to think ethically and creatively about how to integrate and adapt AI in everyday life to prevent injustice and harm to humans. And there may be specific uses for AI that would be better left untouched because of the inevitability of misuse.

But the root of the problem is not artificial intelligence—it’s the human heart. It’s humans who misuse technology to harm, extort, and profit from other humans. As Christians, we are called to be salt to the world by loving both God and our fellow humans, so let us lead the way by creating technology that honors the people who use it and supporting technologies that benefit society as a whole. Let us use AI for good and protect the powerless from those who would abuse or extort them through technology.

Endnotes  

  1. Ekaterina Pesheva, “Can AI Transform the Way We Discover New Drugs?” Harvard Medical School, November 17, 2022, https://hms.harvard.edu/news/can-ai-transform-way-we-discover-new-drugs.
  2. Fabio Urbina et al., “Dual Use of Artificial-Intelligence-Powered Drug Discovery,” Nature Machine Intelligence 4 (March 7, 2022): 189–91, doi:10.1038/s42256-022-00465-9.
  3. See the figure in Andrea Downing Peck, “Google DeepMind’s AlphaFold Wins CASP14 Competition, Helps Solve Mystery of Protein Folding in a Discovery That Might Be Used in New Medical Laboratory Tests,” DARK Daily, December 18, 2020; see also John Jumper et al., “Highly Accurate Protein Structure Prediction with AlphaFold,” Nature 596 (August 26, 2021): 583–89, doi:10.1038/s41586-021-03819-2.
  4. Paul Workman, “Reflecting on DeepMind’s AlphaFold Artificial Intelligence Success—What’s the Real Significance for Protein Folding Research and Drug Discovery?” The Institute of Cancer Research, August 13, 2021, https://bit.ly/3hGIinf.
  5. F. Urbina et al., “Dual Use of Artificial-Intelligence.”
  6. The Hague Ethical Guidelines, Organisation for the Prohibition of Chemical Weapons, accessed January 27, 2023, https://www.opcw.org/hague-ethical-guidelines.