Where Science and Faith Converge
  • Ancient Mouse Fur Discovery with Mighty Implications

    by Fazale Rana | Jun 26, 2019

    “What a mouse! . . . WHAT A MOUSE!”

    The narrator’s exclamation became the signature cry each time the superhero Mighty Mouse carried out the most impossible of feats.

    A parody of Superman, Mighty Mouse was the 1942 creation of Paul Terry of Terrytoons Studio for 20th Century Fox. Since then, Mighty Mouse has appeared in theatrical shorts and films, Saturday morning cartoons, and comic books.

    Figure 1: Mighty Mouse. Image credit: Wikipedia

    Throughout each episode, the characters sing faux arias—mocking opera—with Mighty Mouse belting out, “Here I am to save the day!” each time he flies into action. As you would expect, many of the villains Mighty Mouse battles are cats, with his archnemesis being a feline named Oil Can Harry.

    Mouse Fur Discovery

    Recently, a team of researchers headed by scientists from the University of Manchester in the UK went to heroic lengths to detect pigments in a 3-million-year-old mouse fossil, nicknamed—you guessed it—“mighty mouse.”1 To detect the pigments, the researchers developed a new method that employs Synchrotron Rapid Scanning X-Ray Fluorescence Imaging to map metal distributions in the fossil, which, in turn, correlate with the types of pigments found in the animal’s fur when it was alive.

    This work paves the way for paleontologists to develop a better understanding of past life on Earth, with fur pigmentation being an unusually important clue. The color of an animal’s fur has physiological and behavioral importance, and it can change relatively quickly on geological timescales through microevolutionary mechanisms.

    This discovery also carries importance for the science-faith conversation. Some Christians believe that the recovery of soft tissue remnants, such as the pigments that make up fur, calls into question the scientific methods used to determine the age of geological formations and the fossil record. This uncertainty opens up the possibility that our planet (and life on Earth) may be only 6,000 years old.

    Is the young-earth interpretation of this advance valid? Is it possible for soft tissue materials to survive for millions of years? If so, how?

    Detection of 3-Million-Year-Old Pigment

    University of Manchester researchers applied their methodology to an exceptionally well-preserved 3-million-year-old fossil specimen (Apodemus atavus) recovered from the Willershausen conservation site in Germany. The specimen was compressed laterally during the fossilization process and is so well-preserved that imprints of its fur are readily visible.

    The research team indirectly identified the pigments that at one time colored the fur by mapping the distribution of metals in the fossil specimen. These metals are known to associate with the pigments eumelanin and pheomelanin, the two main forms of melanin. (Eumelanin produces black and brown hues. Pheomelanin imparts fur, skin, and feathers with a light reddish-brown color.) As it turns out, copper ions chemically interact with eumelanin and pheomelanin. On the other hand, zinc (Zn) ions interact exclusively with pheomelanin by binding to sulfur (S) atoms that are part of this pigment’s molecular structure. Zinc doesn’t interact with eumelanin because sulfur is not part of its chemical composition.
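
    To make the inference logic concrete, here is a minimal sketch in Python of how elemental signals might be mapped to pigment classes. The binding rules mirror the description above (copper associates with both melanins; zinc binds pheomelanin through its sulfur atoms), but the function name, signal scale, and threshold values are all invented for illustration and are not taken from the study.

    ```python
    # Hypothetical sketch: classify a fur region's pigment from relative
    # elemental signals (arbitrary 0-1 scale). Thresholds are invented;
    # only the binding rules follow the text above.
    def classify_pigment(cu: float, zn: float, s: float) -> str:
        if cu < 0.1:
            # Little copper: no melanin detected at all (possibly white fur).
            return "no pigment detected"
        if zn > 0.5 and s > 0.5:
            # Zinc bound to sulfur marks pheomelanin.
            return "pheomelanin (reddish brown)"
        # Copper present without the zinc-sulfur signature suggests eumelanin.
        return "eumelanin (black/brown)"

    print(classify_pigment(cu=0.9, zn=0.8, s=0.7))    # pheomelanin-rich fur
    print(classify_pigment(cu=0.05, zn=0.02, s=0.1))  # underbelly-like, unpigmented
    ```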

    The research team mapped the Zn and S distributions of the mighty mouse fossil and concluded that much of the fur was colored with pheomelanin and, therefore, must have been reddish brown. They failed to detect any pigment in the fur on the animal’s underbelly and feet, leading them to speculate that the fur in these regions was white.

    What a piece of science! . . . WHAT A PIECE OF SCIENCE!

    Soft Tissues and the Scientific Case for a Young Earth

    Paleontologists see far-reaching implications for this work. Roy Wogelius, one of the scientists leading the study, hopes that “these results will mean that we can become more confident in reconstructing extinct animals and thereby add another dimension to the study of evolution.”2

    Young-earth creationists (YECs) also see far-reaching implications for this study. Many argue that advances such as this one provide compelling evidence that the earth is young and that the fossil record was laid down as a consequence of a recent global flood.

    The crux of the YEC argument centers on the survivability of soft tissue materials. According to common wisdom, soft tissue materials should rapidly degrade once the organism dies. If this is the case, then there is no way soft tissue remnants should hang around for thousands of years, let alone millions. The fact that these materials can be recovered from fossil specimens indicates that the preserved organisms must be only a few thousand years old. And if that’s the case, then the methods used to date the fossils cannot be valid.

    At first glance, the argument carries some weight. Most people find it hard to envision how soft tissue materials could survive for vast periods of time, given the wide range of mechanisms that drive the degradation of biological materials.

    Preservation Mechanisms for Soft Tissues in Fossils

    Despite this initial impression, over the last decade or so paleontologists have identified a number of mechanisms that can delay the degradation of soft tissues long enough for them to become entombed within a mineral shell. When this entombment occurs, the degradation process dramatically slows down. In other words, it is a race against time. Can mineral entombment take place before the soft tissue materials fully decompose? If so, then soft tissue remnants can survive for hundreds of millions of years. And any chemical or physical process that can delay the degradation will contribute to soft tissue survival by giving the entombment process time to take place.

    In Dinosaur Blood and the Age of the Earth, I describe several mechanisms that likely promote soft tissue survival. I also discuss the molecular features that contribute to soft tissue preservation in fossils. Not all molecules are created equal: some are fragile, others robust. Two molecular properties that make molecules unusually durable are cross-linking and aromaticity. As it turns out, eumelanin and pheomelanin possess both.

    Figure 2: Chemical Structure of Eumelanin. Image credit: Wikipedia

    Figure 3: Chemical Structure of Pheomelanin. Image credit: Wikipedia

    When considering the chemical structures of eumelanin and pheomelanin, it isn’t surprising that these materials persist in the fossil record for millions of years. In fact, researchers have isolated eumelanin from a fossilized cephalopod ink sac that dates to around 160 million years ago.3

    It is also worth noting that the mouse specimen was well-preserved, making it even more likely that durable soft-tissue materials would persist in the fossil. And, keep in mind that the research team detected trace amounts of pigments using sophisticated, state-of-the-art chemical instrumentation.

    In short, the recovery of trace levels of soft-tissue materials from fossil remains is not surprising. Soft-tissue materials associated with the mighty mouse specimen—and other fossils, for that matter—can’t save the day for the young-earth paradigm, but they find a ready explanation in an old-earth framework.


    Endnotes
    1. Phillip L. Manning et al., “Pheomelanin Pigment Remnants Mapped in Fossils of an Extinct Mammal,” Nature Communications 10 (May 21, 2019): 2250, doi:10.1038/s41467-019-10087-2.
    2. DOE/SLAC National Accelerator Laboratory, “In a First, Researchers Identify Reddish Coloring in an Ancient Fossil,” ScienceDaily (May 21, 2019), https://www.sciencedaily.com/releases/2019/05/190521075110.htm.
    3. Keely Glass et al., “Direct Chemical Evidence for Eumelanin Pigment from the Jurassic Period,” Proceedings of the National Academy of Sciences USA 109, no. 26 (June 26, 2012): 10218–23, doi:10.1073/pnas.1118448109.
  • Satellite DNA: Critical Constituent of Chromosomes

    by Fazale Rana | Jun 19, 2019

    There is a lot that evolutionary biologists can learn about the purpose of junk DNA from my wife.

    Let me explain.

    Recently, I wound up with a disassembled cabinet in the trunk of my car. Neither my wife Amy nor I could figure out where to put the cabinet in our home, and we didn’t want to store it in the garage. The cabinet had all its pieces and was practically new. So, I offered it to a few people, but there were no takers. It seemed that nobody wanted to assemble the cabinet.

    Getting Rid of the Junk

    After driving around with the cabinet pieces in my trunk for a few days, I channeled my inner Marie Kondo. This cabinet wasn’t giving me any joy by taking up valuable space in the trunk. So, I made a quick detour on my way home from the office and donated the cabinet to a charity.

    When I told Amy what I had done, she expressed surprise and a little disappointment. If she had known I was going to donate the cabinet, she would have kept it for its glass doors. In other words, if I hadn’t donated the cabinet, it would have eventually wound up in our garage because it has nice glass doors that Amy thinks she could have repurposed.

    There is a point to this story: The cabinet was designed for a purpose and, at one time, it served a useful function. But once it was disassembled and put in the trunk of my car, nobody seemed to want it. Disassembling the cabinet transformed it into junk. And since my wife loves to repurpose things, she saw a use for it. She didn’t perceive the cabinet as junk at all.

    The moral of my little story also applies to the genomes of eukaryotic organisms. Specifically, is it time that evolutionary scientists view some kinds of DNA not as junk, but rather as purposeful genetic elements?

    Junk in the Genome

    Many biologists hold the view that a vast proportion of the genomes of eukaryotic organisms is junk, just like the disassembled cabinet I temporarily stored in my car. They believe that, like the unwanted cabinet, many of the different types of “junk” DNA in genomes originated from DNA sequences that at one time performed useful functions. But these functional DNA sequences became transformed (like the disassembled cabinet) into nonfunctional elements.

    Evolutionary biologists consider the existence of “junk” DNA one of the most potent pieces of evidence for biological evolution. According to this view, junk DNA results when undirected biochemical processes and random chemical and physical events transform a functional DNA segment into a useless molecular artifact. Junk pieces of DNA remain part of an organism’s genome, persisting from generation to generation as a vestige of evolutionary history.

    Evolutionary biologists highlight the fact that, in many instances, identical (or nearly identical) segments of junk DNA appear in a wide range of related organisms. Frequently, the identical junk DNA segments reside in corresponding locations in these genomes—and for many biologists, this feature clearly indicates that these organisms shared a common ancestor. Accordingly, the junk DNA segment arose prior to the time that the organisms diverged from their shared evolutionary ancestor and then persisted in the divergent evolutionary lines.

    One challenging question these scientists ask is, Why would a Creator purposely introduce nonfunctional, junk DNA at the exact location in the genomes of different, but seemingly related, organisms?

    Satellite DNA

    Satellite DNA, which consists of nucleotide sequences that repeat over and over again, is one class of junk DNA. This highly repetitive DNA occurs within the centromeres of chromosomes and also in the chromosomal regions adjacent to centromeres (referred to as pericentromeric regions).

    Figure: Chromosome Structure. Image credit: Shutterstock

    Biologists have long regarded satellite DNA as junk because it doesn’t encode any useful information. Satellite DNA sequences vary extensively from organism to organism. For evolutionary biologists, this variability is a sure sign that these DNA sequences can’t be functional; if they were, natural selection would have prevented them from changing. On top of that, molecular biologists think that satellite DNA’s highly repetitive nature leads to chromosomal instability, which can result in genetic disorders.

    A second challenging question is, Why would a Creator intentionally introduce satellite DNA into the genomes of eukaryotic organisms?

    What Was Thought to Be Junk Turns Out to Have Purpose

    Recently, a team of biologists from the University of Michigan (UM) adopted a different stance regarding the satellite DNA found in pericentromeric regions of chromosomes. In the same way that my wife Amy saw a use for the cabinet doors, the researchers saw potential use for satellite DNA. According to Yukiko Yamashita, the UM research head, “We were not quite convinced by the idea that this is just genomic junk. If we don’t actively need it, and if not having it would give us an advantage, then evolution probably would have gotten rid of it. But that hasn’t happened.”1

    With this mindset—refreshingly atypical in a field where most biologists view satellite DNA as junk—the UM research team designed a series of experiments to determine the function of pericentromeric satellite DNA.2 Typically, when molecular biologists seek to understand the functional role of a region of DNA, they either alter it or splice it out of the genome. But, because the pericentromeric DNA occupies such a large proportion of chromosomes, neither option was available to the research team. Instead, they made use of a protein found in the fruit fly Drosophila melanogaster, called D1. Previous studies demonstrated that this protein binds to satellite DNA.

    The researchers disabled the gene that encodes D1 and discovered that fruit fly germ cells died. They observed that without the D1 protein, the germ cells formed micronuclei. These structures reflect chromosomal instability, and they form when a chromosome or a chromosomal fragment becomes dislodged from the nucleus.

    The team repeated the study, but this time they used a mouse model system. The mouse genome encodes a protein called HMGA1 that is homologous to the D1 protein in fruit flies. When they damaged the gene encoding HMGA1, the mouse cells also died, forming micronuclei.

    As it turns out, both D1 and HMGA1 play a crucial role, ensuring that chromosomes remain bundled in the nucleus. These proteins accomplish this feat by binding to the pericentromeric satellite DNA. Both proteins have multiple binding sites and, therefore, can bind to several chromosomes at once. The multiple binding interactions collect chromosomes into a bundle to form an association site called a chromocenter.

    The researchers aren’t quite sure how chromocenter formation prevents micronuclei formation, but they speculate that these structures must somehow stabilize the nucleus and the chromosomes housed in its interior. They believe that this functional role is universal among eukaryotic organisms because they observed the same effects in fruit flies and mice.

    This study teaches us two additional lessons. One, so-called junk DNA may serve a structural role in the cell. Most molecular biologists are quick to overlook this possibility because they are hyper-focused on the informational role (encoding the instructions to make proteins) DNA plays.

    Two, just because regions of the genome readily mutate without consequences doesn’t mean these sequences aren’t serving some kind of functional role. In the case of pericentromeric satellite DNA, the sequences vary from organism to organism. Most molecular biologists assume that because the sequences vary, they must not be functionally important. For if they were, natural selection would have prevented them from changing. But this study demonstrates that DNA sequences can vary—particularly if DNA is playing a structural role—as long as they don’t compromise DNA’s structural utility. In the case of pericentromeric DNA, apparently the nucleotide sequence can vary quite a bit without compromising its capacity to bind chromocenter-forming proteins (such as D1 and HMGA1).

    Is the Evolutionary Paradigm the Wrong Framework to Study Genomes?

    Scientists who view biology through the lens of the evolutionary paradigm are often quick to conclude that the genomes of organisms reflect the outworking of evolutionary history. Their perspective causes them to see the features of genomes, such as satellite DNA, as little more than the remnants of an unguided evolutionary process. Within this framework, there is no reason to think that any particular DNA sequence element harbors function. In fact, many life scientists regard these “evolutionary vestiges” as junk DNA. This clearly was the case for satellite DNA.

    Yet, a growing body of data indicates that virtually every category of so-called junk DNA displays function. In fact, based on the available data, a strong case can be made that most sequence elements in genomes possess functional utility. Based on these insights, and the fact that pericentromeric satellite DNA persists in eukaryotic genomes, the team of researchers assumed that it must be functional. It’s a clear departure from the way most biologists think about genomes.

    Based on this study (and others like it), I think it is safe to conclude that we really don’t understand the molecular biology of genomes.

    It seems to me that we live in the midst of a revolution in our understanding of genome structure and function. Instead of being a wasteland of evolutionary debris, the architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

    This insight also leads me to wonder if we have been using the wrong paradigm all along to think about genome structure and function. I contend that viewing biological systems as the Creator’s handiwork provides a superior framework for promoting scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. Also, in addressing the two challenging questions, if biological systems have been created, then there must be good reasons why these systems are structured and function the way they do. And this expectation drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

    Though committed to an evolutionary interpretation of biology, the UM researchers were rewarded with success when they broke ranks with most evolutionary biologists and assumed junk regions of the genome were functional. Their stance illustrates the power of a creation model approach to biology.

    Sadly, most evolutionary biologists are like me when it comes to old furniture. We lack vision and are quick to see it as junk, when in fact a treasure lies in front of us. And, if we let it, this treasure will bring us joy.


    Endnotes
    1. University of Michigan, “Scientists Discover a Role for ‘Junk’ DNA,” ScienceDaily (April 11, 2018), www.sciencedaily.com/releases/2018/04/180411131659.htm.
    2. Madhav Jagannathan, Ryan Cummings, and Yukiko M. Yamashita, “A Conserved Function for Pericentromeric Satellite DNA,” eLife 7 (March 26, 2018): e34122, doi:10.7554/eLife.34122.
  • Frog Choruses Sing Out a Song of Creation

    by Fazale Rana | Jun 12, 2019

    My last name, Rana, is Sanskrit in origin, referring to someone who descends from the Thar Ghar aristocracy. Living in Southern California means I don’t often meet Urdu-speaking people who would appreciate the regal heritage connected to my family name. But I do meet a lot of Spanish speakers. And when I introduce myself, I often see raised eyebrows and smiles.

    In Spanish, Rana means frog.

    My family has learned to embrace our family’s namesake. In fact, when our kids were little, my wife affectionately referred to our five children as ranitas—little frogs.


    Image: Five Ranitas. Image credit: Shutterstock

    Our feelings about these cute and colorful amphibians aside, frogs are remarkable creatures. They engage in some fascinating behaviors. Take courtship, as an example. In many frog species, the males croak to attract the attention of females, with each frog species displaying its own distinct call.

    Male frogs croak by filling their vocal sacs with air, which amplifies their calls so that they can be heard up to a mile away. Oftentimes, male frogs in the same vicinity will all croak together, forming a chorus.

    Image: Male Frog Croaking to Attract a Female. Image credit: Shutterstock

    As it turns out, female frogs aren’t the only ones who respond to frog croaks.

    A research team from Japan has spent a lot of time listening to and analyzing frog choruses with the hopes of understanding the mathematical structure of the sounds that frogs collectively make when they call out to females. Once they had the mathematical model in hand, the researchers discovered that they could use it to improve the efficiency of wireless data transfer systems.1

    This work serves as one more example of scientists and engineers applying insights from biology to drive technology advances and breakthroughs. This approach to technology development (called biomimetics and bioinspiration)—exemplified by the impressive work of the Japanese researchers—has significance that extends beyond engineering. It can be used to make the case that a Creator must have played a role in the design and history of life by marshaling support for two distinct arguments for God’s existence: the argument from beauty and the converse Watchmaker argument.

    Frog Choruses: A Cacophony or a Symphony?

    Anyone who has spent time near a pond at night certainly knows the ruckus that an army of male frogs can make when each of them is vying for the attention of females.

    All the male frogs living near the pond want to attract females to the same breeding site, but, in doing so, each individual also wants to attract females to his specific territory. Field observations indicate that, instead of engaging in a croaking free-for-all (with neighboring frogs trying to outperform one another), the army of frogs engages in a carefully orchestrated acoustical presentation. As a result, male frogs avoid call overlap with neighboring males on a short timescale, while synchronizing their croaks with the other frogs to produce a chorus on a longer timescale.

    The frogs avoid call overlap by alternating between silence and croaking, coordinating with neighboring frogs so that when one frog rests, another croaks. This alternating back-and-forth makes it possible for each individual frog to be heard amid the chorus, and it also results in a symphonic chorus of frog croaks.

    The Mathematical Structure of Frog Choruses

    To dissect the mathematical structure of frog choruses, the research team placed three male Japanese tree frogs into individual mesh cages that were set along a straight line, with a two-foot separation between each cage. The researchers recorded the frogs’ croaks using microphones placed by each cage.

    They observed that all three frogs alternated their calls, forming a triphasic synchronization. One frog croaked continuously for a brief period of time and then would rest, while the other two frogs took their turn croaking and resting. The researchers determined that the rest breaks for the frogs were important because of the amount of energy it takes the frogs to produce a call.

    All three frogs would synchronize the start and stop of their calls to produce a chorus followed by a period of silence. They discovered that the time between choruses varied quite a bit, without rhyme or reason, and was typically much longer than the chorus time. On the other hand, the croaking of each individual lasted for a predictable time duration that was followed immediately by the croaking of a neighboring frog.

    By analyzing the acoustical data, the researchers developed a mathematical model to describe the croaking of individual frogs and the collective behavior of the frogs when they belted out a chorus of calls. Their model consisted of both deterministic and stochastic components.
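
    The paper’s actual equations aren’t reproduced here, but a toy sketch in Python can illustrate the general idea of combining a deterministic component (phase repulsion that pushes neighboring callers apart) with a stochastic component (random jitter). All parameter values below are invented for illustration.

    ```python
    import math
    import random

    # Toy sketch of a frog-chorus model: each frog is a phase oscillator
    # whose call cycle advances at an intrinsic rate (deterministic), is
    # pushed away from its neighbors' phases (deterministic "call
    # avoidance"), and is jostled by random noise (stochastic).
    N, STEPS, DT = 3, 5000, 0.01
    OMEGA = 2 * math.pi   # intrinsic calling rate: one call cycle per time unit
    COUPLING = 1.5        # strength of the repulsive phase interaction
    NOISE = 0.1           # size of the stochastic component

    random.seed(0)
    phase = [random.uniform(0, 2 * math.pi) for _ in range(N)]

    for _ in range(STEPS):
        updated = []
        for i in range(N):
            # Repulsive coupling: sin(phase_i - phase_j) pushes phases apart,
            # so neighboring frogs settle into alternating (anti-phase) calls.
            repulsion = sum(math.sin(phase[i] - phase[j])
                            for j in range(N) if j != i)
            drift = OMEGA + COUPLING * repulsion
            noise = NOISE * math.sqrt(DT) * random.gauss(0, 1)
            updated.append(phase[i] + DT * drift + noise)
        phase = updated

    # After settling, pairwise phase differences sit near 2*pi/3 or 4*pi/3:
    # the three frogs call in rotation rather than in unison.
    diffs = [round((phase[i] - phase[j]) % (2 * math.pi), 2)
             for i in range(N) for j in range(N) if i < j]
    print(diffs)
    ```

    Read as a transmission schedule, the same alternation keeps any two neighbors from broadcasting at once—the behavior the researchers exploited, as described next, to avert data-packet collisions in sensor arrays.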

    Use of Frog Choruses for Managing Data Traffic

    The researchers realized that the mathematical model they developed could be applied to control wireless sensor networks, such as those that make up the internet of things. These networks entail an array of sensor nodes that transmit data packets, delivering them to a gateway node by multi-hop communication, with data packets passed from sensor to sensor until they reach the gateway. During transmission, it is critical for the system to avoid the collision of data packets. It is also critical to regulate the overall energy consumption of the system to avoid wasting valuable energy resources.

    Image: The Internet of Things Made Up of Wireless Sensors. Image credit: Shutterstock

    Through simulation studies, the Japanese team demonstrated that the mathematical model inspired by frog choruses averted the collision of data packets in a wireless sensor array, maximized network connectivity, and enhanced efficiency of the array by minimizing power consumption. The researchers conclude, “This study highlights the unique dynamics of frog choruses over multiple time scales and also provides a novel bio-inspired technology.”2

    As important as this work may be for inspiring new technologies, as a Christian, I find its real significance in the theological arena.

    Frog Choruses and the Argument from Beauty

    The grandeur of nature touches the very core of who we are—if we take the time to let it. But, as the work by the Japanese researchers demonstrates, the grandeur we see all around us in nature isn’t confined to what we perceive with our immediate senses. It exists in the underlying mathematical structure of nature. It is nothing short of amazing to think that such exquisite organization and orchestration characterizes frog choruses, so much so that it can inspire sophisticated data management techniques.

    From my vantage point, the beauty and mathematical elegance of nature points to the reality of a Creator.

    If God created the universe, then it is reasonable to expect it to be a beautiful universe, one that displays an even deeper underlying beauty in the mathematical structure that defines the universe itself and phenomena within the universe. Yet if the universe came into existence through mechanism alone, there isn’t any real reason to think it would display beauty. In other words, the beauty in the world around us signifies the divine.

    Furthermore, if the universe originated through uncaused physical mechanisms, there is no reason to think that humans would possess an appreciation for beauty.

    A quick survey of the scientific and popular literature highlights the challenge that the origin of our aesthetic sense creates for the evolutionary paradigm.3 Plainly put: evolutionary biologists have no real explanation for the origin of our aesthetic sense. To be clear, evolutionary biologists have posited explanations to account for the genesis of our capacity to appreciate beauty. But after examining these ideas, we walk away with the strong sense that they are not much more than “just-so stories,” lacking any real evidential support.

    On the other hand, if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

    Frog Choruses and the Converse Watchmaker Argument

    The idea that biological designs—such as the courting behavior of male frogs—can inspire engineering and technology advances is also highly provocative for other reasons. First, it highlights just how remarkable and elegant the designs found throughout the living realm actually are.

    I think that the elegance of these designs points to a Creator’s handiwork. It also makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument. (For a detailed discussion, see my essay titled “The Inspirational Design of DNA” in the book Building Bridges.)

    The argument can be stated like this:

    • If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models for inspiring the development of new technologies.
    • Indeed, this scenario plays out in the engineering discipline of biomimetics.
    • Therefore, it becomes reasonable to think that biological designs are the work of a Creator.

    In fact, I will go one step further. Biomimetics and bioinspiration logically arise out of a creation model approach to biology. That designs in nature can be used to inspire engineering makes sense only if these designs arose from an intelligent Mind. The mathematical structure of frog choruses is yet another example of such bioinspiration.

    Frogs really are amazing—and regal—creatures. Listening to a frog chorus can connect us to the beauty of the world around us. And it will one day help all of our electronic devices to connect together. And that’s certainly something to sing about.


    Endnotes
    1. Ikkyu Aihara et al., “Mathematical Modelling and Application of Frog Choruses As an Autonomous Distributed Communication System,” Royal Society Open Science 6, no. 1 (January 2, 2019): 181117, doi:10.1098/rsos.181117.
    2. Aihara et al., “Mathematical Modelling and Application.”
    3. For example, see Ferris Jabr, “How Beauty is Making Scientists Rethink Evolution,” The New York Times Magazine, January 9, 2019, https://www.nytimes.com/2019/01/09/magazine/beauty-evolution-animal.html.
  • Why Would God Create a World with Parasites?

    by Fazale Rana | Jun 05, 2019

    A being so powerful and so full of knowledge as a God who could create the universe, is to our finite minds omnipotent and omniscient, and it revolts our understanding to suppose that his benevolence is not unbounded, for what advantage can there be in the sufferings of millions of lower animals throughout almost endless time? This very old argument from the existence of suffering against the existence of an intelligent first cause seems to me a strong one; whereas, as just remarked, the presence of much suffering agrees well with the view that all organic beings have been developed through variation and natural selection.1

    —Charles Darwin, The Autobiography of Charles Darwin

    If God exists and if he is all-powerful, all-knowing, and all-good, why is there so much pain and suffering in the world? This conundrum keeps many skeptics and seekers from the Christian faith and even troubles some Christians.

    Perhaps nothing epitomizes the problem of pain and suffering more than the cruelty observed in nature. Indeed, what advantage can there be in the suffering of millions of animals?

    Often, the pain and suffering animals experience is accompanied by unimaginable and seemingly unnecessary cruelty.

    Take nematodes (roundworms) as an example. There are over 10,000 species of nematodes. Some are free-living. Others are parasitic. Nematode parasites infect humans, animals, plants, and insects, causing untold pain and suffering. But their typical life cycle in insects seems especially cruel.

    Nematodes that parasitize insects usually are free-living in their adult form but infest their host in the juvenile stage. The infection begins when the juvenile form of the parasite enters the insect host, usually through a body opening, such as the mouth or anus. Sometimes the juveniles drill through the insect’s cuticle.

    Once inside the host, the juveniles release bacteria that infect and kill the host, liquefying its internal tissues. As long as the supply of host tissue holds out, the juveniles will live within the insect’s body, even reproducing. When the food supply runs out, the nematodes exit the insect and seek out another host.

    Figure 1: An Entomopathogenic Nematode Juvenile. Image credit: Shutterstock

    Why would God create a world with parasitism? Could God really be responsible for a world like the one we inhabit? Many skeptics would answer “no” and conclude that God must not exist.

    A Christian Response to the Problem of Evil

    One way to defend God’s existence and goodness in the face of animal pain and suffering is to posit that there just might be good reasons for God to create the world the way it is. Perhaps what we are quick to label as evil may actually serve a necessary function.

    This perspective gains support based on some recent insights into the benefits that insect parasites impart to ecosystems. A research team from the University of Georgia (UGA) recently unearthed one example of the important role played by these parasites.2 These researchers demonstrated that nematode-infected horned passalus beetles (bess beetles) are more effective at breaking down dead logs in the forest than their parasite-free counterparts—and this difference benefits the ecosystem. Here’s how.

    The Benefit Parasites Provide to the Ecosystem

    The horned passalus lives in decaying logs. The beetles consume wood through a multistep process. After ingesting the wood, these insects excrete it in a partially digested form. The wood excrement becomes colonized by bacteria and fungi and then is later re-consumed by the beetle.

    These insects can become infected by a nematode parasite (Chondronema passali). The parasite inhabits the abdominal cavity of the beetle (though not its gastrointestinal tract). When infected, the horned passalus can harbor thousands of individual nematodes.

    To study the effect of this parasite on the horned passalus and the forest ecosystem inhabited by the insect, researchers collected 113 individuals from the woods near the UGA campus. They also collected pieces of wood from the logs bearing the beetles.

    In the laboratory, they placed each of the beetles in separate containers that also contained pieces of wood. After three months, they discovered that the beetles infected with the nematode parasite processed 15 percent more wood than beetles that were parasite-free. Apparently, the beetles compensate for the nematode infection by consuming more food. One possible explanation for the increased wood consumption is that the parasites draw essential nutrients away from the beetle host, requiring the insect to consume more food.

    While it isn’t clear if the parasite infestation harms the beetle (infected beetles have reduced mobility and loss of motor function), it is clear that the infestation benefits the ecosystem. These beetles play a key role in breaking down dead logs and returning nutrients to the forest soil. By increasing the beetles’ wood consumption, the nematodes accelerate this process, benefiting the ecosystem’s overall health.

    Cody Prouty, one of the project’s researchers, points out “that although the beetle and the nematode have a parasitic relationship, the ecosystem benefits from not only the beetle performing its function, but the parasite increasing the efficiency of the beetle. Over the course of a few years, the parasitized beetles could process many more logs than unparasitized beetles, and lead to an increase of organic matter in soils.”3

    This study is not the first to discover benefits parasites impart to ecosystems. Parasites play a role in shaping ecosystem biodiversity and they intertwine with the food web. The researchers close their article this way: “Countering long-standing unpopular views of parasites is certainly challenging, but perhaps evidence like that presented here will be of use in this effort.”4

    Such evidence does not revolt our understanding, as Darwin might suggest, but instead enhances our insights into the creation and helps counter the challenge of the problem of evil. Even creatures as gruesome as parasites can serve a beneficial purpose in creation and maybe could rightfully be understood as good.


    Endnotes
    1. Charles Darwin, The Autobiography of Charles Darwin: 1809–1882 (New York: W. W. Norton, 1969), 90.
    2. Andrew K. Davis and Cody Prouty, “The Sicker the Better: Nematode-Infected Passalus Beetles Provide Enhanced Ecosystem Services,” Biology Letters 15, no. 5 (2019): 20180842, doi:10.1098/rsbl.2018.0842.
    3. University of Georgia, “Parasites Help Beetle Hosts Function More Effectively,” ScienceDaily (May 1, 2019), https://www.sciencedaily.com/releases/2019/05/190501131435.htm.
    4. Davis and Prouty, “The Sicker the Better,” 3.
  • Biochemical Grammar Communicates the Case for Creation

    by Fazale Rana | May 29, 2019

    As I get older, I find myself forgetting things—a lot. But, thanks to smartphone technology, I have learned how to manage my forgetfulness by using the “Notes” app on my iPhone.

    Figure 1: The Apple Notes app icon. Image credit: Wikipedia

    This app makes it easy for me to:

    • Jot down ideas that suddenly come to me
    • List books I want to read and websites I want to visit
    • Make note of musical artists I want to check out
    • Record “to do” and grocery lists
    • Write down details I need to have at my fingertips when I travel
    • List new scientific discoveries with implications for the RTB creation model that I want to blog about, such as the recent discovery of a protein grammar calling attention to the elegant design of biochemical systems

    And the list goes on. I will never forget again!

    On top of that, I can use the Notes app to categorize and organize all my notes and house them in a single location. Thus, I don’t have to manage scraps of paper that invariably wind up getting scattered all over the place—and often lost.

    And, as a bonus, the Notes app anticipates the next word I am going to use even before I type it. I find myself relying on this feature more and more. It is much easier to select a word than type it out. In fact, the more I use this feature, the better the app becomes at anticipating the next word I want to type.

    Recently, a team of bioinformaticists from the University of Alabama, Birmingham (UAB) and the National Institutes of Health (NIH) used the same algorithm the Notes app uses to anticipate word usage to study protein architectures.1 Their analysis reveals new insight into the structural features of proteins and also highlights the analogy between the information housed in these biomolecules and human language. This analogy contributes to the revitalized Watchmaker argument presented in my book The Cell’s Design.

    N-Gram Language Modeling

    The algorithm used by the Notes app to anticipate the next word the user will likely type is called n-gram language modeling. This algorithm determines the probability of a word being used based on the previous word (or words) typed. (If the probability is based on a single word, it is called a unigram probability. If the calculation is based on the previous two words, it is called a bigram probability, and so on.) This algorithm “trains” the Notes app so that the more I use it, the more reliable the calculated probabilities—and, hence, the better the word recommendations.
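
    As a rough illustration, here is a minimal Python sketch of bigram modeling built on a made-up toy corpus; the Notes app’s production implementation is, of course, far more sophisticated, and all names below are invented for illustration.

    ```python
    from collections import defaultdict

    # Toy bigram model: count adjacent word pairs in a training corpus,
    # then estimate P(next_word | previous_word) from the counts.
    corpus = "the cell is a factory the cell is a machine".split()

    bigram_counts = defaultdict(lambda: defaultdict(int))
    unigram_counts = defaultdict(int)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1
        unigram_counts[prev] += 1

    def bigram_probability(prev: str, nxt: str) -> float:
        """P(nxt | prev) = count(prev, nxt) / count(prev)."""
        if unigram_counts[prev] == 0:
            return 0.0
        return bigram_counts[prev][nxt] / unigram_counts[prev]

    def suggest_next(prev: str):
        """Most probable next word, as a predictive keyboard would offer."""
        candidates = bigram_counts[prev]
        return max(candidates, key=candidates.get) if candidates else None

    print(bigram_probability("cell", "is"))  # 1.0: "cell" is always followed by "is"
    print(suggest_next("a"))                 # "factory" (ties broken by insertion order)
    ```

    The more text the model sees, the sharper these probabilities become—which is why the app’s suggestions improve with use.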

    N-Gram Language Modeling and the Case for a Creator

    To understand why the work of the research team from UAB and NIH provides evidence for a Creator’s role in the origin and design of life, a brief review of protein structure is in order.

    Protein Structure

    Proteins are large, complex molecules that play a key role in virtually all of the cell’s operations. Biochemists have long known that the three-dimensional structure of a protein dictates its function.

    Because proteins are such large, complex molecules, biochemists categorize protein structure into four different levels: primary, secondary, tertiary, and quaternary structures. A protein’s primary structure is the linear sequence of amino acids that make up each of its polypeptide chains.

    The secondary structure refers to short-range three-dimensional arrangements of the polypeptide chain’s backbone arising from the interactions between chemical groups that make up its backbone. Three of the most common secondary structures are the random coil, alpha (α) helix, and beta (β) pleated sheet.

    Tertiary structure describes the overall shape of the entire polypeptide chain and the location of each of its atoms in three-dimensional space. The structure and spatial orientation of the chemical groups that extend from the protein backbone are also part of the tertiary structure.

    Quaternary structure arises when several individual polypeptide chains interact to form a functional protein complex.


    Figure 2: The four levels of protein structure. Image credit: Shutterstock

    Protein Domains

    Within the tertiary structure of proteins, biochemists have discovered compact, self-contained regions that fold independently. These three-dimensional regions of the protein’s structure are called domains. Some proteins consist of a single compact domain, but many proteins possess several domains. In effect, domains can be thought of as the fundamental units of a protein’s tertiary structure. Each domain possesses a unique biochemical function. Biochemists refer to the spatial arrangement of domains as a protein’s domain architecture.

    Researchers have discovered several thousand distinct protein domains. Many of these domains recur in different proteins, with each protein’s tertiary structure comprised of a mix-and-match combination of protein domains. Biochemists have also learned that a relationship exists between an organism’s complexity and both the number of unique domains found in its protein set and the number of multi-domain proteins encoded by its genome.

    Figure 3: Pyruvate kinase, an example of a protein with three domains. Image credit: Wikipedia

    The Key Question in Protein Chemistry

    As much progress as biochemists have made characterizing protein structure over the last several decades, they still lack a fundamental understanding of the relationship between primary structure (the amino acid sequence) and tertiary structure and, hence, protein function. In order to develop this insight, they need to determine the “rules” that dictate the way proteins fold. Treating proteins as information systems can help determine some of these rules.

    Protein as Information Systems

    Proteins are not only large, complex molecules but also information-harboring systems. The amino acid sequence that defines a protein’s primary structure is a type of information—biochemical information—with the individual amino acids analogous to the letters that make up an alphabet.

    N-Gram Analysis of Proteins

    To gain insight into the relationship between a protein’s primary structure and its tertiary structure, the researchers from UAB and NIH carried out an n-gram analysis on the 23 million protein domains found in the protein sets of 4,800 species spanning all three domains of life.

    These researchers point out that an individual amino acid in a protein’s primary structure doesn’t contain information, just as an individual letter in an alphabet doesn’t harbor any meaning. In human language, the most basic unit that conveys meaning is a word. And, in proteins, the most basic unit that conveys biochemical meaning is a domain.

    To decipher the “grammar” used by proteins, the researchers treated adjacent pairs of protein domains in the tertiary structure of each protein in the sample set as a bigram (similar to two words together). Surveying the proteins found in their data set of 4,800 species, they discovered that 95% of all the possible domain combinations don’t exist!

    This finding is key. It indicates that there are, indeed, rules that dictate the way domains interact. In other words, just like certain word combinations never occur in human languages because of the rules of grammar, there appears to be a protein “grammar” that constrains the domain combinations in proteins. This insight implies that physicochemical constraints (which define protein grammar) dictate a protein’s tertiary structure, preventing 95% of conceivable domain-domain interactions.
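
    A toy sketch in Python shows the shape of this analysis: enumerate the adjacent domain pairs that actually occur and compare them with every conceivable pair. The domain names and proteins below are invented for illustration; the real study surveyed roughly 23 million domains across 4,800 species.

    ```python
    from itertools import product

    # Hypothetical sketch: treat each protein as an ordered list of domain
    # names and count which adjacent pairs (bigrams) actually occur.
    proteins = [
        ["kinase", "SH2"],
        ["kinase", "SH2", "SH3"],
        ["PDZ", "kinase"],
    ]

    observed = {(a, b) for p in proteins for a, b in zip(p, p[1:])}
    vocabulary = {d for p in proteins for d in p}
    possible = set(product(vocabulary, repeat=2))

    fraction_missing = 1 - len(observed) / len(possible)
    print(f"{len(observed)} of {len(possible)} conceivable bigrams observed; "
          f"{fraction_missing:.0%} never occur")
    ```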

    Entropy of Protein Grammar

    In thermodynamics, entropy is often used as a measure of the disorder of a system. Information theorists borrow the concept of entropy and use it to measure the information content of a system. For information theorists, the entropy of a system is inversely proportional to the amount of information contained in a sequence of symbols. As the information content increases, the entropy of the sequence decreases, and vice versa. Using this concept, the UAB and NIH researchers calculated the entropy of the protein domain combinations.
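
    For readers who want the formula, the Shannon entropy of a distribution of symbols is H = −Σ pᵢ log₂ pᵢ. Here is a minimal Python sketch computing it over a handful of made-up domain bigrams (the same invented pairs as in the earlier sketch); the actual study computed entropies over real domain architectures.

    ```python
    import math
    from collections import Counter

    # Minimal sketch of Shannon entropy over a bigram distribution.
    bigrams = [("kinase", "SH2"), ("kinase", "SH2"),
               ("SH2", "SH3"), ("PDZ", "kinase")]

    counts = Counter(bigrams)
    total = sum(counts.values())

    # H = -sum(p * log2(p)) over the observed bigram frequencies.
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    print(f"Shannon entropy: {entropy:.2f} bits")  # 1.50 bits for this toy set
    ```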

    In human language, the entropy increases as the vocabulary increases. This makes sense because, as the number of words increases in a language, the likelihood that random word combinations would harbor meaning decreases. In like manner, the research team discovered that the entropy of the protein grammar increases as the number of domains increases. (This increase in entropy likely reflects the physicochemical constraints—the protein grammar, if you will—on domain interactions.)

    Human languages all carry the same amount of information. That is to say, they all display the same entropy content. Information theorists interpret this observation as an indication that a universal grammar undergirds all human languages. It is intriguing that the researchers discovered that the protein “languages” across prokaryotes and eukaryotes all display the same level of entropy and, consequently, the same information content. This relationship holds despite the diversity and differences in complexity of the organisms in their data set. By analogy, this finding indicates that a universal grammar exists for proteins. Or to put it another way, the same set of physicochemical constraints dictates the way protein domains interact for all organisms.

    At this point, the researchers don’t know what the grammatical rules are for proteins, but knowing that they exist paves the way for future studies. It also generates hope that one day biochemists might understand them and, in turn, use them to predict protein structure from amino acid sequences.

    This study also illustrates how fruitful it can be to treat biochemical systems as information systems. The researchers conclude that “The similarities between natural languages and genomes are apparent when domains are treated as functional analogs of words in natural languages.”2

    In my view, it is this relationship that points to a Creator’s role in the origin and design of life.

    Protein Grammar and the Case for a Creator

    As discussed in The Cell’s Design, the recognition that biochemical systems are information-based systems has interesting philosophical ramifications. Common, everyday experience teaches that information derives solely from the activity of human beings. So, by analogy, biochemical information systems, too, should come from a divine Mind. Or at least it is rational to hold that view.

    But the case for a Creator strengthens when we recognize that it’s not merely the presence of information in biomolecules that contributes to this version of a revitalized Watchmaker analogy. Added vigor comes from the UAB and NIH researchers’ discovery that the mathematical structure of human languages and biochemical languages is identical.

    Skeptics often dismiss the updated Watchmaker argument by arguing that biochemical information is not genuine information. Instead, they maintain that when scientists refer to biomolecules as harboring information, they are employing an illustrative analogy—a scientific metaphor—and nothing more. They accuse creationists and intelligent design proponents of misconstruing their use of analogical language to make the case for design.3

    But the UAB and NIH scientists’ work questions the validity of this objection. Biochemical information has all of the properties of human language. It really is information, just like the information we conceive and use to communicate.

    Is There a Biochemical Anthropic Principle?

    This discovery also yields another interesting philosophical implication. It lends support to the existence of a biochemical anthropic principle. Discovery of a protein grammar means that there are physicochemical constraints on protein structure. It is remarkable to think that protein tertiary structures may be fundamentally dictated by the laws of nature, instead of being the outworking of an historically contingent evolutionary history. To put it differently, the discovery of a protein grammar reveals that the structure of biological systems may reflect some deep, underlying principles that arise from the very nature of the universe itself. And yet these structures are precisely the types of structures life needs to exist.

    I interpret this “coincidence” as evidence that our universe has been designed for a purpose. And as a Christian, I find that notion to resonate powerfully with the idea that life manifests from an intelligent Agent—namely, God.


    Endnotes
    1. Lijia Yu et al., “Grammar of Protein Domain Architectures,” Proceedings of the National Academy of Sciences, USA 116, no. 9 (February 26, 2019): 3636–45, doi:10.1073/pnas.1814684116.
    2. Yu et al., “Grammar of Protein Domain Architectures,” 3636–45.
    3. For example, see Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science and Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
  • Why Would God Create a World Where Animals Eat Their Offspring?

    by Fazale Rana | May 22, 2019

    What a book a Devil’s chaplain might write on the clumsy, wasteful, blundering, low and horridly cruel works of nature!

    –Charles Darwin, “Letter to J. D. Hooker,” Darwin Correspondence Project

    You may not have ever heard of him, but he played an important role in ushering in the Darwinian revolution in biology. His name was Asa Gray.

    Gray (1810–1888) was a botanist at Harvard University. He was among the first scientists in the US to adopt Darwin’s theory of evolution. Asa Gray was also a devout Christian.

    Asa Gray in 1864. Image credit: John Adams Whipple, Wikipedia

    Gray was convinced that Darwin’s theory of evolution was sound. He was also convinced that nature displayed unmistakable evidence for design. For this reason, he reasoned that God must have used evolution as the means to create and, in doing so, Gray may have been the first person to espouse theistic evolution.

    In his book Darwiniana, Asa Gray presents a number of essays defending Darwin’s theory. Yet, he also expresses his deepest convictions that nature is filled with indicators of design. He attributed that design to a type of God-ordained, God-guided process. Gray argued that God is the source of all evolutionary change.

    Gray and Darwin struck up a friendship and exchanged around 300 letters. In the midst of their correspondence, Gray asked Darwin if he thought it possible that God used evolution as the means to create. Darwin’s reply revealed that he wasn’t very impressed with this idea.

    I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice. Not believing this, I see no necessity in the belief that the eye was expressly designed. On the other hand I cannot anyhow be contented to view this wonderful universe & especially the nature of man, & to conclude that everything is the result of brute force. I am inclined to look at everything as resulting from designed laws, with the details, whether good or bad, left to the working out of what we may call chance. Not that this notion at all satisfies me. I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope & believe what he can.1

    Darwin could not embrace Gray’s theistic evolution because of the cruelty he saw in nature that seemingly causes untold pain and suffering in animals. Darwin—along with many skeptics today—couldn’t square a world characterized by that much suffering with the existence of a God who is all-powerful, all-knowing, and all-good.

    Filial Cannibalism

    The widespread occurrence of filial cannibalism (when animals eat their young or consume their eggs after laying them) and abandonment (leading to death) exemplify such cruelty in animals. It seems such a low and brutal feature of nature.

    Why would God create animals that eat their offspring and abandon their young?

    Is Cruelty in Nature Really Evil?

    But what if there are good reasons for God to allow pain and suffering in the animal kingdom? I have written about good scientific reasons to think that a purpose exists for animal pain and suffering (see “Scientists Uncover a Good Purpose for Long-Lasting Pain in Animals” by Fazale Rana).

    And, what if animal death is a necessary feature of nature? Other studies indicate that animal death promotes biodiversity and ecosystem stability (see “Of Weevils and Wasps: God’s Good Purpose in Animal Death” by Maureen Moser, and “Animal Death Prevents Ecological Meltdown” by Fazale Rana).

    There also appears to be a reason for filial cannibalism and offspring abandonment, at least based on a study by researchers from Oxford University (UK) and the University of Tennessee.2 These researchers demonstrated that filial cannibalism and offspring abandonment comprise a form of parental care.

    What? How is that conclusion possible?

    It turns out that when animals eat their offspring or abandon their young, the reduction promotes the survival of the remaining offspring. To arrive at this conclusion, the researchers performed mathematical modeling of a generic egg-laying species. They discovered that when animals sacrificed a few of their young, the culling led to greater fitness for their offspring than when animals did not engage in filial cannibalism or egg abandonment.

    These behaviors become important when animals lay too many eggs. In order to properly care for their eggs (protect, incubate, feed, and clean), animals confine egg-laying to a relatively small space. This practice leads to a high density of eggs. But this high density can have drawbacks, making the offspring more vulnerable to disease and to shortages of food and oxygen. Filial cannibalism reduces the density, ensuring a greater chance of survival for those eggs that are left behind. So, ironically, when egg density is too high for the environmental conditions, more offspring survive when the parents consume some, rather than none, of the eggs.
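
    A toy model in Python (not the researchers’ actual model, and with invented numbers) illustrates the logic: if per-egg survival declines with crowding, removing some eggs can raise the expected number of survivors.

    ```python
    # Toy illustration: per-egg survival falls linearly with crowding, so
    # culling part of an oversized clutch can increase expected survivors.
    # The clutch size and carrying capacity are invented for illustration.
    def expected_survivors(eggs: int, capacity: int = 50) -> float:
        survival = max(0.0, 1.0 - eggs / capacity)  # survival drops as density rises
        return eggs * survival

    clutch = 40
    for culled in (0, 10, 20):
        remaining = clutch - culled
        print(f"cull {culled:2d}: {expected_survivors(remaining):.1f} expected survivors")
    ```

    In this made-up example, a clutch of 40 eggs yields 8 expected survivors if none are culled, but 12 if 10 or 20 are culled—mirroring the study’s counterintuitive conclusion.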

    So, why lay so many eggs in the first place?

    In general, the more eggs that are laid, the greater the number of surviving offspring—assuming there are unlimited resources and no threats of disease. But it is difficult for animals to know how many eggs to lay because the environment is unpredictable and constantly changing. A better way to ensure reproductive fitness is to lay more eggs and remove some of them if the environment can’t sustain the egg density.
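
    To see this logic in miniature, here is a toy calculation of my own (a minimal sketch in Python, not the researchers' actual model). It assumes, purely for illustration, that each egg's survival probability decays exponentially once the clutch outgrows what the environment can support; the survival curve, the capacity of 50, and the clutch of 200 are all invented numbers:

    import math

    def survivors(eggs, capacity=50.0):
        """Expected surviving offspring if per-egg survival decays
        exponentially with crowding beyond the environment's capacity.
        Both the curve and the capacity value are invented for illustration."""
        return eggs * math.exp(-eggs / capacity)

    full_clutch = 200    # parent overshoots in an unpredictable environment
    culled_clutch = 50   # parent consumes or abandons 150 eggs

    print(f"no culling:   {survivors(full_clutch):.1f} survivors")   # ~3.7
    print(f"with culling: {survivors(culled_clutch):.1f} survivors") # ~18.4

    In this caricature, eating three-quarters of the clutch leaves roughly five times as many surviving offspring: the counterintuitive result the modeling study reports.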

    So, it appears as if there is a good reason for God to create animals that eat their young. In fact, you might even argue that filial cannibalism leads to a world with less cruelty and suffering than a world where filial cannibalism doesn't exist at all. This feature of nature is consistent with the idea of an all-powerful, all-knowing, and all-good God who has designed the creation for his good purposes.

    Resources

    Endnotes
    1. “To Asa Gray 22 May [1860],” Darwin Correspondence Project, University of Cambridge, accessed May 15, 2019, https://www.darwinproject.ac.uk/letter/DCP-LETT-2814.xml.
    2. Mackenzie E. Davenport, Michael B. Bonsall, and Hope Klug, “Unconventional Care: Offspring Abandonment and Filial Cannibalism Can Function as Forms of Parental Care,” Frontiers in Ecology and Evolution 7 (April 17, 2019): 113, doi:10.3389/fevo.2019.00113.
  • Competitive Endogenous RNA Hypothesis Supports the Case for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 15, 2019

    When Francis Crick, codiscoverer of the DNA double helix, first conceived of molecular biology’s organizing principle in 1958, he dubbed it the central dogma. He soon came to regret the term. In his autobiographical account, What Mad Pursuit, Crick writes:

    I called this idea the central dogma, for two reasons, I suspect. I had already used the obvious word hypothesis in the sequence hypothesis, and in addition I wanted to suggest that this new assumption was more central and more powerful. . . . As it turned out, the use of the word dogma caused almost more trouble than it was worth. Many years later Jacques Monod pointed out to me that I did not appear to understand the correct use of the word dogma, which is a belief that cannot be doubted. I did apprehend this in a vague sort of way but since I thought that all religious beliefs were without foundation, I used the word the way I myself thought about it, not as most of the world does, and simply applied it to a grand hypothesis that, however plausible, had little direct experimental support.1

    Even though Crick rued labeling his idea “dogma,” the term seems apt, connotations aside, because of the idea's singular importance to molecular biology.

    The Central Dogma of Molecular Biology

    The central dogma of molecular biology describes the directional flow of information in the cell, which moves from DNA to RNA to proteins. Information can flow from DNA to DNA during DNA replication, from DNA to RNA during transcription, and from RNA back to DNA during reverse transcription. However, biochemical information can’t flow from proteins to either RNA or DNA.


    Figure 1: The Central Dogma of Molecular Biology. Image credit: Shutterstock
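
    The directional flow described above is simple enough to sketch in a few lines of Python (my illustration; the codon table is truncated to just the codons used):

    # Central dogma in miniature: DNA -> mRNA (transcription) -> protein (translation).
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}  # partial genetic code

    def transcribe(dna):
        """Shorthand transcription: swap T for U in the coding strand."""
        return dna.replace("T", "U")

    def translate(mrna):
        """Read the transcript codon by codon until a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    mrna = transcribe("ATGTTTGGCTAA")
    print(mrna)             # AUGUUUGGCUAA
    print(translate(mrna))  # ['Met', 'Phe', 'Gly']

    Note that nothing in this scheme runs backward from the protein: there is no inverse of translate, which is precisely the central dogma's point.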

    Is There a New Dogma in Molecular Biology?

    In my opinion as a biochemist, if there is an idea that has the potential to rival the significance of the central dogma, it just might be the competitive endogenous RNA (ceRNA) hypothesis. This newer model provides a comprehensive description of the role messenger RNA (mRNA) molecules play in regulating gene expression, thereby influencing the flow of information from DNA to proteins.

    The ceRNA hypothesis also provides an elegant rationale for why the genomes of eukaryotic organisms contain pseudogenes (including unitary pseudogenes) and encode long noncoding RNA molecules. Additionally, it explains why duplicated pseudogenes resemble corresponding intact genes. In doing all this, the ceRNA hypothesis provides support for RTB's genomics model—which interprets the structure and activities associated with genomes from a creation or design standpoint. (An overview of the RTB genomics model can be found in the updated and expanded 2nd edition of Who Was Adam?)

    The Competitive Endogenous RNA Hypothesis

    I discuss the ceRNA hypothesis in a previous article. So, I’ll offer just a brief description here. According to the central dogma, the final step in the flow of biochemical information is the production of proteins at the ribosome, directed by the information housed in mRNA. Biochemists have discovered an elaborate mechanism that selectively degrades mRNA transcripts before they can reach this point. This degradation process controls gene expression by dictating the amount of protein produced.

    Molecules called microRNAs bind to the mRNA's 3′ untranslated region, which flags the transcript for destruction by the RNA-induced silencing complex (RISC). A number of distinct microRNA species exist in the cell. Each microRNA species binds to specific sites in the 3′ untranslated region of mRNA transcripts. (These binding locations are called microRNA response elements or MREs.)

    A network of genes shares the same set of MREs and, consequently, their transcripts bind the same set of microRNAs. When one gene in the network becomes up-regulated (leading to increased transcription of that gene), the expression of all the other genes in the network increases. Why? Because the increased level of that particular transcript exerts a “sponge effect,” soaking up microRNAs that would otherwise target the other transcripts for degradation.
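
    The sponge effect lends itself to a simple back-of-the-envelope model. In the Python sketch below (my own toy, not taken from the cited study), a fixed pool of microRNAs distributes across all transcripts carrying the shared MRE in proportion to their abundance, and bound transcripts are assumed to be degraded by RISC:

    def fraction_degraded(target, competitors, mirna_pool=100.0):
        """Fraction of the target transcript flagged for degradation when
        microRNAs partition across all MRE-bearing transcripts by abundance.
        All quantities are arbitrary illustrative numbers."""
        total = target + sum(competitors)
        bound = min(target, mirna_pool * target / total)
        return bound / target

    gene_a = 200.0  # transcript copies of gene A

    # Alone in the network, gene A absorbs the entire microRNA pool.
    print(fraction_degraded(gene_a, competitors=[]))       # 0.5

    # Up-regulate gene B (same MREs): it sponges microRNAs away from A.
    print(fraction_degraded(gene_a, competitors=[600.0]))  # 0.125

    Raising gene B's transcription drops gene A's degraded fraction from one-half to one-eighth, so more of gene A's transcripts survive to be translated, without any change to gene A itself.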

    The Competitive Endogenous RNA Hypothesis and the Role of Junk DNA

    The ceRNA hypothesis elegantly explains the functional utility of three classes of junk DNA: duplicated pseudogenes, unitary pseudogenes, and long noncoding RNAs. As it turns out, the transcripts produced from these types of so-called junk DNA also harbor MREs. None of these transcripts codes for proteins, yet they play an indispensable role in regulating gene expression. In fact, all three are much better suited for the role of molecular sponges precisely because they aren't translated into proteins.

    Of particular utility are duplicated pseudogenes due to their close structural resemblance to the corresponding coding genes. Duplicated pseudogenes not only exert a sponge effect but also serve as decoys that allow the transcripts of the intact genes to escape degradation and to be translated into proteins.

    Is the Competitive Endogenous RNA Hypothesis Valid?

    This question has generated a minor scientific controversy. Some studies provide experimental support for this idea while others question the physiological relevance of ceRNAs. In light of this debate, a team of researchers headed by investigators from Columbia University sought to validate the hypothesis on a large scale.2 They discovered that ceRNA interactions can disrupt the expression of thousands of genes. The team concluded that “ceRNA regulation is the norm not the exception . . . and that ceRNA interactions have genome-wide effects on gene expression.”3

    These researchers think that this insight sheds light on tumor biology because dysregulation of ceRNAs has been implicated in some cancers. Their work also has theological significance because it undermines one of the most significant challenges to design arguments and, in turn, can be marshaled in support of the RTB genomics model.

    The Competitive Endogenous RNA Hypothesis and the Case for a Creator

    Evolutionary biologists have long maintained that identical (or nearly identical) junk DNA sequences (such as pseudogene sequences) found in corresponding locations in genomes of organisms that naturally cluster together (such as humans and the great apes) provide compelling evidence that these organisms must have evolved from a shared ancestor. This interpretation was compelling because junk DNA sequences seemed to be useless vestiges of evolutionary history.

    Creationists and intelligent design proponents had little to offer by way of evidence for the intentional design of genomes. But research in recent years has revealed that virtually every class of junk DNA has function. It seems, then, that shared junk DNA sequences can be understood as shared designs, which is what the RTB genomics model predicts.

    Additionally, the ceRNA hypothesis supports the RTB genomics model even further. This hypothesis provides an elegant explanation for the widespread existence of pseudogenes in genomes and their structural similarity to intact genes.

    Could it be that the idea of religious dogma affirming a Creator’s role in life’s design and history has merit?

    Resources

    Endnotes
    1. Francis Crick, What Mad Pursuit (New York: Basic Books, 1988), 109.
    2. Hua-Sheng Chiu et al., “High-Throughput Validation of ceRNA Regulatory Networks,” BMC Genomics 18 (2017): 418, doi:10.1186/s12864-017-3790-7.
    3. Chiu et al., 418.
  • Pseudogene Discovery Pains Evolutionary Paradigm

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 08, 2019

    It was one of the most painful experiences I ever had. A few years ago, I had two back-to-back bouts of kidney stones. I remember it as if it were yesterday. Man, did it hurt when I passed the stones! All I wanted was for the emergency room nurse to keep the Demerol coming.


    Figure 1: Schematic Depiction of Kidney Stones Moving through the Urinary Tract. Image Credit: Shutterstock

    When all that misery was going down, I wished I was one of those rare individuals who doesn't experience pain. There are some people who, due to genetic mutations, live pain-free lives. This condition is called hypoalgesia. (Of course, there is a serious downside to hypoalgesia. Pain lets us know when our body is hurt or sick. Because people with hypoalgesia can't experience pain, they are prone to serious injuries and illnesses that go unnoticed.)

    Biomedical researchers possess a keen interest in studying people with hypoalgesia. Identifying the mutations responsible for this genetic condition helps investigators understand the physiological processes that undergird the pain sensation. This insight then becomes indispensable to guiding efforts to develop new drugs and techniques to treat pain.

    By studying the genetic profile of a 66-year-old woman whose injuries had been painless her entire life, a research team from the UK recently discovered a novel genetic mutation that causes hypoalgesia.1 The mutation responsible for this patient's hypoalgesia occurred in a pseudogene, a region of the genome considered nonfunctional “junk DNA.”

    This discovery adds to the mounting evidence that shows junk DNA is functional. At this point, molecular geneticists have demonstrated that virtually every class of junk DNA has function. This notion undermines the best evidence for common descent and, hence, undermines an evolutionary interpretation of biology. More importantly, the discovery adds support for the competitive endogenous RNA hypothesis, which can be marshaled to support RTB’s genomics model. It is becoming more and more evident to me that genome structure and function reflect the handiwork of a Creator.

    The Role of a Pseudogene in Mediating Hypoalgesia

    To identify the genetic mutation responsible for the 66-year-old’s hypoalgesia, the research team scanned her DNA along with samples taken from her mother and two children. The team discovered two genetic changes: (1) mutations to the FAAH gene that reduced its expression, and (2) deletion of part of the FAAH pseudogene.

    The FAAH gene encodes a protein called fatty acid amide hydrolase (FAAH). This protein breaks down fatty acid amides. Some of these compounds interact with cannabinoid receptors. These receptors are located in the membranes of cells found in tissues throughout the body, where they mediate pain sensation, among other things. When fatty acid amide concentrations become elevated in the circulatory system, they produce an analgesic effect.

    Researchers found elevated fatty acid amide levels in the patient’s blood, consistent with reduced expression of the FAAH gene. It appears that both mutations are required for the complete hypoalgesia observed in the patient. The patient’s mother, daughter, and son all display only partial hypoalgesia. The mother and daughter have the same mutation in the FAAH gene but an intact FAAH pseudogene. The patient’s son is missing the FAAH pseudogene, but has a “normal” FAAH gene.
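
    The family's genetics can be summarized as a simple lookup table in Python (a schematic of the data above, not a clinical model; real penetrance is messier than two binary switches):

    # (FAAH gene mutated?, FAAH pseudogene deleted?) -> observed pain phenotype
    PHENOTYPE = {
        (True,  True):  "complete hypoalgesia",   # the 66-year-old patient
        (True,  False): "partial hypoalgesia",    # her mother and daughter
        (False, True):  "partial hypoalgesia",    # her son
        (False, False): "typical pain sensation",
    }

    print(PHENOTYPE[(True, True)])  # complete hypoalgesia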

    Based on the data, it looks like proper expression levels of the FAAH gene require an intact FAAH pseudogene. This is not the first time that biomedical researchers have observed the same effect. There are a number of gene-pseudogene pairs in which both must be intact and transcribed for the gene to be expressed properly. In 2011, researchers from Harvard University proposed that the competitive endogenous RNA hypothesis explains why transcribed pseudogenes are so important for gene expression.2

    The Competitive Endogenous RNA Hypothesis

    Biochemists and molecular biologists have long believed that the primary mechanism for regulating gene expression centers on controlling the frequency and amount of mRNA produced during transcription. For housekeeping genes, mRNA is produced continually, while for genes that specify situational proteins, it is produced as needed. Greater amounts of mRNA are produced for genes expressed at high levels and limited amounts for genes expressed at low levels.

    Researchers long thought that once mRNA was produced it would be translated into proteins, but recent discoveries indicate this is not the case. Instead, an elaborate mechanism exists that selectively degrades mRNA transcripts before they can direct protein production at the ribosome. By permitting or preventing translation, this mechanism dictates the amount of protein produced. The selective degradation of mRNA thus regulates gene expression in a manner complementary to transcriptional control.

    Another class of RNA molecules, called microRNAs, mediates the selective degradation of mRNA. In the early 2000s, biochemists recognized that by binding to mRNA (in the 3′ untranslated region of the transcript), microRNAs play a crucial role in gene regulation. Through binding, microRNAs flag the mRNA for destruction by RNA-induced silencing complex (RISC).


    Figure 2: Schematic of the RNA-Induced Silencing Mechanism. Image Credit: Wikipedia

    Various distinct microRNA species in the cell bind to specific sites in the 3′ untranslated region of mRNA transcripts. (These binding locations are called microRNA response elements.) The selective binding by the population of microRNAs explains the role that duplicated pseudogenes play in regulating gene expression.

    The sequence similarity between the duplicated pseudogene and the corresponding “intact” gene means that the same microRNAs will bind to both mRNA transcripts. (It is interesting to note that most duplicated pseudogenes are transcribed.) When microRNAs bind to the transcript of the duplicated pseudogene, it allows the transcript of the “intact” gene to escape degradation. In other words, the transcript of the duplicated pseudogene is a decoy. The mRNA transcript can then be translated and, hence, the “intact” gene expressed.
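
    A toy calculation makes the decoy effect concrete. In this Python sketch (my own illustration with invented numbers), microRNAs partition between the intact gene's transcripts and the pseudogene's transcripts in proportion to abundance, and only unbound gene transcripts reach the ribosome:

    def translated(gene_mrna, pseudo_mrna, mirnas=80.0):
        """Gene transcripts that escape microRNA-triggered degradation.
        Assumes proportional partitioning of a fixed microRNA pool;
        all quantities are arbitrary illustrative numbers."""
        total = gene_mrna + pseudo_mrna
        bound_to_gene = min(gene_mrna, mirnas * gene_mrna / total)
        return gene_mrna - bound_to_gene

    print(translated(gene_mrna=100.0, pseudo_mrna=0.0))    # 20.0 transcripts translated
    print(translated(gene_mrna=100.0, pseudo_mrna=300.0))  # 80.0 transcripts translated

    With the pseudogene transcribed, four times as many of the intact gene's transcripts escape degradation in this caricature, which illustrates why deleting a pseudogene could depress expression of its partner gene even when the gene itself is untouched.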

    It is not just “intact” and duplicated pseudogenes that harbor the same microRNA response elements. Other genes share the same set of microRNA response elements in the 3′ untranslated region of the transcripts and, consequently, will bind the same set of microRNAs. These genes form a network that, when transcribed, will influence the expression of all genes in the network. This relationship means that all the mRNA transcripts in the network can function as decoys. This recognition accounts for the functional utility of unitary pseudogenes.

    One important consequence of this hypothesis is that mRNA has dual functions inside the cell. First, it encodes information needed to make proteins. Second, it helps regulate the expression of other transcripts that are part of its network.

    Junk DNA and the Case for Creation

    Evolutionary biologists have long maintained that identical (or nearly identical) pseudogene sequences found in corresponding locations in genomes of organisms that naturally group together (such as humans and the great apes) provide compelling evidence for shared ancestry. This interpretation was persuasive because molecular geneticists regarded pseudogenes as nonfunctional, junk DNA. Presumably, random biochemical events transformed functional DNA sequences (genes) into nonfunctional garbage.

    Creationists and intelligent design proponents had little to offer by way of evidence for the intentional design of genomes. But all this changed with the discovery that virtually every class of junk DNA has function, including all three types of pseudogenes (processed, duplicated, and unitary).

    If junk DNA is functional, then the sequences previously thought to show common descent could be understood as shared designs. The competitive endogenous RNA hypothesis supports this interpretation. This model provides an elegant rationale for the structural similarity between gene-pseudogene pairs and also makes sense of the widespread presence of unitary pseudogenes in genomes.

    Of course, this insight also supports the RTB genomics model. And that sure feels good to me.

    Resources

    Endnotes
    1. Abdella M. Habib et al., “Microdeletion in a FAAH Pseudogene Identified in a Patient with High Anandamide Concentrations and Pain Insensitivity,” British Journal of Anaesthesia, advance access publication, doi:10.1016/j.bja.2019.02.019.
    2. Ana C. Marques, Jennifer Tan, and Chris P. Ponting, “Wrangling for microRNAs Provokes Much Crosstalk,” Genome Biology 12, no. 11 (November 2011): 132, doi:10.1186/gb-2011-12-11-132; Leonardo Salmena et al., “A ceRNA Hypothesis: The Rosetta Stone of a Hidden RNA Language?,” Cell 146, no. 3 (August 5, 2011): 353–58, doi:10.1016/j.cell.2011.07.014.
  • Why Mitochondria Make My List of Best Biological Designs

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 01, 2019

    A few days ago, I ran across a BuzzFeed list that catalogs 24 of the most poorly designed things in our time. Some of the items that stood out from the list for me were:

    • serial-wired Christmas lights
    • economy airplane seats
    • clamshell packaging
    • juice cartons
    • motion sensor faucets
    • jewel CD packaging
    • umbrellas

    What were people thinking when they designed these things? It’s difficult to argue with BuzzFeed’s list, though I bet you might add a few things of your own to their list of poor designs.

    If biologists were to make a list of poorly designed things, many would probably include . . . everything in biology. Most life scientists are influenced by an evolutionary perspective. Thus, they view biological systems as inherently flawed vestiges cobbled together by a set of historically contingent mechanisms.

    Yet as our understanding of biological systems improves, evidence shows that many “poorly designed” systems are actually exquisitely assembled. It also becomes evident that many biological designs reflect an impeccable logic that explains why these systems are the way they are. In other words, advances in biology reveal that it makes better sense to attribute biological systems to the work of a Mind, not to unguided evolution.

    Based on recent insights by biochemist and origin-of-life researcher Nick Lane, I would add mitochondria to my list of well-designed biological systems. Lane argues that complex cells and, ultimately, multicellular organisms would be impossible if it weren't for mitochondria.1 (These organelles generate most of the ATP molecules used to power the operations of eukaryotic cells.) Specifically, Lane has demonstrated that mitochondria's properties are just right for making complex eukaryotic cells possible. Without mitochondria, life would be limited to prokaryotic cells (bacteria and archaea).

    To put it another way, Lane has shown that prokaryotic cells could never evolve into cells with the complexity of the eukaryotic cells required for multicellular organisms. The reason has to do with bioenergetic constraints placed on prokaryotic cells. According to Lane, the advent of mitochondria allowed life to break free from these constraints, paving the way for complex life.


    Figure 1: A Mitochondrion. Image credit: Shutterstock

    Lane's discovery reveals that mitochondria display exquisite design and a logical architecture and operation. Yet this is not necessarily what I (or many others) would expect if mitochondria were the result of evolution. Rather, we'd expect biological systems to appear haphazard and purposeless, just good enough for the organism to survive and nothing more.

    To understand why I (and many evolutionary biologists) would hold this view about mitochondria and eukaryotic cells (assuming that they were the product of evolutionary processes), it is necessary to review the current evolutionary explanation for their origins.

    The Endosymbiont Hypothesis

    Most biologists believe that the endosymbiont hypothesis is the best explanation for the origin of complex eukaryotic cells. This hypothesis states that complex cells originated when single-celled microbes formed symbiotic relationships. “Host” microbes (most likely archaea) engulfed other archaea and/or bacteria, which then existed inside the host as endosymbionts.

    The presumption, then, is that organelles, including mitochondria, were once endosymbionts. Evolutionary biologists believe that, once engulfed, the endosymbionts took up permanent residency within the host cell and even grew and divided inside the host. Over time, the endosymbionts and the host became mutually interdependent. For example, the endosymbionts provided a metabolic benefit for the host cell, such as serving as a source of ATP. In turn, the host cell provided nutrients to the endosymbionts. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.

    Based on this scenario, there is no real rationale for the existence of mitochondria (and eukaryotic cells). They are the way they are because they just wound up that way.

    But Nick Lane’s insights suggest otherwise.

    Lane’s analysis identifies a deep-seated rationale that accounts for the features of mitochondria (and eukaryotic cells) related to their contribution to cellular bioenergetics. To understand why mitochondria and eukaryotic cells are the way they are, we first need to understand why prokaryotic cells can never evolve into large complex cells, a necessary step for the advent of complex multicellular organisms.

    Bioenergetics Constraints on Prokaryotic Cells

    Lane has discovered that bioenergetic constraints keep bacterial and archaeal cells trapped at their current size and complexity. Key to discovering this constraint is a metric Lane devised called Available Energy per Gene (AEG). It turns out that the AEG in eukaryotic cells can be as much as 200,000 times larger than the AEG in prokaryotic cells. This extra energy allows eukaryotic cells to engage in a wide range of metabolic processes that support cellular complexity. Prokaryotic cells simply can't afford such processes.

    An average eukaryotic cell has between 20,000 and 40,000 genes; a typical bacterial cell has about 5,000 genes. Each gene encodes the information the cell's machinery needs to make a distinct protein. And proteins are the workhorse molecules of the cell. More genes mean a more diverse suite of proteins, which means greater biochemical complexity.

    So, what is so special about eukaryotic cells? Why don’t prokaryotic cells have the same AEG? Why do eukaryotic cells have an expanded repertoire of genes and prokaryotic cells don’t?

    In short, the answer is: mitochondria.

    On average, the volume of eukaryotic cells is about 15,000 times larger than that of prokaryotic cells. Eukaryotic cells’ larger size allows for their greater complexity. Lane estimates that for a prokaryotic cell to scale up to this volume, its radius would need to increase 25-fold and its surface area 625-fold.

    Because the plasma membrane of bacteria is the site for ATP synthesis, increases in the surface area would allow the hypothetically enlarged bacteria to produce 625 times more ATP. But this increased ATP production doesn’t increase the AEG. Why is that?

    The bacterium would have to produce 625 times more proteins to support the increased ATP production. Because the cell's machinery must access the bacterium's DNA to make these proteins, a single copy of the genome is insufficient to support all of the activity centered on the synthesis of that many proteins. In fact, Lane estimates that for a bacterium to increase its ATP production 625-fold, it would require 625 copies of its genome. In other words, even though the bacterium increases in size, the AEG remains effectively unchanged.


    Figure 2: ATP Production at the Cell Membrane Surface. Image credit: Shutterstock

    Things become more complicated when factoring in cell volume. When the surface area (and concomitant ATP production) increases by a factor of 625, the volume of the cell expands 15,000 times. To satisfy the demands of a larger cell, even more copies of the genome would be required, perhaps as many as 15,000. But energy production tops out at a 625-fold increase. This mismatch means that the AEG drops roughly 25-fold. For a genome consisting of 5,000 genes, this drop means that a bacterium the size of a eukaryotic cell would have about 125,000 times less AEG than a typical eukaryotic cell and 200,000 times less AEG when compared to eukaryotes with genome sizes approaching 40,000 genes.
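
    The arithmetic in the last two paragraphs is easy to verify. This short Python check simply reproduces the round numbers quoted above using the scaling logic of Lane's argument:

    radius_factor = 25
    surface_factor = radius_factor ** 2   # 625: ATP output scales with membrane area
    volume_factor = radius_factor ** 3    # 15,625: rounded to ~15,000 in the text

    genome_copies = volume_factor         # copies needed to service the enlarged cell
    energy_per_gene = surface_factor / genome_copies

    print(surface_factor)   # 625
    print(volume_factor)    # 15625
    print(energy_per_gene)  # 0.04 -> each gene gets ~1/25 of its former energy budget

    Scaling energy with area but genes with volume guarantees the mismatch: the bigger the hypothetical bacterium gets, the poorer each of its genes becomes.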

    Bioenergetic Freedom for Eukaryotic Cells

    Thanks to mitochondria, eukaryotic cells are free from the bioenergetic constraints that ensnare prokaryotic cells. Mitochondria generate the same amount of ATP as a bacterial cell. However, the mitochondrial genome encodes only 13 proteins, so the organelle's ATP demand is low. The net effect is that the mitochondria's AEG skyrockets. Furthermore, mitochondrial membranes come equipped with an ATP transport protein that pumps the vast excess of ATP from the organelle interior into the cytoplasm for the eukaryotic cell to use.

    To summarize, the mitochondrion's small genome plus its prodigious ATP output are the keys to the eukaryotic cell's large AEG.

    Of course, this raises a question: Why do mitochondria have genomes at all? Well, as it turns out, mitochondria need genomes for several reasons (which I’ve detailed in previous articles).

    Other features of mitochondria are also essential for ATP production. For example, cardiolipin in the organelle’s inner membrane plays a role in stabilizing and organizing specific proteins needed for cellular energy production.

    From a creation perspective, it seems that if a Creator were going to design a eukaryotic cell from scratch, he would have to create an organelle just like a mitochondrion to provide the energy needed to sustain the cell's complexity with a high AEG. Far from being an evolutionary “kludge job,” mitochondria appear to be an elegantly designed feature of eukaryotic cells with a just-right set of properties that allow for the cellular complexity needed to sustain complex multicellular life. It is eerie to think that unguided evolutionary events just happened to traverse the just-right evolutionary path to yield such an organelle.

    As a Christian, I see the rationale that undergirds the design of mitochondria as the signature of the Creator’s handiwork in biology. I also view the anthropic coincidence associated with the origin of eukaryotic cells as reason to believe that life’s history has purpose and meaning, pointing toward the advent of complex life and humanity.

    So, now you know why mitochondria make my list.

    Resources

    Endnotes
    1. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6, no. 5 (May 2014): a015982, doi:10.1101/cshperspect.a015982.
  • Self-Assembly of Protein Machines: Evidence for Evolution or Creation?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 17, 2019

    I finally upgraded my iPhone a few weeks ago from a 5s to an 8 Plus. I had little choice. The battery on my cell phone would no longer hold a charge.

    I’d put off getting a new one for as long as possible. It just didn’t make sense to spend money chasing the latest and greatest technology when current cell phone technology worked perfectly fine for me. Apart from the battery life and a less-than-ideal camera, I was happy with my iPhone 5s. Now I am really glad I made the switch.

    Then, the other day I caught myself wistfully eyeing the iPhone X. And, today, I learned that Apple is preparing the release of the iPhone 11 (or XI or XT). Where will Apple’s technology upgrades take us next? I can’t wait to find out.

    Have I become a technology junkie?

    It is remarkable how quickly cell phone technology advances. It is also remarkable how alluring new technology can be. The next thing you know, Apple will release an iPhone that will assemble itself when it comes out of the box. . . . Probably not.

    But, if the work of engineers at MIT ever reaches fruition, it is possible that smartphone manufacturers one day just might rely on a self-assembly process to produce cell phones.

    A Self-Assembling Cell Phone

    The Self-Assembly Lab at MIT has developed a pilot process to manufacture cell phones by self-assembly.

    To do this, they designed their cell phone to consist of six parts that fit together in a lock-and-key manner. By placing the cell phone pieces into a tumbler that turns at the just-right speed, the pieces automatically combine with one another, bit by bit, until the cell phone is assembled.

    Few errors occur during the assembly process: because of the lock-and-key fabrication, only pieces designed to fit together combine with one another.
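
    It is illuminating to mimic the tumbler in code. The following Python toy is my own sketch (MIT's actual process is mechanical, not simulated): collisions between parts happen at random, but only correctly matched interfaces stick, so random agitation still yields ordered assembly:

    import random

    def tumble(parts, steps=10_000, seed=1):
        """Randomly collide part-chains; join two chains only when the
        first chain's free 'key' matches the second chain's free 'lock'."""
        rng = random.Random(seed)
        assemblies = [[p] for p in parts]
        while steps and len(assemblies) > 1:
            steps -= 1
            a, b = rng.sample(range(len(assemblies)), 2)
            if assemblies[a][-1] + 1 == assemblies[b][0]:  # complementary interfaces
                merged = assemblies[a] + assemblies[b]
                for i in sorted((a, b), reverse=True):     # drop higher index first
                    assemblies.pop(i)
                assemblies.append(merged)
        return assemblies

    print(tumble(parts=[0, 1, 2, 3, 4, 5]))  # almost always [[0, 1, 2, 3, 4, 5]]

    No step in the loop “knows” the target design; the error-checking lives entirely in the interfaces, which is the point of the lock-and-key fabrication.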

    Self-Assembly and the Case for a Creator

    It is quite likely that the work of MIT’s Self-Assembly Lab (and other labs like it) will one day revolutionize manufacturing—not just for iPhones, but for other types of products as well.

    As alluring as this new technology might be, I am more intrigued by its implications for the creation-evolution controversy. What do self-assembly processes have to do with the creation-evolution debate? More than we might realize.

    I believe self-assembly processes strengthen the Watchmaker argument for God's existence (and role in the origin of life). Namely, this cutting-edge technology makes it possible to respond to a common objection leveled against this design argument.

    To understand why this engineering breakthrough is so important for the Watchmaker argument, a little background is necessary.

    The Watchmaker Argument

    Anglican natural theologian William Paley (1743–1805) posited the Watchmaker argument in the eighteenth century. It went on to become one of the best-known arguments for God’s existence. The argument hinges on the comparison Paley made between a watch and a rock. He argued that a rock’s existence can be explained by the outworking of natural processes—not so for a watch.

    The characteristics of a watch—specifically the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Employing an analogy, Paley asserted that just as a watch requires a watchmaker, so too, life requires a Creator. Paley noted that biological systems display a wide range of features characterized by the precise interplay of complex parts designed to interact for specific purposes. In other words, biological systems have much more in common with a watch than a rock. This similarity being the case, it logically follows that life must stem from the work of a Divine Watchmaker.

    Biochemistry and the Watchmaker Argument

    As I discuss in my book The Cell’s Design, advances in biochemistry have reinvigorated the Watchmaker argument. The hallmark features of biochemical systems are precisely the same properties displayed in objects, devices, and systems designed and crafted by humans.

    Cells contain protein complexes that are structured to operate as biomolecular motors and machines. Some molecular-level biomachines are strict analogs to machinery produced by human designers. In fact, in many instances, a one-to-one relationship exists between the parts of manufactured machines and the molecular components of biomachines. (A few examples of these biomolecular machines are discussed in the articles listed in the Resources section.)

    We know that machines originate in human minds that comprehend and then implement designs. So, when scientists discover example after example of biomolecular machines inside the cell with an eerie and startling similarity to the machines we produce, it makes sense to conclude that these machines and, hence, life, must also have originated in a Mind.

    A Skeptic’s Challenge

    As you might imagine, skeptics have leveled objections against the Watchmaker argument since its introduction in the 1700s. Today, when skeptics criticize the latest version of the Watchmaker argument (based on biochemical designs), the influence of Scottish skeptic David Hume (1711–1776) can be seen and felt.

    In his 1779 work Dialogues Concerning Natural Religion, Hume presented several criticisms of design arguments. The foremost centered on the nature of analogical reasoning. Hume argued that the conclusions resulting from analogical reasoning are only sound when the things compared are highly similar to each other. The more similar, the stronger the conclusion. The less similar, the weaker the conclusion.

    Hume dismissed the original version of the Watchmaker argument by maintaining that organisms and watches are nothing alike. They are too dissimilar for a good analogy. In other words, what is true for a watch is not necessarily true for an organism and, therefore, it doesn’t follow that organisms require a Divine Watchmaker, just because a watch does.

    In effect, this is one of the chief reasons why some skeptics today dismiss the biochemical Watchmaker argument. For example, philosopher Massimo Pigliucci has insisted that Paley’s analogy is purely metaphorical and does not reflect a true analogical relationship. He maintains that any similarity between biomolecular machines and human designs reflects merely illustrative analogies that life scientists use to communicate the structure and function of these protein complexes via familiar concepts and language. In other words, it is illegitimate to use the “analogies” between biomolecular machines and manufactured machines to make a case for a Creator.1

    A Response Based on Insights from Nanotechnology

    I have responded to this objection by pointing out that nanotechnologists have isolated biomolecular machines from the cell and incorporated these protein complexes into nanodevices and nanosystems for the explicit purpose of taking advantage of their machine-like properties. These transplanted biomachines power motion and movements in the devices, which otherwise would be impossible with current technology. In other words, nanotechnologists view these biomolecular systems as actual machines and utilize them as such. Their work demonstrates that biomolecular machines are literal, not metaphorical, machines. (See the Resources section for articles describing this work.)

    Is Self-Assembly Evidence of Evolution or Design?

    Another criticism—inspired by Hume—is that machines designed by humans don’t self-assemble, but biochemical machines do. Skeptics say this undermines the Watchmaker analogy. I have heard this criticism in the past, but it came up recently in a dialogue I had with a skeptic in a Facebook group.

    I wrote that “What we discover when we work out the structure and function of protein complexes are features that are akin to an automobile engine, not an outcropping of rocks.”

    A skeptic named Maurice responded: “Your analogy is false. Cars do not spontaneously self-assemble—in that case there is a prohibitive energy barrier. But hexagonal lava rocks can and do—there is no energy barrier to prohibit that from happening.”

    Maurice argues that my analogy is a poor one because protein complexes in the cell self-assemble, whereas automobile engines can’t. For Maurice (and other skeptics), this distinction serves to make manufactured machines qualitatively different from biomolecular machines. On the other hand, hexagonal patterns in lava rocks give the appearance of design but are actually formed spontaneously. For skeptics like Maurice, this feature indicates that the design displayed by protein complexes in the cell is apparent, not true, design.

    Maurice added: “Given that nature can make hexagonal lava blocks look ‘designed,’ it can certainly make other objects look ‘designed.’ Design is not a scientific term.”

    Self-Assembly and the Watchmaker Argument

    This is where the MIT engineers’ fascinating work comes into play.

    Engineers continue to make significant progress toward developing self-assembly processes for manufacturing purposes. It very well could be that in the future a number of machines and devices will be designed to self-assemble. Based on the researchers’ work, it becomes evident that part of the strategy for designing machines that self-assemble centers on creating components that not only contribute to the machine’s function, but also precisely interact with the other components so that the machine assembles on its own.

    The operative word here is designed. For machines to self-assemble they must be designed to self-assemble.

    This requirement holds true for biochemical machines, too. The protein subunits that interact to form the biomolecular machines appear to be designed for self-assembly. Protein-protein binding sites on the surface of the subunits mediate this self-assembly process. These binding sites require high-precision interactions to ensure that the binding between subunits takes place with a high degree of accuracy—in the same way that the MIT engineers designed the cell phone pieces to precisely combine through lock-and-key interactions.


    Figure: ATP synthase is a biomolecular motor, literally an electrically powered rotary motor, assembled from protein subunits. Image credit: Shutterstock

    The level of design required to ensure that protein subunits interact precisely to form machine-like protein complexes is only beginning to come into full view.2 Biochemists who work in the area of protein design still don’t fully understand the biophysical mechanisms that dictate the assembly of protein subunits. And, while they can design proteins that will self-assemble, they struggle to replicate the complexity of the self-assembly process that routinely takes place inside the cell.

    Thanks to advances in technology, biomolecular machines’ ability to self-assemble should no longer count against the Watchmaker argument. Instead, self-assembly becomes one more feature that strengthens Paley’s point.

    The Watchmaker Prediction

    Advances in self-assembly also satisfy the Watchmaker prediction, further strengthening the case for a Creator. In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed.

    The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction. As human designers develop new technologies, examples of these technologies, though previously unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker argument truly serves as evidence for a Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

    In effect, the developments in self-assembly technology and its prospective use in future manufacturing operations fulfill the Watchmaker prediction. Along these lines, it’s even more provocative to think that cellular self-assembly processes are providing insight to engineers who are working to develop similar technology.

    Maybe I am a technology junkie, after all. I find it remarkable that as we develop new technologies we discover that they already exist in the cell, and because they do the Watchmaker argument becomes more and more compelling.

    Can you hear me now?

    Resources

    The Biochemical Watchmaker Argument

    Challenges to the Biochemical Watchmaker Argument

    Endnotes
    1. Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science & Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
    2. For example, see Christoffer H. Norn and Ingemar André, “Computational Design of Protein Self-Assembly,” Current Opinion in Structural Biology 39 (August 2016): 39–45, doi:10.1016/j.sbi.2016.04.002.
  • Does Transhumanism Refute Human Exceptionalism? A Response to Peter Clarke

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 03, 2019

    I just finished binge-watching Altered Carbon. Based on the 2002 science fiction novel written by Richard K. Morgan, this Netflix original series is provocative, to say the least.

    Altered Carbon takes place in the future, where humans can store their personalities as digital files in devices called stacks. These disc-like devices are implanted at the top of the spinal column. When people die, their stacks can be removed from their bodies (called sleeves) and stored indefinitely until they are re-sleeved—if and when another body becomes available to them.

    In this world, people who possess extreme wealth can live indefinitely, without ever having to spend any time in storage. Referred to as Meths (after the biblical figure Methuselah, who lived 969 years), the wealthy have the financial resources to secure a continual supply of replacement bodies through cloning. Their wealth also affords them the means to back up their stacks once a day, storing the data in a remote location in case their stacks are destroyed. In effect, Meths use technology to attain a form of immortality.

    Forthcoming Posthuman Reality?

    The world of Altered Carbon is becoming a reality right before our eyes. Thanks to recent advances in biotechnology and bioengineering, the idea of using technology to help people live indefinitely no longer falls under the purview of science fiction. Emerging technologies such as CRISPR-Cas9 gene editing and brain-computer interfaces offer hope to people suffering from debilitating diseases and injuries. They can also be used for human enhancements—extending our physical, intellectual, and psychological capabilities beyond natural biological limits.

    These futuristic possibilities give fuel to a movement known as transhumanism. After residing on the fringes of the academy and culture for several decades, the movement has now gone mainstream, both in the ivory tower and on the street. Sociologist James Hughes describes the transhumanist vision this way in his book Citizen Cyborg:

    “In the twenty-first century the convergence of artificial intelligence, nanotechnology and genetic engineering will allow human beings to achieve things previously imagined only in science fiction. Lifespans will extend well beyond a century. Our senses and cognition will be enhanced. We will gain control over our emotions and memory. We will merge with machines, and machines will become more like humans. These technologies will allow us to evolve into varieties of “posthumans” and usher us into a “transhuman” era and society. . . . Transhuman technologies, technologies that push the boundaries of humanism, can radically improve our quality of life, and . . . we have a fundamental right to use them to control our bodies and minds. But to ensure these benefits we need to democratically regulate these technologies and make them equally available in free societies.”1


    Figure 1: The transhumanism symbol. Image credit: Wikimedia Commons

    In short, transhumanists want us to take control of our own evolution, transforming human beings into posthumans and in the process creating a utopian future that carves out a path to immortality.

    Depending on one’s philosophical or religious perspective, transhumanists’ vision and the prospects of a posthuman reality can bring excitement or concern or a little bit of both. Should we pursue the use of technology to enhance ourselves, transcending the constraints of our biology? What role should these emerging biotechnologies play in shaping our future? What are the boundaries for developing and using these technologies? Should there be any boundaries?2

    All of these questions revolve around a central question: Who are we as human beings?

    Are Humans Exceptional?

    Prior to the rising influence of transhumanism, the answer to this question followed along one of two lines. For people who hold to a Judeo-Christian worldview, human beings are exceptional, standing apart from all other creatures on the planet. Accordingly, our exceptional nature results from the image of God. As image bearers, human beings have infinite worth and value.

    On the other hand, those influenced by the evolutionary paradigm maintain that human beings are nothing more than animals—differing in degree, not kind, from other creatures. In fact, many who hold this view of humanity find the notion of human exceptionalism repugnant. In their view, to elevate the value of human beings above that of other creatures constitutes speciesism and reflects an unjustifiable arrogance.

    And now transhumanism enters into the fray. People on both sides of the controversy about human nature and identity argue that transhumanism brings an end to any notion about human exceptionalism, once and for all.

    One is Peter Clarke. In an article published on the Areo website entitled “Transhumanism and the Death of Human Exceptionalism,” Clarke says:

    “As a philosophical movement, transhumanism advocates for improving humanity through genetic modifications and technological augmentations, based upon the position that there is nothing particularly sacred about the human condition. It acknowledges up front that our bodies and minds are riddled with flaws that not only can but should be fixed. Even more radically, as the name implies, transhumanism embraces the potential of one day moving beyond the human condition, transitioning our sentience into more advanced forms of life, including genetically modified humans, superhuman cyborgs, and immortal digital intelligences.”3

    On the other side of the aisle is Wesley J. Smith of the Discovery Institute. In his article “Transhumanist Bill of Wrongs,” Smith writes:

    “Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines. That requires denigrating natural man as unexceptional to justify our substantial deconstruction and redesign.”4

    In other words, transhumanism highlights the notion that our bodies, minds, and personalities are inherently flawed and we have a moral imperative, proponents say, to correct these flaws. But this view denigrates humanity, opponents say, and with it the notion of human exceptionalism. For Clarke, this nonexceptional perspective is something to be celebrated. For Smith, transhumanism is of utmost concern and must be opposed.

    Evidence of Exceptionalism

    While I am sympathetic to Smith’s concern, I would take a differing perspective. I find that transhumanism provides one of the most powerful pieces of evidence for human exceptionalism—and along with it the image of God.

    In my forthcoming book (coauthored with Ken Samples), Humans 2.0, I write:

    “Ironically, progress in human enhancement technology and the prospects of a posthuman future serve as one of the most powerful arguments for human exceptionalism and, consequently, the image of God. Human beings are the only species that exists—or that has ever existed—that can create technologies to enhance our capabilities beyond our biological limits. We alone work toward effecting our own immortality, take control of evolution, and look to usher in a posthuman world. These possibilities stem from our unique and exceptional capacity to investigate and develop an understanding of nature (including human biology) through science and then turn that insight into technology.”5

    Our ability to carry out the scientific enterprise and develop technology stems from four qualities that a growing number of anthropologists and primatologists think are unique to humans:

    • symbolism
    • open-ended generative capacity
    • theory of mind
    • our capacity to form complex social networks

    From my perspective as a Christian, these qualities stand as scientific descriptors of the image of God.

    As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

    Human capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

    For anthropologists and primatologists who think that human beings differ in kind—not degree—from other animals, these qualities demarcate us from the great apes and Neanderthals. The separation becomes most apparent when we consider the remarkable technological advances we have made during our tenure as a species. Primatologist Thomas Suddendorf puts it this way:

    “We reflect on and argue about our present situation, our history, and our destiny. We envision wonderful harmonious worlds as easily as we do dreadful tyrannies. Our powers are used for good as they are for bad, and we incessantly debate which is which. Our minds have spawned civilizations and technologies that have changed the face of the Earth, while our closest living animal relatives sit unobtrusively in their remaining forests. There appears to be a tremendous gap between human and animal minds.”6

    Moreover, no convincing evidence exists that leads us to think that Neanderthals shared the qualities that make us exceptional. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet our technology has progressed exponentially, while Neanderthal technology remained largely static.

    According to paleoanthropologist Ian Tattersall and linguist Noam Chomsky (and their coauthors):

    “Our species was born in a technologically archaic context, and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language. . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”7

    In other words, the evolution of human technology signifies that there is something special—exceptional—about us as human beings. In this sense, transhumanism highlights our exceptional nature precisely because the prospects for controlling our own evolution stem from our ability to advance technology.

    To be clear, transhumanism poses an existential risk for humanity. Unquestionably, it has the potential to strip human beings of dignity and worth. But, ironically, transhumanism is possible only because we are exceptional as human beings.

    Responsibility as the Crown of Creation

    Ultimately, our exceptional nature demands that we thoughtfully deliberate on how to use emerging biotechnologies to promote human flourishing, while ensuring that no human being is exploited or marginalized by these technologies. It also means that we must preserve our identity as human beings at all costs.

    It is one thing to enjoy contemplating a posthuman future by binge-watching a sci-fi TV series. But, it is another thing altogether to live it out. May we be guided by ethical wisdom to live well.

    Resources

    Endnotes
    1. James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Humans of the Future (Cambridge, MA: Westview Press, 2004), xii.
    2. Ken Samples and I take on these questions and more in our book Humans 2.0, due to be published in July of 2019.
    3. Peter Clarke, “Transhumanism and the Death of Human Exceptionalism,” Areo (March 6, 2019), https://areomagazine.com/2019/03/06/transhumanism-and-the-death-of-human-exceptionalism/.
    4. Wesley J. Smith, “Transhumanist Bill of Wrongs,” Discovery Institute (October 23, 2018), https://www.discovery.org/a/transhumanist-bill-of-wrongs/.
    5. Fazale Rana with Kenneth Samples, Humans 2.0: Scientific, Philosophical, and Theological Perspectives on Transhumanism (Covina, CA: RTB Press, 2019) in press.
    6. Thomas Suddendorf, The Gap: The Science of What Separates Us from Other Animals (New York: Basic Books, 2013), 2.
    7. Johan J. Bolhuis et al., “How Could Language Have Evolved?,” PLoS Biology 12, no. 8 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.
  • Timing of Neanderthals’ Disappearance Makes Art Claims Unlikely

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 27, 2019

    In Latin it literally means, “somewhere else.”

    Legal experts consider an alibi to be one of the most effective legal defenses available in a court of law because it has the potential to prove a defendant’s innocence. It goes without saying: if a defendant has an alibi, it means that he or she was somewhere else when the crime was committed.

    As it turns out, paleoanthropologists have discovered that Neanderthals have an alibi, of sorts. Evidence indicates that they weren’t the ones to scratch up the floor of Gorham’s Cave.

    Based on recent radiocarbon dates measured for samples from Bajondillo Cave (located on the southern part of the Iberian Peninsula—southwest corner of Europe), a research team from the Japan Agency for Marine-Earth Science and Technology and several Spanish institutions determined that modern humans made their way to the southernmost tip of Iberia around 43,000 years ago, displacing Neanderthals.1

    Because Neanderthals disappeared from Iberia at that time, it becomes unlikely that they were responsible for hatch marks (dated to be 39,000 years in age) made on the floor of Gorham's Cave in Gibraltar. These scratches have been interpreted by some paleoanthropologists as evidence that Neanderthals possessed symbolic capabilities.

    But how could Neanderthals have made the hatch marks if they weren’t there? Ladies and gentlemen of the jury: the perfect alibi. Instead, it looks as if modern humans were the culprits who marked up the cave floor.

    blog__inline--timing-of-neanderthals-disappearance-1

    Figure 1: Gorham’s Cave. Image credit: Wikipedia

    The Case for Neanderthal Exceptionalism

    Two of the biggest questions in anthropology today relate to Neanderthals:

    • When did these creatures disappear from Europe?
    • Did they possess symbolic capacity like modern humans, thus putting their cognitive abilities on par with ours as a species?

    For paleoanthropologists, these two questions have become inseparable. With regard to the second question, some paleoanthropologists are convinced that Neanderthals displayed symbolic capabilities.

    It is important to note that the case for Neanderthal symbolism is largely based on correlations between the archaeological and fossil records. Toward this end, some anthropologists have concluded that Neanderthals possessed symbolism because researchers have recovered artifacts (presumably reflecting symbolic capabilities) from the same layers that harbored Neanderthal fossils. Unfortunately, this approach is complicated by other studies showing that the cave layers have been mixed by the caves’ occupants (whether hominid or modern human) or by animals living in the caves. This mixing leads to the accidental association of fossil and archaeological remains. In other words, the mixing of layers raises questions about who actually manufactured these artifacts.

    Because we know modern humans possess the capacity for symbolism, it is much more likely that modern humans, not Neanderthals, made the symbolic artifacts in these instances. Then, only through an upheaval of the cave layers did the artifacts mix with Neanderthal remains. (See the Resources section for articles that elaborate this point.)

    More often than not, archaeological remains are unearthed by themselves with no corresponding fossil specimens. This is why understanding the timing of Neanderthals’ disappearance and modern humans’ arrival in different regions of Europe becomes so important (and why the two questions interrelate). Paleoanthropologists believe that if they can show that Neanderthals lived in a locale at the time symbolic artifacts were produced, then it becomes conceivable that these creatures made the symbolic items. This interpretation increases in plausibility if no modern humans were around at the time.

    Some researchers have argued along these lines regarding the hatch marks found on the floor of Gorham’s Cave.2 The markings were made in the bedrock of the cave floor. The layers above the bedrock date to between 30,000 and 39,000 years in age. Some paleoanthropologists argue that Neanderthals must have made the markings. Why? Because, even though modern humans were already in Europe by that time, these paleoanthropologists think that modern humans had not yet made their way to the southern part of the Iberian Peninsula. These same researchers also think that Neanderthals survived in Iberia until about 32,000 years ago, even though their counterparts in other parts of Europe had already disappeared. So, on this basis, paleoanthropologists conclude that Neanderthals produced the hatch marks and, thus, displayed symbolic capabilities.

    blog__inline--timing-of-neanderthals-disappearance-2

    Figure 2: Hatch marks on the floor of Gorham’s Cave. Image credit: Wikipedia

    When Did Neanderthals Disappear from Iberia?

    But recent work challenges this conclusion. The Spanish and Japanese team took 17 new radiocarbon measurements from layers of the Bajondillo Cave (located in southern Iberia, near Gorham’s Cave) with the hopes of precisely documenting the change in technology from Mousterian (made by Neanderthals) to Aurignacian (made by modern humans). This transition corresponds to the replacement of Neanderthals by modern humans elsewhere in Europe.

    The researchers combined the data from their samples with previous measurements made at the site to pinpoint this transition at around 43,000 years ago—not 32,000 years ago. In other words, modern humans occupied Iberia at the same time they occupied other places in Europe. This result also means that Neanderthals had disappeared from Iberia well before the hatch marks in Gorham’s Cave were made.

    Were Neanderthals Exceptional Like Modern Humans?

    Though claims of Neanderthal exceptionalism abound in the scientific literature and in popular science articles, the claims universally fail to withstand ongoing scientific scrutiny, as this latest discovery attests. Simply put, based on the archaeological record, there are no good reasons to think that Neanderthals displayed symbolism.

    From my perspective, the case for Neanderthal symbolism seems to be driven more by ideology than actual scientific evidence.

    It is also worth noting that comparative studies on Neanderthal and modern human brain structures also lead to the conclusion that humans displayed symbolism and Neanderthals did not. (See the Resources section for articles that describe this work in more detail.)

    Why Does It Matter?

    Questions about Neanderthal symbolic capacity and, hence, exceptionalism have bearing on how we understand human beings. Are human beings unique in our capacity for symbolism or is this quality displayed by other hominins? If humans are not alone in our capacity for symbolism, then we aren’t exceptional. And, if we aren’t exceptional then it becomes untenable to embrace the biblical concept of human beings as God’s image bearers. (As a Christian, I see symbolism as a manifestation of the image of God.)

    But, based on the latest scientific evidence, the verdict is in: modern humans are the only species to display the capacity for symbolism. In this way, scientific advance affirms that humans are exceptional in a way that aligns with the biblical concept of the image of God.

    The Neanderthals’ alibi holds up. They weren’t there, but humans were. Case closed.

    Resources

    Endnotes
    1. Miguel Cortés-Sánchez et al., “An Early Aurignacian Arrival in Southwestern Europe,” Nature Ecology and Evolution 3 (January 21, 2019): 207–12, doi:10.1038/s41559-018-0753-6.
    2. Joaquín Rodríguez-Vidal et al., “A Rock Engraving Made by Neanderthals in Gibraltar,” Proceedings of the National Academy of Sciences USA 111, no. 37 (September 16, 2014): 13301–6, doi:10.1073/pnas.1411529111.
  • Origins of Monogamy Cause Evolutionary Paradigm Breakup

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 20, 2019

    Gregg Allman fronted the Allman Brothers Band for over 40 years until his death in 2017 at the age of 69. Writer Mark Binelli described Allman’s voice as “a beautifully scarred blues howl, old beyond its years.”1

    A rock legend who helped pioneer southern rock, Allman was as well known for his chaotic, dysfunctional personal life as for his accomplishments as a musician. Allman struggled with drug abuse and addiction. He was also married six times, with each marriage ending in divorce and, at times, in a public spectacle.

    In a 2009 interview with Binelli for Rolling Stone, Allman reflected on his failed marriages: “To tell you the truth, it’s my sixth marriage—I’m starting to think it’s me.”2

    Allman isn’t the only one to have trouble with marriage. As it turns out, so do evolutionary biologists—but for different reasons than Gregg Allman.

    To be more exact, evolutionary biologists have made an unexpected discovery about the evolutionary origin of monogamy (a single mate for at least a season) in animals—an insight that raises questions about the evolutionary explanation. Based on recent work headed by a large research team of investigators from the University of Texas (UT), Austin, it looks like monogamy arose independently, multiple times, in animals. And these origin events were driven, in each instance, by the same genetic changes.3

    In my view, this remarkable example of evolutionary convergence highlights one of the many limitations of evolutionary theory. It also contributes to my skepticism (and that of other intelligent design proponents/creationists) about the central claim of the evolutionary paradigm; namely, that the origin, design, history, and diversity of life can be fully explained by evolutionary mechanisms.

    At the same time, the independent origins of monogamy—driven by the same genetic changes—(as well as other examples of convergence) find a ready explanation within a creation model framework.

    Historical Contingency

    To appreciate why I believe this discovery is problematic for the evolutionary paradigm, it is necessary to consider the nature of evolutionary mechanisms. According to the evolutionary biologist Stephen Jay Gould (1941–2002), evolutionary transformations occur in a historically contingent manner.4 This means that the evolutionary process consists of an extended sequence of unpredictable, chance events. If any of these events were altered, it would send evolution down a different trajectory.

    To help clarify this concept, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time. In other words, the evolutionary process should not repeat itself. And rarely should it arrive at the same end point.

    Gould based the concept of historical contingency on his understanding of the mechanisms that drive evolutionary change. Since the time of Gould’s original description of historical contingency, several studies have affirmed his view. (For descriptions of some representative studies, see the articles listed in the Resources section.) In other words, researchers have experimentally shown that the evolutionary process is, indeed, historically contingent.

    A Failed Prediction of the Evolutionary Paradigm

    Given historical contingency, it seems unlikely that distinct evolutionary pathways would lead to identical or nearly identical outcomes. Yet, when viewed from an evolutionary standpoint, it appears as if repeated evolutionary outcomes are a common occurrence throughout life’s history. This phenomenon—referred to as convergence—is widespread. Evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, that identical evolutionary outcomes are a characteristic feature of the biological realm.5 Scientists see these repeated outcomes at the ecological, organismal, biochemical, and genetic levels. In fact, in my book The Cell’s Design, I describe 100 examples of convergence at the biochemical level.

    In other words, biologists have made two contradictory observations within the evolutionary framework: (1) evolutionary processes are historically contingent and (2) evolutionary convergence is widespread. Since the publication of The Cell’s Design, many new examples of convergence have been unearthed, including the recent origin of monogamy discovery.

    Convergent Origins of Monogamy

    Working within the framework of the evolutionary paradigm, the UT research team sought to understand the evolutionary transition to monogamy. To gain this insight, they compared gene expression profiles in the neural tissues of reproductive males for closely related pairs of species, with one species in each pair displaying monogamous behavior and the other nonmonogamous reproduction.

    The species pairs spanned the major vertebrate groups and included mice, voles, songbirds, frogs, and cichlids. From an evolutionary perspective, these organisms would have shared a common ancestor 450 million years ago.

    Monogamous behavior is remarkably complex. It involves the formation of bonds between males and females, care of offspring by both parents, and increased territorial defense. Yet, the researchers discovered that in each instance of monogamy the gene expression profiles in the neural tissues of the monogamous species were identical and distinct from the gene expression patterns for their nonmonogamous counterparts. Specifically, they observed the same differences in gene expression for the same 24 genes. Interestingly, genes that played a role in neural development, cell-cell signaling, synaptic activity, learning and memory, and cognitive function displayed enhanced gene expression. Genes involved in gene transcription and AMPA receptor regulation were down-regulated.
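
    To give a feel for the kind of analysis involved, here is a rough sketch of a cross-pair comparison. The gene names and fold-change values are invented for illustration; the actual study worked with full transcriptome profiles and statistical models.

    ```python
    # Sketch: looking for genes whose expression shifts in the SAME direction
    # in every monogamous species relative to its nonmonogamous counterpart.
    # All gene names and log fold-change values here are hypothetical.

    pairs = {   # log2 fold change, monogamous vs. nonmonogamous, per pair
        "mouse":    {"geneA": +1.2, "geneB": -0.8, "geneC": +0.1},
        "vole":     {"geneA": +0.9, "geneB": -1.1, "geneC": -0.4},
        "songbird": {"geneA": +1.5, "geneB": -0.6, "geneC": +0.3},
    }

    def consistent_genes(pairs: dict) -> dict:
        """Return genes up- or down-regulated in every species pair."""
        genes = next(iter(pairs.values())).keys()
        result = {}
        for gene in genes:
            changes = [profile[gene] for profile in pairs.values()]
            if all(c > 0 for c in changes):
                result[gene] = "up in every monogamous species"
            elif all(c < 0 for c in changes):
                result[gene] = "down in every monogamous species"
        return result

    print(consistent_genes(pairs))
    # {'geneA': 'up in every monogamous species',
    #  'geneB': 'down in every monogamous species'}
    ```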

    So, how do the researchers account for this spectacular example of convergence? They conclude that a “universal transcriptomic mechanism” exists for monogamy and speculate that the gene modules needed for monogamous behavior already existed in the last common ancestor of vertebrates. When needed, these modules were independently recruited at different times in evolutionary history to yield monogamous species.

    Yet, given the number of genes involved and the specific changes in gene expression needed to produce the complex behavior associated with monogamous reproduction, it seems unlikely that this transformation would happen a single time, let alone multiple times, in the exact same way. In fact, Rebecca Young, the lead author of the journal article detailing the UT research team’s work, notes that “Most people wouldn’t expect that across 450 million years, transitions to such complex behaviors would happen the same way every time.”6

    So, is there another way to explain convergence?

    Convergence and the Case for a Creator

    Prior to Darwin (1809–1882), biologists referred to shared biological features found in organisms that cluster into disparate biological groups as analogies. (In an evolutionary framework, analogies are referred to as evolutionary convergences.) They viewed analogous systems as designs conceived by the Creator that were then physically manifested in the biological realm and distributed among unrelated organisms.

    In light of this historical precedent, I interpret convergent features (analogies) as the handiwork of a Divine mind. The repeated origins of biological features equate to repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Thus, the idea of monogamous convergence seems to divorce itself from the evolutionary framework, but it makes for a solid marriage in a creation model framework.

    Resources

    Endnotes
    1. Mark Binelli, “Gregg Allman: The Lost Brother,” Rolling Stone, no. 1082/1083 (July 9–23, 2009), https://www.rollingstone.com/music/music-features/gregg-allman-the-lost-brother-108623/.
    2. Binelli, “Gregg Allman: The Lost Brother.”
    3. Rebecca L. Young et al., “Conserved Transcriptomic Profiles Underpin Monogamy across Vertebrates,” Proceedings of the National Academy of Sciences, USA 116, no. 4 (January 22, 2019): 1331–36, doi:10.1073/pnas.1813775116.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
    6. University of Texas at Austin, “Evolution Used Same Genetic Formula to Turn Animals Monogamous,” ScienceDaily (January 7, 2019), www.sciencedaily.com/releases/2019/01/1901071507.htm.
  • Biochemical Synonyms Restate the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 13, 2019

    Sometimes I just can’t help myself. I know it’s clickbait but I click on the link anyway.

    A few days ago, as a result of momentary weakness, I found myself reading an article from the ScoopWhoop website, “16 Things Most of Us Think Are the Same but Actually Aren’t.”

    OK. OK. Now that you saw the title you want to click on the link, too.

    To save you from wasting five minutes of your life, here is the ScoopWhoop list:

    • Weather and Climate
    • Turtle and Tortoise
    • Jam and Jelly
    • Eraser and Rubber
    • Great Britain and the UK
    • Pill and Tablet
    • Shrimp and Prawn
    • Butter and Margarine
    • Orange and Tangerine
    • Biscuits and Cookies
    • Cupcakes and Muffins
    • Mushrooms and Toadstools
    • Tofu and Paneer
    • Rabbits and Hares
    • Alligators and Crocodiles
    • Rats and Mice

    And there you have it. Not a very impressive list, really.

    If I were putting together a biochemist’s version of this list, I would start with synonymous mutations. Even though many life scientists think they are the same, studies indicate that they “actually aren’t.”

    If you have no idea what I am talking about or what this insight has to do with the creation/evolution debate, let me explain by starting with some background information, beginning with the central dogma of molecular biology and the genetic code.

    Central Dogma of Molecular Biology

    According to this tenet of molecular biology, the information stored in DNA is functionally expressed through the activities of proteins. When it is time for the cell’s machinery to produce a particular protein, it copies the appropriate information from the DNA molecule through a process called transcription and produces a molecule called messenger RNA (mRNA). Once assembled, mRNA migrates to the ribosome, where it directs the synthesis of proteins through a process known as translation.

    blog__inline--biochemical-synonyms-restate-1

    Figure 1: The central dogma of molecular biology. Image credit: Shutterstock
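
    Since the flow of information here is essentially string manipulation, a few lines of Python make it concrete. This minimal sketch covers the transcription step only (the sequence is hypothetical); translation requires the genetic code, which the next section describes.

    ```python
    # Transcription: copying a gene's template strand into messenger RNA.
    # The sequence below is hypothetical, chosen purely for illustration.

    PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

    def transcribe(template: str) -> str:
        """Pair each DNA base of the template strand with its RNA partner."""
        return "".join(PAIRING[base] for base in template)

    print(transcribe("TACCGAAAAATT"))  # -> "AUGGCUUUUUAA"
    ```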

    The Genetic Code

    At first glance, there appears to be a mismatch between the stored information in DNA and the information expressed in proteins. A one-to-one relationship cannot exist between the four different nucleotides that make up DNA and the twenty different amino acids used to assemble proteins. The cell handles this mismatch by using a code comprised of groupings of three nucleotides, called codons, to specify the twenty different amino acids.

     

    blog__inline--biochemical-synonyms-restate-2

    Figure 2: Codons. Image credit: Wikipedia

    The cell uses a set of rules to relate these nucleotide triplet sequences to the twenty amino acids that comprise proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets represent the fundamental units of the genetic code. The code assigns each nucleotide triplet to an amino acid (or to a stop signal that marks the end of a protein-coding sequence). This code is essentially universal among all living organisms.

    Sixty-four codons make up the genetic code. Because the code only needs to encode twenty amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.1

    blog__inline--biochemical-synonyms-restate-3

    Figure 3: The genetic code. Image credit: Shutterstock
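
    Continuing the sketch from the previous section, here is translation using a toy codon table. The table is a small subset of the standard code (the selection is mine), chosen to display the redundancy just described: leucine claims six codons, while methionine and tryptophan get one each.

    ```python
    # Translation with a toy codon table (a small subset of the standard code).
    from collections import Counter

    CODON_TABLE = {
        "AUG": "Met", "GCU": "Ala", "UUU": "Phe", "UAA": "STOP",
        # the full leucine family, to show six-fold redundancy:
        "UUA": "Leu", "UUG": "Leu", "CUU": "Leu",
        "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
        "UGG": "Trp",  # tryptophan: a single codon
    }

    def translate(mrna: str) -> list:
        """Read the mRNA three bases at a time until a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE[mrna[i:i + 3]]
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("AUGGCUUUUUAA"))  # -> ['Met', 'Ala', 'Phe']

    # Count how many codons point at each amino acid:
    print(Counter(CODON_TABLE.values()))
    # Counter({'Leu': 6, 'Met': 1, 'Ala': 1, 'Phe': 1, 'STOP': 1, 'Trp': 1})
    ```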

    A little more background information about mutations will help fill out the picture.

    Mutations

    A mutation refers to any change that takes place in the DNA nucleotide sequence. DNA can experience several different types of mutations. Substitution mutations are one common type. When a substitution mutation occurs, one (or more) of the nucleotides in the DNA strand is replaced by another nucleotide. For example, an A may be replaced by a G, or a C may be replaced by a T. This substitution changes the codon. Interestingly, the genetic code is structured in such a way that when substitution mutations take place, the resulting codon often specifies the same amino acid (due to redundancy) or an amino acid that has similar chemical and physical properties to the amino acid originally encoded.

    Synonymous and Nonsynonymous Mutations

    When substitution mutations generate a new codon that specifies the same amino acid as initially encoded, it’s referred to as a synonymous mutation. However, when a substitution produces a codon that specifies a different amino acid, it’s called a nonsynonymous mutation.

    Nonsynonymous mutations can be deleterious if they affect a critical amino acid or if they significantly alter the chemical and physical profile along the protein chain. If the substituted amino acid possesses dramatically different physicochemical properties from the native amino acid, it may cause the protein to fold improperly. Improper folding impacts the protein’s structure, yielding a biomolecule with reduced or even lost function.
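
    The distinction is mechanical enough to express in a few lines of code. A minimal sketch, using a small fragment of the standard genetic code (the fragment and the example substitutions are my own choices):

    ```python
    # Classifying a single-nucleotide substitution as synonymous or
    # nonsynonymous. The table is a fragment of the standard genetic code.
    CODE = {
        "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",  # serine
        "UUU": "Phe", "UAU": "Tyr", "ACU": "Thr",
    }

    def classify(before: str, after: str) -> str:
        """Compare the amino acids encoded before and after the substitution."""
        if CODE[before] == CODE[after]:
            return "synonymous"
        return f"nonsynonymous ({CODE[before]} -> {CODE[after]})"

    print(classify("UCU", "UCC"))  # third-position change -> synonymous
    print(classify("UCU", "ACU"))  # first-position change -> Ser -> Thr
    ```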

    On the other hand, biochemists have long thought that synonymous mutations have no effect on protein structure and function because these types of mutations don’t change the amino acid sequences of proteins. Even though biochemists regard synonymous mutations as silent—having no functional consequences—evolutionary biologists still put them to work, relying on patterns of synonymous mutations to establish evolutionary relationships.

    Patterns of Synonymous Mutations and the Case for Biological Evolution

    Evolutionary biologists consider shared genetic features found in organisms that naturally group together as compelling evidence for common descent. One feature of particular interest is the identical (or nearly identical) DNA sequence patterns found in genomes. According to this line of reasoning, the shared patterns arose as a result of a series of substitution mutations that occurred in the common ancestor’s genome. Presumably, as the varying evolutionary lineages diverged from the nexus point, they carried with them the altered sequences created by the primordial mutations.

    Synonymous mutations play a significant role in this particular argument for common descent. Because synonymous mutations don’t alter the amino acid sequence of proteins, their effects are considered to be inconsequential. So, when the same (or nearly the same) patterns of synonymous mutations are observed in genomes of organisms that cluster together into the same group, most life scientists interpret them as compelling evidence of the organisms’ common evolutionary history.

    It is conceivable that nonsynonymous mutations, which alter the protein amino acid sequences, may impart some type of benefit and, therefore, shared patterns of nonsynonymous changes could be understood as evidence for shared design. (See the last section of this article.) But this is not the case when it comes to synonymous mutations, which raises the question: Why would a Creator intentionally introduce new codons that code for the same amino acid into genes when these changes have no functional utility?

    Apart from invoking a Creator, the shared patterns of synonymous mutations make perfect sense if genomes have been shaped by evolutionary processes and share an evolutionary history. However, this argument for biological evolution (shared ancestry) and challenge to a creation model interpretation (shared design) hinges on the underlying assumption that synonymous mutations have no functional consequence.

    But what if this assumption no longer holds?

    Synonymous Mutations Are Not Interchangeable

    Biochemists used to think that synonymous mutations had no impact whatsoever on protein structure and, hence, function, but this view is changing thanks to studies such as the one carried out by researchers at University of Colorado, Boulder.2

    These researchers discovered synonymous mutations that increase the translational efficiency of a gene (found in the genome of Salmonella enterica). This gene codes for an enzyme that plays a role in the biosynthetic pathway for the amino acid arginine. (This enzyme also plays a role in the biosynthesis of proline.) They believe that these mutations alter the three-dimensional structure of the DNA sequence near the beginning of the coding portion of the gene. They also think that the synonymous mutations improve the stability of the messenger RNA molecule. Both effects would lead to greater translational efficiency at the ribosome.

    As radical (and unexpected) as this finding may seem to be, it follows on the heels of other recent discoveries that also recognize the functional importance of synonymous mutations.3 Generally speaking, biochemists have discovered that synonymous mutations influence not only the rate and efficiency of translation (as the scientists from the University of Colorado, Boulder learned) but also the folding of proteins after they are produced at the ribosome.

    Even though synonymous mutations leave the amino acid sequence of the protein unchanged, they can exert influence by altering the:

    • regulatory regions of the gene that influence the transcription rate
    • secondary and tertiary structure of messenger RNA that influences the rate of translation
    • stability of messenger RNA that influences the amount of protein produced
    • translation rate that influences the folding of the protein as it exits the ribosome

    Biochemists are just beginning to come to terms with the significance of these discoveries, but it is already clear that synonymous mutations have biomedical consequences.4 They also impact models for molecular evolution. But for now, I want to focus on the impact these discoveries have on the creation/evolution debate.

    Patterns of Synonymous Mutations and the Case for Creation

    As noted, many people consider the most compelling evidence for common descent to be the shared genetic features displayed by organisms that naturally cluster together. But if life is the product of a Creator’s handiwork, the shared genetic features could be understood as shared designs deployed by a Creator. In fact, a historical precedent exists for the common design interpretation. Prior to Darwin, biologists viewed shared biological features as manifestations of archetypical designs that existed in the Creator’s mind.

    But the common design interpretation requires that the shared features be functional. (Or, that they arise independently in a nonrandom manner.) For those who view life from the framework of the evolutionary paradigm, the shared patterns of synonymous mutations invalidate the common design explanation—because these mutations are considered to be functionally insignificant.

    But in the face of mounting evidence for the functional importance of synonymous mutations, this objection to common design has begun to erode. Though many life scientists are quick to dismiss the common design interpretation of biology, advances in molecular biology continue to strengthen this explanation and, with it, the case for a Creator.

    Resources

    Endnotes
    1. As I discuss in The Cell’s Design, the rules of the genetic code and the nature of the redundancy appear to be designed to minimize errors in translating information from DNA into proteins that would occur due to substitution mutations. This optimization stands as evidence for the work of an intelligent Agent.
    2. JohnCarlo Kristofich et al., “Synonymous Mutations Make Dramatic Contributions to Fitness When Growth Is Limited by Weak-Link Enzyme,” PLoS Genetics 14, no. 8 (August 27, 2018): e1007615, doi:10.1371/journal.pgen.1007615.
    3. Here are a few representative studies that ascribe functional significance to synonymous mutations: Anton A. Komar, Thierry Lesnik, and Claude Reiss, “Synonymous Codon Substitutions Affect Ribosome Traffic and Protein Folding during in vitro Translation,” FEBS Letters 462, no. 3 (November 30, 1999): 387–91, doi:10.1016/S0014-5793(99)01566-5; Chung-Jung Tsai et al., “Synonymous Mutations and Ribosome Stalling Can Lead to Altered Folding Pathways and Distinct Minima,” Journal of Molecular Biology 383, no. 2 (November 7, 2008): 281–91, doi:10.1016/j.jmb.2008.08.012; Florian Buhr et al., “Synonymous Codons Direct Cotranslational Folding toward Different Protein Conformations,” Molecular Cell Biology 61, no. 3 (February 4, 2016): 341–51, doi:10.1016/j.molcel.2016.01.008; Chien-Hung Yu et al., “Codon Usage Influences the Local Rate of Translation Elongation to Regulate Co-translational Protein Folding,” Molecular Cell Biology 59, no. 5 (September 3, 2015): 744–55, doi:10.1016/j.molcel.2015.07.018.
    4. Zubin E. Sauna and Chava Kimchi-Sarfaty, “Understanding the Contribution of Synonymous Mutations to Human Disease,” Nature Reviews Genetics 12 (August 31, 2011): 683–91, doi:10.1038/nrg3051.
  • Discovery of Intron Function Interrupts Evolutionary Paradigm

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 06, 2019

    Nobody likes to be interrupted when they are talking. It feels disrespectful and can be frustrating. Interruptions derail the flow of a conversation.

    The editors tell me that I need to interrupt this lead to provide a “tease” for what is to come. So, here goes: Interruptions happen in biochemical systems, too. Life scientists long thought that these interruptions disrupted the flow of biochemical information. But it turns out these interruptions serve an important function, offering a rejoinder to a common argument against intelligent design.

    Now back to the lead.

    Perhaps it is no surprise that some psychologists study interruptions1 with the hope of discovering answers to questions such as:

    • Why do people interrupt?
    • Who is most likely to interrupt?
    • Do we all perceive interruptions in the same way?

    While there is still much to learn about the science of interruptions, psychologists have discovered that men interrupt more often than women. Ironically, men often view women who interrupt as ruder and less intelligent than men who interrupt during conversations.

    Researchers have also found that a person’s cultural background influences the likelihood that he or she will interrupt during a discourse. Personality also plays a role. Some people are more sensitive to pauses in conversation and, therefore, find themselves interrupting more often than those who are comfortable with periods of silence.

    Psychologists have learned that not all interruptions are the same. Some people interrupt because they want the “floor.” These people are called intrusive interrupters. Cooperative interrupters help move the conversation along by agreeing with the speaker and finishing the speaker’s thoughts.

    Interruptions are not confined to conversations. They are a part of life, including the biochemical operations that take place inside the cell.

    In fact, biochemists have discovered that genes, which contain the instructions to build proteins—the workhorse molecules of the cell—experience interruptions in their coding sequences. These intrusive interruptions would disrupt the flow of information in the cell during protein synthesis if the interrupting sequences weren’t removed by the cell’s machinery.

    Molecular biologists have long viewed these genetic “interruptions” (called introns) as serving no useful purpose for the cell, with introns comprising a portion of the junk DNA found in the genomes of eukaryotic organisms. But it turns out that introns—like cooperative interruptions during a conversation—serve a useful purpose, according to the recent work of two independent teams of molecular biologists.

    Introns Are Abundant

    Noncoding regions within genes, introns consist of DNA sequences that interrupt the coding regions (called exons) of a gene. Introns are pervasive in the genomes of eukaryotic organisms. For example, 90 percent of genes in mammals contain introns, with an average of 8 per gene.

    After the information stored in a gene is copied into messenger RNA, the intron sequences are excised, and the exons spliced together by a protein-RNA complex known as a spliceosome.

    blog__inline--discovery-of-intron-function-1

    Figure 1: Drawing of pre-mRNA to mRNA. Image credit: Wikipedia
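
    Computationally, splicing amounts to deleting the intron intervals and joining what remains. A minimal sketch, with a made-up sequence and made-up intron coordinates:

    ```python
    # Splicing a hypothetical pre-mRNA: excise the introns, join the exons.
    # The sequence and intron coordinates below are invented for illustration.

    pre_mrna = "AUGGCU" + "GUAAGUUU" + "CCAGAA" + "GUGCGU" + "UUUUAA"
    introns = [(6, 14), (20, 26)]  # half-open (start, end) intron positions

    def splice(pre_mrna: str, introns: list) -> str:
        """Remove each intron interval and concatenate the remaining exons."""
        exons, cursor = [], 0
        for start, end in sorted(introns):
            exons.append(pre_mrna[cursor:start])
            cursor = end
        exons.append(pre_mrna[cursor:])
        return "".join(exons)

    print(splice(pre_mrna, introns))  # -> "AUGGCUCCAGAAUUUUAA"
    ```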

    Molecular biologists have long wondered why eukaryotic genes would be riddled with introns. Introns seemingly make the structure and expression of eukaryotic genes unnecessarily complicated. What possible purpose could introns serve? Researchers also thought that once the introns were spliced out of the messenger RNA sequences, they were discarded as genetic debris.

    Introns Serve a Functional Purpose

    But recent work by two independent research teams, from Sherbrooke University in Quebec, Canada, and MIT, respectively, indicates that molecular biologists have been wrong about introns. The teams learned that once spliced from messenger RNA, these fragments play a role in helping cells respond to stress.

    Both research teams studied baker’s yeast. One advantage of using yeast as a model organism relates to the relatively small number of introns (295) in its genome.

    blog__inline--discovery-of-intron-function-2

    Figure 2: A depiction of baker’s yeast. Image credit: Shutterstock

    Taking advantage of the limited number of introns in baker’s yeast, the team from Sherbrooke University created hundreds of yeast strains—each one missing just one of its introns. When grown under normal conditions with a ready supply of available nutrients, the strains missing a single intron grew normally—suggesting that introns aren’t of much importance. But when the researchers grew the yeast cells under conditions of food scarcity, the yeast with the deleted introns frequently died.2

    The MIT team observed something similar. They noticed that during the stationary phase of growth (when nutrients become depleted, slowing down growth), introns spliced from RNA accumulated in the growth medium. The researchers deleted the specific introns that they found in the growth medium from the baker’s yeast genome and discovered that the resulting yeast strains struggled to survive under nutrient-poor conditions.3

    At this point, it isn’t clear how introns help cells respond to stress caused by a lack of nutrients, but researchers have some clues. The Sherbrooke University team thinks that the spliced-out introns play a role in repressing the production of proteins that help form ribosomes. These biochemical machines manufacture proteins. Because protein synthesis requires building-block materials and energy, protein production slows down in cells during periods when nutrients are scarce. Ratcheting down protein synthesis impedes cell growth but affords cells a better chance to survive a lack of nutrients. One way cells can achieve this objective is to stop making ribosomes.

    The MIT team thinks that some spliced-out introns interact with spliceosomes, preventing them from splicing out other introns. When this disruption happens, it slows down protein synthesis.

    Both research groups believe that in times when nutrients are abundant, the spliced-out introns are broken down by the cell’s machinery. But when nutrients are scarce, that condition triggers intron accumulation.

    At this juncture, it isn’t clear if the two research teams have uncovered distinct mechanisms that work collaboratively to slow down protein production, or if they are observing facets of the same mechanism. Regardless, it is evident that introns display functional utility. It’s a surprising insight that has important ramifications for our understanding of the structure and function of genomes. This insight has potential biomedical utility and theological implications, as well.

    Intron Function and the Case for Creation

    Scientists who view biology through the lens of the evolutionary paradigm are quick to conclude that the genomes of organisms reflect the outworking of evolutionary history. Their perspective causes them to see the features of genomes, such as introns, as little more than the remnants of an unguided evolutionary process. Within this framework, there is no reason to think that any particular DNA sequence element, including introns, harbors function. In fact, many life scientists regard the “evolutionary vestiges” in the genome as junk DNA. This clearly has been the case for introns.

    Yet, a growing body of data indicates that virtually every category of so-called junk DNA displays function. We can now add introns—cooperative interrupters—to the list. And based on the data on hand, we can make a strong case that most of the sequence elements in genomes possess functional utility.

    Could it be that scientists really don’t understand the biology of genomes? Or maybe we have the wrong paradigm?

    It seems to me that science is in the midst of a revolution in our understanding of genome structure and function. Instead of being a wasteland of evolutionary debris, most of the genome appears to be functional. And the architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

    But what if the genome is viewed from a creation model framework?

    The elegance and sophistication of genomes are features that are increasingly coming into scientific view. And this is precisely what I would expect if genomes were the product of a Mind—the handiwork of a Creator.

    Now that is a discovery worth talking about.

    Resources

    Endnotes
    1. Teal Burrell, “The Science behind Interrupting: Gender, Nationality and Power, and the Roles They Play,” Post Magazine (March 14, 2018), https://www.scmp.com/magazines/post-magazine/long-reads/article/2137023/science-behind-interrupting-gender-nationality; Alex Shashkevich, “Why Do People Interrupt? It Depends on Whom You’re Talking To,” The Guardian (May 18, 2018), https://www.theguardian.com/lifeandstyle/2018/may/18/why-do-people-interrupt-it-depends-on-whom-youre-talking-to.
    2. Julie Parenteau et al., “Introns Are Mediators of Cell Response to Starvation,” Nature 565 (January 16, 2019): 612–17, doi:10.1038/s41586-018-0859-7.
    3. Jeffrey T. Morgan, Gerald R. Fink, and David P. Bartel, “Excised Linear Introns Regulate Growth in Yeast,” Nature 565 (January 16, 2019): 606–11, doi:10.1038/s41586-018-0828-1.
  • Molecular Logic of the Electron Transport Chain Supports Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 27, 2019

    “It was said that some scientists attended the oxidative phosphorylation sessions of the Federation meetings because they knew a good punch up was on the cards.”

    —John Prebble, Department of Biological Sciences, University of London

    It has been described as one of the most “heated and acrimonious debates in biochemistry during the twentieth century,”1 and its resolution carries implications for a different ideological conflict—that of the origin of life.

    This battle royale (dubbed the Ox Phos Wars) took place in the 1960s and early 1970s. At that time, biochemists were trying to decipher the mechanism used by mitochondria to produce the high-energy compound called ATP (adenosine triphosphate) through a process called oxidative phosphorylation (Ox Phos for short). Many components of the cell’s machinery use ATP to power their operations.

    blog__inline--molecular-logic-of-the-electron-transport-chain-1

    Figure 1: A schematic of the synthesis and breakdown cycle of ATP and ADP. Image credit: Shutterstock

    So acrimonious was the debate that scientists involved in this controversy often came close to blows when publicly debating the mechanism of oxidative phosphorylation. Much of the controversy centered around an idea known as the chemiosmotic theory, proposed by biochemist Peter Mitchell. He argued that the electron transport chain generates a proton gradient across the mitochondrial inner membrane and, in turn, exploits that gradient through a coupling process to drive the synthesis of ATP from ADP (adenosine diphosphate) and inorganic phosphate (see figures 1 and 2). (The reverse reaction liberates chemical energy that drives many biochemical processes.)

    blog__inline--molecular-logic-of-the-electron-transport-chain-2

    Figure 2: A schematic of the chemiosmotic theory. Image credit: Shutterstock

    At the time, this idea was met with a large measure of skepticism by biochemists. It didn’t fit with the orthodoxy characteristic of classical biochemistry. Biochemists found Mitchell’s ideas hard to understand and his personality abrasive, both of which fueled the acrimony.

    Origin-of-life researcher Leslie Orgel referred to the chemiosmotic theory as one of the most counterintuitive ideas to ever come out of biology, comparing it to the ideas that formed the foundations of quantum mechanics and relativity.2

    Many biochemists preferred the chemical theory of oxidative phosphorylation over Mitchell’s chemiosmotic theory. Researchers thought that the phosphate group added to ADP was transferred from one of the components of the electron transport chain. In an attempt to support this idea, many biochemists frantically searched for a chemical intermediate with a high-energy phosphate moiety that could power the synthesis of ATP.

    The chemical theory was based on a process called substrate-level phosphorylation, exemplified by two reactions that form ATP during glycolysis. In one reaction, 1,3-diphosphoglycerate transfers one of its phosphate groups to ADP to form ATP. (In this case, 1,3-diphosphoglycerate serves as the intermediate with a high-energy phosphate moiety.) In the second reaction, phosphoenolpyruvate transfers a phosphate group to ADP to make ATP, with phosphoenolpyruvate functioning as the intermediate bearing a high-energy phosphate residue. (See figure 3.)

    As it turns out, the elusive intermediate was never found, forcing adherents of the chemical theory to abandon their model. Peter Mitchell’s idea won the day. In fact, Mitchell was awarded the Nobel Prize in Chemistry in 1978 for his contribution to understanding the mechanism of oxidative phosphorylation.

    Today, biochemists readily recognize the importance of proton gradients and the chemiosmotic process. Proton gradients are pervasive in living systems. Mitochondria are not alone. Chloroplasts rely on proton gradients during the process of photosynthesis. Bacteria and archaea also use proton gradients across their plasma membranes to harvest energy. Cells use proton gradients to transport material across cell membranes. And proton gradients even power the bacterial flagellum.

    Now that oxidative phosphorylation is understood, some evolutionary biologists and origin-of-life researchers have turned their attention to two questions: (1) How did chemiosmosis originate? and (2) Why are proton gradients so central to biochemical operations?

    Oxidative Phosphorylation and the Evolutionary Paradigm

    For many evolutionary biologists, understanding the origin of oxidative phosphorylation (and the use of proton gradients, in general) assumes a position of unique prominence because of the central role this process plays in harvesting energy in both prokaryotic and eukaryotic organisms. In other words, understanding the origin of oxidative phosphorylation (and use of proton gradients) is central to understanding the origin of life and the fundamental design of biochemical systems.

    Because the use of proton gradients in living systems is odd and counterintuitive, it becomes tempting for many origin-of-life researchers and evolutionary biologists to conclude that chemiosmosis reflects the outworking of a historically contingent evolutionary process that relied on existing systems and designs that were co-opted and, in turn, modified. This notion becomes reinforced by the work of origin-of-life researcher Nick Lane.

    Lane and his collaborators conclude that proton gradients must have been integral to the biochemistry of LUCA (the last universal common ancestor) because proton gradients are a near-universal feature of living systems. If so, then the use of proton gradients must have emerged during the origin-of-life process before LUCA originated. Lane and his team go so far as to propose that the first proto-cells emerged near hydrothermal vents and made use of naturally occurring proton gradients found in these environments as their energy source.3

    Once this system was in place, the strategy was retained in the cell lines that diverged from these early proto-cellular entities as the electron transport chain evolved from a simple, naturally occurring vent process to the complex process found in both prokaryotic and eukaryotic organisms. In other words, it would seem that the odd, counterintuitive nature of proton gradients reflects the happenstance outworking of chemical evolution that began when the naturally occurring proton gradients were co-opted in the early stages of chemical evolution.

    But Lane’s recent insight indicates that, though counterintuitive, the use of proton gradients to harvest the energy required to make ATP makes sense, displaying an exquisite molecular rationale.4 And if so, it forces a rethink of the explanation for the origin of chemiosmosis. To appreciate this shift in perspective, it is helpful to understand the process of oxidative phosphorylation, beginning with glycolysis and the Kreb’s cycle.

    Glycolysis and the Krebs Cycle

    The glycolytic pathway converts the fuel molecule glucose (a 6-carbon sugar) into two pyruvate molecules (3-carbon). This process proceeds through eleven chemical intermediates and nets 2 molecules of ATP (generated through substrate-level phosphorylation) and two molecules of NADH (nicotinamide adenine dinucleotide). NADH harbors high-energy electrons generated from the energy liberated from the breakdown and oxidation of glucose. As it turns out, the NADH molecules play a central role in generating most of the ATP produced when a sugar molecule breaks down.

    blog__inline--molecular-logic-of-the-electron-transport-chain-3

    Figure 3: Glycolysis. Image credit: Shutterstock

    The pyruvate generated by glycolysis is transported across the mitochondrial inner membrane into the matrix of the organelle. Here pyruvate is transformed into a molecule of carbon dioxide and a 2-carbon intermediate called acetyl CoA. This process generates 2 additional molecules of NADH.

    In turn, the Krebs cycle converts each acetyl CoA molecule into two molecules of carbon dioxide. (The net reaction: a 6-carbon glucose molecule breaks down into 6 carbon dioxide molecules.) During the process, the breakdown of each acetyl CoA molecule generates 1 ATP molecule (via substrate-level phosphorylation) and 3 molecules of NADH. Additionally, 1 molecule of FADH2 is formed. Like NADH, this molecule possesses high-energy electrons. (See figure 4.)

    blog__inline--molecular-logic-of-the-electron-transport-chain-4

    Figure 4: Krebs cycle. Image credit: Shutterstock
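
    To keep the bookkeeping from the last few paragraphs straight, here is a per-glucose tally of the yields described above, expressed as a short Python sketch (substrate-level ATP only; the far larger ATP return from NADH and FADH2 comes later, via the electron transport chain):

    ```python
    # Per-glucose bookkeeping for the steps described above.
    # One glucose -> two pyruvate -> two acetyl CoA -> two Krebs cycle turns.

    tally = {"ATP (substrate-level)": 0, "NADH": 0, "FADH2": 0}

    # Glycolysis: glucose -> 2 pyruvate
    tally["ATP (substrate-level)"] += 2
    tally["NADH"] += 2

    # Pyruvate -> acetyl CoA (twice, once per pyruvate)
    tally["NADH"] += 2

    # Krebs cycle (twice, once per acetyl CoA):
    # 1 ATP + 3 NADH + 1 FADH2 per turn
    tally["ATP (substrate-level)"] += 2 * 1
    tally["NADH"] += 2 * 3
    tally["FADH2"] += 2 * 1

    print(tally)
    # {'ATP (substrate-level)': 4, 'NADH': 10, 'FADH2': 2}
    ```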

    The Electron Transport Chain and Oxidative Phosphorylation

    The high-energy electrons of NADH and FADH2 are transferred to the electron transport chain, which is embedded in the inner membrane.

    Four protein complexes (dubbed I, II, III, and IV) make up the electron transport chain. The high-energy electrons from NADH and FADH2 are shuffled from one protein complex to the next, with each transfer releasing energy that is used to transport protons from the mitochondrial matrix across the inner membrane, establishing the proton gradient. (See figure 5.) Oxygen is the final electron acceptor in the electron transport chain. The electrons transferred to oxygen lead to the formation of a water molecule.

    Because protons are positively charged, the exterior region outside the inner membrane is positively charged and the interior region is negatively charged. The charge differential created by the proton gradient is analogous to a battery and the inner membrane is like a capacitor.

    blog__inline--molecular-logic-of-the-electron-transport-chain-5

    Figure 5: Electron Transport Chain. Image credit: Shutterstock
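
    The battery analogy can even be put in rough numbers. The proton-motive force combines the membrane voltage with the pH difference across the membrane. Here is a back-of-the-envelope sketch using typical textbook values; the exact figures vary by organism and condition.

    ```python
    # Back-of-the-envelope proton-motive force for a mitochondrion.
    # Working with magnitudes and typical textbook figures, not measurements;
    # both the voltage and the pH difference pull protons inward, so their
    # contributions add.

    Z = 59.0                        # 2.303*R*T/F in millivolts, near 25 C
    membrane_potential_mv = 160.0   # charge separation across inner membrane
    delta_ph = 0.75                 # matrix is more alkaline than the exterior

    pmf_mv = membrane_potential_mv + Z * delta_ph
    print(f"proton-motive force ~ {pmf_mv:.0f} mV")  # ~ 204 mV
    ```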

    The coupling of the proton gradient to ATP synthesis occurs as a result of the flow of positively charged protons through the F0 component of a protein complex called F1-F0ATPase (also embedded in the mitochondrial inner membrane). F1-F0ATPase uses this flux to convert electrochemical energy into mechanical energy that, in turn, is used to drive the formation of ATP from ADP and inorganic phosphate.

    The Molecular Logic of Proton Gradients

    So, why are chemiosmosis and proton gradients universal features of living systems? Are they an outworking of a historically contingent evolutionary process? Or is there something more at work?

    Even though proton gradients seem counterintuitive at first glance, the use of proton gradients to power the production of ATP and other cellular processes reflects an underlying ingenuity and exquisite molecular logic. Research shows that proton gradients allow the cell to efficiently extract as much energy as possible from the breakdown of glucose (and other biochemical foodstuffs).5 On the other hand, if ATP was produced exclusively by substrate-level phosphorylation, using a high-energy chemical intermediate, much of the energy liberated from the breakdown of glucose would be lost as heat.

    To understand why this is so, consider this analogy. Suppose people in a particular community receive their daily allotment of water in a 10-gallon bucket. The water they receive each day is retrieved from a reservoir with a 12-gallon bucket and then transferred to their bucket. In the process, two gallons of water are lost. Now, suppose the water from the reservoir is retrieved with a 12-gallon bucket but dumped into a secondary reservoir that has a tap. The tap allows each 10-gallon bucket to be filled without losing two gallons. Though the procedure is indirect and more complicated, using the secondary reservoir to distribute water is more efficient in the long run. In the first scenario, it takes 60 gallons of water (transferred from the reservoir in five 12-gallon buckets) to fill up five 10-gallon buckets. In the second scenario, the same amount of water transferred from the reservoir can fill six 10-gallon buckets. With each transfer, the additional two gallons accumulate in the reservoir until there is enough water to fill another 10-gallon bucket.

    With substrate-level phosphorylation, when the phosphate group is transferred from the high-energy intermediate to ADP to form ATP, excess energy released during the transfer is lost as heat. It takes 7 kcal/mole of energy to add a phosphate group to ADP to form ATP. Let’s say that the hypothetical chemical intermediate releases 10 kcal/mole when its high-energy phosphate bond is broken. Three kcal/mole of energy is lost.

    On the other hand, using the electron transport chain to build up a proton gradient is like the reservoir in our analogy. It allows that extra three kcal/mole to be stored in the proton gradient. We can think of the F1-F0ATPase as analogous to the tap. It uses 7 kcal/mole of energy released when protons flow through its channels to drive the formation of ATP from ADP and inorganic phosphate. The unused energy from the proton gradient continues to accumulate until enough energy is available to form another ATP molecule. So, in our hypothetical scenario, if the cell used substrate-level phosphorylation to make ATP, 70 molecules of the high-energy intermediate would yield 70 molecules of ATP with 210 kcal/mole of energy released as heat. But, using the electron transport chain to generate proton gradients yields 100 ATP molecules with no energy lost as heat.
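
    The arithmetic in the last two paragraphs can be checked directly. The sketch below simply re-runs the numbers, keeping in mind that the 10 kcal/mole intermediate is hypothetical:

    ```python
    # Comparing the two strategies with the hypothetical numbers used above:
    # each high-energy intermediate releases 10 kcal/mole; forming ATP costs 7.

    ENERGY_PER_INTERMEDIATE = 10.0  # kcal/mole, hypothetical
    ENERGY_PER_ATP = 7.0            # kcal/mole needed to phosphorylate ADP
    n_intermediates = 70

    # Substrate-level phosphorylation: one ATP per intermediate; the excess
    # energy from each transfer is lost as heat.
    atp_direct = n_intermediates
    heat = n_intermediates * (ENERGY_PER_INTERMEDIATE - ENERGY_PER_ATP)

    # Chemiosmosis: all the energy is banked in the gradient, then drawn down
    # in 7 kcal/mole increments by the F1-F0 ATPase.
    total_energy = n_intermediates * ENERGY_PER_INTERMEDIATE
    atp_gradient = int(total_energy / ENERGY_PER_ATP)

    print(atp_direct, heat)  # 70 ATP, 210 kcal/mole lost as heat
    print(atp_gradient)      # 100 ATP, nothing lost as heat
    ```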

    Chemiosmotic Theory and the Case for Creation

    The elegant molecular rationale that undergirds the use of proton gradients to harvest energy and to power certain cellular processes makes it unlikely that this biochemical feature reflects the outcome of a historically contingent process that just happened upon proton gradients. Instead, it points to a set of principles that underlie the structure and function of biochemical systems—principles that appear to have been set in place from the beginning of the universe.

    The most obvious and direct way for the first protocells to harvest energy would seemingly involve some type of mechanism that resembled substrate-level phosphorylation, not an indirect and more complicated mechanism that relies on proton gradients. If the origin of chemiosmosis and the use of proton gradients was, indeed, a historically contingent outcome—predicated on the fact that the first protocells just happened to employ a natural proton gradient—it seems almost eerie to think that evolutionary processes blindly stumbled upon what would later become such an elegant and efficient energy-harvesting process. And a process necessary for advanced life to be possible on Earth.

    If not for chemiosmosis, it is unlikely that eukaryotic cells and, hence, complex life such as animals, plants, and fungi, could have ever existed. Substrate-level phosphorylation just isn’t efficient enough to support the energy demands of eukaryotic organisms.

    It is also difficult to imagine how the natural proton gradients exploited by the first protocells could have been co-opted and then evolved so quickly into the complex components of the electron transport chain and F1-F0ATPase coupling mechanism found in cells that preceded LUCA. Not only are the components of the electron transport chain complex, but they have to work together in an integrated manner to establish the proton gradient across mitochondrial membranes (and the plasma membranes of bacteria and archaea). Without the existence of the F1-F0ATPase (or some other mechanism) to couple proton gradients to the synthesis of ATP, the generation of proton gradients would be for naught. The origins of the electron transport chain and the F1-F0ATPase have to coincide.

    On the other hand, the ingenious use of proton gradients and the elegant molecular logic that accounts for their universal use by living systems are exactly the features I would expect if life stems from the work of a Mind. Moreover, the architecture and operation of complex I and F1-F0ATPase add to the case for creation. These two complexes are molecular motors that bear a startling similarity to man-made machines, revitalizing the Watchmaker argument for God’s existence.

    As noted, the use of proton gradients points to a set of deep, underlying principles that arise from the very nature of the universe itself and dictate how life must be. The molecular rationale that undergirds the use of proton gradients and their near-universal occurrence in living organisms suggests that proton gradients are an indispensable feature of living organisms. In other words, without the use of proton gradients to harvest energy and drive cellular processes, advanced life would not be possible. Or another way to say it: if life was discovered elsewhere in the universe, it would have to employ proton gradients to harvest energy.

    It is remarkable to think that proton gradients, which are a manifestation of the laws of nature, are, at the same time, precisely the type of system advanced life needs to exist. One way to interpret this “coincidence” is that it serves as evidence that our universe has been designed for a purpose.

    And as a Christian, I find that notion to resonate powerfully with the idea that life manifests from an intelligent Agent—namely, God.

    Resources

    Endnotes
    1. John Prebble, “Peter Mitchell and the Ox Phos Wars,” Trends in Biochemical Sciences 27 (April 1, 2002): 209–12, doi:10.1016/S0968-0004(02)02059-5.
    2. Leslie E. Orgel, “Are You Serious, Dr. Mitchell?” Nature 402 (November 4, 1999): 17, doi:10.1038/46903.
    3. Nick Lane, John F. Allen, and William Martin, “How Did LUCA Make a Living? Chemiosmosis in the Origin of Life,” Bioessays 32 (2010): 271–80, doi:10.1002/bies.200900131.
    4. Nick Lane, “Why Are Cells Powered by Proton Gradients?” Nature Education 3 (2010): 18.
    5. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6 (2014): a015982, doi:10.1101/cshperspect.a015982.
  • Electron Transport Chain Protein Complexes Rev Up the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 06, 2019

    As a little kid, I spent many a Saturday afternoon “helping” my dad work on our family car. What a clunker.

    We didn’t have a garage, so we parked our car on the street in front of the house. Our home was built into a hillside and the only way to get to our house was to climb a long flight of stairs from the street.

    I wasn’t very old at the time—maybe 6 or 7—so my job was to serve as my dad’s gofer. Instead of asking me to carry his toolbox up and down the flight of stairs, he would send me back and forth when he needed a particular tool. It usually went like this: “Fuz, go get me a screwdriver.” Up and down the stairs I would go. And, when I returned: “That’s the wrong screwdriver. Get me the one with the flat head.” Again, after I returned from another roundtrip on the stairs: “No, the one with the flat head and the blue handle.” Up and down the stairs I went, but again: “Why did you bring all of the screwdrivers? Take the rest of them back up the stairs and put them in the toolbox.” By the time he finished working on our car I was frustrated and exhausted.

    Even though I didn’t have a lot of fun helping my dad, I did enjoy peering under the hood of our car. I was fascinated by the engine. From my vantage point as a little kid, the car’s engine seemed to be bewilderingly complex. And somehow my dad knew what to do to make the car run. Clearly, he understood how it was designed and assembled.

    As a graduate student, when I began studying biochemistry in earnest, I was taken aback by the bewildering complexity of the cell’s chemical systems. Like an automobile engine, the cell’s complexity isn’t haphazard, but instead displays a remarkable degree of order and organization. There is an underlying ingenuity to the way biochemical systems are put together and the way they operate. And, for the most part, biochemists have acquired a good understanding of how these systems are designed.

    Along these lines, one of the most remarkable and provocative insights into biochemical systems has been the discovery of protein complexes that serve the cell as molecular-scale machines and motors—many of which bear an eerie similarity to man-made machines. Two recent studies illustrate this stunning similarity by revealing new information about the structure and function of two protein complexes that are part of the electron transport chain: the F1-F0 ATPase and respiratory complex I. These ubiquitous protein complexes are two of the most important enzymes in biology because of the central role they play in energy-harvesting reactions.

    F1-F0 ATPase

    This well-studied protein complex plays a key role in harvesting energy for the cell to use. F1-F0 ATPase is a molecular-scale rotary motor (see figure 1). The F1 portion of the complex is mushroom-shaped and extends above the membrane’s surface. The “button” of the mushroom corresponds to an engine turbine. This turbine interacts with the part of the complex that looks like a “mushroom stalk,” a stalk-like component that functions as a rotor.


    blog__inline--electron-transport-chain-protein-complexes-1

    Figure 1: A cartoon of the F1-F0 ATPase rotary motor. Image credit: Reasons to Believe

    Located in the inner membrane of mitochondria, F1-F0 ATPase makes use of a proton gradient across the inner membrane to drive the production of ATP (adenosine triphosphate), a high-energy compound used by the cell to power many of its operations. Because protons are positively charged, the exterior region outside the inner membrane is positively charged and the interior region is negatively charged. The charge differential created by the proton gradient is analogous to a battery and the inner membrane is like a capacitor.
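
    To put a rough number on the battery analogy, the strength of this “battery” can be estimated with a back-of-the-envelope calculation of the proton-motive force. The figures below are representative textbook values, assumed for illustration rather than taken from any study discussed here:

    ```python
    # Back-of-the-envelope estimate of the proton-motive force (PMF), the
    # "battery voltage" across the inner mitochondrial membrane.
    # All numbers are representative textbook values, assumed for illustration.

    F = 96485.0   # Faraday constant, C/mol
    R = 8.314     # gas constant, J/(mol*K)
    T = 310.0     # physiological temperature, K

    delta_psi = 0.160  # electrical potential across the membrane, V (~160 mV)
    delta_pH = 0.6     # matrix is ~0.6 pH units more alkaline than the exterior

    # 2.303*RT/F converts one pH unit into an equivalent voltage (~61 mV at 37 C).
    z = 2.303 * R * T / F

    # In mitochondria both terms pull protons inward, so the magnitudes add.
    pmf = delta_psi + z * delta_pH      # volts
    energy = F * pmf / 1000.0           # free energy per mole of protons, kJ/mol

    print(f"PMF ~ {pmf * 1000:.0f} mV; ~{energy:.0f} kJ per mole of protons")
    # -> roughly 200 mV and ~19 kJ/mol, in line with textbook estimates
    ```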

    The flow of positively charged hydrogen ions through the F0 component, embedded in the inner membrane, drives the rotation of the rotor. A rod-shaped protein structure that also extends above the membrane surface serves as a stator. This protein rod interacts with the turbine, holding it stationary as the rotor rotates.

    The proton current that flows through the channels of the F0 complex is transformed into mechanical energy that drives the rotor’s movement. A cam that extends at a right angle from the rotor’s surface causes displacements in the turbine, and these back-and-forth conformational changes are used to produce ATP.

    Even though biochemists have learned a lot about this protein complex, they still don’t understand some things. Recently, a team of collaborators from the US determined the path that protons take as they move through the F0 component embedded in the inner membrane.1

    To accomplish this feat, the research team trapped the enzyme complex in a single conformation by fusing the stator to the rotor. This procedure exposed the channels in the F0 complex and revealed the precise path protons take as they move across the inner membrane. As protons shuttle through these channels, they trigger conformational changes that drive the rotor’s stepwise rotation, with each proton advancing the rotor by a single subunit of its ring.
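
    The energetic bookkeeping implied by this rotary mechanism can be sketched with simple arithmetic. The ring size and number of catalytic sites below are commonly cited figures for the yeast enzyme, used here as assumptions rather than as results from the study itself:

    ```python
    # Rough stoichiometry of the rotary mechanism. The values are commonly
    # cited figures for yeast ATP synthase, assumed for illustration.

    subunits_per_ring = 10   # protons required for one full 360-degree turn
    atp_per_rotation = 3     # one ATP per catalytic site per full turn

    protons_per_atp = subunits_per_ring / atp_per_rotation
    print(f"~{protons_per_atp:.1f} protons consumed per ATP synthesized")
    # -> ~3.3, which is why a steady proton current is needed to keep ATP flowing
    ```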

    Respiratory Complex I

    Respiratory complex I serves as the first enzyme complex of the electron transport chain. This complex transfers high-energy electrons from a compound called nicotinamide adenine dinucleotide (NADH) to a small molecule associated with the inner membrane of mitochondria called coenzyme Q. The high-energy electrons of NADH are captured during glycolysis and the Krebs cycle, two metabolic pathways involved in the breakdown of the sugar glucose.

    During the electron-transfer process, respiratory complex I also transports four protons from the mitochondria’s interior across the inner membrane to the exterior space (figure 2). In other words, respiratory complex I helps to generate the proton gradient F1-F0 ATPase uses to generate ATP. By some estimates, respiratory complex I is responsible for establishing about 40 percent of the proton gradient across the inner membrane.
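
    The 40 percent figure is easy to reconstruct from commonly cited proton-pumping stoichiometries. The values below are textbook estimates, assumed for illustration:

    ```python
    # Commonly cited protons pumped per NADH oxidized (textbook estimates,
    # assumed for illustration).
    protons_pumped = {
        "complex I": 4,
        "complex III": 4,
        "complex IV": 2,
    }

    total = sum(protons_pumped.values())
    share = protons_pumped["complex I"] / total
    print(f"Complex I contributes {share:.0%} of the proton gradient per NADH")
    # -> 40%
    ```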

    blog__inline--electron-transport-chain-protein-complexes-2

    Figure 2: A cartoon of the electron transport chain. Image credit: Shutterstock

    Respiratory complex I is massive, comprising 45 individual protein subunits. The subunits interact to form two arms, one embedded in the inner membrane and one extending into the mitochondrial matrix. The two arms are arranged in an L-shaped geometry.

    blog__inline--electron-transport-chain-protein-complexes-3

    Figure 3: A cartoon of respiratory complex I. Image credit: Wikipedia

    The electron transfer process occurs in the peripheral arm that extends into the mitochondrial matrix (upward in figure 3). The proton transport mechanism, in contrast, takes place in the membrane-embedded arm (to the right).

    The mechanism of proton translocation across the inner membrane served as the focus of a study conducted by a research team from Oxford University in the UK.2 These researchers discovered that proton transport across the inner membrane is driven by the machine-like behavior of respiratory complex I. The process of transferring electrons through the peripheral arm results in conformational changes (changes in shape) in this part of the complex. This conformational change drives the motion of an alpha-helix cylinder like a piston in the membrane arm of the complex. The pumping motion of the alpha-helix causes three other cylinders to tilt and, in doing so, opens up channels for protons to move through the membrane arm of the complex.

    Revitalized Watchmaker Argument

    Biochemists’ discovery of enzymes with machine-like domains, as exemplified by F1-F0 ATPase and respiratory complex I, revitalizes the Watchmaker argument. Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so, too, does life require a Creator.

    This simple yet powerful analogy has been challenged by skeptics such as David Hume, who argued that a conclusion reached by analogical reasoning is only compelling if there is a high degree of similarity between the objects that form the analogy. Skeptics have long maintained that nature and a watch are too dissimilar for the conclusion drawn from the Watchmaker argument to be sound.

    The discovery of enzymes with domains that are direct analogs to man-made devices addresses this concern. It is provocative that the more we learn about enzyme complexes such as F1-F0 ATPase, the more apparent their machine-like character becomes. It is equally thought-provoking that, as biochemists study the structure and function of other protein complexes, new analogs to man-made machines keep emerging. In both respects, the Watchmaker argument receives new vitality.

    As a little kid, peering under the hood of our family car and watching my father work on the engine convinced me that some really smart people who knew what they were doing designed and built that machine. In like manner, the remarkable machine-like properties displayed by many protein complexes in the cell make it rational to conclude that life comes from the work of a Mind.

    Resources

    Endnotes
    1. Anurag P. Srivastava et al., “High-Resolution Cryo-EM Analysis of the Yeast ATP Synthase in a Lipid Membrane,” Science 360, no. 6389 (May 11, 2018), doi:10.1126/science.aas9699.
    2. Rouslan G. Efremov et al., “The Architecture of Respiratory Complex I,” Nature 465 (May 27, 2010): 441–45, doi:10.1038/nature09066.
  • Early Cave Art Supports the Image of God

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 30, 2019

    The J. Paul Getty Museum in Los Angeles houses one of my favorite paintings: Édouard Manet’s The Rue Mosnier with Flags.

    blog__inline--early-cave-art-supports-the-image-of-god-1

    Figure 1: The Getty Center. Image credit: Shutterstock

    This masterpiece depicts the Rue Mosnier (an urban street) as seen from the window of Manet’s art studio on June 30, 1878, a national holiday in France. Flags line both sides of the Rue Mosnier, decorations that are part of the day’s celebration.

    Photos of this piece of art simply don’t do it justice. When viewing the original in person, the bright colors of the flags—the whites, blues, and reds—leap off the canvas. And yet, there is an element of darkness to the painting. Meant to be viewed from left to right, the first thing the viewer sees in the corner of the painting is a disabled veteran struggling to make his way up the street. The flags on the left side of the street—though brilliantly colored—hang limp. Yet, as the viewer’s gaze moves down and across the street, the flags are depicted as flapping in the breeze. The focal point of the painting is found near the center of the piece, where two women in brilliantly white dresses disembark from a carriage.

    blog__inline--early-cave-art-supports-the-image-of-god-2

    Figure 2: The Rue Mosnier with Flags by Édouard Manet. Image credit: WikiArt

    Some scholars believe that through The Rue Mosnier with Flags, Manet was portraying the inequities of French society in his day. Others see the painting as communicating a sense of optimism and hope for the future as France recovered from the Franco-Prussian War (1870–71).

    Art and Symbolic Capacity

    For many people, our ability to create and contemplate art serves as a defining feature of humanity—a quality that reflects our capacity for sophisticated cognitive processes. Art is a manifestation of symbolism. Through art (as well as music and language), we express and communicate complex ideas and emotions. We accomplish this feat by representing the world—and even ideas—with symbols. And, we can manipulate symbols, embedding one within the other to create alternate possibilities.

    Because artwork reflects the capacity for symbolism and open-ended generative capacity, it has become the focal point of some very big questions in anthropology. The earliest humans produced impressive artistic displays on the cave walls of Europe dating to around 40,000 years ago—the time when humans first made their way to this part of the world.

    But when did art first appear? Did it arise after humans made their way into Europe? Did it arise in Africa before humanity began to migrate around the world? Did art emerge suddenly? Did it appear gradually? Is artistic expression unique to human beings, or did other hominins, such as Neanderthals, produce art? The answers to these questions have important implications for how we understand humanity’s origin and place in the cosmos.

    As a Christian, I view these questions as vitally important for establishing the credibility of the biblical accounts of human origins and the biblical perspective on human beings. I believe that our capacity to make art is a manifestation of the image of God. As such, the appearance of art (as well as other artifacts that reflect our capacity for symbolism) serves as a diagnostic in the archaeological record for the image of God. The archaeological record provides the means to characterize the mode and tempo of the appearance of behavior that reflects the image of God. If the biblical account of human origins is true, then I would expect artistic expression to be unique to modern humans and to appear at the same time that we made our first appearance as a species.

    So, is artistic expression unique to modern humans? This question has generated quite a bit of controversy. Some scientific evidence indicates that Neanderthals displayed the capacity for artistic expression (and hence, the capacity for symbolism). On the other hand, a number of studies question Neanderthal capacity for art (and, consequently, symbolism). In fact, when taken as a whole, the evidence indicates that Neanderthals were cognitively inferior to modern humans. (For more details, check out the articles listed in the Resources section.)

    When did artistic expression in humans appear? Some evidence indicates that artistic expression appeared well after anatomically modern humans first appeared. To put it another way: there is evidence that anatomically modern humans appeared before behaviorally modern humans.

    On the other hand, the most recent evidence indicates that the capacity for symbolism and advanced cognition appeared much earlier than many anthropologists thought. And that time of origin is close to the time that anatomically modern humans made their first appearance, as three recent discoveries attest.

    Oldest Animal Drawings in Asia

    Cave art in Europe has been well known and carefully investigated by archaeologists and anthropologists for nearly a century. This work gives the impression that artistic capacity appeared only after anatomically modern humans made their way into Europe—about 100,000 years after anatomically modern humans appeared on Earth. To say it another way, it suggests that anatomically modern humans appeared before our advanced cognitive abilities did.

    Yet, in recent years archaeologists have gained access to a growing archaeological record in Asia—and characterizing these archaeological sites changes everything. In 2014, a large team of collaborators from Australia dated hand stencils discovered on the walls of a cave in Sulawesi, Indonesia, to between 35,000 and 40,000 years in age.1 Originally discovered in the 1950s, this artwork was initially dated to be about 10,000 years old. The team redated the art using a newly developed technique that measures the age of calcite deposits—left behind by water flowing down the cave walls—overlaying the art. (Trace amounts of radioactive uranium and thorium isotopes associated with the calcite can be used to date the mineral deposits and, thus, provide a minimum age for the artwork.)2
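
    For readers curious about the math behind the calcite dating, here is a deliberately simplified sketch. It assumes the calcite initially contained no thorium and ignores the corrections (for detrital thorium and uranium-isotope disequilibrium) that real U-series studies apply, so treat it as an illustration only:

    ```python
    import math

    # Simplified uranium-thorium dating: 230Th grows into freshly deposited,
    # thorium-free calcite as the uranium it contains decays. Real studies
    # correct for detrital thorium and uranium-isotope disequilibrium;
    # this sketch does not.

    TH230_HALF_LIFE = 75_584.0              # years, approximate
    LAM = math.log(2) / TH230_HALF_LIFE     # decay constant of 230Th

    def age_from_activity_ratio(th_u_ratio: float) -> float:
        """Age in years from a measured (230Th/238U) activity ratio."""
        # Ingrowth toward equilibrium: ratio = 1 - exp(-LAM * t)
        return -math.log(1.0 - th_u_ratio) / LAM

    # A calcite crust with an activity ratio of ~0.31 dates to roughly
    # 40,000 years, a minimum age for the art beneath it.
    print(f"{age_from_activity_ratio(0.31):,.0f} years")
    ```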

    blog__inline--early-cave-art-supports-the-image-of-god-3

    Figure 3: A modern-day re-creation of hand stencils found in the caves of Europe and Asia. Image credit: Shutterstock

    Recently, this same team applied the same technique to animal drawings in the caves of Borneo, dating one drawing to a minimum age of 40,000 years. They also dated hand stencils from this cave to between 37,000 and 52,000 years in age.3

    The hand stencils and animal drawings reflect the same quality and character as the cave art found in Europe. And they are older. This discovery means that modern humans most likely had the capacity to make art before they began migrating out of Africa and around the world.

    Oldest Abstract Drawings in Africa

    A recent discovery by a team of anthropologists and archaeologists from the University of the Witwatersrand in South Africa also supports the notion that artistic expression emerged prior to the migration of humans around the world.4 These researchers recovered a silcrete flake from a layer in the Blombos Cave that dates to about 73,000 years in age. (The Blombos Cave is located in the coastal area of South Africa, around 150 miles east of Cape Town.)

    This silcrete flake looks as if it was broken off from a grindstone used to turn ochre into a powder. The silcrete flake has a crosshatch pattern on it that appears to have been intentionally drawn using an ochre crayon. Because the crosshatch markings end abruptly, it looks like they were part of a large, abstract drawing made on the grindstone. When the researchers tried to reproduce the pattern in the lab, it required a steady hand and a determined effort.

    The Blombos Cave has previously yielded other artifacts that evince the capacity for symbolism. Additionally, the crosshatch symbol has been found etched into ochre and ostrich eggshells from other sites in South Africa. But this recent find represents the first and oldest example of the symbol having been drawn on an artifact’s surface.

    Additional Evidence for Advanced Cognition

    In addition to art, anthropologists believe that another diagnostic of cognitive complexity in humans is the manufacture and use of specialized bone tools. In 2012, researchers unearthed a bone knife in a cave near the Atlantic coastline of Morocco. To manufacture this knife, modern humans had to remove a rib from a herbivore and then cut it in half, lengthwise. The toolmakers then had to scrape and chip away at the bone to give it a knife-like shape. Anthropologists believe that the manufacture of bone tools, such as this rib knife, reflects the capacity for strategic planning for future survival.

    Recently, an international team of researchers provided a detailed characterization of this tool and dated it to be around 90,000 years in age.5 This insight indicates that the capacity for advanced cognition existed (at least minimally) around 90,000 years ago.

    A Convergence of Evidence

    These recent findings signify that advanced cognitive ability, including the capacity to make art, originated close to the same time that anatomically modern humans first appear in the fossil record. In fact (as I have written about earlier), linguist Shigeru Miyagawa believes that artistic expression emerged in Africa earlier than 125,000 years ago. Archaeologists have discovered rock art produced by the San people that dates to 72,000 years ago. This art shares certain elements with the cave art found in Europe. Because the San diverged from the modern human lineage around 125,000 years ago, the ancestral people groups that gave rise to both lines must have possessed the capacity for artistic expression before that time.

    It is also significant that the globular brain shape of modern humans first appears in the fossil record around 130,000 years ago. As I have written previously, globular brain shape allows for the expansion of the parietal lobe, which is responsible for these capacities:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for throwing spears and making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    In other words, the archaeological and fossil records increasingly indicate that anatomically and behaviorally modern humans emerged at the same time, as predicted by the biblical creation accounts.

    And, while these first humans didn’t have the luxury of spending an afternoon in an art museum contemplating artistic masterpieces, they displayed the image of God by producing art that, for them, apparently had profound meaning.

    Resources

    Endnotes
    1. M. Aubert et al., “Pleistocene Cave Art from Sulawesi, Indonesia,” Nature 514 (October 9, 2014): 223–27, doi:10.1038/nature13422.
    2. It should be noted that the dating method used by these researchers has been criticized by a number of different research teams as potentially yielding artificially high ages. Aware of this concern, the team deliberately took steps to ensure that the sampling of the art and the application of the dating method took the technique’s limitations into account.
    3. M. Aubert et al., “Palaeolithic Cave Art in Borneo,” Nature 564 (November 7, 2018): 254–57, doi:10.1038/s41586-018-0679-9.
    4. Christopher S. Henshilwood et al., “An Abstract Drawing from the 73,000-Year-Old Levels at Blombos Cave, South Africa,” Nature 562 (2018): 115–18, doi:10.1038/s41586-018-0514-3.
    5. Abdeljalil Bouzouggar et al., “90,000 Year-Old Specialised Bone Technology in the Aterian Middle Stone Age of North Africa,” PLoS ONE 13 (October 3, 2018): e0202021, doi:10.1371/journal.pone.0202021.
  • Does Animal Planning Undermine the Image of God?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 23, 2019

    A few years ago, we had an all-white English Bulldog named Archie. He would lumber toward even complete strangers, eager to befriend them and earn their affections. And people happily obliged this playful pup.

    Archie wasn’t just an adorable dog. He was also well trained. We taught him to ring a bell hanging from a sliding glass door in our kitchen so he could let us know when he wanted to go out. He rarely would ring the bell. Instead, he would just sit by the door and wait . . . unless the neighbor’s cat was in the backyard. Then, Archie would repeatedly bang on the bell with great urgency. He had to get the cat at all costs. Clearly, he understood the bell’s purpose. He just chose to use it for his own devices.

    Anyone who has owned a cat or dog knows that these animals do remarkable things. Animals truly are intelligent creatures.

    But there are some people who go so far as to argue that animal intelligence is much more like human intelligence than we might initially believe. They base this claim, in part, on a handful of high-profile studies that indicate that some animals such as great apes and ravens can problem-solve and even plan for the future—behaviors that make them like us in some important ways.

    Great Apes Plan for the Future

    In 2006, two German anthropologists conducted a set of experiments on bonobos and orangutans in captivity that seemingly demonstrated that these creatures can plan for the future. Specifically, the test subjects selected, transported, and saved tools for use 1 hour and 14 hours later, respectively.1

    To begin the study, the researchers trained both bonobos and orangutans to use a tool to get a reward from an apparatus. In the first experiment, the researchers blocked access to the apparatus. They laid out eight tools for the apes to select—two were suitable for the task and six were unsuitable. After selecting the tools, the apes were ushered into another room where they were kept for 1 hour. The apes were then allowed back into the room and granted access to the apparatus. To gain the reward, the apes had to select the correct tool and transport it to and from the waiting area. The anthropologists observed that the apes successfully obtained the reward in 70 percent of the trials by selecting and hanging on to the correct tool as they moved from room to room.

    In the second experiment, the delay between tool selection and access to the apparatus was extended to 14 hours. This experiment focused on a single female individual. Instead of taking the test subject to the waiting room, the researchers took her to a sleeping room one floor above the waiting room before returning her to the room with the apparatus. She selected and held on to the tool for 14 hours while she moved from room to room in 11 of the 12 trials—each time successfully obtaining the reward.

    On the basis of this study, the researchers concluded that great apes have the ability to plan for the future. They also argued that this ability emerged in the common ancestor of humans and great apes around 14 million years ago. So, even though we like to think of planning for the future as one of the “most formidable human cognitive achievements,”2 it doesn’t appear to be unique to human beings.

    Ravens Plan for the Future

    In 2017, two researchers from Lund University in Sweden demonstrated that ravens are capable of flexible planning just like the great apes.3 These cognitive scientists conducted a series of experiments with ravens, demonstrating that the large black birds can plan for future events and exert self-control for up to 17 hours prior to using a tool or bartering with humans for a reward. (Self-control is crucial for successfully planning for the future.)

    The researchers taught ravens to use a tool to gain a reward from an apparatus. As part of the training phase, the test subjects also learned that other objects wouldn’t work on the apparatus.

    In the first experiment, the ravens were exposed to the apparatus without access to tools. As such, they couldn’t gain the reward. Then the researchers removed the apparatus. One hour later, the ravens were taken to a different location and offered tools. Then, the researchers presented them with the apparatus 15 minutes later. On average, the raven test subjects selected and used tools to gain the reward in approximately 80 percent of the trials.

    In the next experiment, the ravens were trained to barter by exchanging a token for a food reward. After the training, the ravens were taken to a different location and presented with a tray containing the token and three distractor objects by a researcher who had no history of bartering with the ravens. As with the results of the tool selection experiment, the ravens selected and used the token to successfully barter for food in approximately 80 percent of the trials.

    When the scientists modified the experimental design to increase the time delay from 15 minutes to 17 hours between tool or token selection and access to the reward, the ravens successfully completed the task in nearly 90 percent of the trials.

    Next, the researchers wanted to determine if the ravens could exercise self-control as part of their planning for the future. First, they presented the ravens with trays that contained a small food reward. Of course, all of the ravens took the reward. Next, the researchers offered the ravens trays that held the small food reward alongside either a token or a tool, plus distractor items. By selecting the token or the tool, the ravens were guaranteed a larger food reward in the future. The researchers observed that the ravens selected the tool in 75 percent of the trials and the token in about 70 percent, instead of taking the small morsel of food. After selecting the tool or token, the ravens were given the opportunity to receive the reward about 15 minutes later.

    The researchers concluded that, like the great apes, ravens can plan for the future. Moreover, these researchers argue that this insight opens up greater possibilities for animal cognition because, from an evolutionary perspective, ravens are regarded as avian dinosaurs. And mammals (including the great apes) are thought to have shared an evolutionary ancestor with dinosaurs 320 million years ago.

    Are Humans Exceptional?

    In light of these studies (and others like them), it becomes difficult to maintain that human beings are exceptional. Self-control and the ability to flexibly plan for future events are considered by many to be cornerstones of human cognition. Planning for the future requires mental representation of temporally distant events, the ability to set aside current sensory inputs in favor of unobservable future events, and an understanding of which current actions will achieve a future goal.

    For many Christians, such as me, the loss of human exceptionalism is concerning because if this idea is untenable, so, too, is the biblical view of human nature. According to Scripture, human beings stand apart from all other creatures because we bear God’s image. And, because every human being possesses the image of God, every human being has intrinsic worth and value. But if, in essence, human beings are no different from animals, it is challenging to maintain that we are the crown of creation, as Scripture teaches.

    Yet recent work by biologist Johan Lind of Stockholm University (Sweden) indicates that the results of these two studies, and others like them, may be misleading. When properly interpreted, these studies pose no threat to human exceptionalism. According to Lind, animals can produce behavior that resembles flexible planning through a simpler process: associative learning.4 If so, this insight preserves the case for human exceptionalism and the image of God, because it means that only humans engage in genuine flexible planning for the future through higher-order cognitive processes.

    Associative Learning and Planning for the Future

    Lind points out that researchers working in artificial intelligence (AI) have long known that associative learning can produce complex behaviors in AI systems that give the appearance of having the capacity for planning. (Associative learning is the process that animals [and AI systems] use to establish an association between two stimuli or events, usually by the use of punishments or rewards.)

    blog__inline--does-animal-planning-undermine-the-image-of-god

    Figure 1: An illustration of associative learning in dogs. Image credit: Shutterstock

    Lind wonders why researchers studying animal cognition ignore the work in AI. Applying the insights from the work on AI systems, Lind developed mathematical models based on associative learning and used them to simulate the results of the studies on the great apes and ravens. He discovered that associative learning reproduced the same behaviors the two research teams observed in the great apes and ravens. In other words, planning-like behavior can emerge through associative learning alone. The same kinds of processes that allow AI systems to beat humans at chess can, through associative learning, account for the planning-like behavior of animals.
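
    To see how associative learning can masquerade as planning, consider a toy value-learning agent. This is emphatically not Lind’s published model—just a minimal stand-in, with invented reward values, that captures the spirit of his argument:

    ```python
    import random

    # A toy associative-learning agent. The only machinery is a
    # Rescorla-Wagner-style update that nudges an item's value toward the
    # reward it predicts. Values and trial counts are invented for illustration.

    ALPHA = 0.2  # learning rate
    values = {"tool": 0.0, "stone": 0.0, "morsel": 0.3}  # morsel = small treat

    def update(item: str, reward: float) -> None:
        """Move the item's learned value a step toward the received reward."""
        values[item] += ALPHA * (reward - values[item])

    # Training: only the tool ever works on the apparatus, so the tool itself
    # acquires value (secondary reinforcement); the useless stone acquires none.
    for _ in range(100):
        item = random.choice(["tool", "stone"])
        update(item, 1.0 if item == "tool" else 0.0)

    # Test: offered a tool or an immediate morsel, the agent simply picks the
    # stimulus with the higher learned value. No mental time travel is needed,
    # yet the choice looks like self-controlled planning for the future.
    choice = max(["tool", "morsel"], key=values.get)
    print("learned values:", {k: round(v, 2) for k, v in values.items()})
    print("agent chooses:", choice)  # -> "tool" once its value exceeds 0.3
    ```

    The point of the sketch is that the “plan” falls out of nothing more than learned stimulus values—which is precisely Lind’s challenge to the planning interpretation.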

    The results of Lind’s simulations mean that it is most likely that animals “plan” for the future in ways that are entirely different from humans. In effect, the planning-like behavior of animals is an outworking of associative learning. On the other hand, humans uniquely engage in bona fide flexible planning through advanced cognitive processes such as mental time travel, among others.

    Humans Are Exceptional

    Even though the idea of human exceptionalism is continually under assault, it remains intact, as the latest work by Johan Lind illustrates. When the entire body of evidence is carefully weighed, there really is only one reasonable conclusion: Human beings uniquely possess advanced cognitive abilities that make possible our capacity for symbolism, open-ended generative capacity, theory of mind, and complex social interactions—scientific descriptors of the image of God.

    Resources

    Endnotes
    1. Nicholas J. Mulcahy and Josep Call, “Apes Save Tools for Future Use,” Science 312 (May 19, 2006): 1038–40, doi:10.1126/science.1125456.
    2. Mulcahy and Call, “Apes Save Tools for Future Use.”
    3. Can Kabadayi and Mathias Osvath, “Ravens Parallel Great Apes in Flexible Planning for Tool-Use and Bartering,” Science 357 (July 14, 2017): 202–4, doi:10.1126/science.aam8138.
    4. Johan Lind, “What Can Associative Learning Do for Planning?” Royal Society Open Science 5 (November 28, 2018): 180778, doi:10.1098/rsos.180778.
  • Prebiotic Chemistry and the Hand of God

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 16, 2019

    “Many of the experiments designed to explain one or other step in the origin of life are either of tenuous relevance to any believable prebiotic setting or involve an experimental rig in which the hand of the researcher becomes for all intents and purposes the hand of God.”

    Simon Conway Morris, Life’s Solution

    If you could time travel, would you? Would you travel to the past or the future?

    If asked this question, I bet many origin-of-life researchers would want to travel to the time in Earth’s history when life originated. Given the many scientifically impenetrable mysteries surrounding life’s genesis, I am certain many of the scientists working on these problems would love to see firsthand how life got its start.

    It is true that origin-of-life researchers have some access to the origin-of-life process through the fossil and geochemical records of the oldest rock formations on Earth. Yet this evidence affords them only a glimpse through the glass, dimly.

    Because of these limitations, origin-of-life researchers have to carry out most of their work in laboratory settings, where they try to replicate the myriad steps they think contributed to the origin-of-life process. Pioneered by the late Stanley Miller in the early 1950s, this approach—dubbed prebiotic chemistry—has become a scientific subdiscipline in its own right.

    blog__inline--prebiotic-chemistry-and-the-hand-of-god-1

    Figure 1: Chemist Stanley Miller, circa 1980. Image credit: Wikipedia

    Prebiotic Chemistry

    In effect, the goals of prebiotic chemistry are threefold.

    • Proof of principle. The objective of these types of experiments is to determine—in principle—if a chemical or physical process that could potentially contribute to one or more steps in the origin-of-life pathway even exists.
    • Mechanism studies. Once processes have been identified that could contribute to the emergence of life, researchers study them in detail to get at the mechanisms undergirding these physicochemical transformations.
    • Geochemical relevance. Perhaps the most important goal of prebiotic studies is to establish the geochemical relevance of the physicochemical processes believed to have played a role in life’s start. In other words, how well do the chemical and physical processes identified and studied in the laboratory translate to early Earth’s conditions?

    Without question, over the last six to seven decades, origin-of-life researchers have been wildly successful with respect to the first two objectives. It is safe to say that origin-of-life investigators have demonstrated that—in principle—the chemical and physical processes needed to generate life through chemical evolutionary pathways exist.

    But when it comes to the third objective, origin-of-life researchers have experienced frustration—and, arguably, failure.

    Researcher Intervention and Prebiotic Chemistry

    In an ideal world, humans would not intervene at all in any prebiotic study. But this ideal isn’t possible. Researchers involve themselves in the experimental design out of necessity, but also to ensure that the results of the study are reproducible and interpretable. If researchers don’t set up the experimental apparatus, adjust the starting conditions, add the appropriate reactants, and analyze the product, then by definition the experiment would never happen. Utilizing carefully controlled conditions and chemically pure reagents is necessary for reproducibility and to make sense of the results. In fact, this level of control is essential for proof-of-principle and mechanistic prebiotic studies—and perfectly acceptable.

    However, when it comes to prebiotic chemistry’s third goal, geochemical relevance, the highly controlled conditions of the laboratory become a liability, and researcher intervention becomes potentially unwarranted. It goes without saying that the conditions of early Earth were uncontrolled and chemically and physically complex. Chemically pristine and physically controlled conditions didn’t exist. And, of course, origin-of-life researchers weren’t present to oversee the processes and guide them to their desired end. Yet it is rare for prebiotic simulation studies to take the actual conditions of early Earth fully into account in the experimental design. It is rarer still for origin-of-life investigators to acknowledge this limitation.

    blog__inline--prebiotic-chemistry-and-the-hand-of-god-2

    Figure 2: Laboratory technician. Image credit: Shutterstock

    This complication means that prebiotic studies designed to simulate processes on early Earth seldom accomplish anything of the sort, owing to excessive researcher intervention. Yet it isn’t always clear, when examining an experimental design, whether researcher involvement is legitimate or unwarranted.

    As I point out in my book Creating Life in the Lab (Baker, 2011), one main reason for the lack of progress relates to the researcher’s role in the experimental design—a role not often recognized when experimental results are reported. Origin-of-life investigator Clemens Richert from the University of Stuttgart in Germany now acknowledges this very concern in a recent comment piece published by Nature Communications.1

    As Richert points out, the role of researcher intervention and a clear assessment of geochemical relevance are rarely acknowledged or properly explored in prebiotic simulation studies. To remedy this problem, Richert calls for origin-of-life investigators to do three things when they report the results of prebiotic studies (a sketch of what such reporting might look like follows the list).

    • State explicitly the number of instances in which researchers engaged in manual intervention.
    • Describe precisely the prebiotic scenario a particular prebiotic simulation study seeks to model.
    • Reduce the number of steps involving manual intervention in whatever way possible.
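
    To make these recommendations concrete, one could imagine prebiotic papers disclosing something like the following structured summary. This template is hypothetical—my own illustration, not anything Richert proposes:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # A hypothetical disclosure template inspired by Richert's three
    # recommendations. All field names and example details are invented.

    @dataclass
    class PrebioticExperimentReport:
        scenario_modeled: str          # the prebiotic setting the study claims to model
        manual_interventions: int      # how many times the researcher's hands intervened
        intervention_steps: List[str] = field(default_factory=list)

        def summary(self) -> str:
            steps = ", ".join(self.intervention_steps) or "none listed"
            return (f"Scenario: {self.scenario_modeled}; "
                    f"{self.manual_interventions} manual interventions ({steps})")

    # Example with invented details:
    report = PrebioticExperimentReport(
        scenario_modeled="alkaline hydrothermal vent, simulated in a flow cell",
        manual_interventions=3,
        intervention_steps=["adjusted pH", "added activated nucleotides",
                            "isolated products by chromatography"],
    )
    print(report.summary())
    ```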

    Still, as Richert points out, it is not possible to provide a quantitative measure (a score) of geochemical relevance. Hence, there will always be legitimate disagreement about the geochemical relevance of any prebiotic experiment.

    Yet Richert’s commentary represents an important first step toward encouraging more realistic prebiotic simulation studies and a more cautious approach to interpreting their results. Hopefully, it will also lead to a more circumspect assessment of how much these types of studies contribute to accounting for the various steps in the origin-of-life process.

    Researcher Intervention and the Hand of God

    One concern not addressed in Richert’s commentary is the fastidiousness of many of the physicochemical transformations origin-of-life researchers deem central to chemical evolution. As I discuss in Creating Life in the Lab, mechanistic studies indicate that these processes often depend on exacting conditions. To put it another way, these processes take place—even under the most ideal laboratory conditions—only because of human intervention. As a corollary, these processes would be unproductive on early Earth. They often require chemically pristine conditions, unrealistically high concentrations of reactants, a carefully controlled order of addition, and carefully regulated temperature, pH, and salinity.

    As Richert states, “It’s not easy to see what replaced the flasks, pipettes, and stir bars of a chemistry lab during prebiotic evolution, let alone the hands of the chemist who performed the manipulations. (And yes, most of us are not comfortable with the idea of divine intervention.)”2

    Sadly, since I made the point about researcher intervention nearly a decade ago, it has often been ignored, dismissed, and even ridiculed by many in the scientific community—simply because I have the temerity to think that a Creator brought life into existence.

    Even though Richert and his many colleagues in the origin-of-life research community do whatever they can to eschew a Creator’s role in the origin of life, could it be that abiogenesis (life from nonlife) required the hand of God—divine intervention?

    I would argue that this conclusion follows from nearly seven decades of work in prebiotic chemistry and the consistent demonstration of the central role origin-of-life researchers play in the success of prebiotic simulation studies. It is becoming increasingly evident, to whoever is willing to “see,” that the hand of the researcher serves as the analog for the hand of God.

    Resources

    Endnotes
    1. Clemens Richert, “Prebiotic Chemistry and Human Intervention,” Nature Communications 9 (December 12, 2018): 5177, doi:10.1038/s41467-018-07219-5.
    2. Richert, “Prebiotic Chemistry and Human Intervention.”
