Where Science and Faith Converge
  • Meteorite Protein Discovery: Does It Validate Chemical Evolution?

    Jun 10, 2020

    “I’ll toss my coins in the fountain,

    Look for clovers in grassy lawns

    Search for shooting stars in the night

    Cross my fingers and dream on.”

    —Tracy Chapman

    Like most little kids, I was regaled with tales about genies and wizards who used their magical powers to grant people the desires of their heart. And for a time, I was obsessed with finding some way to make my own wishes become a reality, too. I blew on dandelions, hunted for four-leaf clovers, tried to catch fairy insects, and looked for shooting stars in the night sky. Unfortunately, nothing worked.

    But, that didn’t mean that I gave up on my hopes and dreams. In time, I realized that sometimes my imagination outpaced reality.

    I still have hopes and dreams today. Hopefully, they are more realistic than the ones I held to in my youth. I even have hopes and dreams about what I might accomplish as a scientist. All scientists do. It’s part of what drives us. Scientists like to solve problems and extend the frontiers of knowledge. And, they hope that they will make discoveries that do that very thing, even if their hopes sometimes outpace reality.

    Recently, a team of biochemists turned to a meteorite—a small piece of a shooting star—with the hope that their dream of finding meaningful insights into the evolutionary origin-of-life question would be realized. Using state-of-the-art analytical methods, the Harvard University researchers uncovered the first-ever evidence for proteins in meteorites.1 Their work is exemplary—science at its best. These biochemists view this discovery as offering an important clue to the chemical evolutionary origin of life. Yet, a careful analysis of their claims leads to the nagging doubt that origin-of-life researchers really aren’t any closer to understanding the origin of life and realizing their dream.

    Meteorites and the Origin of Life

    Origin-of-life researchers have long turned to meteorites for insight into the chemical evolutionary processes they believe spawned life on Earth. It makes sense. Meteorites represent a sampling of the materials that formed during the time our solar system came together and, therefore, provide a window into the physical and chemical processes that shaped the earliest stages of our solar system’s history and would have played a potential role in the origin of life.

    One group of meteorites that origin-of-life researchers find to be most valuable toward this end are carbonaceous chondrites. Some classes of carbonaceous chondrites contain relatively high levels of organic compounds that formed from materials that existed in our early solar system. Many of these meteorites have undergone chemical and physical alterations since the time of their formation. Because of this metamorphosis, these meteorites offer clues about the types of prebiotic chemical processes that could have reasonably transpired on early Earth. However, they don’t give a clear picture of what the chemical and physical environment of the early solar system was like.

    Fortunately, researchers have discovered a unique type of carbonaceous chondrite: the CV3 class. These meteorites have escaped metamorphosis, undergoing virtually no physical or chemical alterations since they formed. For this reason, these meteorites prove to be exceptionally valuable because they provide a pristine, unadulterated view of the nascent solar system.

    The Discovery of Proteins in Meteorites

    Origin-of-life investigators have catalogued a large inventory of organic compounds from carbonaceous chondrites, including some of the building blocks of life, such as amino acids, the constituents of proteins. Even though amino acids have been recovered from meteorites, there have been no reports of amino acid polymers (protein-like materials) in meteorites—at least until the Harvard team began their work.

    Figure: Reaction of Amino Acids to Form Proteins. Credit: Shutterstock

    The team’s pursuit of proteins in meteorites started in 2014 when they carried out a theoretical study that indicated to them that amino acids could polymerize to form protein-like materials in the gas nebulae that condense to form solar systems.2 In an attempt to provide experimental support for this claim, the research team analyzed two CV3 class carbonaceous chondrites: the Allende and Acfer 086 meteorites.

    Instead of extracting these meteorites for 24 hours with water at 100°C (which is the usual approach taken by origin-of-life investigators), the research team adopted a different strategy. They reasoned that the protein-like materials that would form from amino acids in gaseous nebulae would be hydrophobic. (Hydrophobic materials are water-repellent materials that are insoluble in aqueous systems.) These types of materials wouldn’t be extracted by hot water. Alternatively, these hydrophobic protein-like substances would be susceptible to breaking down into their constituent amino acids (through a process called hydrolysis) under the standard extraction method. Either way, the protein-like materials would escape detection.

    So, the researchers employed a Folch extraction at room temperature. This technique is designed to extract materials with a range of solubility properties while avoiding hydrolytic reactions. Using this approach, the Harvard researchers were able to detect evidence for amino acid polymers consisting of glycine and hydroxyglycine in extracts taken from the two meteorites.3

    In their latest work, the research team performed a detailed structural characterization of the amino acid polymers from the Acfer 086 meteorite, thanks to access to a state-of-the-art mass spectrometer capable of analyzing low levels of materials in the meteorite extracts.

    The Harvard scientists determined that a distribution of amino acid polymer species existed in the meteorite sample. The most prominent one was a duplex formed from two protein-like chains, each 16 amino acids in length and composed of glycine and hydroxyglycine residues. They also detected lithium ions associated with some of the hydroxyglycine subunits. Bound to both ends of the duplex was an unusual iron oxide moiety formed from two atoms of iron and three oxygen atoms. Lithium atoms were also associated with the iron oxide moiety.

    Researchers are confident that this protein-like material—which they dub hemolithin—is not due to terrestrial contamination for two reasons. First, hydroxyglycine is a non-protein amino acid. Second, the protein duplex is enriched in deuterium—a signature that indicates it stems from an extraterrestrial source. In fact, the deuterium enrichment is so excessive that the researchers think the polymer may have formed in the gas nebula before it condensed to form our solar system.
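
    To get a feel for the scale of a molecule like this, here is a back-of-the-envelope mass estimate. The residue masses are standard chemistry; the even 8/8 split between glycine and hydroxyglycine is an illustrative assumption, not a composition reported by the researchers.

```python
# Rough mass estimate for a hemolithin-like amino acid polymer.
# Residue masses are standard values; the residue counts are
# illustrative assumptions, not figures from the McGeoch preprint.

GLYCINE_RESIDUE = 57.02        # Da, glycine residue (C2H3NO) in a peptide chain
HYDROXYGLYCINE_RESIDUE = 73.02 # Da, glycine residue plus one oxygen
WATER = 18.01                  # Da, added once per free chain for its termini

def chain_mass(n_gly, n_hyg):
    """Mass (Da) of one linear chain with the given residue counts."""
    return n_gly * GLYCINE_RESIDUE + n_hyg * HYDROXYGLYCINE_RESIDUE + WATER

# A duplex of two 16-residue chains, assumed half glycine, half hydroxyglycine:
duplex = 2 * chain_mass(8, 8)
print(f"duplex mass ~ {duplex:.0f} Da")
```

    Even before accounting for the iron oxide caps and the lithium, a 16-residue duplex lands in the low thousands of daltons, which helps explain why detecting such trace species demands highly sensitive mass spectrometry.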

    Origin-of-Life Implications

    If these results stand, they represent an important scientific milestone—the first-ever protein-like material recovered from an extraterrestrial source. A dream come true for the Harvard scientists. Beyond this acclaim, origin-of-life researchers view this work as having important implications for the origin-of-life question.

    For starters, this work affirms that chemical complexification can take place in prebiotic settings, providing support for chemical evolution. The Harvard scientists also speculate that the iron oxide complex at the ends of the amino acid polymer chains could serve as an energy source for prebiotic chemistry. This complex can absorb photons of light and, in turn, use that absorbed energy to drive chemical processes, such as cleaving water molecules.

    More importantly, this work indicates that amino acids can form and polymerize in gaseous nebulae prior to the time that these structures collapse and condense into solar systems. In other words, this work suggests that prebiotic chemistry may have been well under way before Earth formed. If so, it means that prebiotic materials could have been endogenous to (produced within) the solar system, forming an inventory of building block materials that could have jump-started the chemical evolutionary process. Alternatively, the formation of prebiotic materials prior to solar system formation opens up the possibility that these critical compounds for the origin of life didn’t have to form on early Earth. Instead, prebiotic compounds could have been delivered to the early Earth by asteroids and comets—again, contributing to the early Earth’s cache of prebiotic substances.

    Does the Protein-in-Meteorite Discovery Evince Chemical Evolution?

    In many respects, the discovery of protein species in carbonaceous chondrites is not surprising. If amino acids are present in meteorites (or gaseous nebulae), it stands to reason that, under certain conditions, these materials will react to form amino acid polymers. But, even so, a protein-like material made up of glycine and hydroxyglycine residues has questionable biochemical utility, and this singular compound is a far cry from the minimal biochemical complexity needed for life. Chemical evolutionary processes must traverse a long road to move from the simplest amino acid building blocks (and the polymers formed from these compounds) to a minimal cell.

    More importantly, it is questionable if the amino acid polymers in carbonaceous chondrites (or in gaseous nebulae) made much of a contribution to the inventory of prebiotic materials on early Earth. Detection and characterization of the amino acid polymer in the Acfer 086 meteorite was only possible thanks to cutting-edge analytical instrumentation (the mass spectrometer) with the capability to detect and characterize low levels of materials. This requirement means that proteins found in the Acfer 086 meteorite samples must exist at relatively low levels. Once delivered to the early Earth, these materials would have been further diluted to even lower levels as they were introduced into the environment. In other words, these compounds most likely would have melded into the chemical background of early Earth, making little, if any, contribution to chemical evolution. And once the amino acid polymers dissolved into the early Earth’s oceans, a significant proportion may well have undergone hydrolysis (decomposition) into constituent amino acids.

    Earth’s geological record affirms my assessment of the research team’s claims. Geochemical evidence from the oldest rock formations on Earth, dating to around 3.8 billion years ago, makes it clear that neither endogenous organic materials nor prebiotic materials delivered to early Earth via comets and asteroids (including amino acids and protein-like materials) made any contribution to the prebiotic inventory of early Earth. If these materials did add to the prebiotic store, the carbonaceous deposits in the oldest rocks on Earth would display a carbon-13 and deuterium enrichment. But they don’t. Instead, these deposits display a carbon-13 and deuterium depletion, indicating that these carbonaceous materials result from biological activity, not extraterrestrial mechanisms.
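
    The depletion-versus-enrichment argument above rests on the delta notation geochemists use to report isotope ratios. A minimal sketch of the standard carbon-13 calculation follows; the sample ratio is hypothetical, chosen only to show how a 13C-depleted sample yields a negative delta value.

```python
# Delta notation for carbon isotope ratios, relative to the VPDB standard.
# Negative delta-13C means the sample is depleted in 13C relative to the
# standard, the signature typically associated with biological carbon.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    """delta-13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1) * 1000.0

# Hypothetical sample ratio for a biologically processed carbon deposit:
depleted = delta13C(0.0109)
print(f"delta-13C = {depleted:.1f} per mil")  # negative => 13C-depleted
```

    A deuterium enrichment calculation works the same way against the VSMOW hydrogen standard, which is why enrichment versus depletion in these two isotopes can distinguish extraterrestrial from biological carbonaceous material.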

    So, even though the Harvard investigators accomplished an important milestone in origin-of-life research, the scientific community’s dream of finding a chemical evolutionary pathway to the origin of life remains unfulfilled.

    Endnotes
    1. Malcolm W. McGeoch, Sergei Dikler, and Julie E. M. McGeoch, “Hemolithin: A Meteoritic Protein Containing Iron and Lithium,” (February 22, 2020), preprint, https://arxiv.org/abs/2002.11688.
    2. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide as an Early Topology,” PLoS ONE 9, no. 7 (July 21, 2014): e103036, doi:10.1371/journal.pone.0103036.
    3. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide in the Allende and Murchison Meteorites,” Meteoritics and Planetary Science 50 (November 5, 2015): 1971–83, doi:10.1111/maps.12558; Julie E. M. McGeoch and Malcolm W. McGeoch, “A 4641Da Polymer of Amino Acids in Acfer 086 and Allende Meteorites,” (July 28, 2017), preprint, https://arxiv.org/pdf/1707.09080.pdf.
  • The Argument from Beauty: Can Evolution Explain Our Aesthetic Sense?

    May 13, 2020

    Lately, I find myself spending more time in front of the TV than I normally would, thanks to the COVID-19 pandemic. I’m not sure investing more time watching TV is a good thing, but it has allowed me to catch up on some of my favorite TV shows.

    One program that is near the top of my favorites list these days is the Canadian sitcom Kim’s Convenience. Based on the 2011 play of the same name written by Ins Choi, this sitcom is about a family of Korean immigrants who live in Toronto, where they run a convenience store.

    In the episode Best Before, Appa, the traditional, opinionated, and blunt family patriarch, argues with his 20-year-old daughter about selling cans of ravioli that have expired. Janet, an art student frustrated by her parents’ commitment to Korean traditions and their tendency to parent her excessively, implores her father not to sell the expired product because it could make people sick. But Mr. Kim asserts that the ravioli isn’t bad, reasoning that the label states, “best before this date. After this date, not the best, but still pretty good.”

    The assessment “not the best, but still pretty good” applies to more than just expired cans of foods. It also applies to explanations.

    Often, competing explanations exist for a set of facts, an event in life’s history, or some phenomenon in nature. And, each explanation has merits and weaknesses. In these circumstances, it’s not uncommon to seek the best explanation among the contenders. Yet, as I have learned through experience, identifying the best explanation isn’t as easy as it might seem. For example, whether one considers an explanation to be the “best” or “not the best, but pretty good” depends on a number of factors, including one’s worldview.

    I have found this difference in perspective to be true as I have interacted with skeptics about the argument for God from beauty.

    Nature’s Beauty, God’s Existence, and the Biblical View of Humanity

    Every place we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so compelling that we are often moved to our very core. For theists, nature’s beauty points to the reality of God’s existence.

    As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”1 In other words, the best explanation for the beauty in the world around us is divine agency.

    Image: Richard Swinburne Credit: Wikipedia

    Moreover, our response to the beauty in the world around us supports the biblical view of human nature. As human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws . . . would bring about aesthetic sensibilities in human beings.”2 But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty. In other words, Swinburne and others who share his worldview find God to be the best explanation for the beauty that surrounds us.

    Humanity’s Aesthetic Sense

    Our appreciation of beauty stands as one of humanity’s defining features. And it extends beyond our fascination with nature’s beauty. Because of our aesthetic sense, we strive to create beautiful things ourselves, such as paintings and figurative art. We adorn ourselves with body ornaments. We write and perform music. We sing songs. We dance. We create fiction and tell stories. Much of the art we produce involves depictions of imaginary worlds. And, after we create these imaginary worlds, we contemplate them. We become absorbed in them.

    What is the best explanation for our aesthetic sense? Following Swinburne, I maintain that the biblical view of human nature accounts for our aesthetic sense. For, if we are made in God’s image, then we are creators ourselves. And the art, music, and stories we create arise as a manifestation of God’s image within us.

    As a Christian theist, I am skeptical that the evolutionary paradigm can offer a compelling explanation for our aesthetic sense.

    Though sympathetic to an evolutionary approach as a way to explain our sense of beauty, philosopher Mohan Matthen helps frame the problem confronting the evolutionary paradigm: “But why is this good, from an evolutionary point of view? Why is it valuable to be absorbed in contemplation, with all the attendant dangers of reduced vigilance? Wasting time and energy puts organisms at an evolutionary disadvantage. For large animals such as us, unnecessary activity is particularly expensive.”3

    Our response to beauty includes the pleasure we experience when we immerse ourselves in nature’s beauty, a piece of art or music, or a riveting fictional account. But, the pleasure we derive from contemplating beauty isn’t associated with a drive that supports our survival, such as thirst, hunger, or sexual urges. When these desires are satisfied, we experience pleasure, but that pleasure displays a time-dependent profile. For example, it is unpleasant when we are hungry, yet those unpleasant feelings turn into pleasure when we eat. In turn, the pleasure associated with assuaging our hunger is short-lived, soon replaced with the discomfort of our returning hunger.

    In contrast, the pleasure associated with our aesthetic sense varies little over time. The sensory and intellectual pleasure we experience from contemplating things we deem beautiful continues without end.

    On the surface it appears our aesthetic sense defies explanation within an evolutionary framework. Yet, many evolutionary biologists and evolutionary psychologists have offered possible evolutionary accounts for its origin.

    Evolutionary Accounts for Humanity’s Aesthetic Sense

    Evolutionary scenarios for the origin of human aesthetics adopt one of three approaches, viewing it as either (1) an adaptation, (2) an evolutionary by-product, or (3) the result of genetic drift.4

    1. Theories that involve adaptive mechanisms claim our aesthetic sense emerged as an adaptation that assumed a central place in our survival and reproductive success as a species.

    2. Theories that view our aesthetic sense as an evolutionary by-product maintain that it is the accidental, unintended consequence of other adaptations that evolved to serve other critical functions—functions with no bearing on our capacity to appreciate beauty.

    3. Theories that appeal to genetic drift consider our aesthetic sense to be the accidental, chance outcome of evolutionary history that just happened upon a gene network that makes our appreciation of beauty possible.

    For many people, these evolutionary accounts function as better explanations for our aesthetic sense than one relying on a Creator’s existence and role in creating a beautiful universe, including creatures who bear his image and are designed to enjoy his handiwork. Yet, for me, none of the evolutionary approaches seem compelling. The mere fact that a plethora of differing scenarios exist to explain the origin of our aesthetic sense indicates that none of these approaches has much going for it. If there truly was a compelling way to explain the evolutionary origin of our aesthetic sense, then I would expect that a singular theory would have emerged as the clear front-runner.

    Genetic Drift and Evolutionary By-Product Models

    In effect, evolutionary models that regard our aesthetic sense to be an unintended by-product or the consequence of genetic drift are largely untestable. And, of course, this concern prompts the question: Are any of these approaches genuinely scientific explanations?

    On top of that, both types of scenarios suffer from the same overarching problem; namely, human activities that involve our aesthetic sense are central to almost all that we do. According to evolutionary biologists John Tooby and Leda Cosmides:

    Aesthetically-driven activities are not marginal phenomena or elite behavior without significance in ordinary life. Humans in all cultures spend a significant amount of time engaged in activities such as listening to or telling fictional stories, participating in various forms of imaginative pretense, thinking about imaginary worlds, experiencing the imaginary creations of others, and creating public representations designed to communicate fictional experiences to others. Involvement in fictional, imagined worlds appears to be a cross-culturally universal, species-typical phenomenon . . . Involvement in the imaginative arts appears to be an intrinsically rewarding activity, without apparent utilitarian payoff.5

    As human beings, we prefer to occupy imaginary worlds. We prefer absorbing ourselves in the beauty of the world or in the creations we make. Yet, as Tooby and Cosmides point out, obsession with the imaginary world detracts from our survivability.6 The ultimate rewards we receive should be those leading to our survival and reproductive success, and these rewards should come from the time we spend acquiring and acting on true information about the world. In fact, we should have an appetite for accurate information about the world and a willingness to cast aside false, imaginary information.

    In effect, our obsession with aesthetics could be properly seen as maladaptive. It would be one thing if our obsession with creating and admiring beauty was an incidental part of our nature. But, because it is at the forefront of everything we think and do, its maladaptive character should have resulted in its adaptive elimination. Instead, we see the opposite. Our aesthetic sense is one of our most dominant traits as human beings.

    Evolutionary Adaptation Models

    This significant shortcoming pushes to the forefront evolutionary scenarios that explain our aesthetic sense as adaptations. Yet, generally speaking, these evolutionary scenarios leave much to be desired. For example, one widely touted model explains our attraction to natural beauty as a capacity that helped humans identify the best habitats when we were hunter-gatherers. This aesthetic sense causes us to admire idyllic settings with water and trees. And, because we admire these settings, we want to live in them, promoting our survivability and reproductive success. Yet this model doesn’t account for our attraction to settings that would make it nearly impossible to live, let alone thrive. Such settings include snow-covered mountains with sparse vegetation; the crashing waves of an angry ocean; or the molten lava flowing from a volcanic eruption. These settings are hostile, yet we are enamored with their majestic beauty. This adaptive model also doesn’t explain our attraction to animals that would be deadly to us: lions and tigers or brightly colored snakes, for example.

    Another, more sophisticated model explains our aesthetic sense as a manifestation of our ability to discern patterns. The capacity to discern patterns plays a key role in our ability to predict future events, promoting our survival and reproductive success. Our perception of patterns is innate, yet it needs to be developed and trained. So, our contemplation of beauty and our creation of art, music, literature, etc. are perceptual play—fun and enjoyable activities that develop our perceptual skills.7 If this model is valid, then I would expect that perceptual play (and consequently fascination with beauty) would be most evident in children and teenagers. Yet, we see that our aesthetic sense continues into adulthood. In fact, it becomes more elaborate and sophisticated as we grow older. Adults are much more likely to spend an exorbitant amount of time admiring and contemplating beauty and creating art and music.

    This model also fails to explain why we feel compelled to develop our perceptual abilities and aesthetic capacities far beyond the basic skills needed to survive and reproduce. As human beings, we are obsessed with becoming aesthetic experts. The drive to develop expert skill in the aesthetic arts detracts from our survivability. This drive for perfection is maladaptive. To become an expert requires time and effort. It involves difficulty—even pain—and sacrifice. It’s effort better spent trying to survive and reproduce.

    At the end of the day, evolutionary models that appeal to the adaptive value of our aesthetic sense, though elaborate and sophisticated, seem little more than evolutionary just-so stories.

    So, what is the best explanation for our aesthetic sense? It likely depends on your worldview.

    Which explanatory model is best? And which is not the best, but still pretty good? If you are a Christian theist, you most likely find the argument from beauty compelling. But, if you are a skeptic you most likely prefer evolutionary accounts for the origin of our aesthetic sense.

    So, like beauty, the best explanation may lie in the eye of the beholder.

    Endnotes
    1. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    2. Swinburne, The Existence of God, 190–91.
    3. Mohan Matthen, “Eye Candy,” Aeon (March 24, 2014), https://aeon.co/amp/essays/how-did-evolution-shape-the-human-appreciation-of-beauty.
    4. John Tooby and Leda Cosmides, “Does Beauty Build Adaptive Minds? Towards an Evolutionary Theory of Aesthetics, Fiction and the Arts,” SubStance 30, no. 1–2 (2001): 6–27, doi:10.1353/sub.2001.0017.
    5. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
    6. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
    7. Matthen, “Eye Candy.”
  • Another Disappointment for the Evolutionary Model for the Origin of Eukaryotic Cells?

    Apr 29, 2020

    We all want to be happy.

    And there is no shortage of advice on what we need to do to lead happy, fulfilled lives. There are even “experts” who offer advice on what we shouldn’t do, if we want to be happy.

    As a scientist, there is one thing that makes me (and most other scientists) giddy with delight: It is learning how things in nature work.

    Most scientists have a burning curiosity to understand the world around them, me included. Like most scientists, I derive an enormous amount of joy and satisfaction when I gain insight into the inner workings of some feature of nature. And, like most in the scientific community, I feel frustrated and disappointed when I don’t know why things are the way they are. Side by side, this combination of joy and frustration serves as one of the driving forces for my work as a scientist.

    And, because many of the most interesting questions in science can appear at times to be nearly impenetrable mysteries, new discoveries typically bring me (and most other scientists) a mixture of hope and consternation.

    Trying to Solve a Mystery

    These mixed emotions are clearly evident in the life scientists who strive to understand the evolutionary origin of complex, eukaryotic cells. As science journalist Carl Zimmer rightly points out, the evolutionary process that produced eukaryotic cells from simpler microbes stands as “one of the deepest mysteries in biology.”1 And while researchers continue to accumulate clues about the origin of eukaryotic cells, they remain stymied when it comes to offering a robust, reliable evolutionary account of one of life’s key transitions.

    The leading explanation for the evolutionary origin of eukaryotic cells is the endosymbiont hypothesis. On the surface, this idea appears to be well evidenced. But digging a little deeper into the details of this model exposes gaping holes. And each time researchers present new understanding about this presumed evolutionary transition, it exposes even more flaws with the model, turning the joy of discovery into frustration, as the latest work by a team of Japanese microbiologists attests.2

    Before we unpack the work by the Japanese investigators and its implications for the endosymbiont hypothesis, a quick review of this cornerstone idea in evolutionary theory is in order. (If you are familiar with the endosymbiont hypothesis and the evidence in support of the model, please feel free to skip ahead to The Discovery of Lokiarchaeota.)

    The Endosymbiont Hypothesis

    According to this idea, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe.

    Much of the endosymbiont hypothesis centers around the origin of the mitochondrion. Presumably, this organelle started as an endosymbiont. Evolutionary biologists believe that once engulfed by the host cell, this microbe took up permanent residency, growing and dividing inside the host. Over time, the endosymbiont and the host became mutually interdependent, with the endosymbiont providing a metabolic benefit for the host cell, such as supplying a source of ATP. In turn, the host cell provided nutrients to the endosymbiont. Presumably, the endosymbiont gradually evolved into an organelle through a process referred to as genome reduction. This reduction resulted when genes from the endosymbiont’s genome were transferred into the genome of the host organism.

    Figure 1: A Depiction of the Endosymbiont Hypothesis. Image credit: Shutterstock

    Evidence for the Endosymbiont Hypothesis

    At least three lines of evidence bolster the hypothesis:

    • The similarity of mitochondria to bacteria. Most of the evidence for the endosymbiont hypothesis centers around the fact that mitochondria are about the same size and shape as a typical bacterium and have a double membrane structure like gram-negative cells. These organelles also divide in a way that is reminiscent of bacterial cells.
    • Mitochondrial DNA. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.
    • The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider cardiolipin a signature lipid for mitochondria and another relic from its evolutionary past.

    The Discovery of Lokiarchaeota

    Evolutionary biologists have also developed other lines of evidence in support of the endosymbiont hypothesis. For example, biochemists have discovered that the genetic core (DNA replication and the transcription and translation of genetic information) of eukaryotic cells resembles that of the Archaea. This similarity suggests to many biologists that a microbe belonging to the archaeal domain served as the host cell that gave rise to eukaryotic cells.

    Life scientists think they may have made strides toward identifying the archaeal host. In 2015, a large international team of collaborators reported the discovery of Lokiarchaeota, a new phylum belonging to the Archaea. This phylum groups with eukaryotes on the evolutionary tree. Analysis of the genomes of Lokiarchaeota reveals the presence of genes that encode for the so-called eukaryotic signature proteins (ESPs). These genes are unique to eukaryotic organisms.3

    As exciting as the discovery has been for evolutionary biologists, it has also been a source of frustration. Researchers didn’t discover this group of microbes by isolating and culturing them in the lab. Instead, they discovered them by recovering DNA fragments from the environment (a hydrothermal vent system in the Atlantic Ocean called Loki’s Castle, after Loki, the Norse god of trickery) and assembling them into genome sequences. Through this process, they learned that Lokiarchaeota belong to a larger group of Archaea, called the Asgard archaea. The reconstructed Lokiarchaeota “genome” is low quality (1.4-fold coverage) and incomplete (8 percent of the genome is missing).

    Mystery Solved?

    So, without actual microbes to study, the best that life scientists could do was infer the cell biology of Lokiarchaeota from its genome. But this frustrating limitation recently turned into excitement when a team of Japanese microbiologists isolated and cultured the first microbe belonging to this group of archaeons, dubbed Prometheoarchaeum syntrophicum. It took the researchers nearly 12 years of laboratory work to isolate this slow-growing microbe from sediments in the Pacific Ocean and culture it in the laboratory. (It takes 14 to 25 days for the microbe to double.) But the effort is paying off, because the research team can now glimpse what many life scientists believe to be a representative of the host microbe that spawned the first eukaryotic cells.
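    To put that sluggish growth rate in perspective, here is a back-of-the-envelope sketch (in Python, for illustration only) comparing the number of generations P. syntrophicum completes in a year with a fast-growing bacterium such as E. coli. The ~20-minute E. coli doubling time is a textbook figure for ideal lab conditions, not a value from the study.

```python
# Generations per year implied by the reported doubling times.
# P. syntrophicum: 14-25 days per doubling (Imachi et al. 2020).
# E. coli: ~20 minutes under ideal lab conditions (textbook figure).

DAYS_PER_YEAR = 365

def generations_per_year(doubling_time_days):
    """Doublings an exponentially growing culture completes in one year."""
    return DAYS_PER_YEAR / doubling_time_days

fast_case = generations_per_year(14)          # ~26 generations per year
slow_case = generations_per_year(25)          # ~15 generations per year
ecoli = generations_per_year(20 / (60 * 24))  # 20 minutes expressed in days

print(f"P. syntrophicum: {slow_case:.0f}-{fast_case:.0f} generations per year")
print(f"E. coli (ideal conditions): {ecoli:,.0f} generations per year")
```

    Even in the best case, the archaeon completes fewer doublings in a year than E. coli completes in a single day, which helps explain why isolating it took over a decade.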

    P. syntrophicum is spherically shaped and about 550 nm in size. In culture, this microbe forms aggregates around an extracellular polymeric material it secretes. It also has unusual membrane-based tentacle-like protrusions (of about 80 to 100 nm in length) that extend from the cell surface.

    Researchers were unable to produce a pure culture of P. syntrophicum because it forms a close association with other microbes. The team learned that P. syntrophicum lives a syntrophic lifestyle, meaning that it forms interdependent relationships with other microbes in its environment. Specifically, P. syntrophicum produces hydrogen and formate as metabolic by-products, which partner microbes then scavenge as nutrients. Researchers also discovered that P. syntrophicum consumes amino acids externally supplied in the growth medium. Presumably, this means that in ocean-floor sediments, P. syntrophicum feeds on organic materials released by its microbial partners.

    P. syntrophicum and Failed Predictions of the Endosymbiont Hypothesis

    The availability of P. syntrophicum cells now gives researchers an unprecedented chance to study a microbe they believe stands in for the archaeal host in the endosymbiont hypothesis. Has the mystery been solved? Instead of affirming the scientific predictions of leading versions of the endosymbiont hypothesis, the biology of this organism adds to the frustration and confusion surrounding the evolutionary account. Scientific analysis raises three questions for the evolutionary view:

    • First, this microbe has no internal cellular structures. This observation stands as a failed prediction. Because Lokiarchaeota (and other members of the Asgard archaeons) have a large number of ESPs present in their genomes, some biologists speculated that the Asgardian microbes would have complex subcellular structures. Yet, this expectation has not been realized for P. syntrophicum, even though this microbe has around 80 ESPs in its genome.
    • Second, this microbe can’t engulf other microbes. This inability also serves as a failed prediction. Prior to the cultivation of P. syntrophicum, analysis of the genomes of Lokiarchaeota identified a number of genes involved in membrane-related activities, suggesting that this microbe may well have possessed the ability to engulf other microbes. Again, this expectation wasn’t realized for P. syntrophicum. This observation is a significant blow to the endosymbiont hypothesis, which requires the host cell to have cellular processes in place to engulf other microbes.
    • Third, the membranes of this microbe are composed of typical archaeal lipids, and the microbe lacks the enzymatic machinery to make typical bacterial lipids. This also serves as a failed prediction. Evolutionary biologists had hoped that P. syntrophicum would provide a solution to the lipid divide (next section). It doesn’t.

    What Is the Lipid Divide?

    The lipid divide refers to the difference in the chemical composition of the cell membranes found in bacteria and archaea. Phospholipids comprise the cell membranes of both sorts of microbes. But that’s where the similarity ends. The chemical makeup of the phospholipids differs between bacteria and archaea.

    Bacterial phospholipids are built around a D-glycerol backbone, which has a phosphate moiety bound to the glycerol in the sn-3 position. Two fatty acids are bound to the D-glycerol backbone at the sn-1 and sn-2 positions. In water, these phospholipids assemble into bilayer structures.

    Archaeal phospholipids are constructed around an L-glycerol backbone (which produces membrane lipids with different stereochemistry than bacterial phospholipids). The phosphate moiety is attached to the sn-1 position of glycerol. Two isoprenoid chains are bound to the sn-2 and sn-3 positions of L-glycerol via ether linkages. Some archaeal membranes are formed from phospholipid bilayers, while others are formed from phospholipid monolayers.

    Presumably, the structural features of the archaeal phospholipids serve as an adaptation that renders them ideally suited to form stable membranes in the physically and chemically harsh environments in which many archaea find themselves.

    The Lipid Divide Frustrates the Endosymbiont Hypothesis

    If the host cell in the endosymbiont evolutionary mechanism is an archaeal cell, it logically follows that the membrane composition of eukaryotic cells should be archaeal-like. As it turns out, this expectation is not met. The cell membranes of eukaryotic cells closely resemble bacterial, not archaeal, membranes.

    Can Lokiarchaeota Traverse the Lipid Divide?

    Researchers had hoped that the discovery of Lokiarchaeota would shed light on the evolutionary origin of eukaryotic cell membranes. In the absence of having actual organisms to study, researchers screened the Lokiarchaeota genome for enzymes that would take part in phospholipid synthesis, with the hopes of finding clues about how this transition may have occurred.

    Based on their analysis, they argued that Lokiarchaeota could produce some type of hybrid phospholipid with features of both archaeal and bacterial phospholipids. Still, their conclusion remained speculative at best. The only way to establish Lokiarchaeota membranes as transitional between those found in archaea and bacteria is to perform chemical analysis of the membranes themselves. With the isolation and cultivation of P. syntrophicum, this analysis is possible. Yet its results only serve to disappoint evolutionary biologists, because this microbe has typical archaeal lipids in its membranes and displays no evidence of being capable of making archaeal/bacterial hybrid lipids.

    A New Model for the Endosymbiont Hypothesis?

    Not to be dissuaded by these disappointing results, the Japanese researchers propose a new version of the endosymbiont hypothesis, consistent with P. syntrophicum biology. For this model, they envision the archaeal host entangling an oxygen-metabolizing, ATP-producing bacterium in the tentacle-like structures that emanate from its cellular surface. Over time, the entangled organism forms a mutualistic relationship with the archaeal host. Eventually, the host encapsulates the entangled microbe in an extracellular structure that forms the body of the eukaryotic cell, with the host cell forming a proto-nucleus.

    Though this model is consistent with P. syntrophicum biology, it is highly speculative and lacks supporting evidence. To be fair, the Japanese researchers make this very point when they state, “further evidence is required to support this conjecture.”5

    This work shows how scientific advances help validate or invalidate models. Even though many biologists view the endosymbiont hypothesis as a compelling, well-established theory, significant gaps in our understanding of the origin of eukaryotic cells persist. (For a more extensive discussion of these gaps, see the Resources section.) In my view as a biochemist, some of these gaps are unbridgeable chasms that motivate my skepticism about the endosymbiont hypothesis, specifically, and the evolutionary approach to explaining the origin of eukaryotic cells, generally.

    Of course, my skepticism leads to another question: Is it possible that the origin of eukaryotic cells reflects a Creator’s handiwork? I am happy to say that the answer is “yes.”

    Resources

    Challenges to the Endosymbiont Hypothesis

    In Support of A Creation Model for the Origin of Eukaryotic Cells

    Endnotes
    1. Carl Zimmer, “This Strange Microbe May Mark One of Life’s Great Leaps,” The New York Times (January 16, 2020), https://www.nytimes.com/2020/01/15/science/cells-eukaryotes-archaea.html.
    2. Hiroyuki Imachi et al., “Isolation of an Archaeon at the Prokaryote-Eukaryote Interface,” Nature 577 (January 15, 2020): 519–25, doi:10.1038/s41586-019-1916-6.
    3. Anja Spang et al., “Complex Archaea That Bridge the Gap between Prokaryotes and Eukaryotes,” Nature 521 (May 14, 2015): 173–79, doi:10.1038/nature14447; Katarzyna Zaremba-Niedzwiedzka et al., “Asgard Archaea Illuminate the Origin of Eukaryotic Cellular Complexity,” Nature 541 (January 19, 2017): 353–58, doi:10.1038/nature21031.
    4. Laura Villanueva, Stefan Schouten, and Jaap S. Sinninghe Damsté, “Phylogenomic Analysis of Lipid Biosynthetic Genes of Archaea Shed Light on the ‘Lipid Divide,’” Environmental Microbiology 19 (January 2017): 54–69, doi:10.1111/1462-2920.13361.
    5. Imachi et al., “Isolation of an Archaeon.”
  • How Can DNA Survive for 75 Million Years? Implications for the Age of the Earth

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 15, 2020

    My family’s TV viewing habits have changed quite a bit over the years. It doesn’t seem that long ago that we would gather around the TV, each week at the exact same time, to watch an episode of our favorite show, broadcast live by one of the TV networks. In those days, we had no choice but to wait another week for the next installment in the series.

    Now, thanks to the availability of streaming services, my wife and I find ourselves binge-watching our favorite TV programs from beginning to end, in one sitting. I’m almost embarrassed to admit this, but we rarely sit down to watch TV with the idea that we are going to binge watch an entire season at a time. Usually, we just intend to take a break and watch a single episode of our favorite program before we get back to our day. Inevitably, however, we find ourselves so caught up with the show we are watching that we end up viewing one episode after another, after another, as the hours of our day melt away.

    One program we couldn’t stop watching was Money Heist (available through Netflix). This Spanish TV series is a crime drama that was originally intended to span two seasons. (Because of its popularity, Netflix ordered two more seasons.) Money Heist revolves around a group of robbers led by a brilliant strategist called the Professor. The Professor and his brother, nicknamed Berlin, devise an ambitious, audacious plan to take control of the Royal Mint of Spain in order to print and then escape with 2.5 billion euros.

    Because their plan is so elaborate, it takes the team of robbers five months to prepare for their multi-day takeover of the Royal Mint. As you might imagine, their scheme consists of a sequence of ingenious, well-timed, and difficult-to-execute steps requiring everything to come together in the just-right way for their plan to succeed and for the robbers to make it out of the mint with a treasure trove of cash.

    Recently a team of paleontologists uncovered their own treasure trove—a haul of soft tissue materials from the 75-million-year-old fossilized skull fragments of a juvenile duck-billed dinosaur (Hypacrosaurus stebingeri).1 Included in this cache of soft tissue materials were the remnants of the dinosaur’s original DNA—the ultimate paleontological treasure. What a steal!

    This surprising discovery has people asking: How is it possible for DNA to survive that long?

    Common wisdom states that DNA shouldn’t survive for more than 1 million years, much less 75 million. Thus, young-earth creationists (YECs) claim that this soft-tissue discovery provides the most compelling reason to think that the earth is young and that the fossil record resulted from a catastrophic global deluge (Noah’s flood).

    But is their claim valid?

    Hardly. The team that made the soft-tissue discovery proposes a set of mechanisms and processes that could enable DNA to survive for 75 million years. All it takes is a just-right set of conditions and a sequence of well-timed, just-right events coming together in the just-right way, and DNA will persist in fossil remains.

    Baby Dinosaur Discovery

    The team of paleontologists who made this discovery—co-led by Mary Schweitzer at North Carolina State University and Alida M. Bailleul of the Chinese Academy of Sciences—stumbled upon these DNA remnants as part of another study. They were investigating pieces of fossilized skull and leg fragments of a juvenile Hypacrosaurus recovered from a nesting site. Because of the dinosaur’s young age, the researchers hoped to extend the current understanding of dinosaur growth by carrying out a detailed microscopic characterization of these fossil pieces. In one of the skull fragments they observed well-defined and well-preserved calcified cartilage that was part of a growth plate when the juvenile was alive.

    A growth plate is a region in a growing skeleton where bone replaces cartilage. At this chondro-osseous junction, chondrocytes (cells found in cartilage) reside within lacunae (spaces in the matrix of bone tissues). Here, chondrocytes secrete an extracellular matrix made up of type II collagen and glycosaminoglycans. These cells divide rapidly and then enlarge (a state called hypertrophy). Eventually, the cells die, leaving the lacunae empty. Afterwards, bone fills in the cavities.

    A growth plate. Image credit: Wikipedia.

    The team of paleontologists detected lacunae in the translucent, well-preserved cartilage of the dinosaur skull fragment. A more careful examination of the spaces revealed several cell-like structures sharing the same lacunae. The team interpreted these cell-like structures as the remnants of chondrocytes. In some instances, the cell-like structures appeared to be doublets, presumably resulting from the final stages of cell division. In the doublets, they observed darker regions that appeared to be the remnants of nuclei and, within the nuclei, dark-colored materials that were elongated and aligned to mirror each other. They interpreted these features as the leftover remnants of chromosomes, which form condensed structures during the later stages of cell division.

    Given the remarkable degree of preservation, the investigators wondered if any biomolecular remnants persisted within these microscopic structures. To test this idea, they exposed a piece of the fossil to Alcian blue, a dye that stains cartilage of extant animals. The fact that the fossilized cartilage picked up the stain indicated to the research team that soft tissue materials still persisted in the fossils.

    Using an antibody binding assay (an analytic test), the research team detected the remnants of collagen II in the lacunae. Moreover, as a scientific first, the researchers isolated the cell-like remnants of the original chondrocytes. Exposing the chondrocyte remnants to two different dyes (PI and DAPI) produced staining in the cell interior near the nuclei. Both dyes bind to double-stranded DNA (PI intercalates between base pairs, while DAPI binds the minor groove). This staining indicated the presence of DNA remnants in the fossils, specifically in the dark regions that appear to be the nuclei.

    Implications of This Find

    This discovery adds to the excitement of previous studies that describe soft tissue remnants in fossils. These types of finds are money for paleontologists because they open up new windows into the biology of extinct life. According to Bailleul:

    “These exciting results add to growing evidence that cells and some of their biomolecules can persist in deep-time. They suggest DNA can preserve for tens of millions of years, and we hope that this study will encourage scientists working on ancient DNA to push current limits and to use new methodology in order to reveal all the unknown molecular secrets that ancient tissues have.”2

    Those molecular secrets are even more exciting and surprising for paleontologists because kinetic and modeling studies indicate that DNA should have completely degraded within the span of 1 million years.
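    For a sense of the numbers behind that common wisdom, consider a minimal first-order decay sketch in Python. The 521-year half-life is the value one widely cited study (Allentoft et al. 2012) estimated for short mitochondrial DNA fragments in moa bones; it is an assumption here, tied to those particular burial conditions, and is used only to show why a roughly 1-million-year survival limit gets quoted.

```python
import math

HALF_LIFE_YEARS = 521  # assumed: Allentoft et al.'s moa-bone estimate;
                       # condition-specific, not a universal constant

def log10_surviving_fraction(years, half_life=HALF_LIFE_YEARS):
    """log10 of the fraction of bonds still intact after `years`.
    Working in log space avoids floating-point underflow at deep time."""
    return -(years / half_life) * math.log10(2)

for age in (10_000, 1_000_000, 75_000_000):
    exponent = -log10_surviving_fraction(age)
    print(f"{age:>12,} years: about 1 part in 10^{exponent:,.0f} intact")
```

    At this assumed rate, essentially no intact DNA should remain after 1 million years, let alone 75 million, which is why condition-dependent preservation mechanisms are needed to explain the find.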

    The YEC Response

    The surprising persistence of DNA in the dinosaur fossil remains is like bars of gold for YECs, and they don’t want to hoard this treasure for themselves. YECs assert that this find is the “last straw” for the notion of deep time (the view that Earth is 4.5 billion years old and life has existed on it for upwards of 3.8 billion years). For example, YEC author David Coppedge insists that “something has to give. Either DNA can last that long, or dinosaur bones are not that old.”3 He goes on to remind us that “creation research has shown that there are strict upper limits on the survival of DNA. It cannot be tens of millions of years old.”4 For YECs, this type of discovery becomes prima facie evidence that the fossil record must be the result of a global flood that occurred only a few thousand years ago.

    Yet, in my book Dinosaur Blood and the Age of the Earth, I explain why there is absolutely no reason to think that the radiometric dating techniques used to determine the ages of geological formations and fossils are unreliable. The reliability of radiometric dating methods means that there must be mechanisms that work together to promote DNA’s survival in fossil remains. Fortunately, we don’t have to wait for the next season of our favorite program to be released by Netflix to learn what those mechanisms and processes might be.


    Preservation Mechanisms for Soft Tissues in Fossils

    Even though common wisdom says that DNA can’t survive for tens of millions of years, a word of caution is in order. When I worked in R&D for a Fortune 500 company, I participated in a number of stability studies. I quickly learned an important lesson: the stability of chemical compounds can’t be predicted. The stability profile for a material only applies to the specific set of conditions used in the study. Under a different set of conditions chemical stability can vary quite extensively, even if the conditions differ only slightly from the ones employed in the study.

    So, even though researchers have performed kinetic and modeling studies on DNA during fossilization, it’s best to exercise caution before we apply them to the Hypacrosaurus fossils. To say it differently, the only way to know what the DNA stability profile should be in the Hypacrosaurus fragments is to study it under the precise set of taphonomic (burial, decay, preservation) conditions that led to fossilization. And, of course, this type of study isn’t realistic.
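    That condition-dependence can be made concrete with the Arrhenius equation, which describes how reaction rates respond to temperature. The activation energy below (~127 kJ/mol, an approximate literature value for DNA depurination) is an illustrative assumption, not a measurement from the Hypacrosaurus study.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
EA = 127_000.0   # J/mol; approximate activation energy for DNA depurination
                 # (illustrative literature value, not from this study)

def rate_ratio(t_cold_c, t_warm_c, ea=EA):
    """How many times faster a reaction runs at t_warm_c than at t_cold_c
    (temperatures in Celsius), per the Arrhenius equation."""
    t_cold = t_cold_c + 273.15
    t_warm = t_warm_c + 273.15
    return math.exp((ea / R) * (1 / t_cold - 1 / t_warm))

# A burial environment just 10 degrees C warmer degrades DNA several-fold
# faster, so a half-life measured in one setting transfers poorly to another.
print(f"Decay runs ~{rate_ratio(5, 15):.1f}x faster at 15 C than at 5 C")
```

    A several-fold rate change from a modest 10-degree difference illustrates why kinetic studies performed under one set of conditions can’t simply be extrapolated to the Hypacrosaurus burial site.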

    This limitation doesn’t mean that we can’t produce a plausible explanation for DNA’s survival for 75 million years in the Hypacrosaurus fossil fragments. Here are some clues as to why and how DNA persisted in the young dinosaur’s remains:

    • The fossilized cartilage and chondrocytes appear to be exceptionally well-preserved. For this reason, it makes sense to think that soft tissue material could persist in these remains. So, while we don’t know the taphonomic conditions that contributed to the fossilization process, it is safe to assume that these conditions came together in the just-right way to preserve remnants of the biomolecules that make up the soft tissues, including DNA.
    • Soft tissue material is much more likely to survive in cartilage than in bone. The extracellular matrix that makes up cartilage has no vascularization (channels). This property makes it less porous and reduces the surface area compared to bone. Both properties inhibit groundwater and microorganisms from gaining access to the bulk of the soft tissue materials in the cartilage. At the growth plate, cartilage actually has a higher mineral-to-organic ratio than bone. Minerals inhibit the activity of environmental enzymes and microorganisms. Minerals also protect the biomolecules that make up the organic portion of cartilage because they serve as adsorption sites, stabilizing even fragile molecules. Also, minerals can form cross-links with biomolecules. Cross-linking slows down the degradation of biopolymers. Because the chondrocytes in the cartilage lacunae were undergoing rapid cell division at the time of the creature’s death, they consumed most of the available oxygen in their local environment. This consumption would have created localized hypoxia (oxygen deficiency) that minimized oxidative damage to the tissue in the lacunae.
    • The preserved biomolecules are not the original, unaltered materials, but are fragmented remnants that have undergone chemical alteration. Even with the molecules in this altered, fragmented state, many of the assays designed to detect the original, unaltered materials will produce positive results. For example, the antibody binding assays the research team used to detect collagen II could easily detect small fragmented pieces of collagen. These assays depend upon the binding of antibodies to the target molecule. The antibody binding site consists of a relatively small region of the molecular target. This feature of antibody binding means that the antibodies designed to target collagen II will also bind to small peptide fragments of only a few amino acids in length—as long as they are derived from collagen II.
    Antibody binding to an antigen. Image credit: Wikipedia.

    The dyes used to detect DNA can bind to double-stranded regions of DNA that are only six base pairs in length. Again, this feature means that the dye molecules will bind as readily to relatively small fragments derived from the original material as to intact DNA molecules.

    • The biochemical properties of collagen II and condensed chromosomes explain the persistence of this protein and DNA. Collagen is a heavily cross-linked material. Cross-linking imparts a high degree of stability to proteins, accounting for their long-term durability in fossil remains.

    In the later stage of cell division, chromosomes (which consist of DNA and proteins) exist in a highly compact, condensed phase. In this phase, chromosomal DNA would be protected and much more resistant to chemical breakdown than if the chromosomes existed in a more diffuse state, as is the case in other stages of the cell cycle.

    In other words, a confluence of factors worked together to promote a set of conditions that allowed small pieces of collagen II and DNA to survive long enough to become entombed within a mineral encasement. At this point in the preservation process, the materials can survive for indefinite periods of time.

    More Historical Heists to Come

    Nevertheless, some people find it easier to believe that a team of robbers could walk out of the Royal Mint of Spain with 2.5 billion euros than to think that DNA could persist in 75-million-year-old fossils. Their disbelief causes them to question the concept of deep time. Yet, it is possible to devise a scientifically plausible scheme to explain DNA’s survival for tens of millions of years, if several factors all work together in the just-right way. This appears to be the case for the duck-billed dinosaur specimen characterized by Schweitzer and Bailleul’s team.

    As this latest study demonstrates, if the just-right sequence of events occurs in the just-right way with the just-right timing, scientists have the opportunity to walk out of the fossil record vault with the paleontological steal of the century.

    It is exciting to think that more discoveries of this type are just around the corner. Stay tuned!

    Resources

    Responding to Young Earth Critics

    Mechanism of Soft Tissue Preservation

    Recovery of a Wide Range of Soft Tissue Materials in Fossils

    Detection of Carbon-14 in Fossils

    Endnotes
    1. Alida M. Bailleul et al., “Evidence of Proteins, Chromosomes and Chemical Markers for DNA in Exceptionally Preserved Dinosaur Cartilage,” National Science Review, nwz206 (January 12, 2020), doi:10.1093/nsr/nwz206, https://academic.oup.com/nsr/advance-article/doi/10.1093/nsr/nwz206/5762999.
    2. Science China Press, “Cartilage Cells, Chromosomes and DNA Preserved in 75 Million-Year-Old Baby Duck-Billed Dinosaur,” Phys.org, posted February 28, 2020, https://phys.org/news/2020-02-cartilage-cells-chromosomes-dna-million-year-old.html.
    3. David F. Coppedge, “Dinosaur DNA Found!”, Creation-Evolution Headlines (website), posted February 28, 2020, https://crev.info/2020/02/dinosaur-dna-found/.
    4. Coppedge, “Dinosaur DNA Found.”
  • No Joke: New Pseudogene Function Smiles on the Case for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 01, 2020

    Time to confess. I now consider myself an evolutionary creationist. I have no choice. The evidence for biological evolution is so overwhelming…

    …Just kidding! April Fool’s!

    I am still an old-earth creationist. Even though the evolutionary paradigm is the prevailing framework in biology, I am skeptical about facets of it. I am more convinced than ever that a creation model approach is the best way to view life’s origin, design, and history. That’s not to say that there isn’t evidence for common descent; there is. Still, even with this evidence, I prefer old-earth creationism for three reasons.

    • First, a creation model approach can readily accommodate the evidence for common descent within a design framework.
    • Second, the evolutionary paradigm struggles to adequately explain many of the key transitions in life’s history.
    • Third, the impression of design in biology is overwhelming—and it’s becoming more so every day.

    And that is no joke.

    Take the human genome as an example. When it comes to understanding its structure and function, we are in our infancy. As we grow in our knowledge and insight, it becomes increasingly apparent that the structural and functional features of the human genome (and the genomes of other organisms) display more elegance and sophistication than most life scientists could have ever imagined—at least, those operating within the evolutionary framework. On the other hand, the elegance and sophistication of genomes is expected for creationists and intelligent design advocates. To put it simply, the more we learn about the human genome, the more it appears to be the product of a Mind.

    In fact, the advances in genomics over the last decade have forced life scientists to radically alter their views of genome biology. When the human genome was first sequenced in 2000, biologists considered most of the sequence elements to be nonfunctional, useless DNA. Now biologists recognize that virtually every class of these so-called junk DNA sequences serves key functional roles.

    If most of the DNA sequence elements in the human genome were truly junk, then I’d agree that it makes sense to view them as evolutionary leftovers, especially because these junk DNA sequences appear in corresponding locations of the human and primate genomes. It is for these reasons that biologists have traditionally interpreted these shared sequences as the most convincing evidence for common descent.

    However, now that we have learned that these sequences are functional, I think it is reasonable to regard them as the handiwork of a Creator, intentionally designed to contribute to the genome’s biology. In this framework, the shared DNA sequences in the human and primate genomes reflect common design, not common descent.

    Still, many biologists reject the common design interpretation, while continuing to express confidence in the evolutionary model. Their certainty reflects a commitment to methodological naturalism, but there is another reason for their confidence. They argue that the human genome (and the genomes of other organisms) display other architectural and operational features that the evolutionary framework explains best—and, in their view, these features tip the scales toward the evolutionary interpretation.

    Yet, researchers continue to make discoveries about junk DNA that counterbalance the evidence for common descent, including these structural and functional features. Recent insights into pseudogene biology nicely illustrate this trend.

    Pseudogenes

    Most life scientists view pseudogenes as the remnants of once-functional genes. Along these lines, biologists have identified three categories of pseudogenes (unitary, duplicated, and processed) and proposed a distinct mechanism to explain the origin of each class. These mechanisms produce distinguishing features that allow investigators to identify certain DNA sequences as pseudogenes. However, a precommitment to the evolutionary paradigm can lead many biologists to declare too quickly that pseudogenes are nonfunctional based on their sequence characteristics.1

    The Mechanisms of Pseudogene Formation. Image credit: Wikipedia.

    As the old adage goes: theories guide, experiments decide. Accumulating experimental data indicate that pseudogenes from all three classes have utility.

    A number of research teams have demonstrated that the cell’s machinery transcribes processed pseudogenes and, in turn, these transcripts are translated into proteins. Both duplicated and unitary pseudogenes are also transcribed. However, except for a few rare cases, these transcripts are not translated into proteins. Most duplicated and unitary pseudogene transcripts serve a regulatory role, described by the competitive endogenous RNA hypothesis.

    In other words, the experimental support for pseudogene function seemingly hinges on the transcription of these sequences. That leads to the question: What about pseudogene sequences located in genomes that aren’t transcribed? A number of pseudogenic sequences in genomes seemingly sit dormant. They aren’t transcribed and, presumably, have no utility whatsoever.

    For many life scientists, this supports the evolutionary account for pseudogene origins, making it the preferred explanation over any model that posits the intentional design of pseudogene sequences. After all, why would a Creator introduce mutationally damaged genes that serve no function? Isn’t it better to explain the presence of functional processed pseudogenes as the result of neofunctionalization, whereby evolutionary mechanisms co-opt processed pseudogenes and use them as the raw material to evolve DNA sequence elements into new genes?

    Or, perhaps, is it better to view the transcripts of regulatory unitary and duplicated pseudogenes as the functional remnants of the original genes whose transcripts played a role in regulatory networks with other RNA transcripts? Even though these pseudogenes no longer direct protein production, they can still take part in the regulatory networks comprised of RNA transcripts.

    Are Untranscribed Pseudogenes Really Untranscribed?

    Again, remember that support for the evolutionary interpretation of pseudogenes rests on the belief that some pseudogenes are not transcribed. What happens to this support if these DNA sequences are transcribed, meaning we simply haven’t detected or identified their transcripts experimentally?

    As a case in point, in a piece for Nature Reviews, a team of collaborators from Australia argue that failure to detect pseudogene transcripts experimentally does not confirm the absence of transcription.2 For example, the transcripts for a pseudogene transcribed at a low level may fall below the experimental detection limit. This particular pseudogene would appear inactive to researchers when, in fact, the opposite is the case. Additionally, pseudogene expression may be tissue-specific or may take place at certain points in the growth and development process. If the assay doesn’t take these possibilities into account, then failure to detect pseudogene transcripts could just mean that the experimental protocol is flawed.

    The similarity of the DNA sequences of pseudogenes and their corresponding “sister” genes causes another complication. It can be hard to experimentally distinguish between a pseudogene and its “intact” sister gene. This limitation means that, in some instances, pseudogene transcripts may be misidentified as the transcripts of the “intact” gene. Again, this can lead researchers to conclude mistakenly that the pseudogene isn’t transcribed.

    Are Untranscribed Pseudogenes Really Nonfunctional?

    These very real experimental challenges notwithstanding, there are pseudogenes that indeed are not transcribed, but it would be wrong to conclude that they have no role in gene regulation. For example, a large team of international collaborators demonstrated that a pseudogene sequence contributes to the specific three-dimensional architecture of chromosomes. By doing so, this sequence exerts influence over gene expression, albeit indirectly.3

    Another research team determined that a different pseudogene plays a role in maintaining chromosomal stability. In laboratory experiments, they discovered that deleting the DNA region that harbors this pseudogene increases chromosomal recombination events that result in the deletion of pieces of DNA. This deletion is catastrophic and leads to DiGeorge/velocardiofacial syndrome.4

    To be clear, these two studies focused on single pseudogenes. We need to be careful about extrapolating the results to all untranscribed pseudogenes. Nevertheless, at minimum, these findings open up the possibility that other untranscribed pseudogene sequences function in the same way. If history is anything to go by when it comes to junk DNA, these two discoveries are most likely harbingers of what is to come. Simply put, we continue to uncover unexpected function for pseudogenes (and other classes of junk DNA).

    Common Design or Common Descent?

    Not that long ago, shared nonfunctional, junk DNA sequences in the human and primate genomes were taken as prima facie evidence for our shared evolutionary history with the great apes. There was no way to genuinely respond to the challenge junk DNA posed to creation models, other than to express the belief that we would one day discover function for junk DNA sequences.

    Subsequently, discoveries have fulfilled a key scientific prediction made by creationists and intelligent design proponents alike. These initial discoveries involved single, isolated pseudogenes. Later studies demonstrated that pseudogene function is pervasive, leading to new scientific ideas such as the competitive endogenous RNA hypothesis, that connect the sequence similarity of pseudogenes and “intact” genes to pseudogene function. Researchers are beginning to identify functional roles for untranscribed pseudogenes. I predict that it is only a matter of time before biologists concede that the utility of untranscribed pseudogenes is pervasive and commonplace.

    The creation model interpretation of shared junk DNA sequences becomes stronger and stronger with each step forward, which leads me to ask, When are life scientists going to stop fooling around and give a creation model approach a seat at the biology table?

    Resources

    Endnotes
    1. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (December 17, 2019): 191–201, doi:10.1038/s41576-019-0196-1.
    2. Cheetham et al., 191–201.
    3. Peng Huang et al., “Comparative Analysis of Three-Dimensional Chromosomal Architecture Identifies a Novel Fetal Hemoglobin Regulatory Element,” Genes and Development 31, no. 16 (August 15, 2017): 1704–13, doi:10.1101/gad.303461.117.
    4. Laia Vergés et al., “An Exploratory Study of Predisposing Genetic Factors for DiGeorge/Velocardiofacial Syndrome,” Scientific Reports 7 (January 6, 2017): id. 40031, doi:10.1038/srep40031.
  • Does Evolutionary Bias Create Unhealthy Stereotypes about Pseudogenes?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 18, 2020

    Truth be told, we all hold to certain stereotypes, whether we want to admit it or not. Though unfair, these stereotypes more often than not cause little real damage.

    Yet, there are instances when stereotypes can be harmful—even deadly. As a case in point, researchers have shown that stereotyping disrupts the healthcare received by members of so-called disadvantaged groups, such as African Americans, Latinos, and the poor.1

    Healthcare providers are frequently guilty of bias towards underprivileged people. Often, the stereotyping is unconscious and unintentional. Still, this bias compromises the medical care received by people in these ethnic and socioeconomic groups.

    Underprivileged patients are also guilty of stereotyping. It is not uncommon for these patients to perceive themselves as the victims of prejudice, even when their healthcare providers are genuinely unbiased. As a result, these patients don’t trust healthcare workers and, consequently, withhold information that is vital for a proper diagnosis.

    Fortunately, psychologists have developed best practices that can reduce stereotyping by both healthcare practitioners and patients. Hopefully, by implementing these practices, the impact of stereotyping on the quality of healthcare can be minimized over time.

    Recently, a research team from Australia identified another form of stereotyping that holds the potential to negatively impact healthcare outcomes.2 In this case, the impact of this stereotyping isn’t limited to disadvantaged people; it affects all of us.

    A Bias Against Pseudogenes

    These researchers have uncovered a bias in the way life scientists view the human genome (and the genomes of other organisms). Too often they regard the human genome as a repository of useless, nonfunctional DNA that arises as a vestige of evolutionary history. Because of this view, life scientists and the biomedical research community eschew studying regions of the human genome they deem to be junk DNA. This posture is not unreasonable. It doesn’t make sense to invest precious scientific resources to study nonfunctional DNA.

    Many life scientists are unaware of their bias. Unfortunately, this stereotyping hinders scientific advance by delaying discoveries that could be translated into the clinical setting. Quite often, supposed junk DNA has turned out to serve a vital purpose. Failure to recognize this function not only compromises our understanding of genome biology, but also hinders biomedical researchers from identifying defects in these genomic regions that contribute to genetic diseases and disorders.

    As psychologists will point out, acknowledging bias is the first step to solving the problems that stereotyping causes. This is precisely what these researchers have done by publishing an article in Nature Reviews Genetics.3 The team focused on DNA sequence elements called pseudogenes. Traditionally, life scientists have viewed pseudogenes as the remnants of once functional genes. Biologists have identified three categories of pseudogenes: (1) unitary, (2) duplicated, and (3) processed.

    Figure 1: Mechanisms for Formation of Duplicated and Processed Pseudogenes. Image credit: Wikipedia

    Researchers categorize DNA sequences as pseudogenes based on structural features. Such features indicate to the investigators that these sequence elements were functional genes at one time in evolutionary history, but eventually lost function due to mutations or other biochemical processes, such as reverse transcription and DNA insertion. Once a DNA sequence is labeled a pseudogene, bias sets in and researchers just assume that it lacks function—not because it has been experimentally demonstrated to be nonfunctional, but because of the stereotyping that arises out of the evolutionary paradigm.

    The authors of the piece acknowledge that “the annotation of genomics regions as pseudogenes constitutes an etymological signifier that an element has no function and is not a gene. As a result, pseudogene-annotated regions are largely excluded from functional screen and genomic analyses.”4 In other words, the “pseudogene” moniker biases researchers to such a degree that they ignore these sequence elements as they study genome structure and function, without ever doing the hard, experimental work to determine whether they are actually nonfunctional.

    This approach is clearly misguided and detracts from scientific discovery. As the authors admit, “However, with a growing number of instances of pseudogene-annotated regions later found to exhibit biological function, there is an emerging risk that these regions of the genome are prematurely dismissed as pseudogenic and therefore regarded as void of function.”5

    Discovering Function Despite Bias

    The harmful effects of this bias become evident as biomedical researchers unexpectedly stumble upon function for pseudogenes, time and time again, not because of the evolutionary paradigm, but despite it. These authors point out that many processed pseudogenes are transcribed and, of those, many are translated to produce proteins. Many unitary and duplicated pseudogenes are also transcribed. Some are also translated into proteins, but a majority are not. Instead, they play a role in gene regulation as described by the competitive endogenous RNA hypothesis.

    Still, there are some pseudogenes that aren’t transcribed and, thus, could rightly be deemed nonfunctional. However, the researchers point out that the current experimental approaches for identifying transcribed regions are less than ideal. Many of these methods may fail to detect pseudogene transcripts. However, as the researchers point out, even if a pseudogene isn’t transcribed it still may serve a functional role (e.g., contributing to chromosome three-dimensional structure and stability).

    This Nature Reviews Genetics article raises a number of questions and concerns for me as a biochemist:

    • How widespread is this bias?
    • If this type of stereotyping exists toward pseudogenes, does it exist for other classes of junk DNA?
    • How well do we really understand genome structure and function?
    • Do we have the wrong perspective on the genome, one that stultifies scientific advance?
    • Does this bias delay the understanding and alleviation of human health concerns?

    Is the Evolutionary Paradigm the Wrong Framework to Study Genomes?

    Based on this article, I think it is safe to conclude that we really don’t understand the molecular biology of genomes. We are living in the midst of a scientific revolution that is radically changing our view of genome structure and function. The architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

    This insight also leads me to question if the evolutionary paradigm is the proper framework for thinking about genome structure and function. From my perspective, treating biological systems as the Creator’s handiwork provides a superior approach to understanding the genome. A creation model approach promotes scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. This expectation forces researchers to keep an open mind and drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

    Over the last several years, I have raised concerns about the bias life scientists have harbored as they have worked to characterize the human genome (and genomes of other organisms). It is gratifying to me to see that there are life scientists who, though committed to the evolutionary paradigm, are beginning to recognize this bias as well.

    The first step to addressing the problem of stereotyping—in any sector of society—is to acknowledge that it exists. Often, this step is the hardest one to take. The next step is to put in place structures to help overcome its harmful influence. Could it be that part of the solution to this instance of scientific stereotyping is to grant a creation model approach access to the scientific table?

    Resources

    Pseudogene Function

    The Evolutionary Paradigm Hinders Scientific Advance

    Endnotes
    1. For example, see Joshua Aronson et al., “Unhealthy Interactions: The Role of Stereotype Threat in Health Disparities,” American Journal of Public Health 103 (January 1, 2013): 50–56, doi:10.2105/AJPH.2012.300828.
    2. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (March 2020): 191–201, doi:10.1038/s41576-019-0196-1.
    3. Cheetham, Faulkner, and Dinger, 191–201.
    4. Cheetham, Faulkner, and Dinger, 191–201.
    5. Cheetham, Faulkner, and Dinger, 191–201.
  • New Genetic Evidence Affirms Human Uniqueness

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 04, 2020

    It’s a remarkable discovery—and a bit gruesome, too.

    It is worth learning a bit about some of its unseemly details because this find may have far-reaching implications that shed light on our origins as a species.

    In 2018, a group of locals discovered the remains of a two-year-old male puppy in the frozen mud (permafrost) of eastern Siberia. The remains are about 18,000 years old. Remarkably, the skeleton, teeth, head, fur, lashes, and whiskers of the specimen are still intact.

    Of Dogs and People

    The Russian scientists studying this find (affectionately dubbed Dogor) are excited by the discovery. They think Dogor can shed light on the domestication of wolves into dogs. Biologists believe that this transition occurred around 15,000 years ago. Is Dogor a wolf? A dog? Or a transitional form? To answer these questions, the researchers have isolated DNA from one of Dogor’s ribs, which they think will provide them with genetic clues about Dogor’s identity—and clues concerning the domestication process.

    Biologists study the domestication of animals because this process played a role in helping to establish human civilization. But biologists are also interested in animal domestication for another reason. They think this insight will tell us something about our identity as human beings.

    In fact, in a separate study, a team of researchers from the University of Milan in Italy used insights about the genetic changes associated with the domestication of dogs, cats, sheep, and cattle to identify genetic features that make human beings (modern humans) stand apart from Neanderthals and Denisovans.1 They conclude that modern humans share some of the same genetic characteristics as domesticated animals, accounting for our unique and distinct facial features (compared to other hominins). They also conclude that our high level of cooperativeness and lack of aggression can be explained by these same genetic factors.

    This work in comparative genomics demonstrates that significant anatomical and behavioral differences exist between humans and hominins, supporting the concept of human exceptionalism. Though the University of Milan researchers carried out their work from an evolutionary perspective, I believe their insights can be recast as scientific evidence for the biblical conception of human nature; namely, creatures uniquely made in God’s image.

    Biological Changes that Led to Animal Domestication

    Biologists believe that during the domestication process, many of the same biological changes took place in dogs, cats, sheep, and cattle. For example, they think that domestication produced mild deficits in neural crest cells. In other words, once animals are domesticated, they produce fewer, less active neural crest cells. These stem cells play a role in neural development; thus, neural crest cell defects tend to make animals friendlier and less aggressive. This deficit also impacts physical features, yielding smaller skulls and teeth, floppy ears, and shorter, curlier tails.

    Life scientists studying the domestication process have identified several genes of interest. One of these is BAZ1B. This gene plays a role in the maintenance of neural crest cells and controls their migration during embryological development. Presumably, changes in the expression of BAZ1B played a role in the domestication process.

    Neural Crest Deficits and Williams Syndrome

    As it turns out, there are two genetic disorders in modern humans that involve neural crest cells: Williams-Beuren syndrome (also called Williams syndrome) and Williams-Beuren region duplication syndrome. These genetic disorders involve the deletion or duplication, respectively, of a region of chromosome 7 (7q11.23). This chromosomal region harbors 28 genes. Craniofacial defects and altered cognitive and behavioral traits characterize these disorders. Specifically, people with these syndromes have cognitive limitations, smaller skulls, and elf-like faces, and they display excessive friendliness.

    Among the 28 genes impacted by the two disorders is the human version of BAZ1B. This gene codes for a type of protein called a transcription factor. (Transcription factors play a role in regulating gene expression.)

    The Role of BAZ1B in Neural Crest Cell Biology

    To gain insight into the role BAZ1B plays in neural crest cell biology, the European research team developed induced pluripotent stem cell lines from (1) four patients with Williams syndrome, (2) three patients with Williams-Beuren region duplication syndrome, and (3) four people without either disorder. Then, they coaxed these cells in the laboratory to develop into neural crest cells.

    Using a technique called RNA interference, they down-regulated BAZ1B in all three types of neural crest cells. By doing this, the researchers learned that changes in the expression of this gene altered the migration rates of the neural crest cells. Specifically, they discovered that neural crest cells developed from patients with Williams-Beuren region duplication syndrome migrated more slowly than control cells (generated from test subjects without either syndrome) and neural crest cells derived from patients with Williams syndrome migrated more rapidly than control cells.

    The discovery that the BAZ1B gene influences neural crest cell migration is significant because these cells have to migrate to precise locations in the developing embryo to give rise to distinct cell types and tissues, including those that form craniofacial features.

    Because BAZ1B encodes a transcription factor, altering its expression also alters the expression of the genes under its control. The team discovered that down-regulating BAZ1B impacted 448 genes. They learned that many of these impacted genes play a role in craniofacial development. By querying databases of genes that correlate with genetic disorders, the researchers also learned that, when defective, some of the impacted genes are known to cause disorders that involve altered facial development and intellectual disabilities.

    Lastly, the researchers determined that the BAZ1B protein (again, a transcription factor) targets genes that influence dendrite and axon development (which are structures found in neurons that play a role in transmissions between nerve cells).

    BAZ1B Gene Expression in Modern and Archaic Humans

    With these findings in place, the researchers wondered if differences in BAZ1B gene expression could account for anatomical and cognitive differences between modern humans and archaic humans—hominins such as Neanderthals and Denisovans. To carry out this query, the researchers compared the genomes of modern humans to Neanderthals and Denisovans, paying close attention to DNA sequence differences in genes under the influence of BAZ1B.

    This comparison uncovered differences in the regulatory regions of genes targeted by the BAZ1B transcription factor, including genes that control neural crest cell activities and craniofacial anatomy. In other words, the researchers discovered significant differences in gene expression between modern humans and Neanderthals and Denisovans. And these differences strongly suggest that anatomical and cognitive differences existed between modern humans and these hominins.

    Did Humans Domesticate Themselves?

    The researchers interpret their findings as evidence for the self-domestication hypothesis—the idea that we domesticated ourselves after the evolutionary lineage that led to modern humans split from the Neanderthal/Denisovan line (around 600,000 years ago). In other words, just as modern humans domesticated dogs, cats, cattle, and sheep, we domesticated ourselves, leading to changes in our anatomical features that parallel changes (such as friendlier faces) in the features of animals we domesticated. Along with these anatomical changes, our self-domestication led to the high levels of cooperativeness characteristic of modern humans.

    On one hand, this is an interesting account that does seem to have some experimental support. But on the other, it is hard to escape the feeling that the idea of self-domestication as the explanation for the origin of modern humans is little more than an evolutionary just-so story.

    It is worth noting that some evolutionary biologists find this account unconvincing. One is William Tecumseh Fitch III—an evolutionary biologist at the University of Vienna. He is skeptical of the precise parallels between animal domestication and human self-domestication. He states, “These are processes with both similarities and differences. I also don’t think that mutations in one or a few genes will ever make a good model for the many, many genes involved in domestication.”2

    Adding to this skepticism is the fact that nobody has anything beyond a speculative explanation for why humans would domesticate themselves in the first place.

    Genetic Differences Support the Idea of Human Exceptionalism

    Regardless of the mechanism that produced the genetic differences between modern and archaic humans, this work can be enlisted in support of human uniqueness and exceptionalism.

    Though the claim of human exceptionalism is controversial, a minority of scientists operating within the scientific mainstream embrace the idea that modern humans stand apart from all other extant and extinct creatures, including Neanderthals and Denisovans. These anthropologists argue that the following suite of capacities uniquely possessed by modern humans accounts for our exceptional nature:

    • symbolism
    • open-ended generative capacity
    • theory of mind
    • capacity to form complex social systems

    As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a countless number of ways to create alternate possibilities. Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

    But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together. And we can do this because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also have the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks. Forming these relationships requires friendliness and cooperativeness.

    In effect, these qualities could be viewed as scientific descriptors of the image of God, if one adopts a resemblance view for the image of God.

    This study demonstrates that, at a genetic level, modern humans appear to be uniquely designed to be friendlier, more cooperative, and less aggressive than other hominins—in part accounting for our capacity to form complex hierarchical social structures.

    To put it differently, the unique capability of modern humans to form complex social hierarchies no longer needs to be inferred from the fossil and archaeological records. It has been robustly established by comparative genomics in combination with laboratory studies.

    A Creation Model Perspective on Human Origins

    This study not only supports human exceptionalism but also affirms RTB’s human origins model.

    RTB’s biblical creation model identifies hominins such as the Neanderthals and the Denisovans as animals created by God. These extraordinary creatures possessed enough intelligence to assemble crude tools and even adopt some level of “culture.” However, the RTB model maintains that these hominins were not spiritual creatures. They were not made in God’s image. RTB’s model reserves this status exclusively for Adam and Eve and their descendants (modern humans).

    Our model predicts many biological similarities will be found between the hominins and modern humans, but so too will significant differences. The greatest distinction will be observed in cognitive capacity, behavioral patterns, technological development, and culture—especially artistic and religious expression.

    The results of this study fulfill these two predictions. Or, to put it another way, the RTB model’s interpretation of the hominins and their relationship to modern humans aligns with “mainstream” science.

    But what about the similarities between the genetic fingerprint of modern humans and the genetic changes responsible for animal domestication that involve BAZ1B and genes under its influence?

    Instead of viewing these features as traits that emerged through parallel and independent evolutionary histories, the RTB human origins model regards the shared traits as reflecting shared designs. In this case, through the process of domestication, modern humans stumbled upon the means (breeding through artificial selection) to effect genetic changes in wild animals that resemble some of the designed features of our genome that contribute to our unique and exceptional capacity for cooperation and friendliness.

    It is true: studying the domestication process does, indeed, tell us something exceptionally important about who we are.

    Resources

    Endnotes
    1. Matteo Zanella et al., “Dosage Analysis of the 7q11.23 Williams Region Identifies BAZ1B as a Major Human Gene Patterning the Modern Human Face and Underlying Self-Domestication,” Science Advances 5, no. 12 (December 4, 2019): eaaw7908, doi:10.1126/sciadv.aaw7908.
    2. Michael Price, “Early Humans Domesticated Themselves, New Genetic Evidence Suggests,” Science (December 4, 2019), doi:10.1126/science.aba4534.
  • Ancient DNA Indicates Modern Humans Are One-of-a-Kind

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 19, 2020

    The wonderful thing about tiggers
    Is tiggers are wonderful things!
    Their tops are made out of rubber
    Their bottoms are made out of springs!
    They’re bouncy, trouncy, flouncy, pouncy
    Fun, fun, fun, fun, fun!
    But the most wonderful thing about tiggers is
    I’m the only one!1

    With eight grandchildren and counting (number nine will be born toward the end of February), I have become reacquainted with children’s stories. Some of the stories my grandchildren want to hear are new, but many of them are classics. It is fun to see my grandchildren experiencing the same stories and characters I enjoyed as a little kid.

    Perhaps my favorite children’s book of all time is A. A. Milne’s (1882–1956) Winnie-the-Pooh. And of all the characters that populated Pooh Corner, my favorite character is the ineffable Tigger—the self-declared one-of-a-kind.

    A. A. Milne. Credit: Wikipedia

    For many people (such as me), human beings are like Tigger. We are one-of-a-kind among creation. As a Christian, I take the view that we are unique and exceptional because we alone have been created in God’s image.

    For many others, the Christian perspective on human nature is unpopular and offensive. Who are we to claim some type of special status? They insist that humans aren’t truly unique and exceptional. We are not fundamentally different from other creatures. If anything, we differ only in degree, not kind. Naturalists and others assert that there is no evidence that human beings bear God’s image. In fact, some would go so far as to claim that creatures such as Neanderthals were quite a bit like us. They maintain that these hominins were “exceptional,” just like us. Accordingly, if we are one-of-a-kind it is because, like Tigger, we have arrogantly declared ourselves to be so, when in reality we are no different from any of the other characters who make their home at Pooh Corner.

    Despite this pervasive and popular challenge to human exceptionalism (and the image-of-God concept), there is mounting evidence that human beings stand apart from all extant creatures (such as the great apes) and extinct creatures (such as Neanderthals). This growing evidence can be marshaled to make a scientific case that as human beings we, indeed, are image bearers.

    As a case in point, many archeological studies affirm human uniqueness and exceptionalism. (See the Resources section for a sampling of some of this work.) These studies indicate that human beings alone possess a suite of characteristics that distinguish us from all other hominins. I regard these qualities as scientific descriptors of the image of God:

    • Capacity for symbolism
    • Ability for open-ended manipulation of symbols
    • Theory of mind
    • Capacity to form complex, hierarchical social structures

    Other studies have identified key differences between the brains of modern humans and Neanderthals. (For a sample of this evidence see the Resources section.) One key difference relates to skull shape. Neanderthals (and other hominins) possessed an elongated skull. In contradistinction, our skull shape is globular. The globularity allows for the expansion of the parietal lobe. This is significant because an expanded parietal lobe explains a number of unique human characteristics:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    Again, I connect these scientific qualities to the image of God.

    Now, two recent studies add to the case for human exceptionalism. They involve genetic comparisons of modern humans with both Neanderthals and Denisovans. Through the recovery and sequencing of ancient DNA, we have high quality genomes for these hominins that we can analyze and compare to the genomes of modern humans.

    While the DNA sequences of protein-coding genes in modern human genomes and the genomes of these two extinct hominins are quite similar, both studies demonstrate that gene expression differs dramatically. That difference accounts for anatomical differences between humans and these two hominins and suggests that significant cognitive differences exist as well.

    Differences in Gene Regulation

    To characterize gene expression patterns in Neanderthals and Denisovans and compare them to modern humans, researchers from Vanderbilt University (VU) used statistical methods to develop a mathematical model that would predict gene expression profiles from the DNA sequences of genomes.2 They built their model using DNA sequences and gene expression data (measured from RNA produced by transcription) for a set of human genomes. To ensure that their model could be used to assess gene expression for Neanderthals and Denisovans, the researchers paid close attention to the gene expression pattern for genes in the human genome that were introduced when modern humans and Neanderthals presumably interbred and compared their expression to human genes that were not of Neanderthal origin.
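    The logic of this kind of model can be illustrated with a toy sketch: fit a regression that maps genotype features to measured expression in genomes where both are known, then apply it to a genome whose expression was never measured. Everything below (the ridge regression, the simulated data) is an illustration of the general approach, not the VU team's actual pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated training data: 200 modern-human genomes, 50 genotype
    # features near a gene, one measured expression value per genome.
    X_train = rng.integers(0, 3, size=(200, 50)).astype(float)
    true_w = rng.normal(size=50)
    y_train = X_train @ true_w + rng.normal(scale=0.5, size=200)

    # Ridge regression, closed form: w = (X^T X + alpha*I)^-1 X^T y
    alpha = 1.0
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(50),
                        X_train.T @ y_train)

    # An "ancient" genome: genotype known, expression unknown -> predict it.
    x_ancient = rng.integers(0, 3, size=50).astype(float)
    predicted_expression = float(x_ancient @ w)
    print(round(predicted_expression, 3))
    ```

    The key move is that expression, which cannot be measured directly from fossil DNA, is inferred from sequence features whose relationship to expression was learned in living humans.
    
    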

    The Process of Gene Expression. Credit: Shutterstock

    With their model in hand, the researchers analyzed the expression profiles of nearly 17,000 genes from the Altai Neanderthal. Their model predicts that 766 genes in the Neanderthal genome had a different expression profile than the corresponding genes in modern humans. As it turns out, the Neanderthal versions of these differentially expressed genes were not retained in the human genome after interbreeding took place, suggesting to the researchers that these genes are responsible for key anatomical and physiological differences between modern humans and Neanderthals.

    The VU investigators determined that these 766 differentially expressed genes play roles in reproduction, forming skeletal structures, and the functioning of cardiovascular and immune systems.

    Then, the researchers expanded their analysis to include two other Neanderthal genomes (from the Vindija and Croatian specimens) and the Denisovan genome. The researchers learned that the gene expression profiles of the three Neanderthal genomes were more similar to one another than they were to the gene expression patterns of either modern humans or Denisovans.

    This study clearly demonstrates that significant differences existed in the regulation of gene expression in modern humans, Neanderthals, and Denisovans and that these differences account for biological distinctives between the three hominin species.

    Differences in DNA Methylation

    In another study, researchers from Israel compared gene expression profiles in modern human genomes with those from Neanderthals and Denisovans using a different technique, one that assesses DNA methylation.3 (Methylation of DNA downregulates gene expression, turning genes off.)

    Methylation of DNA influences the degradation process for this biomolecule. Because of this influence, researchers can determine the DNA methylation pattern in ancient DNA by characterizing the damage to the DNA fragments isolated from fossil remains.
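    The intuition can be sketched with a toy example. In ancient DNA, methylated cytosines at CpG sites that undergo deamination damage are read as thymine, so a higher C-to-T conversion rate at a CpG position points to higher methylation (and thus lower expression of the associated gene). The reference sequence and read counts below are invented for illustration; the published method is considerably more sophisticated.

    ```python
    # Toy illustration: infer relative methylation from C->T damage at CpG
    # sites in ancient-DNA sequencing reads. All numbers are hypothetical.
    reference = "ACGTACGTTACGA"

    # For each CpG cytosine position: (reads showing C, reads showing T)
    read_counts = {
        1: (40, 60),   # heavy C->T damage -> likely methylated
        5: (95, 5),    # little damage -> likely unmethylated
        10: (50, 50),
    }

    def ct_rate(c_reads, t_reads):
        """Fraction of reads converted C->T at a CpG cytosine."""
        return t_reads / (c_reads + t_reads)

    for pos, (c, t) in read_counts.items():
        # Only cytosines followed by guanine (CpG sites) are informative.
        assert reference[pos:pos + 2] == "CG"
        print(pos, round(ct_rate(c, t), 2))
    ```

    Comparing such per-site rates between ancient and modern genomes is, in rough outline, how differentially methylated (and hence differentially expressed) regions are identified.
    
    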

    Using this technique, the researchers measured the methylation pattern for genomes of two Neanderthals (Altai and Vindija) and a Denisovan and compared these patterns with genomes recovered from the remains of three modern humans, dating to 45,000, 8,000, and 7,000 years in age, respectively. They discovered 588 genes in modern human genomes with a unique DNA methylation pattern, indicating that these genes are expressed differently in modern humans than in Neanderthals and Denisovans. Among the 588 genes, researchers discovered some that influence the structure of the pelvis, facial morphology, and the larynx.

    The researchers think that differences in gene expression may explain the anatomical differences between modern humans and Neanderthals. They also think that this result indicates that Neanderthals lacked the capacity for speech.

    What Is the Relationship between Modern Humans and Neanderthals?

    These two genetic studies add to the extensive body of evidence from the fossil record, which indicates that Neanderthals are biologically distinct from modern humans. For a variety of reasons, some Christian apologists and Intelligent Design proponents classify Neanderthals and modern humans into a single group, arguing that the two are equivalent. But these two studies comparing gene regulation profiles make it difficult to maintain that perspective.

    Modern Humans, Neanderthals, and the RTB Human Origins Model

    RTB’s human origins model regards Neanderthals (and other hominins) as creatures made by God, without any evolutionary connection to modern humans. These extraordinary creatures walked erect and possessed some level of intelligence, which allowed them to cobble together tools and even adopt some level of “culture.” However, our model maintains that the hominins were not spiritual beings made in God’s image. RTB’s model reserves this status exclusively for modern humans.

    Based on our view, we predict that biological similarities will exist among the hominins and modern humans to varying degrees. In this regard, we consider the biological similarities to reflect shared designs, not a shared evolutionary ancestry. We also expect biological differences because, according to our model, the hominins would belong to different biological groups from modern humans.

    We also predict that significant cognitive differences would exist between modern humans and the other hominins. These differences would be reflected in brain anatomy and behavior (inferred from the archeological record). According to our model, these differences reflect the absence of God’s image in the hominins.

    The results of these two studies affirm both sets of predictions that flow from the RTB human origins model. The differences in gene regulation between modern humans and Neanderthals are precisely what our model predicts. These differences seem to account for the anatomical differences between Neanderthals and modern humans observed in fossil remains.

    The difference in the regulation of genes affecting the larynx is also significant for our model and the idea of human exceptionalism. One of the controversies surrounding Neanderthals relates to their capacity for speech and language. Yet, it is difficult to ascertain from fossil remains if Neanderthals had the anatomical structures needed for the vocalization range required for speech. The differences in the expression profiles for genes that control the development and structure of the larynx in modern humans and Neanderthals suggest that Neanderthals lacked the capacity for speech. This result dovetails nicely with the differences in modern human and Neanderthal brain structure, which suggest that Neanderthals also lacked the neural capacity for language and speech. And, of course, it is significant that there is no conclusive evidence for Neanderthal symbolism in the archeological record.

    With these two innovative genetic studies, the scientific support for human exceptionalism continues to mount. And the wonderful thing about this insight is that it supports the notion that as human beings we are the only ones who bear God’s image and can form a relationship with our Creator.

    Resources

    Behavioral Differences between Humans and Neanderthals

    Biological Differences between Humans and Neanderthals

    Endnotes
    1. Richard M. Sherman and Robert B. Sherman, composers, “The Wonderful Thing about Tiggers” (song), released December 1968.
    2. Laura L. Colbran et al., “Inferred Divergent Gene Regulation in Archaic Hominins Reveals Potential Phenotypic Differences,” Nature Ecology & Evolution 3 (November 2019): 1598–1606, doi:10.1038/s41559-019-0996-x.
    3. David Gokhman et al., “Reconstructing the DNA Methylation Maps of the Neandertal and the Denisovan,” Science 344, no. 6183 (May 2, 2014): 523–27, doi:10.1126/science.1250368; David Gokhman et al., “Extensive Regulatory Changes in Genes Affecting Vocal and Facial Anatomy Separate Modern from Archaic Humans,” bioRxiv, preprint (October 2017), doi:10.1101/106955.
  • Cave Art Tells the Story of Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 05, 2020

    Comic books intrigue me. They are a powerful storytelling vehicle. The combination of snapshot-style imagery, along with narration and dialogue, allows the writer and artist to depict action and emotion in a way that isn’t possible using the written word alone. Comic books make it easy to depict imaginary worlds. And unlike film, comics engage the reader in a deeper, more personal way. The snapshot format requires the reader to make use of their imagination to fill in the missing details. In this sense, the reader becomes an active participant in the storytelling process.

    Figure 1: Speech Bubbles on a Comic Strip Background. Credit: Shutterstock

    In America, comics burst onto the scene in the 1930s, but the oldest comics (at least in Europe) trace their genesis to Rodolphe Töpffer (1799-1846). Considered by many to be “the father of comics,” Töpffer was a Swiss teacher, artist, and author who became well-known for his illustrated books—works that bore similarity to modern-day comics.

     

    Figure 2: Rodolphe Töpffer, Self Portrait, 1840. Credit: Wikipedia

    Despite his renown, Töpffer wasn’t the first comic book writer and artist. That claim to fame belongs to long-forgotten artists from prehistory. In fact, recent work by Australian and Indonesian researchers indicates that comics as a storytelling device date to earlier than 44,000 years ago.

    Seriously!

    These investigators discovered and characterized cave art from a site on the Indonesian island of Sulawesi that depicts a pig and buffalo hunt. Researchers interpret this mural to be the oldest known recorded story1 —a comic book story on a cave wall.

    This find, and others like it, provide important insight into our origins as human beings. From my perspective as a Christian apologist, this discovery is important for another reason. I see it as affirming the biblical teaching about humanity: God made human beings in his image.

    The Find

    Leading up to this discovery, archeologists had already identified and dated art on cave walls in Sulawesi and Borneo. This art, which includes hand stencils and depictions of animals, dates to more than 40,000 years ago and is highly reminiscent of the cave art of comparable age found in Europe.

    Figure 3: Hand Stencils from a Cave in Southern Sulawesi. Credit: Wikipedia.

    In December 2017, an archeologist from Indonesia discovered the hunting mural in a cave (now called Leang Bulu’ Sipong 4) in the southern part of Sulawesi. The panel presents the viewer with an ensemble of pigs and small buffalo (called anoas), endemic to Sulawesi. Most intriguing about the artwork is the depiction of smaller human-like figures with animal features such as tails and snouts. In some instances, the figures appear to be holding spears and ropes. Scholars refer to these human-animal depictions as therianthropes.

    Figure 4: Illustration of a Pig Deer Found in a Cave in Southern Sulawesi. Credit: Wikipedia.

    Dating the Find

    Dating cave art can be notoriously difficult. One approach is to directly date the charcoal pigments used to make the art using radiocarbon methods. Unfortunately, the dates measured by this technique can be suspect because the charcoal used to make the art can be substantially older than the artwork itself.

    Recently, archeologists have developed a new approach to date cave art. This method measures the levels of uranium and thorium in calcite deposits that form on top of the artwork. Calcite is continuously deposited on cave walls due to hydrological activity in the cave. As water runs down the cave walls, calcium carbonate precipitates onto the cave wall surface. Trace amounts of radioactive uranium are included in the calcium carbonate precipitates. This uranium decays into thorium, hence the ratio of uranium to thorium provides a measure of the calcite deposit’s age and, in turn, yields a minimum age for the artwork.
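    The underlying arithmetic can be sketched in simplified form. Ignoring initial detrital thorium and the 234U correction that real studies apply, 230Th grows into a closed-system calcite deposit according to R = 1 − e^(−λt), where R is the 230Th/238U activity ratio and λ is the 230Th decay constant. Inverting that equation gives a minimum age. The ratio used below is a hypothetical value chosen to land near the reported Sulawesi result.

    ```python
    import math

    HALF_LIFE_TH230 = 75_584  # years, half-life of thorium-230
    LAMBDA_TH230 = math.log(2) / HALF_LIFE_TH230

    def age_from_activity_ratio(th_u_ratio):
        """Minimum age (years) from the 230Th/238U activity ratio, using
        the simplified ingrowth equation R = 1 - exp(-lambda * t).
        Real studies also correct for uranium-234 excess and for any
        detrital thorium present when the calcite formed."""
        return -math.log(1.0 - th_u_ratio) / LAMBDA_TH230

    # A crust with a ratio near 0.33 corresponds to roughly the reported
    # ~44,000-year minimum age for the Sulawesi hunting mural.
    print(round(age_from_activity_ratio(0.33)))
    ```

    Because the calcite formed on top of the painting, any age computed this way is a floor, not a ceiling, for the age of the art beneath it.
    
    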

    To be clear, this dating method has been the subject of much controversy. Some archeologists argue that the technique is unreliable because the calcite deposits are an open system. Once a calcite deposit forms, water continues to flow over its surface, partially dissolving the calcite along with the trace amounts of uranium and thorium it contains. Because uranium is more soluble than thorium, it leaches out preferentially, leaving behind an artificially high level of thorium. So, when the uranium-thorium ratio is measured, the cave art may appear older than it actually is.

    To ensure that the method worked as intended, the researchers only dated calcite deposits that weren’t porous (which is a sign that they have been partially re-dissolved) and they made multiple measurements from the surface of the deposit toward the interior. If this sequence of measurements produced a chronologically consistent set of ages, the researchers felt comfortable with the integrity of the calcite samples. Using this method, the researchers determined that the cave painting of the pig and buffalo hunt dates to older than 43,900 years.

    Corroborating evidence gives the archeologists added confidence in this result. For example, independently dated archeological finds at the Sulawesi cave site indicate that modern humans were in the caves between 40,000 and 50,000 years ago, in agreement with the measured age of the cave art.

    The research team also noted that the animals and the therianthropes in the mural appear to have been created at the same time. This point matters because therianthropes don’t appear in the cave paintings found in Europe until around 10,000 years ago, raising the possibility that the therianthropes were added to the painting millennia after the animals were painted onto the cave wall. However, the researchers don’t think this is the case, for at least three reasons. First, the same artistic style was used to depict the animals and the therianthropes. Second, the technique and pigment used to create the figures are the same. And third, the degree of weathering is the same throughout the panel. None of these features would be expected if the therianthropes were a late addition to the mural.

    Interpreting the Find

    The researchers find the presence of therianthropes in 44,000+ year-old cave art significant. It indicates that humans in Sulawesi not only possessed the capacity for symbolism, but, more importantly, had the ability to conceive of things that did not exist in the material world. That is to say, they had a sense of the supernatural.

    Some archeologists believe that the cave art reflects shamanic beliefs and visions. If this is the case, then it suggests that the therianthropes in the painting may reflect spirit animal helpers who ensured the success of the hunt. The size of the therianthropes supports this interpretation. These animal-human hybrids are depicted as much smaller than the pigs and buffalo. On the island of Sulawesi, both the pig and buffalo species in question were much smaller than modern humans.

    Because this artwork depicts a hunt involving therianthropes, the researchers see rich narrative content in the display. It seems to tell a story that likely reflected the mythology of the Sulawesi people. You could say it’s a comic book on a cave wall.

    Relationship between Cave Art in Europe and Asia

    Cave art in Europe has been well-known and carefully investigated by archeologists and anthropologists for nearly a century. Now archeologists have access to a growing archeological record in Asia.

    Art found at these sites is of the same quality and character as the European cave art. However, it is older. This discovery means that modern humans most likely had the capacity to make art even before beginning their migrations around the world from out of Africa (around 60,000 years ago).

    As noted, the discovery of therianthropes at 44,000+ years in age in Sulawesi is intriguing because these types of figures don’t appear in cave art in Europe until around 10,000 years ago. But archeologists have discovered the lion-man statue in a cave site in Germany. This artifact, which depicts a lion-human hybrid, dates to around 40,000 years in age. In other words, therianthropes were part of the artwork of the first Europeans, which indicates that modern humans in Europe had the capacity to envision imaginary worlds and held a belief in a supernatural realm.

    Capacity for Art and the Image of God

    For many people, our ability to create and contemplate art serves as a defining feature of humanity—a quality that reflects our capacity for sophisticated cognitive processes. So, too, does our capacity for storytelling. As humans, we seem to be obsessed with both. Art and telling stories are manifestations of symbolism and open-ended generative capacity. Through art (as well as music and language), we express and communicate complex ideas and emotions. We accomplish this feat by representing the world—and even ideas—with symbols. And, we can manipulate symbols, embedding them within one another to create alternate possibilities.

    As a Christian, I believe that our capacity to make art and to tell stories is an outworking of the image of God. As such, the appearance of art (as well as other artifacts that reflect our capacity for symbolism) serves as a diagnostic for the image of God in the archeological record. That record provides the means to characterize the mode and tempo of the appearance of behavior that reflect the image of God. If the biblical account of human origins is true, then I would expect that artistic expression should be unique to modern humans and should appear at the same time that we make our first appearance as a species.

    So, when did art (and symbolic capacity) first appear? Did art emerge suddenly? Did it appear gradually? Is artistic expression unique to human beings or did other hominins, such as Neanderthals, produce art too? Answers to these questions are vital to our case for human exceptionalism and, along with it, the image of God.

    When Did the Capacity for Art First Appear?

    Again, the simultaneous appearance of cave art in Europe and Asia indicates that the capacity for artistic expression (and, hence, symbolism) dates back to the time in prehistory before humans began to migrate around the world from out of Africa (around 60,000 years ago). This conclusion gains support from the recent discovery of a silcrete flake from a layer in the Blombos Cave that is about 73,000 years old. (The Blombos Cave is located around 150 miles east of Cape Town, South Africa.) A portion of an abstract drawing is etched into this flake.2

    Linguist Shigeru Miyagawa believes that artistic expression emerged in Africa earlier than 125,000 years ago. Archeologists have discovered rock art produced by the San people that dates to 72,000 years ago. This art shares certain elements with European cave art. Because the San diverged from the modern human lineage around 125,000 years ago, the ancestral people groups that gave rise to both lines must have possessed the capacity for artistic expression before that time.3

    It is also significant that the globular brain shape of modern humans first appears in the fossil record around 130,000 years ago. As I have written about previously, globular brain shape allows expansion of the parietal lobe, which is responsible for many of our capacities:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    In other words, the evidence indicates that our capacity for symbolism emerged at the time that our species first appears in the fossil record. Some archeologists claim that Neanderthals displayed the capacity for symbolism as well. If this claim proves true, then human beings don’t stand apart from other creatures. We aren’t special.

    Did Neanderthals Have the Capacity to Create Art?

    Claims of Neanderthal artistic expression abound in popular literature and appear in scientific journals. However, a number of studies question these claims. When taken as a whole, the evidence indicates that Neanderthals were cognitively inferior to modern humans.

    So, when the evidence is considered as a whole, only human beings (modern humans) possess the capability for symbolism, open-ended generative capacity, and theory of mind—in my view, scientific descriptors of the image of God. The archeological record affirms the biblical view of human nature. It is also worth noting that the origin of our symbolic capacity seems to arise at the same time that modern humans appear in the fossil record, an observation I would expect given the biblical account of human origins.

    Like the comics that intrigue me, this narrative resonates on a personal level. It seems as if the story told in the opening pages of the Old Testament is true.

    Resources

    Cave Art and the Image of God

    The Modern Human Brain

    Could Neanderthals Make Art?

    Endnotes
    1. Maxime Aubert et al., “Earliest Hunting Scene in Prehistoric Art,” Nature 576 (December 11, 2019): 442–45, doi:10.1038/s41586-019-1806-y.
    2. Christopher S. Henshilwood et al., “An Abstract Drawing from the 73,000-Year-Old Levels at Blombos Cave, South Africa,” Nature 562 (September 12, 2018): 115–18, doi:10.1038/s41586-018-0514-3.
    3. Shigeru Miyagawa, Cora Lesure, and Vitor A. Nóbrega, “Cross-Modality Information Transfer: A Hypothesis about the Relationship among Prehistoric Cave Paintings, Symbolic Thinking, and the Emergence of Language,” Frontiers in Psychology 9 (February 20, 2018): 115, doi:10.3389/fpsyg.2018.00115.
  • But Do Watches Replicate? Addressing a Logical Challenge to the Watchmaker Argument

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 22, 2020

    Were things better in the past than they are today? It depends who you ask.

    Without question, there are some things that were better in years gone by. And, clearly, there are some historical attitudes and customs that, today, we find hard to believe our ancestors considered to be an acceptable part of daily life.

    It isn’t just attitudes and customs that change over time. Ideas change, too—some for the better, some for the worse. Consider the way doing science has evolved, particularly the study of biological systems. Was the way we approached the study of biological systems better in the past than it is today?

    It depends who you ask.

    As an old-earth creationist and intelligent design proponent, I think the approach biologists took in the past was better than today for one simple reason. Prior to Darwin, teleology was central to biology. In the late 1700s and early to mid-1800s, life scientists viewed biological systems as the product of a Mind. Consequently, design was front and center in biology.

    As part of the Darwinian revolution, teleology was cast aside. Mechanism replaced agency and design was no longer part of the construct of biology. Instead of reflecting the purposeful design of a Mind, biological systems were now viewed as the outworking of unguided evolutionary mechanisms. For many people in today’s scientific community, biology is better for it.

    Prior to Darwin, the ideas shaped by thinkers (such as William Paley) and biologists (such as Sir Richard Owen) took center stage. Today, their ideas have been abandoned and are often lampooned.

    But, advances in my areas of expertise (biochemistry and origins-of-life research) justify a return to the design hypothesis, indicating that there may well be a role for teleology in biology. In fact, as I argue in my book The Cell’s Design, the latest insights into the structure and function of biomolecules bring us full circle to the ideas of William Paley (1743-1805), revitalizing his Watchmaker argument for God’s existence.

    In my view, many examples of molecular-level biomachinery stand as strict analogs to human-made machinery in terms of architecture, operation, and assembly. The biomachines found in the cell’s interior reveal a diversity of form and function that mirrors the diversity of designs produced by human engineers. The one-to-one relationship between the parts of man-made machines and the molecular components of biomachines is startling (e.g., the flagellum’s hook). I believe Paley’s case continues to gain strength as biochemists continue to discover new examples of biomolecular machines.

    The Skeptics’ Challenge

    Despite the powerful analogy that exists between machines produced by human designers and biomolecular machines, many skeptics continue to challenge the revitalized watchmaker argument on logical grounds by arguing in the same vein as David Hume.1 These skeptics assert that significant and fundamental differences exist between biomachines and human creations.

    In a recent interaction on Twitter, a skeptic raised just such an objection. Here is what he wrote:

    “Do [objects and machines designed by humans] replicate with heritable variation? Bad analogy, category mistake. Same one Paley made with his watch on the heath centuries ago.”

    In other words, biological systems replicate, whereas devices and artifacts made by human beings don’t. This difference is fundamental. Such a dissimilarity is so significant that it undermines the analogy between biological systems (in general) and biomolecular machines (specifically) and human designs, invalidating the conclusion that life must stem from a Mind.

    This is not the first time I have encountered this objection. Still, I don’t find it compelling because it fails to take into account man-made machines that do, indeed, replicate.

    Von Neumann’s Universal Self-Constructor

    In the 1940s, mathematician, physicist, and computer scientist John von Neumann (1903–1957) designed a hypothetical machine called a universal constructor. This machine is a conceptual apparatus that can take materials from the environment and build any machine, including itself. The universal constructor requires instructions to build the desired machines and to build itself. It also requires a supervisory system that can switch back and forth between using the instructions to build other machines and copying the instructions prior to the replication of the universal constructor.
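    The two-step logic von Neumann described, namely building whatever the instruction tape specifies and then separately copying the tape into the offspring, can be sketched abstractly. The snippet below is a purely illustrative toy (the "machines" are just dictionaries), not the cellular-automaton construction von Neumann actually worked out.

    ```python
    # Toy sketch of von Neumann's universal-constructor logic: a machine is
    # a constructor paired with an instruction tape; replication means
    # (1) build what the tape describes, then (2) copy the tape itself.
    def construct(tape):
        """Build the 'machine' the instruction tape describes."""
        return {"parts": list(tape)}

    def replicate(machine_tape):
        # Step 1: use the tape as building instructions.
        offspring = construct(machine_tape)
        # Step 2 (supervisory mode): copy the tape into the offspring,
        # so the offspring can replicate in turn.
        offspring["tape"] = list(machine_tape)
        return offspring

    parent_tape = ["rotor", "stator", "controller"]
    child = replicate(parent_tape)
    grandchild = replicate(child["tape"])
    print(grandchild["parts"] == child["parts"])  # lineage breeds true
    ```

    The crucial design point, which von Neumann identified years before the structure of DNA was known, is that the instructions must be used in two distinct modes: interpreted (to build) and blindly copied (to inherit).
    
    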

    Von Neumann’s universal constructor is a conceptual apparatus, but today researchers are actively trying to design and build self-replicating machines.2 Much work remains before self-replicating machines are a reality. Nevertheless, machines may one day be able to reproduce, making copies of themselves. To put it another way, reproduction isn’t necessarily a quality that distinguishes machines from biological systems.

    It is interesting to me that a description of von Neumann’s universal constructor bears remarkable similarity to a description of a cell. In fact, in the context of the origin-of-life problem, astrobiologists Paul Davies and Sara Imari Walker noted the analogy between the cell’s information systems and von Neumann’s universal constructor.3 Davies and Walker think that this analogy is key to solving the origin-of-life problem. I would agree. However, Davies and Walker support an evolutionary origin of life, whereas I maintain that the analogy between cells and von Neumann’s universal constructor adds vigor to the revitalized Watchmaker argument and, in turn, the scientific case for a Creator.

    In other words, the reproduction objection to the Watchmaker argument has little going for it. Self-replication is not the basis for viewing biomolecular machines as fundamentally dissimilar to machines created by human designers. Instead, self-replication stands as one more machine-like attribute of biochemical systems. It also highlights the sophistication of biological systems compared to systems produced by human designers. We are a far distance away from creating machines that are as sophisticated as the machines found inside the cell. Nevertheless, as we continue to move in that direction, I think the case for a Creator will become even more compelling.

    Who knows? With insights such as these maybe one day we will return to the good old days of biology, when teleology was paramount.

    Resources

    Biomolecular Machines and the Watchmaker Argument

    Responding to Challenges to the Watchmaker Argument

    Endnotes
    1. “Whenever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty.” David Hume, “Dialogues Concerning Natural Religion,” in Classics of Western Philosophy, 3rd ed., ed. Steven M. Cahn (1779; repr., Indianapolis: Hackett, 1990), 880.
    2. For example, Daniel Mange et al., “Von Neumann Revisited: A Turing Machine with Self-Repair and Self-Reproduction Properties,” Robotics and Autonomous Systems 22 (1997): 35–58, https://doi.org/10.1016/S0921-8890(97)00015-8; Jean-Yves Perrier, Moshe Sipper, and Jacques Zahnd, “Toward a Viable, Self-Reproducing Universal Computer,” Physica D: Nonlinear Phenomena 97, no. 4 (October 15, 1996): 335–52, https://doi.org/10.1016/0167-2789(96)00091-7; Umberto Pesavento, “An Implementation of von Neumann’s Self-Reproducing Machine,” Artificial Life 2, no. 4 (Summer 1995): 337–54, https://doi.org/10.1162/artl.1995.2.4.337.
    3. Sara Imari Walker and Paul C. W. Davies, “The Algorithmic Origins of Life,” Journal of the Royal Society Interface 10 (2013), doi:10.1098/rsif.2012.0869.
  • The Flagellum’s Hook Connects to the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 08, 2020

    What would you say is the most readily recognizable scientific icon? Is it DNA, a telescope, or maybe a test tube?


    Figure 1: Scientific Icons. Image credit: Shutterstock

    Marketing experts recognize the power of icons. When used well, icons prompt consumers to instantly identify a brand or product. They can also communicate a powerful message with a single glance.

    Though many skeptics question whether it’s science at all, the intelligent design movement has identified a powerful icon that communicates its message. Today, when most people see an image of the bacterial flagellum, they immediately think: Intelligent Design.

    This massive protein complex powerfully communicates sophisticated engineering that could only come from an Intelligent Agent. Along these lines, it serves as a powerful piece of evidence for a Creator’s handiwork. Careful study of its molecular architecture and operation provides detailed evidence that an Intelligent Agent must be responsible for biochemical systems and, hence, the origin of life. And, as it turns out, the more we learn about the bacterial flagellum, the more evident it becomes that a Creator must have played a role in the origin and design of life, at least at the biochemical level, as new research from Japan illustrates.1

    The Bacterial Flagellum

    This massive protein complex looks like a whip extending from the bacterial cell surface. Some bacteria have only a single flagellum, while others possess several. Rotation of the flagellum (or flagella) allows the bacterial cell to navigate its environment in response to various chemical signals.


    Figure 2: Typical Bacteria with Flagella. Image credit: Shutterstock

    An ensemble of 30 to 40 different proteins makes up the typical bacterial flagellum. These proteins function in concert as a literal rotary motor. The flagellum’s components include a rotor, stator, drive shaft, bushing, universal joint, and propeller. It is essentially a molecular-sized electrical motor directly analogous to human-produced rotary motors. The rotation is powered by positively charged hydrogen ions flowing through the motor proteins embedded in the inner membrane.


    Figure 3: The Bacterial Flagellum. Image credit: Wikipedia

    The Bacterial Flagellum and the Revitalized Watchmaker Argument

    Typically, when intelligent design proponents/creationists use the bacterial flagellum to make the case for a Creator, they focus the argument on its irreducibly complex nature. I prefer a different tack. I like to emphasize the eerie similarity between rotary motors created by human designers and nature’s bacterial flagella.

    The bacterial flagellum is just one of a large number of protein complexes with machine-like attributes. (I devote an entire chapter to biomolecular machines in my book The Cell’s Design.) Collectively, these biomolecular machines can be deployed to revitalize the Watchmaker argument.

    Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so too, life requires a Creator. Following Paley’s line of reasoning, a machine is emblematic of systems produced by intelligent agents. Biomolecular machines display the same attributes as human-crafted machines. Therefore, if the work of intelligent agents is necessary to explain the genesis of machines, shouldn’t the same be true for biochemical systems?

    Skeptics inspired by atheist philosopher David Hume have challenged this simple, yet powerful, analogy. They argue that the analogy would be compelling only if there is a high degree of similarity between the objects that form the analogy. Skeptics have long argued that biochemical systems and machines are too dissimilar to make the Watchmaker argument work.

    However, the striking similarity between the machine parts of the bacterial flagellum and human-made machines causes this objection to evaporate. New work on flagella by Japanese investigators lends yet more support to the Watchmaker analogy.

    New Insights into the Structure and Function of the Flagellum’s Universal Joint

    The flagellum’s universal joint (sometimes referred to as the hook) transfers the torque generated by the motor to the propeller. The research team wanted to develop a deeper understanding of the relationship between the molecular structure of the hook and how the structural features influence its function as a universal joint.

    Composed of nearly 100 copies (monomers) of a protein called FlgE, the hook is a curved, tube-like structure with a hollow interior. FlgE monomers stack on top of each other to form a protofilament. Eleven protofilaments organize to form the hook’s tube, with the long axis of each protofilament aligning with the long axis of the hook.

    Each FlgE monomer consists of three domains, called D0, D1, and D2. The researchers discovered that when the FlgE monomers stack to form a protofilament, the D0, D1, and D2 domains of each of the monomers align along the length of the protofilament to form three distinct regions in the hook. These layers have been labeled the tube layer, the mesh layer, and the spring layer.
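    For readers who think in code, the architecture described above can be sketched as a simple data structure: eleven protofilaments, each a stack of FlgE monomers whose three domains map onto the hook’s three layers. This is only a toy illustration; the counts and names come from the description above (11 protofilaments, ~100 FlgE copies, domains D0–D2), and the exact number of monomers per protofilament here is a placeholder chosen to land near 100.

    ```python
    from dataclasses import dataclass

    # Per the description above: each FlgE domain aligns into a distinct layer.
    DOMAIN_TO_LAYER = {"D0": "tube", "D1": "mesh", "D2": "spring"}

    @dataclass
    class FlgE:
        """One monomer of the hook protein, with its three domains."""
        domains: tuple = ("D0", "D1", "D2")

    def build_hook(n_protofilaments: int = 11, monomers_per_filament: int = 9):
        """Stack FlgE monomers into protofilaments; 11 x 9 = 99 monomers,
        close to the ~100 copies cited above (the per-filament count is a
        placeholder for illustration)."""
        return [
            [FlgE() for _ in range(monomers_per_filament)]
            for _ in range(n_protofilaments)
        ]

    hook = build_hook()
    total_monomers = sum(len(pf) for pf in hook)
    layers = {DOMAIN_TO_LAYER[d] for pf in hook for m in pf for d in m.domains}
    ```

    Running the sketch confirms the bookkeeping: 11 protofilaments, 99 monomers, and exactly three layers (tube, mesh, and spring), mirroring the layered organization the researchers describe.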

    During the rotation of the flagellum, the protofilaments experience compression and extension. The movement of the domains, which changes their spatial arrangement relative to one another, mediates the compression and extension. These domain movements allow the hook to function as a universal joint that maintains a rigid tube shape against a twisting “force,” while concurrently transmitting torque from the motor to the flagellum’s filament as it bends along its axis.

    Regardless of one’s worldview, it is hard not to marvel at the sophisticated and elegant design of the flagellum’s hook!

    The Bacterial Flagellum and the Case for a Creator

    If the Watchmaker argument is valid, it seems reasonable to expect that the more we learn about protein complexes, such as the bacterial flagellum, the more machine-like they should appear. This work by the Japanese biochemists bears out that expectation. The more we characterize biomolecular machines, the more reason we have to think that life stems from a Creator’s handiwork.

    The dynamic properties of the hook assembly add to the Watchmaker argument (when applied to the bacterial flagellum). This structure is much more sophisticated and ingenious than the typical universal joint crafted by human designers. The elegance and ingenuity of the hook are exactly the attributes I would expect if a Creator played a role in the origin and design of life.

    Message received, loud and clear.

    Resources

    The Bacterial Flagellum and the Case for a Creator

    Can Intelligent Design Be Part of the Scientific Construct?

    Endnotes
    1. Takayuki Kato et al., “Structure of the Native Supercoiled Flagellar Hook as a Universal Joint,” Nature Communications 10 (2019): 5295, doi:10.1038/s4146.
  • Genome Code Builds the Case for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 18, 2019

    A few days ago, I was doing a bit of Christmas shopping for my grandkids and I happened across some really cool construction kits, designed to teach children engineering principles while encouraging imaginative play. For those of you who still have a kid or two on your Christmas list, here are some of the products that caught my eye:

    These building block sets are a far cry from the simple Lego kits I played with as a kid.

    As cool as these construction toys may be, they don’t come close to the sophisticated construction kit cells use to build the higher-order structures of chromosomes. This point is powerfully illustrated by the insights of Italian investigator Giorgio Bernardi. Over the course of the last several years, Bernardi’s research teams have uncovered design principles that account for chromosome structure, a set of rules that he refers to as the genome code.1

    To appreciate these principles and their theological implications, a little background information is in order. (For those readers familiar with chromosome structure, skip ahead to The Genome Code.)

    Chromosomes

    DNA and proteins interact to make chromosomes. Each chromosome consists of a single DNA molecule wrapped around a series of globular protein complexes. These complexes repeat to form a supramolecular structure resembling a string of beads. Biochemists refer to the “beads” as nucleosomes.


    Figure 1: Nucleosome Structure. Image credit: Shutterstock

    The chain of nucleosomes further coils to form a structure called a solenoid. In turn, the solenoid condenses to form higher-order structures that constitute the chromosome.


    Figure 2: Chromosome Structure Image credit: Shutterstock

    Between cell division events (called the interphase of the cell cycle), the chromosome exists in an extended diffuse form that is not readily detectable when viewed with a microscope. Just prior to and during cell division, the chromosome condenses to form its readily recognizable compact structures.

    Biologists have discovered that there are two distinct regions (labeled euchromatin and heterochromatin) for chromosomes in the diffuse state. Euchromatin is resistant to staining with the dyes that help researchers view chromosomes with a microscope. On the other hand, heterochromatin stains readily. Biologists believe that heterochromatin is more tightly packed (and, hence, more readily stained) than euchromatin. They have also learned that heterochromatin associates with the nuclear envelope.


    Figure 3: Structure of the Nucleus Showing the Distribution of Euchromatin and Heterochromatin. Image credit: Wikipedia

    The Genome Code

    Historically, biologists have viewed chromosomes as consisting of compositionally distinct units called isochores. In vertebrate genomes, five isochores exist (L1, L2, H1, H2, and H3). The isochores differ in their composition of guanine- and cytosine-containing deoxyribonucleotides (two of the four building blocks of DNA). GC composition increases from L1 to H3. Gene density also increases, with the H3 isochore possessing the greatest number of genes. On the other hand, the length of compositionally homogeneous DNA segments decreases from L1 to H3.
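    To make the GC-composition idea concrete, here is a minimal sketch of how a DNA segment might be binned into an isochore family by its fraction of G and C bases. The cutoff values are illustrative placeholders for demonstration, not Bernardi’s published boundaries; only the ordering (GC fraction increasing from L1 to H3) comes from the description above.

    ```python
    def gc_content(seq: str) -> float:
        """Fraction of G and C bases in a DNA sequence."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    # Illustrative cutoffs only (hypothetical): the GC fraction increases
    # from the L1 family up to H3, as described above.
    ISOCHORE_CUTOFFS = [(0.37, "L1"), (0.41, "L2"), (0.46, "H1"), (0.53, "H2")]

    def classify_isochore(seq: str) -> str:
        """Assign a sequence to an isochore family by GC content."""
        gc = gc_content(seq)
        for upper_bound, family in ISOCHORE_CUTOFFS:
            if gc <= upper_bound:
                return family
        return "H3"

    print(classify_isochore("ATATATATGC"))  # low GC, falls in L1
    print(classify_isochore("GCGCGCGCAT"))  # high GC, falls in H3
    ```

    A real analysis would, of course, slide a window along genomic sequence and classify each window, but the toy version captures the classification logic.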

    Bernardi and his collaborators have developed evidence that the isochores reflect a fundamental unit of chromosome organization. The H isochores correspond to GC-rich euchromatin (containing most of the genes) and the L isochores correspond to GC-poor heterochromatin (characterized by gene deserts).

    Bernardi’s research teams have demonstrated that the two groups of isochores are characterized by different distributions of DNA sequence elements. GC-poor isochores contain a disproportionately high level of oligo A sequences while GC-rich isochores harbor a disproportionately high level of oligo G sequences. These two different types of DNA sequence elements form stiff structures that mold the overall three-dimensional architecture of chromosomes. For example, oligo A sequences introduce curvature to the DNA double helix. This topology allows the double helix to wrap around the protein core that forms nucleosomes. The oligo G sequence elements adopt a topology that weakens binding to the proteins that form the nucleosome core. As Bernardi points out, “There is a fundamental link between DNA structure and chromatin structure, the genomic code.”2

    In other words, the genomic code refers to a set of DNA sequence elements that:

    1. Directly encodes and molds chromosome structure (while defining nucleosome binding),
    2. Is pervasive throughout the genome, and
    3. Overlaps the genetic code by constraining sequence composition and gene structure.

    Because of the existence of the genomic code, variations in DNA sequence caused by mutations will alter the structure of chromosomes and lead to deleterious effects.

    The bottom line: most of the genomic sequence plays a role in establishing the higher-order structures necessary for chromosome formation.

    Genomic Code Challenges the Junk DNA Concept

    According to Bernardi, the discovery of the genomic code explains the high levels of noncoding DNA sequences in genomes. Many people view such sequences as vestiges of an evolutionary history. Because of the existence and importance of the genomic code, the vast proportion of noncoding DNA found in vertebrate genomes must be viewed as functionally vital. According to Bernardi:

    Ohno, mostly focusing on pseudo-genes, proposed that non-coding DNA was “junk DNA.” Doolittle and Sapienza and Orgel and Crick suggested the idea of “selfish DNA,” mainly involving transposons visualized as molecular parasites rather than having an adaptive function for their hosts. In contrast, the ENCODE project claimed that the majority (~80%) of the genome participated “in at least one biochemical RNA-and/or chromatin-associated event in at least one cell type.”…At first sight, the pervasive involvement of isochores in the formation of chromatin domains and spatial compartments seems to leave little or no room for “junk” or “selfish” DNA.3

    The ENCODE Project

    Over the last decade or so, ENCODE Project scientists have been seeking to identify the functional DNA sequence elements in the human genome. The most important landmark for the project came in the fall of 2012 when the ENCODE Project reported phase II results. (Currently, ENCODE is in phase IV.) To the surprise of many, the project reported that around 80 percent of the human genome displays biochemical activity—hence, function—with many scientists anticipating that that percentage would increase as phases III and IV moved toward completion.

    The ENCODE results have generated quite a bit of controversy, to say the least. Some researchers accept the ENCODE conclusions. Others vehemently argue that the conclusions fly in the face of the evolutionary paradigm and, therefore, can’t be valid. Of course, if the ENCODE Project conclusions are correct, then it becomes a boon for creationists and intelligent design advocates.

    One of the most prominent complaints about the ENCODE conclusions relates to the way the consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. These critics assert that, at most, about ten percent of the human genome is truly functional, with the remainder of the activity reflecting biochemical noise and experimental artifacts.

    However, as Bernardi points out, his work (independent of the ENCODE Project) affirms the project’s conclusions. In this case, the so-called junk DNA plays a critical role in molding the structures of chromosomes and must be considered functional.

    Function for “Junk DNA”

    Bernardi’s work is not the first to recognize pervasive function in noncoding DNA. Other researchers have identified additional functional attributes of these sequences. To date, researchers have identified at least five distinct functional roles that noncoding DNA plays in genomes.

    1. Helps in gene regulation
    2. Functions as a mutational buffer
    3. Forms a nucleoskeleton
    4. Serves as an attachment site for mitotic apparatus
    5. Dictates three-dimensional architecture of chromosomes

    A New View of Genomes

    These types of insights are forcing us to radically rethink our view of the human genome. It appears that genomes are incredibly complex, sophisticated biochemical systems, and that most of their sequences serve useful and necessary functions.

    We have come a long way from the early days of the human genome project. Just 15 years ago, many scientists estimated that around 95 percent of the human genome consists of junk. That acknowledgment seemingly provided compelling evidence that humans must be the product of an evolutionary history. Today, the evidence suggests that the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. It is quite possible that most of the human genome is functional.

    For creationists and intelligent design proponents, this changing view of the human genome provides reasons to think that it is the handiwork of our Creator. A skeptic might wonder why a Creator would make genomes littered with so much junk. But if a vast proportion of genomes consists of functional sequences, then this challenge no longer carries weight and it becomes more and more reasonable to interpret genomes from within a creation model/intelligent design framework.

    What a Christmas gift!

    Resources

    Junk DNA Regulates Gene Expression

    Junk DNA Serves as a Mutational Buffer

    Junk DNA Serves a Nucleoskeletal Role

    Junk DNA Plays a Role in Cell Division

    ENCODE Project

    Studies that Affirm the ENCODE Results

    Endnotes
    1. Giorgio Bernardi, “The Genomic Code: A Pervasive Encoding/Molding of Chromatin Structures and a Solution of the ‘Non-Coding DNA’ Mystery,” BioEssays 41, no. 12 (November 8, 2019), doi:10.1002/bies.201900106.
    2. Bernardi, “The Genomic Code.”
    3. Bernardi, “The Genomic Code.”
  • Mutations, Cancer, and the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 11, 2019

    Cancer. Perhaps no other word evokes more fear, anger, and hopelessness.

    It goes without saying that cancer is an insidious disease. People who get cancer often die way too early. And even though a cancer diagnosis is no longer an immediate death sentence—thanks to biomedical advances—there are still many forms of cancer that are difficult to manage, let alone effectively treat.

    Cancer also causes quite a bit of consternation for those of us who use insights from science to make a case for a Creator. From my vantage point, one of the most compelling reasons to think that a Creator exists and played a role in the origin and design of life is the elegant, sophisticated, and ingenious designs of biochemical systems. And yet, when I share this evidence with skeptics—and even seekers—I am often met with resistance in the form of the question: What about cancer?

    Why Would God Create a World Where Cancer Is Possible?

    In effect, this question typifies one of the most common, and significant, objections to the design argument. If a Creator is responsible for the designs found in biochemistry, then why are so many biochemical systems seemingly flawed, inelegant, and poorly designed?

    The challenge cancer presents for the design argument carries an added punch. It’s one thing to cite inefficiency of protein synthesis or the error-prone nature of the rubisco enzyme, but it’s quite another to describe the suffering of a loved one who died from cancer. There’s an emotional weight to the objection. These deaths feel horribly unjust.

    Couldn’t a Creator design biochemistry so that a disease as horrific as cancer would never be possible—particularly if this Creator is all-powerful, all-knowing, and all-good?

    I think it’s possible to present a good answer to the challenge that cancer (and other so-called bad designs) poses for the design argument. Recent insights published by a research duo from Cambridge University in the UK help make the case.1

    A Response to the Bad Designs in Biochemistry and Biology

    Because the “bad designs” challenge is so significant (and so frequently expressed), I devoted an entire chapter in The Cell’s Design to addressing the apparent imperfections of biochemical systems. My goal in that chapter was to erect a framework that comprehensively addresses this pervasive problem for the design argument.

    In the face of this challenge it is important to recognize that many so-called biochemical flaws are not genuine flaws at all. Instead, they arise as the consequences of trade-offs. In their cellular roles, many biochemical systems face two (or more) competing objectives. Effectively managing these opposing objectives means that it is impossible for every aspect of the system to perform at an optimal level. Some features must be carefully rendered suboptimal to ensure that the overall system performs robustly under a wide range of conditions.

    Cancer falls into this category. It is not a consequence of flawed biochemical designs. Instead, cancer reflects a trade-off between DNA repair and cell survival.

    DNA Damage and Cancer

    The etiology (cause) of most cancers is complex. While about 10 percent of cancers have a hereditary basis, the vast majority result from mutations to DNA caused by environmental factors.

    Some of the damage to DNA stems from endogenous (internal) factors, such as water and oxygen in the cell. These materials cause hydrolysis and oxidative damage to DNA, respectively. Both types of damage can introduce mutations into this biomolecule. Exogenous chemicals (genotoxins) from the environment can also interact with DNA and cause damage leading to mutations. So does exposure to ultraviolet radiation and radioactivity from the environment.

    Infectious agents such as viruses can also cause cancer. Again, these infectious agents cause genomic instability, which leads to DNA mutations.


    Figure: Tumor Formation Process. Image credit: Shutterstock

    In effect, DNA mutations are an inevitable consequence of the laws of nature, specifically the first and second laws of thermodynamics. These laws make possible the chemical structures and operations necessary for life to even exist. But, as a consequence, these same life-giving laws also undergird chemical and physical processes that damage DNA.

    Fortunately, cells have the capacity to detect and repair damage to DNA. These DNA repair pathways are elaborate and sophisticated. They are the type of biochemical features that seem to support the case for a Creator. DNA repair pathways counteract the deleterious effects of DNA mutation by correcting the damage and preventing the onset of cancer.

    Unfortunately, these DNA repair processes function incompletely. They fail to fully compensate for all of the damage that occurs to DNA. Consequently, over time, mutations accrue in DNA, leading to the onset of cancer. The inability of the cell’s machinery to repair all of the mutation-causing DNA damage and, ultimately, protect humans (and other animals) from cancer is precisely the thing that skeptics and seekers alike point to as evidence that counts against intelligent design.

    Why would a Creator make a world where cancer is possible and then design cancer-preventing processes that are only partially effective?

    Cancer: The Result of a Trade-Off

    Even though mutations to DNA cause cancer, it is rare that a single mutation leads to the formation of a malignant cell type and, subsequently, tumor growth. Biomedical researchers have discovered that the onset of cancer involves a series of mutations to highly specific genes (dubbed cancer genes). The mutations that cause cells to transform into cancer cells are referred to as driver mutations.

    Researchers have also learned that most cells in the body harbor a vast number of mutations that have little or no biological consequence. These mutations are called passenger mutations. As it turns out, there are thousands of passenger mutations in a typical cancer cell and only about ten driver mutations to so-called cancer genes.

    Biomedical investigators have also learned that many normal cells harbor both passenger and driver mutations without ever transforming. (It appears that other factors unrelated to DNA mutation play a role in causing a cancer cell to undergo extensive clonal expansion, leading to the formation of a tumor.)
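    The driver-versus-passenger distinction can be illustrated with a toy Monte Carlo sketch: scatter mutations at random over a genome and count how many happen to land in a small set of cancer genes. The gene count, the number of cancer genes, and the mutation count below are all hypothetical placeholders; only the threshold of roughly ten driver mutations comes from the text above.

    ```python
    import random

    random.seed(42)  # reproducible toy run

    N_GENES = 20_000                # hypothetical total gene count
    CANCER_GENES = set(range(150))  # hypothetical small set of "cancer genes"
    DRIVERS_TO_TRANSFORM = 10       # ~10 driver mutations, per the text

    def mutate_cell(n_mutations: int):
        """Scatter mutations at random over genes and tally driver vs.
        passenger hits. A hit counts as a driver only when it lands in
        one of the cancer genes."""
        drivers = passengers = 0
        for _ in range(n_mutations):
            if random.randrange(N_GENES) in CANCER_GENES:
                drivers += 1
            else:
                passengers += 1
        return drivers, passengers

    drivers, passengers = mutate_cell(5_000)
    transformed = drivers >= DRIVERS_TO_TRANSFORM
    # Even with thousands of mutations, driver hits stay rare because the
    # target set of cancer genes is tiny relative to the genome.
    ```

    The sketch makes the key proportion visible: because cancer genes are a tiny fraction of the genome, almost all random mutations are passengers, which is why a cell can accumulate thousands of mutations yet only a handful of drivers.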

    What this means is that mutations to DNA are quite extensive, even in normal, healthy cells. But this factor prompts the question: Why is the DNA repair process so lackluster?

    The research duo from Cambridge University speculate that DNA repair is so costly to cells—making extensive use of energy and cell resources—that to maintain pristine genomes would compromise cell survival. These researchers conclude that “DNA quality control pathways are fully functional but naturally permissive of mutagenesis even in normal cells.”2 And, it seems as if the permissiveness of the DNA repair processes generally has little consequence, given that a vast proportion of the human genome consists of noncoding DNA.

    Biomedical researchers have uncovered another interesting feature about the DNA repair processes. The processes are “biased,” with repairs taking place preferentially on the DNA strand (of the double helix) that codes for proteins and, hence, is transcribed. In other words, when DNA repair takes place it occurs where it counts the most. This bias displays an elegant molecular logic and rationale, strengthening the case for design.

    Given that driver mutations are not in and of themselves sufficient to lead to tumor formation, the researchers conclude that cancer prevention pathways are quite impressive in the human body. They conclude, “Considering that an adult human has ~30 trillion cells, and only one cell develops into a cancer, human cells are remarkably robust at preventing cancer.”3

    So, what about cancer?

    Though cancer ravages the lives of so many people, it is not because of poorly designed, substandard biochemical systems. Given that we live in a universe that conforms to the laws of thermodynamics, cancer is inevitable. Despite this inevitability, organisms are designed to effectively ward off cancer.

    Ironically, as we gain a better understanding of the process of oncogenesis (the development of tumors), we are uncovering more—not less—evidence for the remarkably elegant and ingenious designs of biochemical systems.

    The insights by the research team from Cambridge University provide us with a cautionary lesson. We are often quick to declare a biochemical (or biological) feature as poorly designed based on incomplete understanding of the system. Yet, inevitably, as we learn more about the system we discover an exquisite rationale for why things are the way they are. Such knowledge is consistent with the idea that these systems stem from a Creator’s handiwork.

    Still, this recognition does little to dampen the fear and frustration associated with a cancer diagnosis and the pain and suffering experienced by those who battle cancer (and their loved ones who stand on the sidelines watching the fight take place). But, whether we are a skeptic or a believer, we all should be encouraged by the latest insights developed by the Cambridge researchers. The more we understand about the cause and progression of cancers, the closer we are to one day finding cures to a disease that takes so much from us.

    We can also take added encouragement from the powerful scientific case for a Creator’s existence. The Old and New Testaments teach us that the Creator revealed by scientific discovery has suffered on our behalf and will suffer alongside us, in the person of Christ, as we walk through the difficult circumstances of life.

    Resources

    Examples of Biochemical Trade-Offs

    Evidence that Nonfunctional DNA Serves as a Mutational Buffer

    Endnotes
    1. Serena Nik-Zainal and Benjamin A. Hall, “Cellular Survival over Genomic Perfection,” Science 366, no. 6467 (November 15, 2019): 802–03, doi:10.1126/science.aax8046.
    2. Nik-Zainal and Hall, 802–03.
    3. Nik-Zainal and Hall, 802–03.
  • Evolutionary Story Tells the Tale of Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 04, 2019

    In high school I was a bit of a troublemaker. It wasn’t out of the ordinary for me to be summoned to the office of Mr. Reynolds—the school’s vice principal—for some misdeed or other. After a few office visits, I quickly learned the value of a good story. If my story was convincing enough, I could deflect the accusations leveled against me. All I had to do was create plausible deniability.

    Story Telling in the Evolutionary Paradigm

    Storytelling isn’t just the purview of a mischievous kid facing the music in the vice principal’s office; it is also part of the practice of science.

    Recent work by a team of scientific investigators from the University of Florida (UF) highlights the central role that storytelling plays in evolutionary biology.1 In fact, it is not uncommon for evolutionary biologists to weave grand narratives that offer plausible evolutionary stories for the emergence of biological or behavioral traits. And, though these accounts seem scientific, they are often unverifiable scientific explanations.

    Inspired by Rudyard Kipling’s (1865–1936) book of children’s origin stories, the late evolutionary biologist Stephen Jay Gould (1941–2002) referred to these evolutionary tales as just-so stories. To be fair, others have been critical of Gould’s cynical view of evolutionary accounts, arguing that, in reality, just-so stories in evolutionary biology are actually hypotheses about evolutionary transformations. But still, more often than not, these “hypotheses” appear to be little more than convenient fictions.

    An Evolutionary Just-So Story of Moths and Bats

    The traditional evolutionary account of ultrasonic sound detection in nocturnal moths serves as a case in point. Moths (and butterflies) belong to one of the most important groups of insects: Lepidoptera. This group consists of about 160,000 species, with nocturnal moths comprising over 75 percent of the group.

    Moths play a key role in ecosystems. For example, they serve as one of the primary food sources for bats. Bats use echolocation to help them locate moths at night. Bats emit ultrasonic cries that bounce off the moths and reflect back to the bats, giving these predators the pinpoint location of the moths, even during flight.

    Many nocturnal moth species have defenses that help them escape predation by bats. One defense is ears (located in different areas of their bodies) that detect ultrasonic sounds. This capability allows the moths to hear the bats coming and get out of their way.

    For nearly a half century, evolutionary biologists explained moths’ ability to hear ultrasonic sounds as the outworking of an “evolutionary arms race” between echolocating bats and nocturnal moths. Presumably, bats evolved the ability to echolocate, allowing them to detect and prey upon moths at night by plucking them out of the air in mid-flight. In response, some groups of moths evolved ears that allowed them to detect the ultrasonic screeches emitted by bats, helping them to avoid detection.


    Figure: Flying Pipistrelle bat. Image credit: Shutterstock

    For 50 years, biologists have studied the relationship between echolocating bats and nocturnal moths with the assumption that this explanation is true. (I doubt Mr. Reynolds ever assumed my stories were true.) In fact, evolutionary accounts like this one provide evidence for the idea of coevolution. Advanced by Paul Ehrlich and Peter Raven in 1964, this evolutionary model maintains that ecosystems are shaped by species that affect one another’s evolution.

    If the UF team’s work is to be believed, then it turns out that the story recounting the evolutionary arms race between nocturnal moths and echolocating bats is fictional. As team member Jesse Barber, a researcher who has studied bats and moths, complains, “Most of the introductions I’ve written in my papers [describing the coevolution of bats and moths] are wrong.”2

    An Evolutionary Study on the Origin of Moths and Butterflies

    To reach this conclusion, the UF team generated the most robust evolutionary tree (phylogeny) for lepidopterans to date. They also developed an understanding of the timing of events in lepidopteran natural history. They were motivated to take on this challenge because of the ecological importance of moths and butterflies. As noted, these insects play a central role in terrestrial ecosystems all over the world and coevolutionary models provide the chief explanations for their place in these ecosystems. But, as the UF researchers note, “These hypotheses have not been rigorously tested, because a robust lepidopteran phylogeny and timing of evolutionary novelties are lacking.”3

    To remedy this problem, the researchers built a lepidopteran evolutionary tree from a data set of DNA sequences that collectively specified 2,100 protein-coding genes from 186 lepidopteran species. These species represented all the major divisions within this biological group. Then, they dated the evolutionary timing of key events in lepidopteran natural history from the fossil record.

    Based on their analysis, the research team concluded that the first lepidopteran appeared around 300 million years ago. This creature fed on nonvascular plants. Around 240 million years ago, lepidopterans with tubelike proboscises (a long, sucking mouthpart) appeared, allowing these insects to extract nectar from flowering plants.

    These results cohere with the coevolutionary model that the first lepidopterans fed internally on plants and, later, externally, as they evolved the ability to access nectar from plants. Flowering plants appear around 260 million years ago, which is about the time that the tubelike proboscis appears in lepidopterans.

    But perhaps the most important and stunning finding from their study stems from the appearance of hearing organs in moths. It looks as if these organs arose independently nine separate times—around 80 to 90 million years ago—well before bats began to echolocate. (The earliest known bat from the fossil record with the capacity to echolocate is around 45 to 50 million years old.)

    The UF investigators uncovered another surprising result related to the appearance of butterflies. They discovered that butterflies became diurnal (active in the daytime) around 98 million years ago. According to the traditional evolutionary story, butterflies (which are diurnal) evolved from nocturnal moths when they transitioned to daytime activities to escape predation by echolocating bats, which feed at night. But as with the origin of hearing organs in moths, the transition from nocturnal to diurnal behavior occurred well before the first appearance of echolocating bats and seems to have occurred independently at least two separate times.

    It Just Isn’t So

    The UF evolutionary biologists’ study demonstrates that the coevolutionary models for the origin of hearing organs in moths and diurnal behavior of butterflies—dominant for over a half century in evolutionary thought—are nothing more than just-so stories. They appear to make sense on the surface but are no closer to the truth than the tales I would weave in Mr. Reynolds’ office.

    In light of this discovery, the research team posits a new evolutionary model for each of the two traits. Scientists now think that the evolutionary emergence of hearing organs in moths may have provided these insects the capacity for auditory surveillance of their environment. Their capacity to hear may have helped them detect the low-frequency sounds of flapping bird wings, for example, and avoid predation. Presumably, these same hearing organs later evolved to detect the high-frequency cries of bats. As for the diurnal behavior characteristic of butterflies, researchers now speculate that butterflies became diurnal to take advantage of flowers that bloom in the daytime.

    Again, on the surface, these explanations seem plausible. But one has to wonder if these models, like their predecessors, are little more than just-so stories. In fact, this study raises a general concern: How much confidence can we place in any evolutionary account? Could it be that other evolutionary accounts are, in reality, good stories, but in the end will turn out to be just as fanciful as the stories written by Rudyard Kipling?

    In and of itself, recognizing that many evolutionary models could just be stories doesn’t provide sufficient warrant for skepticism about the evolutionary paradigm. But it does give pause for thought. Plus, two insights from this study raise real concerns about the capacity of evolutionary processes to account for life’s history and diversity:

    1. The discovery that ultrasonic hearing in moths arose independently nine separate times
    2. The discovery that diurnal behavior in butterflies appeared independently in at least two separate instances

    Convergence

    Evolutionary biologists use the term convergence to refer to the independent origin of identical or nearly identical biological and behavioral traits in organisms that cluster into unrelated groups.

    Convergence isn’t a rare phenomenon or limited to the independent origin of hearing organs in moths and diurnal behavior in butterflies. Instead, it is a widespread occurrence in biology, as evolutionary biologists Simon Conway Morris and George McGhee document in their respective books Life’s Solution and Convergent Evolution. It appears as if the evolutionary process routinely arrives at the same outcome, time and time again.4 In fact, biologists observe these repeated outcomes at the ecological, organismal, biochemical, and genetic levels.

    From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t explain. I see the widespread occurrence of convergence as a failed scientific prediction of the evolutionary paradigm.

    Convergence Should Be Rare, Not Widespread

    In effect, chance governs biological and biochemical evolution at its most fundamental level. Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which, too, consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.5

    In support of this view, consider a 2002 landmark study carried out by two Canadian investigators who simulated macroevolutionary processes using autonomously replicating computer programs. In their study, the computer programs operated like digital organisms.6 The programs could be placed into different “ecosystems” and, because they replicate autonomously, they could evolve. By monitoring the long-term evolution of these digital organisms, the two researchers determined that evolutionary outcomes are historically contingent and unpredictable. Every time they placed the same digital organism in the same environment, it evolved along a unique trajectory.
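    The flavor of that result can be captured with a toy simulation. The sketch below is purely illustrative (it is not the digital-organism platform the 2002 study actually used, and the bit-string "genome" and hill-climbing "selection" rule are my own simplifying assumptions): two populations start from an identical ancestor in an identical environment, and only the chance order of mutations differs between them.

    ```python
    import random

    def evolve(genome, generations, seed):
        """Evolve a bit-string genome by chance mutation plus selection.

        Fitness is simply the number of 1-bits; a mutant replaces its
        parent only if it is at least as fit. This crude hill climb
        stands in for natural selection acting on random mutations.
        """
        rng = random.Random(seed)
        history = []
        for _ in range(generations):
            pos = rng.randrange(len(genome))  # chance mutation site
            mutant = genome[:pos] + (1 - genome[pos],) + genome[pos + 1:]
            if sum(mutant) >= sum(genome):    # selection step
                genome = mutant
            history.append(genome)
        return history

    start = (0,) * 20  # the same "ancestor" for both runs
    run_a = evolve(start, 200, seed=1)
    run_b = evolve(start, 200, seed=2)

    # Both runs climb toward high fitness, but the order in which sites
    # fix depends on chance, so the two trajectories almost always differ.
    print("trajectories diverged:", run_a != run_b)
    ```

    Rerunning with the same seed reproduces the same path exactly; changing only the seed, i.e., only the sequence of chance events, sends the population down a different route. That is historical contingency in miniature.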

    In other words, given the historically contingent nature of the evolutionary mechanisms, we would expect convergence to be rare in the biological realm. Yet, biologists continue to uncover example after example of convergent features—some of which are quite astounding.

    Bat Echolocation and Convergence

    Biologists have discovered one such example of convergence in the origin of echolocating bats. Echolocation appears to have arisen two times independently: once in the microbats and once in Rhinolophoidea, a superfamily of bats that groups with the megabats.7 Prior to this discovery, reported in 2000, biologists classified the rhinolophoids as microbats based on their capability to echolocate. But DNA evidence indicates that this superfamily has greater affinity to megabats than to microbats. This result means that echolocation must have originated separately in the microbats and the rhinolophoids. Researchers have also shown that the same genetic and biochemical changes occurred in microbats and megabats to create their echolocating ability. These changes appear to have taken place in the gene prestin and in the protein it encodes.8

    In other words, we observe two outcomes: (1) the traditional coevolutionary accounts involving echolocating bats, nocturnal moths, and diurnal butterflies turned out to be just-so stories, and (2) the convergence observed in these groups stands as separate, independent failed predictions of the evolutionary paradigm.

    Convergence and the Case for Creation

    If the widespread occurrence of convergence can’t be explained through evolutionary theory, then how can it be explained?

    It is not unusual for architects and engineers to redeploy the same design features, sometimes in objects, devices, or systems that are completely unrelated to one another. So, instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a divine mind. From this perspective, the repeated origins of biological features equate to the repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Now that’s a story even Mr. Reynolds might believe.

    Resources

    Convergence of Echolocation

    The Historical Contingency of the Evolutionary Process

    Endnotes
    1. Akito Y. Kawahara et al., “Phylogenomics Reveals the Evolutionary Timing and Pattern of Butterflies and Moths,” Proceedings of the National Academy of Sciences, USA 116, no. 45 (November 5, 2019): 22657–63, doi:10.1073/pnas.1907847116.
    2. Ed Yong, “A Textbook Evolutionary Story about Moths and Bats Is Wrong,” The Atlantic (October 21, 2019), https://www.theatlantic.com/science/archive/2019/10/textbook-evolutionary-story-wrong/600295/.
    3. Kawahara et al., “Phylogenomics.”
    4. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
    5. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    6. Gabriel Yedid and Graham Bell, “Macroevolution Simulated with Autonomously Replicating Computer Programs,” Nature 420 (December 19, 2002): 810–12, doi:10.1038/nature01151.
    7. Emma C. Teeling et al., “Molecular Evidence Regarding the Origin of Echolocation and Flight in Bats,” Nature 403 (January 13, 2000): 188–92, doi:10.1038/35003188.
    8. Gang Li et al., “The Hearing Gene Prestin Reunites Echolocating Bats,” Proceedings of the National Academy of Sciences, USA 105, no. 37 (September 16, 2008): 13959–64, doi:10.1073/pnas.0802097105.
  • Evolution of Antibiotic Resistance Makes the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 27, 2019

    What would it be like to live in a world without antibiotics?

    It isn’t that hard to imagine, because antibiotics weren’t readily available for medical use until after World War II. And since that time, widespread availability of antibiotics has revolutionized medicine. However, the ability to practice modern medicine is being threatened because of the rise of antibiotic-resistant bacteria. Currently, there exists a pressing need to understand the evolution of antibiotic-resistant strains and to develop new types of antibiotics. Surprisingly, this worthy pursuit has unwittingly stumbled upon evidence for a Creator’s role in the design of biochemical systems.

    Alexander Fleming (1881–1955) discovered the first antibiotic, penicillin, in 1928. But it wasn’t until Ernst Chain, Howard Florey, and Edward Abraham purified penicillin in 1942 and Norman Heatley developed a bulk extraction technique in 1945 that the compound became available for routine medical use.


    Figure 1: Alexander Fleming. Image Credit: Wikipedia

    Prior to this time, people often died from bacterial infections. Complicating this vulnerability to microbial pathogens was the uncertain outcome of many medical procedures. For example, patients often died after surgery due to complications arising from infections.


    Figure 2: A generalized structure for penicillin antibiotics. Image credit: Shutterstock

    Bacterial Resistance Necessitates New Antibiotics

    Unfortunately, because of the growing threat of superbugs—antibiotic-resistant strains of bacteria—health experts around the world worry that we will soon enter a post-antibiotic era in which modern medicine will largely revert to pre-World War II practices. According to Dr. David Livermore, laboratory director at Public Health England, which is responsible for monitoring antibiotic-resistant strains of bacteria, “A lot of modern medicine would become impossible if we lost our ability to treat infections.”1

    Without antibiotics, people would routinely die of infections that we easily treat today. Abdominal surgeries would be incredibly risky. Organ transplants and chemotherapy would be out of the question. And the list continues.

    The threat of entering into a post-antibiotic age highlights the desperate need to develop new types of antibiotics. It also highlights the need to develop a better understanding of evolutionary processes that lead to the emergence of antibiotic resistance in bacteria.

    Recently, a research team from Michigan State University (MSU) published a report that offers insight into the latter concern. These researchers studied the evolution of antibiotic resistance in bacteria that had been serially cultured in the laboratory for multiple decades in media that was free from antibiotics.2 Through this effort, they learned that the genetic history of the bacterial strain plays a key role in its acquisition of resistance to antibiotics.

    This work has important implications for public health, but it also carries theological implications. The decades-long experiment provides evidence that the elegant designs characteristic of biochemical and biological systems most likely stem from a Creator’s handiwork.

    The Long-Term Evolution Experiment

    To gain insight into the role that genetic history plays in the evolution of antibiotic resistance, the MSU researchers piggybacked on the famous Long-Term Evolution Experiment (LTEE) at Michigan State University. Inaugurated in 1988, the LTEE is designed to monitor evolutionary changes in the bacterium E. coli, with the objective of developing an understanding of the evolutionary process.


    Figure 3: A depiction of E. coli. Image Credit: Shutterstock

    The LTEE began with a single cell of E. coli that was used to generate twelve genetically identical lines of cells. The twelve clones of the parent E. coli cell were separately inoculated into a minimal growth medium containing low levels of glucose as the only carbon source. After growing overnight, an aliquot (equal fractional part) of each of the twelve cultures was transferred into fresh growth media. This process has been repeated every day for about thirty years. Throughout the experiment, aliquots of cells have been frozen every 500 generations. These frozen cells represent a “fossil record” of sorts that can be thawed out and compared to current and other past generations of cells.
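    The arithmetic behind that protocol is easy to check. As described in the LTEE's publications, each daily transfer is a 1:100 dilution into fresh medium, so the culture undergoes roughly log2(100) ≈ 6.6 doublings before the glucose runs out. A back-of-the-envelope sketch (the specific numbers here are simple derived estimates, not figures quoted from the article):

    ```python
    import math

    DILUTION = 100  # LTEE protocol: 1:100 daily transfer into fresh medium
    GENS_PER_DAY = math.log2(DILUTION)  # regrowth to saturation ~ 6.64 doublings/day

    # How long does it take to reach the 50,000 generations mentioned above?
    days_to_50k = 50_000 / GENS_PER_DAY
    print(f"{GENS_PER_DAY:.2f} generations per day")
    print(f"~{days_to_50k / 365.25:.0f} years to reach 50,000 generations")

    # The freezer "fossil record": a sample is archived every 500 generations
    freeze_days = [round(g / GENS_PER_DAY) for g in range(500, 50_001, 500)]
    print(f"{len(freeze_days)} frozen samples by generation 50,000")
    ```

    This is why a roughly thirty-year experiment yields tens of thousands of bacterial generations, and why the frozen archive contains on the order of a hundred snapshots by generation 50,000.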

    Relaxed Selection and Decay of Antibiotic Resistance

    In general, when a population of organisms no longer experiences natural selection for a particular set of traits (antibiotic resistance, in this case), the traits designed to handle that pressure may experience functional decay as a result of mutations and genetic drift. This process is called relaxed selection.

    In the case of antibiotic resistance, when the threat of antibiotics is removed from the population (relaxed selection), it seems reasonable to think that antibiotic resistance would decline in the population because in most cases antibiotic resistance comes with a fitness cost. In other words, bacterial strains that acquire antibiotic resistance face a trade-off that makes them less fit in environments without the antibiotic.

    Genetic History and the Re-Evolution of Antibiotic Resistance

    In light of this expectation, the MSU researchers wondered how readily bacteria that have experienced relaxed selection can overcome loss of antibiotic resistance when the antibiotic is reintroduced to the population.

    To explore this question, the researchers exposed the LTEE ancestor to a set of different antibiotics and compared its propensity to acquire antibiotic resistance with that of four strains of E. coli derived from the LTEE ancestor (strains that underwent 50,000 generations of daily growth and transfer into fresh media without exposure to antibiotics).

    As expected, the MSU team discovered that 50,000 generations of relaxed selection rendered the four strains more susceptible to four different antibiotics (ampicillin, ceftriaxone, ciprofloxacin, and tetracycline) compared to the LTEE ancestor. When they exposed these strains to the different antibiotics, the researchers discovered that acquisition of antibiotic resistance was idiosyncratic: some strains more readily evolved antibiotic resistance than the LTEE ancestor and others were less evolvable.

    Investigators explained this difference by arguing that during the period of relaxed selection some of the strains experienced mutations that constrained the evolution of antibiotic resistance, whereas others experienced mutations that potentiated (primed) the evolution of antibiotic resistance. That is, historical contingency has played a key role in the acquisition of antibiotic resistance. Different bacterial lineages accumulated genetic differences that influence their capacity to evolve and adapt in new directions.

    Historical Contingency

    This study follows on the heels of previous studies that demonstrate the historical contingency of the evolutionary process.3 In other words, chance governs biological and biochemical evolution at its most fundamental level. As the MSU researchers observed, evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection (or that experience relaxed selection), which, too, consists of chance components.

    Because of the historically contingent nature of the evolutionary process, it is highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature. In his book Wonderful Life, Stephen Jay Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time.4

    The “Problem” of Convergence

    And yet, we observe the opposite pattern in biology. From an evolutionary perspective, it appears as if the evolutionary process independently and repeatedly arrived at the same outcome, time and time again (convergence). As evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books Life’s Solution and Convergent Evolution, identical evolutionary outcomes are a widespread feature of the biological realm.5

    Scientists see these repeated outcomes at ecological, organismal, biochemical, and genetic levels. To illustrate the pervasiveness of convergence at the biochemical level, I describe 100 examples of convergence in my book The Cell’s Design.6

    From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t genuinely explain. In fact, given the clear-cut demonstration that the evolutionary process is historically contingent, I see the widespread occurrence of convergence as a failed scientific prediction for the evolutionary paradigm.

     

    Evolution in Bacteria Doesn’t Equate to Large-Scale Evolution

    The evolution of E. coli in the LTEE doesn’t necessarily validate the evolutionary paradigm. Just because such change is observed in a microbe doesn’t mean that evolutionary processes can adequately account for life’s origin and history, and the full range of biodiversity.

     

    Convergence and the Case for Creation

    Instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a divine Mind. In this scheme, the repeated origins of biological features equate to the repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Sadly, many in the scientific community are hesitant to embrace this perspective because they are resistant to the idea that design and purpose may play a role in biology. But, one can hope that someday the scientific community will be willing to move into a post-evolution future as the evidence for a Creator’s role in biology mounts.

    Resources

    The Historical Contingency of the Evolutionary Process

    Microbial Evolution and the Validity of the Evolutionary Paradigm

    Endnotes
    1. Sarah Boseley, “Are You Ready for a World without Antibiotics?” The Guardian, August 12, 2010, https://www.theguardian.com/society/2010/aug/12/the-end-of-antibiotics-health-infections.
    2. Kyle J. Card et al., “Historical Contingency in the Evolution of Antibiotic Resistance after Decades of Relaxed Selection,” PLoS Biology 17, no. 10 (October 23, 2019): e3000397, doi:10.1371/journal.pbio.3000397.
    3. Zachary D. Blount et al., “Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli,” Proceedings of the National Academy of Sciences, USA 105, no. 23 (June 10, 2008): 7899–906, doi:10.1073/pnas.0803151105.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
    6. Fazale Rana, The Cell’s Design: How Chemistry Reveals the Creator’s Artistry (Grand Rapids, MI: Baker, 2008).
  • Analysis of Genomes Converges on the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 13, 2019

    Are you a Marvel or a DC fan?

    Do you like the Marvel superheroes better than those who occupy the DC universe? Or is it the other way around for you?

    Even though you might prefer DC over Marvel (or Marvel over DC), over the years these two comic book rivals have often created superheroes with nearly identical powers. In fact, a number of Marvel and DC superheroes are so strikingly similar that their likeness to one another is obviously intentional.1

    Here are just a few of the superheroes Marvel and DC have ripped off each other:

    • Superman (DC, created in 1938) and Hyperion (Marvel, created in 1969)
    • Batman (DC, created in 1939) and Moon Knight (Marvel, created in 1975)
    • Green Lantern (DC, created in 1940) and Nova (Marvel, created in 1976)
    • Catwoman (DC, created in 1940) and Black Cat (Marvel, created in 1979)
    • Atom (DC, created in 1961) and Ant-Man (Marvel, created in 1962)
    • Aquaman (DC, created in 1941) and Namor (Marvel, created in 1939)
    • Green Arrow (DC, created in 1941) and Hawkeye (Marvel, created in 1964)
    • Swamp Thing (DC, created in 1971) and Man Thing (Marvel, created in 1971)
    • Deathstroke (DC, created in 1980) and Deadpool (Marvel, created in 1991)

    This same type of striking similarity is also found in biology. Life scientists have discovered countless examples of biological designs that are virtually exact replicas of one another. Yet, these identical (or nearly identical) designs occur in organisms that belong to distinct, unrelated groups (such as the camera eyes of vertebrates and octopuses). Therefore, they must have an independent origin.

     


    Figure 1: The Camera Eyes of Vertebrates (left) and Cephalopods (right); 1: Retina; 2: Nerve Fibers; 3: Optic Nerve; 4: Blind Spot. Image credit: Wikipedia

    From an evolutionary perspective, it appears as if the evolutionary process independently and repeatedly arrived at the same outcome, time and time again. As evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, identical evolutionary outcomes are a widespread feature of the biological realm.2 Scientists observe these repeated outcomes (known as convergence) at the ecological, organismal, biochemical, and genetic levels.

    From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t genuinely explain. In fact, I see pervasive convergence as a failed scientific prediction—for the evolutionary paradigm. Recent work by a research team from Stanford University demonstrates my point.3

    These researchers discovered that identical genetic changes occurred when: (1) bats and whales “evolved” echolocation, (2) killer whales and manatees “evolved” specialized skin in support of their aquatic lifestyles, and (3) pikas and alpacas “evolved” increased lung capacity required to live in high-altitude environments.

    Why do I think this discovery is so problematic for the evolutionary paradigm? To understand my concern, we first need to consider the nature of the evolutionary process.

    Biological Evolution Is Historically Contingent

    Essentially, chance governs biological and biochemical evolution at its most fundamental level. Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which, too, consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.

    The concept of historical contingency embodies this idea and is the theme of Stephen Jay Gould’s book Wonderful Life.4 To help illustrate the concept, Gould uses the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time.

    Are Evolutionary Processes Historically Contingent?

    Gould based the concept of historical contingency on his understanding of the evolutionary process. In the decades since Gould’s original description of historical contingency, several studies have affirmed his view.

    For example, in a landmark study in 2002, two Canadian investigators simulated macroevolutionary processes using autonomously replicating computer programs, with the programs operating like digital organisms.5 These programs were placed into different “ecosystems” and, because they replicated autonomously, could evolve. By monitoring the long-term evolution of the digital organisms, the two researchers determined that evolutionary outcomes are historically contingent and unpredictable. Every time they placed the same digital organism in the same environment, it evolved along a unique trajectory.

    In other words, given the historically contingent nature of the evolutionary mechanisms, we would expect convergence to be rare in the biological realm. Yet, biologists continue to uncover example after example of convergent features—some of which are quite astounding.

    The Origin of Echolocation

    One of the most remarkable examples of convergence is the independent origin of echolocation (locating objects by emitting sound waves and detecting the echoes that bounce back) in bats (chiropterans) and cetaceans (toothed whales). Research indicates that echolocation arose independently in two different groups of bats and also in the toothed whales.

     


    Figure 2: Echolocation in Bats. Image credit: Shutterstock

    One reason why this example of convergence is so remarkable has to do with the way some evolutionary biologists account for the widespread occurrences of convergence in biological systems. Undaunted by the myriad examples of convergence, these scientists assert that independent evolutionary outcomes result when unrelated organisms encounter nearly identical selection forces (e.g., environmental, competitive, and predatory pressures). According to this idea, natural selection channels unrelated organisms down similar pathways toward the same endpoint.

    But this explanation is unsatisfactory because bats and whales live in different types of habitats (terrestrial and aquatic). Consequently, the genetic changes responsible for the independent emergence of echolocation in the chiropterans and cetaceans should be distinct. Presumably, the evolutionary pathways that converged on a complex biological system such as echolocation would have taken different routes that would be reflected in the genomes. In other words, even though the physical traits appear to be identical (or nearly identical), the genetic makeup of the organisms should reflect an independent evolutionary history.

    But this expectation isn’t borne out by the data.

    Genetic Convergence Parallels Trait Convergence

    In recent years, evolutionary biologists have developed interest in understanding the genetic basis for convergence. Specifically, these scientists want to understand the genetic changes that lead to convergent anatomical and physiological features (how genotype leads to phenotype).

    Toward this end, a Stanford research team developed an algorithm that allowed them to search through entire genome sequences of animals to identify similar genetic features that contribute to particular biological traits.6 In turn, they applied this method to three test cases related to the convergence of:

    • echolocation in bats and whales
    • specialized skin in killer whales and manatees
    • lung structure and capacity in pikas and alpacas

    The investigators discovered that for echolocating animals, the same 25 convergent genetic changes took place in their genomes and were distributed among the same 18 genes. As it turns out, these genes play a role in the development of the cochlear ganglion, thought to be involved in echolocation. They also discovered that for aquatic mammals, there were 27 identical convergent genetic changes that occurred in the same 15 genes that play a role in skin development. And finally, for high-altitude animals, they learned that the same 25 convergent genetic changes occurred in the same 16 genes that play a role in lung development.

    In response to this finding, study author Gill Bejerano remarked, “These genes often control multiple functions in different tissues throughout the body, so it seems it would be very difficult to introduce even minor changes. But here we’ve found that not only do these very different species share specific genetic changes, but also that these changes occur in coding genes.”7

    In other words, these results are not expected from an evolutionary standpoint. It is nothing short of amazing that genetic convergence would parallel phenotypic convergence.

    On the other hand, these results make perfect sense from a creation model vantage point.

    Convergence and the Case for Creation

    Instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a Divine Mind. In this scheme, the repeated origins of biological features equate to the repeated creations by an Intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Like the superhero rip-offs in the Marvel and DC comics, the convergent features in biology appear to be intentional, reflecting a teleology that appears to be endemic in living systems.

    Resources

    Convergence of Echolocation

    The Historical Contingency of the Evolutionary Process

    Endnotes
    1. Jamie Gerber, 15 DC and Marvel Superheroes Who Are Strikingly Similar, ScreenRant (November 12, 2016), screenrant.com/marvel-dc-superheroes-copies-rip-offs/.
    2. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
    3. Amir Marcovitz et al., “A Functional Enrichment Test for Molecular Convergent Evolution Finds a Clear Protein-Coding Signal in Echolocating Bats and Whales,” Proceedings of the National Academy of Sciences, USA 116, no. 42 (October 15, 2019), 21094–21103, doi:10.1073/pnas.1818532116.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    5. Gabriel Yedid and Graham Bell, “Macroevolution Simulated with Autonomously Replicating Computer Programs,” Nature 420 (December 19, 2002): 810–12, doi:10.1038/nature01151.
    6. Marcovitz et al., “A Functional Enrichment Test.”
    7. Stanford Medicine, “Scientists Uncover Genetic Similarities among Species That Use Sound to Navigate,” ScienceDaily, October 4, 2019, sciencedaily.com/releases/2019/10/191004105643.htm.
  • Glue Production Is Not Evidence for Neanderthal Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 06, 2019

    Football players aren’t dumb jocks—though they often have that reputation. Football is a physically demanding sport that requires strength, toughness, agility, and speed. But it is also an intellectually demanding game.

    Mastering a playbook, understanding which plays work best for the various in-game scenarios, recognizing defenses and offenses, and adjusting on the fly require hours of study and preparation. Football really is a thinking person’s game.


    Figure 1: Quarterback Calling an Audible at the Line of Scrimmage. Image Credit: Shutterstock

    Some anthropologists view Neanderthals in the same way that many people view football players: as the “dumb jock” version of a hominin, a creature cognitively inferior to modern humans. Yet, other anthropologists dispute this characterization, arguing that it is undeserved. Instead, they claim that Neanderthals had cognitive capabilities on par with modern humans.

    In support of their claim, these scientists point to finds in the archaeological record that seemingly suggest these hominins were exceptional, just like modern humans. As a case in point, archaeologists have unearthed evidence for tar production at a site in Italy that dates to around 200,000 years in age. They interpret this discovery as evidence that Neanderthals were using tar as glue for hafting (fixing) flint spearheads to wooden spear shafts.1 Archaeologists have also unearthed spearheads with tar residue from two sites in Germany, one dating to 120,000 years in age and the other to between 40,000 and 80,000 years.2 Because these dates precede the arrival of modern humans in Europe, anthropologists assume the tar at these sites was deliberately produced and used by Neanderthals.

    Adhesives as a Signature for Superior Cognition

    Anthropologists consider the development of adhesives as a transformative technology. These materials would have provided the first humans the means to construct new types of complex devices and combine different types of materials (composites) into new technologies. Because of this new proficiency, anthropologists consider the production and use of adhesives to be diagnostic of advanced cognitive capabilities such as forward planning, abstraction, and understanding of materials.

    Production of adhesives from natural sources, even by the earliest modern humans, appears to have been a complex operation that required precise temperature control and the use of earthen mounds, or ceramic or metal kilns. In addition, birch bark needed to be heated in the absence of oxygen. Because the first large-scale production of adhesives usually centered around the dry distillation of birch and pine barks to produce tar and pitch, researchers have assumed that this technique is the only way to generate tar.


    Figure 2: Tar Produced from Birch Bark. Image credit: Wikipedia

    So, if Neanderthals were using tar as an adhesive, the reasoning goes, they must have been pretty impressive creatures.

    In the summer of 2017, researchers from the University of Leiden published work that seemed to support this view.3 To address the question of how Neanderthals may have produced adhesives, these investigators conducted a series of experiments. They sought to learn how Neanderthals used the resources most reasonably available to them to obtain tar from birch bark through dry distillation.

    By studying a variety of methods for dry distillation of tar from birch in a laboratory setting, the research team concluded that Neanderthals could have produced tar from birch bark if they had used methods that were simple enough that they wouldn’t require precise temperature control during the distillation. Still, these methods are complex enough that the researchers concluded that for Neanderthals to pull off this feat, they must have had advanced cognitive abilities similar to those of modern humans.

    Is Adhesive Production and Use Evidence for Neanderthal Exceptionalism?

    At the time this work was reported, I challenged this conclusion by noting that the simplicity of these production methods argued against advanced cognitive abilities in Neanderthals, not for them.

    Recent work by researchers from Germany affirms my skepticism. Their research challenges the view that adhesive production and use constitutes evidence for human exceptionalism.4 The team wondered if a simpler way to produce tar—even simpler than the methods identified by the research team from the University of Leiden—exists. They also wondered if it was possible to produce tar in the presence of oxygen.

    From their work, they discovered that burning birch bark (or branches from a birch tree with the bark still attached) adjacent to a rock with a vertical or subvertical surface is a way to collect tar, which naturally deposits on the rock surface as the bark burns. In other words, tar can be produced accidentally, instead of deliberately. And once produced, it can be scraped from the rock surface.

    Using analytical techniques (gas chromatography coupled to mass spectrometry) to characterize the chemical makeup of the tar produced by this simple method, the research team showed that it is comparable to the chemical composition of tars produced by sophisticated dry distillation methods under anaerobic conditions. Because of the simplicity of this method, the research team thinks that collecting tar deposits from burning birch on rocks is the most likely way that Neanderthals produced tar, if they intentionally produced it at all.

    According to the research team, “The identification of birch tar at archaeological sites can no longer be considered as a proxy for human (complex, cultural) behavior as previously assumed. In other words, our finding changes textbook thinking about what tar production is a smoking gun of.”5

    One other point merits consideration: A growing body of evidence indicates that Neanderthals did not master fire, but rather used it opportunistically. In other words, these creatures could not create fire, but did harvest wildfires. Evidence demonstrates that there were vast periods of time during Neanderthals’ tenure in Europe when wildfires were rare because of cold climatic conditions. During these periods, Neanderthals didn’t use fire.

    Because fire is central to the dry distillation methods, for a significant portion of their time on Earth Neanderthals would have been unable to extract tar and use it for hafting. Perhaps this factor explains why recovery of tar from Neanderthal sites is so rare. And could it be that Neanderthals were not intentionally producing tar? Instead, did tar just happen to collect on rock surfaces as a consequence of burning birch branches when these creatures were able to harvest fire?

    What Difference Does It Make?

    One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

    However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

    Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again, as this latest study demonstrates. It is unlikely that any of us will see a Neanderthal run onto the football field anytime soon.

    Resources

    Neanderthals Did Not Master Fire

    Differences in Human and Neanderthal Brains

    Endnotes
    1. Paul Peter Anthony Mazza et al., “A New Palaeolithic Discovery: Tar-Hafted Stone Tools in a European Mid-Pleistocene Bone-Bearing Bed,” Journal of Archaeological Science 33, no. 9 (September 2006): 1310–18, doi:10.1016/j.jas.2006.01.006.
    2. Johann Koller, Ursula Baumer, and Dietrich Mania, “High-Tech in the Middle Palaeolithic: Neandertal-Manufactured Pitch Identified,” European Journal of Archaeology 4, no. 3 (December 1, 2001): 385–97, doi:10.1179/eja.2001.4.3.385; Alfred F. Pawlik and Jürgen P. Thissen, “Hafted Armatures and Multi-Component Tool Design at the Micoquian Site of Inden-Altdorf, Germany,” Journal of Archaeological Science 38, no. 7 (July 2011): 1699–1708, doi:10.1016/j.jas.2011.03.001.
    3. P. R. B. Kozowyk et al., “Experimental Methods for the Palaeolithic Dry Distillation of Birch Bark: Implications for the Origin and Development of Neandertal Adhesive Technology,” Scientific Reports 7 (August 31, 2017): 8033, doi:10.1038/s41598-017-08106-7.
    4. Patrick Schmidt et al., “Birch Tar Production Does Not Prove Neanderthal Behavioral Complexity,” Proceedings of the National Academy of Sciences, USA 116, no. 36 (September 3, 2019): 17707–11, doi:10.1073/pnas.1911137116.
    5. Schmidt et al., “Birch Tar Production.”
  • Scientists Reverse the Aging Process: Exploring the Theological Implications

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 30, 2019

    During those days people will seek death but will not find it; they will long to die, but death will elude them.

    Revelation 9:6

     

    I make dad noises now.

    When I sit down, when I stand up, when I get out of bed, when I get into bed, when I bend over to pick up something from the ground, and when I straighten up again, I find myself involuntarily making noises—grunting sounds.

    I guess it is all part of the aging process. My body isn’t quite what it used to be. If someone offered me an elixir that could turn back time and reverse the aging process, I would take it without hesitation. It’s no fun growing old.

    Well, I just might get my wish, thanks to the work of a research team from the US and Canada. These researchers demonstrated that they could disrupt the aging process and, in fact, reverse the biological clock in humans.1

    This advance is nothing short of stunning. It opens up exciting—and disquieting—biomedical possibilities rife with ethical and theological ramifications. The work has other interesting implications, as well. It can be marshaled to demonstrate the scientific credibility of the Old Testament by making scientific sense of the long life spans of the patriarchs listed in the Genesis 5 and 11 genealogies.

    Some Biological Consequences of Aging

    Involuntary grunting is not the worst part of aging, by far. There are other, more serious consequences, such as loss of immune function. Senescence (aging) of the immune system can contribute to the onset of cancer and increased susceptibility to pathogens. It can also lead to wide-scale inflammation. None of these are good.

    As we age, our thymus decreases in size. And this size reduction hampers immune system function. Situated between the heart and sternum, the thymus plays a role in maturation of white blood cells, key components of the immune system. As the thymus shrinks with age, the immune system loses its capacity to generate sufficient levels of white blood cells, rendering older adults vulnerable to infections and cancers.

    A Strategy to Improve Immune Function

    Previous studies in laboratory animals have shown that administering growth hormone enlarges the thymus and, consequently, improves immune function. The research team reasoned that the same effect would be seen in human patients. But due to at least one negative side effect, the team couldn't simply administer growth hormone without other considerations. Growth hormone promotes insulin resistance, which can lead to a form of type 2 diabetes. To prevent this adverse effect, the researchers also administered two drugs commonly used to treat type 2 diabetes.


    Figure 1: The Structure of Human Growth Hormone. Image credit: Shutterstock

    To test this idea, the researchers performed a small-scale clinical trial. The study began with ten men (finishing with nine) between the ages of 51 and 65. The volunteers self-administered the drug cocktail three to four times a week for a year. During the course of the study, the researchers monitored white blood cell levels and thymus size. They observed a rejuvenation of the immune system (based on the count of white blood cells in the blood). They also noticed changes in the thymus, with fatty deposits disappearing and thymus tissue returning.

    Reversing the Aging Process

    As an afterthought, the researchers decided to test the participants' blood using an epigenetic clock that measures biological age. To their surprise, the researchers discovered that the drug cocktail reversed the biological age of the study participants by two years, compared to their chronological age. In other words, even though the participants gained one year in chronological age during the course of the study, their bodies became younger, based on biological markers, by two years. This age reversal lasted for six months after the trial ended.

    Thus, for the first time ever, researchers have been able to extend human life expectancy through an aging-intervention therapy. And while the increase in life expectancy was limited, this accomplishment serves as a harbinger of things to come, making the prospects of dramatically extending human life expectancy significantly closer to a reality.

    This groundbreaking work carries significant biomedical, ethical, and theological implications, which I will address below. But the breakthrough is equally fascinating to me because it can be used to garner scientific support for Genesis 5 and 11.

    Anti-Aging Technology and Biblical Long Life Spans

    The mere assertion that humans could live for hundreds of years as described in the genealogies of Genesis 5 and 11 is, for many people, nothing short of absurd. Compounding this seeming absurdity is the claim in Genesis 6:3, which describes God intervening to shorten human life spans from about 900 to about 120 years. How can this dramatic change in human life spans be scientifically rational?

    As I discuss in Who Was Adam?, advances in the biochemistry of aging provide a response to these challenging questions. Scientists have uncovered several distinct biochemical mechanisms that either cause, or are associated with, senescence. Even subtle changes in cellular chemistry can increase life expectancy by nearly 50 percent. These discoveries point to several possible ways that God could have allowed long life spans and then altered human life expectancy—simply by “tweaking” human biochemistry.

    Thanks to these advances, biogerontologists have become confident that in the near future, they will be able to interrupt the aging process by direct intervention through altered diet, drug treatment, and gene manipulation. Some biogerontologists such as Aubrey de Grey don’t think it is out of the realm of possibility to extend human life expectancy to several hundred years—about the length of time the Bible claims that the patriarchs lived. The recent study by the US and Canadian investigators seems to validate de Grey’s view.

    So, if biogerontologists can alter life spans—maybe someday on the order of hundreds of years—then the Genesis 5 and 11 genealogies no longer appear to be fantastical. And, if we can intervene in our own biology to alter life spans, how much easier must it be for God to do so?

    Ethical Concerns

    As mentioned, I would be tempted to take an anti-aging elixir if I knew it would work. And so would many others. What could possibly be wrong with wanting to live a longer, healthier, and more productive life? In fact, disrupting—and even reversing—the aging process would offer benefits to society by potentially reducing medical costs associated with age-related diseases such as dementia, cancer, heart disease, and stroke.

    Yet, these biomedical advances in anti-aging therapies do hold the potential to change who we are as human beings. Even a brief moment of reflection makes it plain that wide-scale use of anti-aging treatments could bring about fundamental changes to economies, to society, and to families and put demands on limited planetary resources. In the end, anti-aging technologies may well be unsustainable, undesirable, and unwise. (For a more detailed discussion of the ethical issues surrounding anti-aging technology check out the book I cowrote with Kenneth Samples, Humans 2.0.)

    Anti-Aging Therapies and Transhumanism

    Many people rightly recognize the ethical concerns surrounding applications of anti-aging therapies, but a growing number see these technologies in a different light. They view them as paving the way to an exciting and hopeful future. The increasingly real prospects of extending human life expectancy by disrupting the aging process or even reversing the effects of aging are the types of advances (along with breakthroughs in CRISPR gene editing and computer-brain interfaces) that fuel an intellectual movement called transhumanism.

    This idea has long been on the fringes of respected academic thought, but recently transhumanism has propelled its way into the scientific, philosophical, and cultural mainstreams. Advocates of the transhumanist vision maintain that humanity has an obligation to use advances in biotechnology and bioengineering to correct our biological flaws—to augment our physical, intellectual, and psychological capabilities beyond our natural limits. Perhaps there are no greater biological limitations that human beings experience than those caused by aging bodies and the diseases associated with the aging process.


    Figure 2: Transhumanism. Image credit: Shutterstock

    Transhumanists see science and technology as the means to alleviate pain and suffering and to promote human flourishing. They point, in the case of aging, to the pain, suffering, and loss associated with senescence in human beings. And the biotechnology needed to fulfill the transhumanist vision is now within our grasp.

    Anti-Aging as a Source of Hope and Salvation?

    Using science and technology to mitigate pain and suffering and to drive human progress is nothing new. But transhumanists desire more. They advocate that we should use advances in biotechnology and bioengineering for the self-directed evolution of our species. They seek to fulfill the grand vision of creating new and improved versions of human beings and ushering in a posthuman future. In effect, transhumanists desire to create a utopia of our own design.

    In fact, many transhumanists go one step further, arguing that advances in gene editing, computer-brain interfaces, and anti-aging technologies could extend our life expectancy, perhaps even indefinitely, and allow us to attain a practical immortality. In this way, transhumanism displays its religious element. Here science and technology serve as the means for salvation.

    Transhumanism: A False Gospel?

    But can transhumanism truly deliver on its promises of a utopian future and practical immortality?

    In Humans 2.0, Kenneth Samples and I delineate a number of reasons why transhumanism is a false gospel, destined to disappoint, not fulfill, our desire for immortality and utopia. I won’t elaborate on those reasons here. But simply recognizing the many ethical concerns surrounding anti-aging technologies (and gene editing and computer-brain interfaces) highlights the real risks connected to pursuing a transhumanist future. If we don’t carefully consider these concerns, we might create a dystopian future, not a utopian world.

    The mere risk of this type of unintended future should give us pause for thought about turning to science and technology for our salvation. As theologian Ronald Cole-Turner so aptly put it:

    “We need to be aware that technology, precisely because of its beneficial power, can lead us to the erroneous notion that the only problems to which it is worth paying attention involve engineering. When we let this happen, we reduce human yearning for salvation to a mere desire for enhancement, a lesser salvation that we can control rather than the true salvation for which we must also wait.”2

    Resources

    Endnotes
    1. Gregory M. Fahy et al., “Reversal of Epigenetic Aging and Immunosenescent Trends in Humans,” Aging Cell (September 8, 2019): e13028, doi:10.1111/acel.13028.
    2. Ronald Cole-Turner, “Transhumanism and Christianity,” in Transhumanism and Transcendence: Christian Hope in an Age of Technological Enhancement, ed. Ronald Cole-Turner (Washington, DC: Georgetown University Press, 2011), 201.
  • Origin and Design of the Genetic Code: A One-Two Punch for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 23, 2019

    True confession: I am a sports talk junkie. It has gotten so bad that sometimes I would rather listen to people talk about the big game than actually watch it on TV.

    So, in the spirit of the endless debates that take place on sports talk radio, I ask: What duo is the greatest one-two punch in NBA history? Is it:

    • Kareem and Magic?
    • Kobe and Shaq?
    • Michael and Scottie?

    Another confession: I am a science-faith junkie. I never tire when it comes to engaging in discussions about the interplay between science and the Christian faith. From my perspective, the most interesting facet of this conversation centers around the scientific evidence for God’s existence.

    So, toward this end, I ask: What is the most compelling biochemical evidence for God’s existence? Is it:

    • The complexity of biochemical systems?
    • The eerie similarity between biomolecular motors and machines designed by human engineers?
    • The information found in DNA?

    Without hesitation I would say it is actually another feature: the origin and design of the genetic code.

    The genetic code is a biochemical code that consists of a set of rules defining the information stored in DNA. These rules specify the sequence of amino acids used by the cell’s machinery to synthesize proteins. The genetic code makes it possible for the biochemical apparatus in the cell to convert the information formatted as nucleotide sequences in DNA into information formatted as amino acid sequences in proteins.

     


    Figure: A Depiction of the Genetic Code. Image credit: Shutterstock

    In previous articles (see the Resources section), I discussed the code's most salient feature, the one that I think points to a Creator's handiwork: its multidimensional optimization. That optimization is so extensive that evolutionary biologists struggle to account for its origin, as illustrated by the work of biologist Steven Massey.1

    Both the optimization of the genetic code and the failure of evolutionary processes to account for its design form a potent one-two punch, evincing the work of a Creator. Optimization is a marker of design, and if it can’t be accounted for through evolutionary processes, the design must be authentic—the product of a Mind.

    Can Evolutionary Processes Generate the Genetic Code?

    For biochemists working to understand the origin of the genetic code, its extreme optimization means that it is not the “frozen accident” that Francis Crick proposed in his classic paper “The Origin of the Genetic Code.”2

    Many investigators now think that natural selection shaped the genetic code, producing its optimal properties. However, I question whether natural selection could evolve a genetic code with the degree of optimality displayed in nature. In The Cell's Design (published in 2008), I cite the work of the late biophysicist Hubert Yockey in support of my claim.3 Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the universal genetic code found in nature. He estimated that 6.3 x 10^15 seconds (about 200 million years) is the maximum time available for the code to originate. That means natural selection would have to evaluate roughly 10^55 codes per second to find the universal genetic code. And even if the search time were extended to the entire duration of the universe's existence, it would still require searching through 10^52 codes per second to find nature's genetic code. Put simply, natural selection lacks the time to find the universal genetic code.
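    Yockey's numbers can be checked with simple arithmetic. The sketch below is a back-of-the-envelope calculation, not code from Yockey's book; the figure used for the age of the universe is my own assumed round number.

```python
CODE_SPACE = 1.40e70       # Yockey's estimate of the number of possible genetic codes
SEARCH_WINDOW = 6.3e15     # seconds; Yockey's ~200-million-year window for the code's origin
AGE_OF_UNIVERSE = 4.4e17   # seconds (~13.8 billion years); assumed round figure

# Codes natural selection would need to evaluate each second
rate_window = CODE_SPACE / SEARCH_WINDOW        # ~2.2e54 codes per second
rate_universe = CODE_SPACE / AGE_OF_UNIVERSE    # ~3.2e52 codes per second

print(f"Within the 200-million-year window: {rate_window:.1e} codes/second")
print(f"Over the universe's entire history: {rate_universe:.1e} codes/second")
```

    Either way, the required search rate is astronomically large, which is the point of Yockey's argument.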

    Researchers from Germany raised the same difficulty for evolution recently. Because of the genetic code’s multidimensional optimality, they concluded that “the optimality of the SGC [standard genetic code] is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.”4
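    The “optimality” at issue here is largely error minimization: how well the code's structure buffers the effects of single-nucleotide mutations. As a toy illustration (my own sketch, using only a hypothetical eight-codon slice of the standard code rather than the full 64-codon table), one can count how many point mutations leave the amino acid unchanged:

```python
# A small slice of the standard genetic code (RNA codons -> amino acids).
CODE = {
    "UUU": "Phe", "UUC": "Phe",
    "UUA": "Leu", "UUG": "Leu",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
}
BASES = "ACGU"

def point_mutants(codon):
    """Yield every codon one nucleotide substitution away."""
    for i, base in enumerate(codon):
        for b in BASES:
            if b != base:
                yield codon[:i] + b + codon[i + 1:]

total = synonymous = 0
for codon, aa in CODE.items():
    for mutant in point_mutants(codon):
        if mutant in CODE:               # score only mutants inside our slice
            total += 1
            synonymous += CODE[mutant] == aa

print(f"{synonymous}/{total} in-slice point mutations are synonymous")
# For this slice: 20/32, i.e., 62.5% of mutations are silent
```

    Published analyses apply far more sophisticated versions of this scoring across the entire code, weighting mutations by the chemical similarity of the amino acids involved.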

    Two More Evolutionary Mechanisms Considered

    Massey reached a similar conclusion through a detailed analysis of two possible evolutionary mechanisms, both based on natural selection.5

    If the genetic code evolved, then alternate genetic codes would have to have been generated and evaluated until the optimal genetic code found in nature was discovered. This process would require that coding assignments change. Biochemists have identified two mechanisms that could contribute to coding reassignments: (1) codon capture and (2) an ambiguous intermediate mechanism. Massey tested both mechanisms.

    Massey discovered that neither mechanism can evolve the optimal genetic code. When he ran computer simulations of the evolutionary process using codon capture as a mechanism, they all ended in failure, unable to find a highly optimized genetic code. When he ran simulations with the ambiguous intermediate mechanism, he could evolve an optimized genetic code. But he didn't view this result as a success: it took between 20 and 30 codon reassignments to produce a genetic code with the same degree of optimization as the genetic code found in nature.

    The problem with this evolutionary mechanism is that codon reassignments appear to be rare in nature, judging from the few deviant genetic codes thought to have evolved since the origin of the last common ancestor. On top of this problem, the structure of the optimized codes that evolved via the ambiguous intermediate mechanism differs from the structure of the genetic code found in nature. In short, the result obtained via the ambiguous intermediate mechanism is unrealistic.

    As Massey points out, “The evolution of the SGC remains to be deciphered, and constitutes one of the greatest challenges in the field of molecular evolution.”6

    Making Sense of Explanatory Models

    In the face of these discouraging results for the evolutionary paradigm, Massey concludes that perhaps another evolutionary force apart from natural selection shaped the genetic code. One idea Massey thinks has merit is the Coevolution Theory proposed by J. T. Wong. Wong argued that the genetic code evolved in conjunction with the evolution of the biosynthetic pathways that produce amino acids. Yet Wong's theory doesn't account for the extreme optimization of the genetic code in nature. In fact, the relationships between coding assignments and amino acid biosynthesis appear to result from a statistical artifact, nothing more.7 In other words, Wong's ideas don't work.

    That brings us back to the question of how to account for the genetic code’s optimization and design.

    As I see it, in the same way that two NBA superstars work together to help produce a championship-caliber team, the genetic code’s optimization and the failure of every evolutionary model to account for it form a potent one-two punch that makes a case for a Creator.

    And that is worth talking about.

    Resources

    Endnotes
    1. Steven E. Massey, “Searching of Code Space for an Error-Minimized Genetic Code via Codon Capture Leads to Failure, or Requires at Least 20 Improving Codon Reassignments via the Ambiguous Intermediate Mechanism,” Journal of Molecular Evolution 70, no. 1 (January 2010): 106–15, doi:10.1007/s00239-009-9313-7.
    2. F. H. C. Crick, “The Origin of the Genetic Code,” Journal of Molecular Biology 38, no. 3 (December 28, 1968): 367–79, doi:10.1016/0022-2836(68)90392-6.
    3. Hubert P. Yockey, Information Theory and Molecular Biology (Cambridge, UK: Cambridge University Press, 1992), 180–83.
    4. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
    5. Massey, “Searching of Code Space.”
    6. Massey, “Searching of Code Space.”
    7. Ramin Amirnovin, “An Analysis of the Metabolic Theory of the Origin of the Genetic Code,” Journal of Molecular Evolution 44, no. 5 (May 1997): 473–76, doi:10.1007/PL00006170.
  • New Insights into Genetic Code Optimization Signal Creator’s Handiwork

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 16, 2019

    I knew my career as a baseball player would be short-lived when, as a thirteen-year-old, I made the transition from Little League to the Babe Ruth League, which uses official Major League Baseball rules. Suddenly there were a whole lot more rules for me to follow than I ever had to think about in Little League.

    Unlike in Little League, at the Babe Ruth level the hitter and base runners have to know what the other is going to do. Usually, the third-base coach is responsible for this communication. Before each pitch is thrown, the third-base coach uses a series of hand signs to relay instructions to the hitter and base runners.


    Credit: Shutterstock

    My inability to pick up the signs from the third-base coach was a harbinger for my doomed baseball career. I did okay when I was on base, but I struggled to pick up his signs when I was at bat.

    The issue wasn’t that there were too many signs for me to memorize. I struggled to recognize the indicator sign.

    To prevent the opposing team from stealing the signs, it is common for the third-base coach to use an indicator sign. Each time he relays instructions, the coach randomly runs through a series of signs. At some point in the sequence, the coach gives the indicator sign. When he does that, it means that the next signal is the actual sign.

    All of this activity was simply too much for me to process. When I was at the plate, I couldn’t consistently keep up with the third-base coach. It got so bad that a couple of times the third-base coach had to call time-out and have me walk up the third-base line, so he could whisper to me what I was to do when I was at the plate. It was a bit humiliating.

    Codes Come from Intelligent Agents

    The signs relayed by a third-base coach to the hitter and base runners are a type of code: a set of rules used to convert and convey information across formats.

    Experience teaches us that it takes intelligent agents, such as baseball coaches, to devise codes, even those that are rather basic in their design. The more sophisticated a code, the greater the level of ingenuity required to develop it.

    Perhaps the most sophisticated codes of all are those that can detect errors during data transmission.

    I sure could have used a code like that when I played baseball. It would have helped me if the hand signals used by the third-base coach were designed in such a way that I could always understand what he wanted, even if I failed to properly pick up the indicator signal.

    The Genetic Code

    As it turns out, just such a code exists in nature. It is one of the most sophisticated codes known to us—far more sophisticated than the best codes designed by the brightest computer engineers in the world. In fact, this code resides at the heart of biochemical systems. It is the genetic code.

    This biochemical code consists of a set of rules that define the information stored in DNA. These rules specify the sequence of amino acids that the cell’s machinery uses to build proteins. In this process, information formatted as nucleotide sequences in DNA is converted into information formatted as amino acid sequences in proteins.

    Moreover, the genetic code is universal, meaning that all life on Earth uses it.1
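    The conversion the genetic code performs can be pictured as a simple lookup table. The sketch below uses a handful of real codon assignments (ATG for methionine, GGC for glycine, TTT for phenylalanine, TAA for stop); the function name and the tiny table are illustrative only, since the actual code spans all 64 triplets.

    ```python
    # A minimal sketch of the genetic code as a lookup table: DNA codons
    # (nucleotide triplets) map to amino acids. Only a few of the 64
    # real assignments are shown; '*' marks a stop codon.
    CODON_TABLE = {
        "ATG": "M",  # methionine (also the start codon)
        "GGC": "G",  # glycine
        "TTT": "F",  # phenylalanine
        "TAA": "*",  # stop
    }

    def translate(dna):
        """Convert a DNA coding sequence into an amino acid string,
        reading three bases at a time and halting at a stop codon."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            aa = CODON_TABLE[dna[i:i + 3]]
            if aa == "*":
                break
            protein.append(aa)
        return "".join(protein)

    print(translate("ATGGGCTTTTAA"))  # ATG GGC TTT TAA -> "MGF"
    ```

    In the cell, of course, this translation is carried out by the ribosome and transfer RNAs rather than a table lookup, but the rule structure is the same: each triplet unambiguously specifies one amino acid (or a stop).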

    Biochemists marvel at the design of the genetic code, in part because its structure displays exquisite optimization. This optimization includes the capacity to dramatically curtail errors that result from mutations.

    Recently, a team from Germany identified another facet of the genetic code that is highly optimized, further highlighting its remarkable qualities.2

    The Optimal Genetic Code

    As I describe in The Cell’s Design, scientists from Princeton University and the University of Bath (UK) quantified the error-minimization capacity of the genetic code during the 1990s. Their work indicated that the universal genetic code is optimized to withstand the potentially harmful effects of substitution mutations better than virtually any other conceivable genetic code.3

    In 2018, another team of researchers from Germany demonstrated that the universal genetic code is also optimized to withstand the harmful effects of frameshift mutations—again, better than other conceivable codes.4

    In 2007, researchers from Israel showed that the genetic code is also optimized to harbor overlapping codes.5 This is important because, in addition to the genetic code, regions of DNA harbor other overlapping codes that direct the binding of histone proteins, transcription factors, and the machinery that splices genes after they have been transcribed.

    The Robust Optimality of the Genetic Code

    With these previous studies serving as a backdrop, the German research team wanted to probe more deeply into the genetic code’s optimality. These researchers focused on potential optimality of three properties of the genetic code: (1) resistance to harmful effects of substitution mutations, (2) resistance to harmful effects of frameshift mutations, and (3) capacity to support overlapping genes.

    As with earlier studies, the team assessed the optimality of the naturally occurring genetic code by comparing its performance with sets of random codes that are conceivable alternatives. For all three property comparisons, they discovered that the natural (or standard) genetic code (SGC) displays a high degree of optimality. The researchers write, “We find that the SGC’s optimality is very robust, as no code set with no optimised properties is found. We therefore conclude that the optimality of the SGC is a robust feature across all evolutionary hypotheses.”6
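    The comparison strategy these studies use can be illustrated with a toy model: assign a numeric amino-acid "property" to each codon, score a code by how much a single-base substitution changes that property on average, and then count how many alternative assignments score better. Everything in the sketch below is hypothetical (two-letter codons over a two-letter alphabet, made-up property values); it only mirrors the logic of the error-minimization comparisons, not the actual 64-codon calculation.

    ```python
    from itertools import permutations

    # Toy "genetic code": two-letter codons over the alphabet {A, B},
    # each assigned a hypothetical numeric amino-acid property.
    codons = ["AA", "AB", "BA", "BB"]
    code = {"AA": 1, "AB": 2, "BA": 3, "BB": 4}

    def neighbors(codon):
        """All codons reachable by a single-letter substitution."""
        for i, base in enumerate(codon):
            for alt in "AB":
                if alt != base:
                    yield codon[:i] + alt + codon[i + 1:]

    def cost(assignment):
        """Mean squared property change over all single substitutions —
        a toy version of the error-minimization measure."""
        diffs = [(assignment[c] - assignment[n]) ** 2
                 for c in codons for n in neighbors(c)]
        return sum(diffs) / len(diffs)

    # Compare the toy code against every alternative assignment of the
    # same property values to the same codons (24 permutations in all).
    base_cost = cost(code)
    all_costs = [cost(dict(zip(codons, perm)))
                 for perm in permutations(code.values())]
    better = sum(c < base_cost for c in all_costs)
    print(f"codes better than ours: {better} of {len(all_costs)}")
    # -> codes better than ours: 0 of 24
    ```

    In this toy example no alternative assignment beats the chosen one; the studies cited here run the analogous comparison over large random samples of full 64-codon codes and find the standard genetic code near the top of the distribution.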

    On top of this insight, the research team adds one other dimension to the multidimensional optimality of the genetic code: its capacity to support overlapping genes.

    Interestingly, the researchers also note that their results raise significant challenges to evolutionary explanations for the genetic code, because the code’s multidimensional optimality is extreme in every dimension. They write:

    We conclude that the optimality of the SGC is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.7

    While natural selection isn’t omnipotent, a transcendent Creator would be, and could account for the genetic code’s extreme optimality.

    The Genetic Code and the Case for a Creator

    In The Cell’s Design, I point out that our common experience teaches us that codes come from minds. It’s true on the baseball diamond and true in the computer lab. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind—a conclusion that gains additional support when we consider the code’s sophistication and exquisite optimization.

    The genetic code’s ability to withstand errors that arise from substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes and overlapping genes, seems to defy naturalistic explanation.

    As a neophyte playing baseball, I could barely manage the simple code the third-base coach used. How mind-boggling it is for me when I think of the vastly superior ingenuity and sophistication of the universal genetic code.

    And, just like the hitter and base runner work together to produce runs in baseball, the elegant design of the genetic code and the inability of evolutionary processes to account for its extreme multidimensional optimization combine to make the case that a Creator played a role in the origin and design of biochemical systems.

    With respect to the case for a Creator, the insight from the German research team hits it out of the park.

    Endnotes
    1. Some organisms have a genetic code that deviates from the universal code in one or two of the coding assignments. Presumably, these deviant codes originate when the universal genetic code evolves, altering coding assignments.
    2. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
    3. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33, no. 5 (November 1991): 412–17, doi:10.1007/BF02103132; Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281, no. 5375 (July 17, 1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47, no. 3 (September 1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17, no. 4 (April 2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
    4. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (May 21, 2018): e4825, doi:10.7717/peerj.4825.
    5. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research 17, no. 4 (April 2007): 405–12, doi:10.1101/gr.5987307.
    6. Wichmann and Ardern, “Optimality.”
    7. Wichmann and Ardern, “Optimality.”

About Reasons to Believe

RTB's mission is to spread the Christian Gospel by demonstrating that sound reason and scientific research—including the very latest discoveries—consistently support, rather than erode, confidence in the truth of the Bible and faith in the personal, transcendent God revealed in both Scripture and nature. Learn More »

Copyright 2020. Reasons to Believe. All rights reserved. Use of this website constitutes acceptance of our Privacy Policy.