Where Science and Faith Converge
  • Believing Impossible Things: Convergent Origins of Functional Junk DNA Sequences

    Mar 14, 2018

    In a classic scene from Lewis Carroll’s Through the Looking-Glass, the story’s heroine informs the White Queen, “One can’t believe impossible things,” to which the White Queen—scolding Alice—replies, “I daresay you haven’t had much practice. When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

    If recent work by researchers from UC Santa Cruz and the University of Rochester (New York) is to be taken as true, it would require evolutionary biologists to believe two impossible things—before, during, and after breakfast. These scientific investigators have discovered something that is hard to believe about the role SINE DNA plays in gene regulation, raising questions about the validity of the evolutionary explanation for the architecture of the human genome.1 In fact, considering the implications of this work, it would be easier to believe that the human genome was shaped by a Creator’s handiwork than by evolutionary forces.


    Short interspersed elements, or SINEs, are one of the many classes of noncoding or junk DNA. These SINE DNA sequences range in size from 100 to 300 base pairs (genetic letters). In primates, the most common SINEs are the Alu sequences. There are about 1.1 million Alu copies in the human genome (roughly 12 percent of the genome).

    SINE DNA sequences (including Alu sequences) contain a DNA segment used by the cell’s machinery to produce an RNA message. This feature allows SINEs to be transcribed, and because of it, molecular biologists also categorize SINE DNA as a retroposon. Molecular biologists believe that SINE sequences can multiply in number within an organism’s genome through the activity of the enzyme reverse transcriptase. Presumably, once SINE DNA is transcribed, reverse transcriptase converts the SINE RNA back into DNA, and the reconverted DNA sequence then reintegrates into the genome at a random location. It’s through this duplication and reintegration mechanism that SINE sequences proliferate as they move around, or retrotranspose, throughout the genome. To say it differently, molecular biologists believe that over time, transcription of SINE DNA and reverse transcription of SINE RNA increase the copy number of SINE sequences and randomly disperse them throughout an organism’s genome.
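
    To picture how this copy-and-paste mechanism plays out, here is a minimal toy simulation of my own (not taken from the study or any published model). Each generation, every existing SINE copy has a small, assumed probability of being transcribed, reverse-transcribed, and reinserted at a random genome coordinate, so copy number grows while the copies scatter across the genome. The genome size and retrotransposition rate below are placeholder values chosen purely for illustration.

        import random

        # Toy model of retrotransposition: copies occasionally "copy and paste"
        # themselves to random coordinates, so the family grows and disperses.
        GENOME_SIZE = 3_000_000_000   # base pairs, human-scale for illustration
        RETRO_RATE = 0.002            # assumed per-copy, per-generation chance

        def simulate(generations: int, start_positions: list[int]) -> list[int]:
            positions = list(start_positions)
            for _ in range(generations):
                new_copies = [random.randrange(GENOME_SIZE)
                              for _ in positions if random.random() < RETRO_RATE]
                positions.extend(new_copies)   # old copies stay; new ones add on
            return positions

        start = [random.randrange(GENOME_SIZE) for _ in range(1_000)]
        final = simulate(generations=500, start_positions=start)
        print(f"copies after 500 generations: {len(final):,}")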

    Molecular biologists have discovered numerous instances in which nearly identical SINE segments occur at corresponding locations in the genomes of humans, chimpanzees, and other primates. Because the duplication and movement of SINE DNA appear to be random, evolutionary biologists think it unlikely that SINE sequences would independently appear in the same locations in the genomes of humans and chimpanzees (and other primates). And given their supposed nonfunctional nature, shared SINE DNA in humans and chimpanzees seemingly reflects their common evolutionary ancestry. In fact, evolutionary biologists have gone one step further, using SINE Alu sequences to construct primate evolutionary trees.

    SINE DNA Is Functional

    Even though many people view shared junk DNA sequences as the most compelling evidence for biological evolution, the growing recognition that virtually every class of junk DNA has function undermines this conclusion. For if these shared sequences are functional, then one could argue that they reflect the Creator’s common design, not shared evolutionary ancestry and common descent. As a case in point, in recent years, molecular biologists have learned that SINE DNA plays a vital role in gene regulation through a variety of distinct mechanisms.2

    Staufen-Mediated mRNA Decay

    One way SINE sequences regulate gene expression is through a pathway called Staufen-mediated messenger RNA (mRNA) decay (SMD). Critical to an organism’s development, SMD plays a key role in cellular differentiation. The pathway centers on the destruction of mRNA; when this degradation takes place, it down-regulates gene expression. SMD involves binding of a protein called Staufen-1 to one end of the mRNA molecule (dubbed the 3′ untranslated region). Staufen-1 binds specifically to double-stranded structures in the 3′ untranslated region. This double-stranded structure forms when Alu sequences in the 3′ untranslated region pair with long noncoding RNA molecules containing Alu sequences. This binding event triggers a cascade of additional events that leads to the breakdown of the mRNA.

    Common Descent or Common Design?

    As an old-earth creationist, I see the functional role played by noncoding DNA sequences as a reflection of God’s handiwork, defending the case for design from a significant evolutionary challenge. To state it differently: these findings mean that it is just as reasonable to conclude that the shared SINE sequences in the genomes of humans and the great apes reflect common design, not a shared evolutionary ancestry.

    In fact, I would maintain that it is more reasonable to think that functional SINE DNA sequences reflect common design, rather than common descent, given the pervasive role these sequence elements play in gene regulation. Because Alu sequences are only found in primates, they must have originated fairly recently (when viewed from an evolutionary framework). Yet, they play an integral and far-reaching role in gene regulation.

    And herein lies the first impossible thing evolutionary biologists must believe: Somehow Alu sequences arose and then quickly assumed a central place in gene regulation. According to Carl Schmid, a researcher who uncovered some of the first evidence for the functional role played by SINE DNA, “Since Alus have appeared only recently within the primate lineage, this proposal [of SINE DNA function] provokes the challenging question of how Alu RNA could have possibly assumed a significant role in cell physiology.”3

    How Does Junk DNA Acquire Function?

    Still, those who subscribe to the evolutionary framework do not view functional junk DNA as incompatible with common descent. They argue that junk DNA acquired function through a process called neofunctionalization. In the case of SMD mediated by Alu sequences in the human genome, evolutionary biologists maintain that these DNA elements occasionally become incorporated into the 3′ untranslated regions of genes and into regions of the human genome that produce long noncoding RNAs. Occasionally, by chance, some of the Alu sequences in long noncoding RNAs will have the capacity to pair with the 3′ untranslated regions of specific mRNAs. When this happens, these Alu sequences trigger SMD-mediated gene regulation. And if that regulation confers any advantage, it will persist, so that over time some Alu sequences evolve to assume a role in SMD-mediated gene regulation.

    Is Neofunctionalization the Best Explanation for SINE Function?

    At some level, this evolutionary scenario seems reasonable (the concerns expressed by Carl Schmid notwithstanding). Still, neofunctionalization events should be relatively rare. And because of the chance nature of neofunctionalization, it would be rational to think that the central role SINE sequences play in SMD gene regulation would be unique to humans.

    Why would I make this claim? Based on the nature of evolutionary mechanisms, chance should govern biological and biochemical evolution at its most fundamental level (assuming it occurs). Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which also consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.

    The concept of historical contingency embodies this idea and is the theme of Stephen Jay Gould’s book Wonderful Life. According to Gould,

    “No finale can be specified at the start, none would ever occur a second time in the same way, because any pathway proceeds through thousands of improbable stages. Alter any early event, ever so slightly, and without apparent importance at the time, and evolution cascades into a radically different channel.”4

    To help clarify the concept of historical contingency, Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and let the tape run again, the results would be completely different each time. The very essence of the evolutionary process renders evolutionary outcomes nonrepeatable.

    Gould’s perspective on the evolutionary process has been affirmed by other researchers who have produced data indicating that if evolutionary processes explain the origin of biochemical systems, they must be historically contingent.

    Did SMD Evolve Twice?

    Yet, collaborators from UC Santa Cruz and the University of Rochester discovered that SINE-mediated SMD appears to have evolved independently—two separate times—in humans and mice, the second impossible thing evolutionary biologists have to believe.

    Though rodents don’t possess Alu sequences, they do possess several other SINE elements, labeled B1, B2, B4, and ID. Remarkably, these B/ID sequences occur in regions of the mouse genome corresponding to the regions of the human genome harboring Alu sequences. And when the B/ID sequences are associated with the 3′ untranslated regions of genes, the mRNA produced from these genes is down-regulated, suggesting that these genes are under the influence of the SMD pathway—an unexpected result.

    But, this finding is not nearly as astonishing as something else the research team discovered. By comparing about 1,200 human-mouse gene pairs in myoblasts, the researchers discovered 24 genes in this cell type that were identical in the human and mouse genomes. These identical genes performed the same physiological role, possessed SINE elements (Alu and B/ID, respectively), and were regulated by the SMD mechanism.

    Evolutionary biologists believe that Alu and B/ID SINE sequences emerged independently in the rodent and human lineages. If so, evolutionary processes must have independently produced the identical outcome—SINE-mediated SMD gene regulation—for each of the 24 genes, 24 separate times. As the researchers point out, chance alone cannot explain their findings. Yet evolutionary mechanisms are historically contingent and should not yield identical outcomes. This impossible scenario causes me to question whether neofunctionalization is the explanation for functional SINE DNA.

    And yet, this is not the first time that life scientists have discovered the independent emergence of identical function for junk DNA sequences.

    So, which is the better explanation for functional junk DNA sequences: neofunctionalization through historically contingent evolutionary processes or the work of a Mind?

    As Alice emphatically complained, “One can’t believe impossible things.”


    1. Bronwyn A. Lucas et al., “Evidence for Convergent Evolution of SINE-Directed Staufen-Mediated mRNA Decay,” Proceedings of the National Academy of Sciences, USA Early Edition (January 2018): doi:10.1073/pnas.1715531115.
    2. Reyad A. Elbarbary et al., “Retrotransposons as Regulators of Gene Function,” Science 351 (February 12, 2016): doi:10.1126/science.aac7247.
    3. Carl W. Schmid, “Does SINE Evolution Preclude Alu Function?,” Nucleic Acids Research 26 (October 1998): 4541–50, doi:10.1093/nar/26.20.4541.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1989), 51.
  • Protein Amino Acids Form a “Just-Right” Set of Biological Building Blocks

    Feb 21, 2018

    Like most kids, I had a set of Lego building blocks. But the Lego sets I grew up with in the 1960s were nothing like the ones today. I am amazed at how elaborate and sophisticated Legos have become, consisting of interlocking blocks of various shapes and sizes, gears, specialty parts, and figurines—a far cry from the square and rectangular blocks that made up the Lego sets of my youth. The most imaginative things I could ever hope to build were long walls and high towers.

    It goes to show: the set of building blocks makes all the difference in the world.

    This truism applies to the amino acid building blocks that make up proteins. As it turns out, proteins are built from a specialty set of amino acids that have the just-right set of properties to make life possible, as recent work by researchers from Germany attests.1 From my vantage point as a biochemist and a Christian, the just-right amino acid composition of proteins evinces intelligent design and is part of the reason I think a Creator must have played a direct role in the origin and design of life.

    Why Is the Same Set of Twenty Amino Acids Used to Build Proteins?

    It stands as one of the most important insights about protein structure discovered by biochemists: The set of amino acids used to build proteins is universal. In other words, the proteins found in every organism on Earth are made up of the same 20 amino acids.

    Yet, hundreds of amino acids exist in nature. And, this abundance prompts the question: Why these 20 amino acids? From an evolutionary standpoint, the set of amino acids used to build proteins should reflect:

    1) the amino acids available on early Earth, generated by prebiotic chemical reactions;

    2) the historically contingent outworking of evolutionary processes.

    In other words, evolutionary mechanisms would have cobbled together an amino acid set that works “just good enough” for life to survive, but nothing more. No one would expect evolutionary processes to piece together a “just-right,” optimal set of amino acids. If evolutionary processes shaped the amino acid set used to build proteins, these biochemical building blocks should be much like the unsophisticated Lego sets little kids played with in the 1960s.

    An Optimal Set of Amino Acids

    But, contrary to this expectation, in the early 1980s biochemists discovered that an exquisite molecular rationale undergirds the amino acid set used to make proteins. Every aspect of the amino acid structure has to be precisely the way it is for life to be possible. On top of that, researchers from the University of Hawaii have conducted a quantitative comparison of the range of chemical and physical properties possessed by the 20 protein-building amino acids versus random sets of amino acids that could have been selected from early Earth’s hypothetical prebiotic soup.2 They concluded that the set of 20 amino acids is optimal. It turns out that the set of amino acids found in biological systems possesses the “just-right” properties that evenly and uniformly vary across a broad range of size, charge, and hydrophobicity. They also showed that the amino acids selected for proteins are a “highly unusual set of 20 amino acids; a maximum of 0.03% random sets outperformed the standard amino acid alphabet in two properties, while no single random set exhibited greater coverage in all three properties simultaneously.”3
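
    The kind of comparison Philip and Freeland describe can be sketched in a few lines of code. The snippet below is my own illustration of the general approach, not their published procedure: score how broadly and how evenly a 20-member amino acid set covers a property axis (such as hydrophobicity), then estimate what fraction of randomly drawn alternative sets score better than the standard alphabet. The property values are assumed to be supplied by the user; the published study used measured values for size, charge, and hydrophobicity and its own coverage metrics.

        import random
        from statistics import pstdev

        def coverage_score(values: list[float]) -> float:
            """Breadth of the property range, penalized for uneven spacing."""
            vals = sorted(values)
            breadth = vals[-1] - vals[0]
            gaps = [b - a for a, b in zip(vals, vals[1:])]
            return breadth - pstdev(gaps)

        def fraction_outperforming(standard: list[float],
                                   candidate_pool: list[float],
                                   trials: int = 100_000) -> float:
            """Fraction of random same-sized sets that beat the standard set."""
            target = coverage_score(standard)
            wins = sum(
                coverage_score(random.sample(candidate_pool, len(standard))) > target
                for _ in range(trials)
            )
            return wins / trials

        # Usage sketch: `standard` holds a measured property (e.g., hydrophobicity)
        # for the 20 protein amino acids; `candidate_pool` holds the same property
        # for a larger pool of plausible prebiotic amino acids.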

    A New Perspective on the 20 Protein Amino Acids

    Beyond charge, size, and hydrophobicity, the German researchers wondered if quantum mechanical effects play a role in dictating the universal set of 20 protein amino acids. To address this question, they examined the gap between the HOMO (highest occupied molecular orbital) and the LUMO (lowest unoccupied molecular orbital) for the protein amino acids. The HOMO-LUMO gap is one of the quantum mechanical determinants of chemical reactivity. More reactive molecules have smaller HOMO-LUMO gaps than molecules that are relatively nonreactive.
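
    To make the quantity concrete, the sketch below (my own example, not the German team’s workflow) shows how a HOMO-LUMO gap can be estimated from Hartree-Fock orbital energies with the PySCF library. A water molecule stands in for an amino acid purely to keep the geometry short; the same steps apply to any closed-shell molecule.

        # Minimal HOMO-LUMO gap estimate with PySCF (illustrative only).
        from pyscf import gto, scf

        mol = gto.M(
            atom="O 0.000 0.000 0.117; H 0.000 0.757 -0.469; H 0.000 -0.757 -0.469",
            basis="sto-3g",
        )
        mf = scf.RHF(mol).run()            # restricted Hartree-Fock calculation

        n_occ = mol.nelectron // 2         # doubly occupied orbitals (closed shell)
        homo = mf.mo_energy[n_occ - 1]     # highest occupied molecular orbital
        lumo = mf.mo_energy[n_occ]         # lowest unoccupied molecular orbital
        print(f"HOMO-LUMO gap: {(lumo - homo) * 27.2114:.2f} eV")   # hartree -> eV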

    The German biochemists discovered that the HOMO-LUMO gap was small for 7 of the 20 amino acids (histidine, phenylalanine, cysteine, methionine, tyrosine, and tryptophan), and hence, these molecules display a high level of chemical reactivity. Interestingly, some biochemists think that these 7 amino acids are not necessary to build proteins. Previous studies have demonstrated that a wide range of foldable, functional proteins can be built from only 13 amino acids (glycine, alanine, valine, leucine, isoleucine, proline, serine, threonine, aspartic acid, glutamic acid, asparagine, lysine, and arginine). As it turns out, this subset of 13 amino acids has relatively large HOMO-LUMO gaps and, therefore, is relatively unreactive. This suggests that the reactivity of histidine, phenylalanine, cysteine, methionine, tyrosine, and tryptophan may be part of the reason for the inclusion of the 7 amino acids in the universal set of 20.

    As it turns out, these amino acids readily react with the peroxy free radical, a highly corrosive chemical species that forms when oxygen is present in the atmosphere. The German biochemists believe that when these 7 amino acids reside on the surface of proteins, they play a protective role, keeping the proteins from oxidative damage.

    As I discussed in a previous article, these 7 amino acids contribute in specific ways to protein structure and function. And they contribute to the optimal set of chemical and physical properties displayed by the universal set of 20 amino acids. And now, based on the latest work by the German researchers, it seems that the amino acids’ newly recognized protective role against oxidative damage adds to their functional and structural significance in proteins.

    Interestingly, because of the universal nature of biochemistry, these 7 amino acids must have been present in the proteins of the last universal common ancestor (LUCA) of all life on Earth. And yet, there was little or no oxygen present on early Earth, rendering the protective effect of these amino acids unnecessary. The importance of their small HOMO-LUMO gaps would not have been realized until much later in life’s history, when oxygen levels became elevated in Earth’s atmosphere. In other words, inclusion of these amino acids in the universal set at life’s start seemingly anticipates future events in Earth’s history.

    Protein Amino Acids Chosen by a Creator

    The optimality, foresight, and molecular rationale undergirding the universal set of protein amino acids are not expected if life had an evolutionary origin. But they are exactly what I would expect if life stems from a Creator’s handiwork. As I discuss in The Cell’s Design, objects and systems created and produced by human designers are typically well thought out and optimized, both of which are indicative of intelligent design. In human designs, optimization is achieved through foresight and planning, and it requires inordinate attention to detail and careful craftsmanship. By analogy, the optimized biochemistry, epitomized by the amino acid set that makes up proteins, rationally points to the work of a Creator.


    1. Matthias Granold et al., “Modern Diversification of the Amino Acid Repertoire Driven by Oxygen,” Proceedings of the National Academy of Sciences, USA 115 (January 2, 2018): 41–46, doi:10.1073/pnas.1717100115.
    2. Gayle K. Philip and Stephen J. Freeland, “Did Evolution Select a Nonrandom ‘Alphabet’ of Amino Acids?” Astrobiology 11 (April 2011): 235–40, doi:10.1089/ast.2010.0567.
    3. Philip and Freeland, “Did Evolution Select,” 235–40.
  • Love Is in the Air and It Smells Like Intelligent Design

    Feb 14, 2018

    Being the hopeless romantic, I worked hard last year to come up with just the right thing to say to my wife on Valentine’s Day. I decided to let my lovely bride know that I really liked her signaling traits. Sadly, that didn’t go over so well.

    This year, I think I am going to tell my wife that I like the way she smells.

    I don’t know how Amy will receive my romantic overture, but I do know that scientific research explains the preference I have for my wife’s odors—it reflects the composition of a key component of her immune system, specifically her major histocompatibility complex (MHC). And, my wife’s immune system really turns me on.

    Odor Preference and Immune System Composition

    Why am I so attracted to my wife’s scents, and hence, the composition of her immune system? Several studies help explain the connection.

    In a highly cited study, researchers had men sleep in the same T-shirt for several nights in a row. Then, they asked women to rank the T-shirts according to odor preference. As it turns out, women had the greatest preference for the odor of T-shirts worn by men who had MHC genes that were the most dissimilar to theirs.

    In another oft-cited study, researchers had 121 men and women rank the pleasantness of T-shirt odors and found that the ones they most preferred displayed odors that were most similar to those of their partners. Based on the results of another related study, it appears that this odor preference reflects dissimilarities in immune systems. Researchers discovered that the genetic differences in the MHC genes for 90 married couples were far more extensive than for 152 couples made up by randomly combining partners.

    Body Odor and the Immune System

    So, how does odor reflect the composition of the MHC genes? Researchers believe that the breakdown products of the MHC, generated during the normal turnover of cellular components, serve as the connection between the immune system and body odors.

    The MHC is a protein complex that resides on the cell surface. This protein complex binds proteins derived from pathogens after these organisms have infected the host cell and, in turn, displays them on the cell surface for recognition by the cells of the immune system.



    Figure: Association of Pathogen Proteins with MHCs. Image credit: By Scray (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

    Organisms possess a large number of MHC variants, making the genes that encode the MHCs some of the most diverse in the human genome. Because the MHCs bind proteins derived from pathogens, the greater the diversity of MHC genes, the greater the capacity to respond to infectious agents.

    As part of the normal turnover of cellular components, the MHCs are constantly being broken down and replaced. When this happens, protein fragments from the MHCs become dispersed throughout the body, winding up in the blood, saliva, and urine. Some researchers think that the microbes in the mouth, on the skin, and on the surfaces lining body cavities metabolize the MHC breakdown products, leading to the production of odorants. And these odors tell us something about the immune system of our potential partners.

    Advantages of Having a Partner with Dissimilar MHC Genes

    When men and women with dissimilar MHC genes pair up, it provides a significant advantage to their children. Why? Because parental MHC gene dissimilarity translates into the maximal genetic diversity for the MHC genes of their children. And, as already noted, the more diverse the MHC genes, the greater the resistance to pathogens and parasites.

    The attraction between mates with dissimilar immune genes is not limited to human beings. This phenomenon has been observed throughout the animal kingdom. And from studying mate attraction of animals, we can come to appreciate the importance of MHC gene diversity. For example, one study demonstrated that salmon raised in hatcheries displayed a much more limited genetic diversity for their MHC genes than salmon that live in the wild. As it turns out, hatchery-raised salmon are four times more likely to be infected with pathogens than those found in the wild.

    Is Love Nothing More than Biochemistry?

    Does the role odor preference plays in mate selection mean that love is merely an outworking of physiological mechanisms? Does it mean that there is not a spiritual dimension to the love we feel toward our partners? Does it mean that human beings are merely physical creatures? If so, does this type of discovery undermine the biblical view of humanity?

    Hardly. In fact, this discovery makes perfect sense within a Christian worldview.

    In his book The Biology of Sin, neuroscientist Matthew Stanford presents a model that helps make sense of these types of discoveries. Stanford points out that Scripture teaches that human beings are created as both material and immaterial beings, possessing a physical body and nonphysical mind and spirit. Instead of being a “ghost in the machine,” our material and immaterial natures are intertwined, interacting with each other. It is through our bodies (including our brain) that we interact with the physical world around us. The activities of our brain influence the activities of our mind (where our thoughts, feelings, and emotions are housed), and vice versa. It is through our spirit that we have union with God. Spiritual transformation can influence our brain’s activities and how we think; also, how and what we think can influence our spirit.

    So, in light of Stanford’s model, we can make sense of how love can be both a physical and spiritual experience while preserving the biblical view of human nature.

    Smells Like Intelligent Design

    Clearly, the attraction between two people extends beyond body odor and other physical processes and features. Still, the connection between body odor and the composition of the MHC genes presents itself as an ingenious, elegant way to ensure that animal populations (and human beings) are best positioned to withstand the assaults of pathogens. As an old-earth creationist, I see this insight as exactly what I would expect, and it attracts me to the view that life on Earth, including human life, is the product of Divine handiwork.

    Now, I am off to the chocolatier to get my wife a box of her favorite chocolates for Valentine’s Day. I don’t want her to decide that I stink as a husband.


  • Rabbit Burrowing Churns Claims about Neanderthal Burials

    Feb 07, 2018

    As a kid, watching cartoons was one of the highlights of my afternoons. As soon as I arrived home from school, I would plop down in front of the TV. Among my favorites were the short features produced by Warner Brothers. What a wonderful cast of characters: Daffy Duck, Sylvester and Tweety, Yosemite Sam, the Tasmanian Devil, the Road Runner and Wile E. Coyote. As much as I loved to watch their shenanigans, none of them compared to the indomitable Bugs Bunny. That wascally wabbit (to quote Elmer Fudd) always seemed to create an upheaval everywhere he went.

    Recently, a research team from France has come to realize that Bugs Bunny isn’t the only rabbit to make a mess of things. These investigators learned that burrowing rabbits have created an upheaval—literally—at Neanderthal archaeological sites, casting doubt on claims that these hominins displayed sophisticated cognitive abilities.1

    Researchers from France unearthed this problem while studying the Regourdou Neanderthal site in Dordogne. Neanderthal bones and stone artifacts, along with animal remains, were recovered from this cave site in 1954. Unfortunately, the removal of the remains by archaeologists was done in a nonscientific manner—by today’s standards.

    Based on the arrangement of the Neanderthal remains, lithic artifacts, and cave bear bones at the site, anthropologists initially concluded that one of the Neanderthals found at Regourdou was deliberately buried, indicating that these hominins must have engaged in complex funerary practices. Many anthropologists consider complex funeral activities to reflect one of the most sophisticated examples of symbolic behavior. If so, then Neanderthals must have possessed cognitive abilities similar to those of modern humans, undermining the scientific case for human exceptionalism and, along with it, casting aspersions on the biblical view of humanity.

    Questions about Neanderthal Burials

    Yet, more recent analysis of the Regourdou site has raised questions about Neanderthal burial practices. One piece of evidence cited by anthropologists for the funerary burial at this French cave site was the recovery of bear remains associated with a nearly complete Neanderthal specimen. Some anthropologists argued that Neanderthals used the cave bear bones to construct a funerary structure.

    But anthropologists have started to question this interpretation. Evidence mounts that this cave site functioned primarily as a den for cave bears, with the accumulation of cave bear bones largely stemming from attritional mortality—not the deliberate activity of Neanderthals.

    Rabbits at Regourdou

    Anthropologists have also recovered a large quantity of rabbit remains at the Regourdou site. At first, these rabbit bones were taken as evidence that the hominins had the cognitive capacity to hunt and trap small game—something only modern humans were thought to be able to do.

    One species found at the Regourdou cave site is the European rabbit (Oryctolagus cuniculus). These rabbits dig interconnected burrows (called a warren) to avoid predation and harsh climatic conditions. Depending on the sediment, the warren architecture can be deep and complex.

    Because the researchers discovered over 10,000 rabbit bones at the Regourdou site, they became concerned that the burrowing behavior of these creatures may have compromised the integrity of the site. To address this issue, they used radiocarbon dating to age-date the rabbit remains. They discovered that the rabbit bones were significantly younger than the sediments harboring them. They also noted that the skeletal parts, breakage pattern of the bones, and surface modification of the rabbit remains indicate that these creatures died within the warrens due to natural causes, negating the claim that Neanderthals hunted small game. This set of observations indicates that the rabbits burrowed and lived in warrens in the Regourdou site, well after the cave deposits formed.
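
    For readers curious about the dating step itself, the arithmetic behind a conventional radiocarbon age is simple: the age follows from the fraction of carbon-14 remaining in the sample. The snippet below is a simplified illustration using the standard Libby mean life of 8,033 years; it is not the calibrated dating protocol the researchers would have applied.

        import math

        LIBBY_MEAN_LIFE = 8033  # years; conventional radiocarbon ages use this value

        def radiocarbon_age(fraction_modern: float) -> float:
            """Conventional radiocarbon age (years BP) from the remaining 14C fraction."""
            return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

        # Example: a bone retaining about 60 percent of the modern 14C level dates to
        # roughly 4,100 years, far younger than sediments tens of thousands of years old.
        print(round(radiocarbon_age(0.60)))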

    Perhaps of greatest concern associated with this finding is the uncertainty it creates about the integrity of sedimentary layers, because the rabbit burrows cross and perturb several layers, resulting in the mixing of bones and artifacts from one layer to the next. This bioturbation appears to have transported artifacts and bones from the upper layers to the lower layers.

    Upheaval of the cave layers caused by the rabbits means that grave goods associated with Neanderthal skeletons may not have been intentionally placed with the body at the time of death. Instead, they may just have happened to wind up next to the hominin remains due to burrowing activity.

    Such tumult may not be limited to the Regourdou cave site. These creatures live throughout France and the Iberian Peninsula, raising questions about the influence the rabbits may have had on the integrity of other archaeological cave sites in France and Spain. For example, it is not hard to envision scenarios in which rabbit burrowing caused mixing at other cave sites, resulting in the accidental association of Neanderthal remains with artifacts made by modern humans, who occupied the cave sites after Neanderthals and deposited those artifacts in the upper layers. If so, this association could mistakenly lead anthropologists to conclude that Neanderthals had advanced cognitive abilities when, in fact, they did not. While Bugs Bunny’s antics may amuse us, it is no laughing matter to consider the possible impact rabbits may have had on scientific findings.

    Only Human Beings Are Exceptional

    Even though some anthropologists assert that Neanderthals possessed advanced cognitive abilities like those of modern humans, ongoing scientific scrutiny of the archaeological evidence consistently fails to substantiate those claims. This failure is clearly the case with the Regourdou burial. No doubt, Neanderthals were fascinating creatures. But there is no compelling scientific reason to think that their behavioral capacity threatens human exceptionalism and the notion that human beings were created to bear God’s image.


    1. Maxime Pelletier et al., “Rabbits in the Grave! Consequences of Bioturbation on the Neandertal ‘Burial’ at Regourdou (Montignac-sur-Vézère, Dordogne),” Journal of Human Evolution 110 (September 2017): 1–17, doi:10.1016/j.jhevol.2017.04.001.
  • Is the Laminin “Cross” Evidence for a Creator?

    Jan 31, 2018

    As I interact with people on social media and travel around the country to speak on the biochemical evidence for a Creator, I am frequently asked to comment on laminin.1 The people who mention this protein are usually quite excited, convinced that its structure provides powerful scientific evidence for the Christian faith. Unfortunately, I don’t agree.

    Motivating this unusual question is the popularized claim of a well-known Christian pastor that laminin’s structure provides physical evidence that the God of the Bible created human beings and also sustains our lives. While I wholeheartedly believe God did create and does sustain human life, laminin’s apparent cross-shape does not make the case.

    Laminin is one of the key components of the basal lamina, a thin sheet-like structure that surrounds cells in animal tissue. The basal lamina is part of the extracellular matrix (ECM), a meshwork of fibrous proteins and polysaccharides secreted by the cells that fills the space between cells in animal tissue. The ECM carries out a wide range of functions that include providing anchor points and support for cells.

    Laminin is a relatively large protein made of three different protein subunits that combine to form a t-shaped structure when the flexible rod-like regions of laminin are fully extended. Each of the four “arms” of laminin contains sites that allow this biomolecule to bind to other laminin molecules, other proteins (like collagen), and large polysaccharides. Laminin also provides a binding site for proteins called integrins, which are located in the cell membrane.


    Figure: The structure of laminin. Image credit: Wikipedia

    Laminin’s architecture and binding sites make this protein ideally suited to interact with other proteins and polysaccharides to form a network called the basal reticulum and to anchor cells to its biochemical scaffolding. The basal reticulum helps hold cells together to form tissues and, in turn, helps cement that tissue to connective tissues.

    The cross-like shape of laminin and the role it plays in holding tissues together has prompted the claim that this biomolecule provides scientific support for passages such as Colossians 1:15–17 and shows how the God of the Bible must have made humans and continues to sustain them.

    I would caution Christians against using this “argument.” I see a number of problems with it. (And so do many skeptics.)

    First, the cross shape is a simple structure found throughout nature. So, it’s probably not a good idea to attach too much significance to laminin’s shape. The t configuration makes laminin ideally suited to connect proteins to each other and cells to the basal reticulum. This is undoubtedly the reason for its structure.

    Secondly, the cross shape of laminin is an idealized illustration of the molecule. Portraying complex biomolecules in simplified ways is a common practice among biochemists. Depicting laminin in this extended form helps scientists visualize and catalog the binding sites along its four arms. This configuration should not be interpreted to represent its actual shape in biological systems. In the basal reticulum, laminin adopts all sorts of shapes that bear no resemblance to a cross. In fact, it’s much more common to observe laminin in a swastika configuration than in a cross-like one. Even electron micrographs of isolated laminin molecules that appear cross-shaped may be misleading. Their shape is likely an artifact of sample preparation. I have seen other electron micrographs that show laminin adopting a variety of twisted shapes that, again, bear no resemblance to a cross.

    Finally, laminin is not the only molecule “holding things together.” A number of other proteins and polysaccharides are also indispensable components of the basal reticulum. None of these molecules is cross-shaped.

    As I argue in my book, The Cell’s Design, the structure and operation of biochemical systems provide some of the most potent support for a Creator’s role in fabricating living systems. Instead of pointing to superficial features of biomolecules such as the “cross-shaped” architecture of laminin, there are many more substantive ways to use biochemistry to argue for the necessity of a Creator and for the value he places on human life. As a case in point, the salient characteristics of biochemical systems identically match those features we would recognize immediately as evidence for the work of a human design engineer. The close similarity between biochemical systems and the devices produced by human designers logically compels this conclusion: life’s most fundamental processes and structures stem from the work of an intelligent, intentional Agent.

    When Christians invest the effort to construct a careful case for the Creator, skeptics and seekers find it difficult to deny the powerful evidence from biochemistry and other areas of science for God’s existence.


    1. This article was originally published in the April 1, 2009, edition of New Reasons to Believe.
  • Did Neanderthals Self-Medicate?

    Jan 24, 2018

    Calculus is hard.

    But it is worth studying because it is such a powerful tool.

    Oh, wait!

    You don’t think I’m referring to math, do you? I’m not. I’m referring to dental calculus, the hardened plaque that forms on teeth.

    Recently, researchers from Australia and the UK studied the calculus scraped from the teeth of Neanderthals and compared it to calculus taken from the teeth of modern humans and wild-caught chimpanzees, with the hope of understanding the diets and behaviors of these hominins.1 The researchers concluded that the study supports the view that Neanderthals had advanced cognitive abilities like those of modern humans. If so, this conclusion raises questions and concerns about the credibility of the biblical view of humanity; specifically, the idea that we stand apart from all other creatures on Earth because we are uniquely made in God’s image. Ironically, careful assessment of this work actually supports the notion of human exceptionalism, and with it provides scientific evidence that human beings are made in God’s image.

    This study built upon previous work in which researchers discovered that they could extract trace amounts of different types of compounds from the dental calculus of Neanderthals and garner insights about their dietary practices.2 Scientists have learned that when plaque forms, it traps food particles and microbes from the mouth and respiratory tract. In the most recent study, Australian and British scientists extracted ancient DNA from plaque samples isolated from the teeth of Neanderthals recovered at Spy Cave (Belgium) and El Sidrón (Spain). These specimens date to between 42,000 and 50,000 years ago. By sequencing the ancient DNA in the samples and comparing the sequences to known sequences in databases, the research team determined the types of food Neanderthals ate and the microorganisms that infected their mouths.
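
    To give a rough picture of what “comparing the sequences to known sequences in databases” involves, here is a toy illustration of my own, not the study’s actual metagenomic pipeline: reference sequences are indexed by short k-mers, and each sequenced fragment is assigned to the reference it shares the most k-mers with. The reference snippets and the read are made up; real analyses rely on curated databases, dedicated alignment or classification tools, and ancient-DNA damage controls.

        from collections import Counter, defaultdict

        K = 8  # k-mer length (toy value)

        def kmers(seq: str):
            return (seq[i:i + K] for i in range(len(seq) - K + 1))

        def build_index(references: dict[str, str]) -> dict[str, set[str]]:
            index = defaultdict(set)
            for name, seq in references.items():
                for km in kmers(seq):
                    index[km].add(name)
            return index

        def classify(read: str, index: dict[str, set[str]]) -> str | None:
            hits = Counter(name for km in kmers(read) for name in index.get(km, ()))
            return hits.most_common(1)[0][0] if hits else None

        references = {  # hypothetical reference snippets, not real sequences
            "woolly_rhinoceros": "ATGCCGTAGGCTTACGGATCCGATACGTTAGC",
            "pine": "TTGACCGGTTAACGCGTATGCCATTGACCAGT",
        }
        index = build_index(references)
        print(classify("GGCTTACGGATCCGATACG", index))   # -> woolly_rhinoceros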

    Neanderthal Diets

    Based on the ancient DNA recovered from the calcified dental plaque, the researchers concluded that the Neanderthals unearthed at Spy Cave and El Sidrón consumed different diets. The calculus samples taken from the Spy Cave specimens harbored DNA from the woolly rhinoceros and European wild sheep, along with mushroom DNA. On the other hand, the ancient DNA taken from the dental plaque of the El Sidrón specimens came from pine nuts, moss, mushrooms, and tree bark. These results suggest that the Spy Neanderthals consumed a diet consisting largely of meat, while the El Sidrón hominins ate a vegetarian diet.

    The microbial DNA recovered from the dental calculus confirmed the dietary differences between the two Neanderthal groups. In Neanderthals, and in modern humans, the composition of the microbiota in the mouth is dictated in part by the diet, varying in predictable ways for meat-based and plant-based diets, respectively.

    Did Neanderthals Consume Medicinal Plants?

    One of the Neanderthals from El Sidrón—a teenage boy—had a large dental abscess. The researchers recovered DNA from his dental calculus showing that he also suffered from a gut parasite that causes diarrhea. But, instead of suffering without any relief, it looks as if this sick individual was consuming plants with medicinal properties. Researchers recovered DNA from poplar plants, which produce salicylic acid, a painkiller, and DNA from a fungus that produces penicillin, an antibiotic. Interestingly, the other El Sidrón specimen showed no evidence of ancient DNA from poplar or the fungus, Penicillium.

    If Neanderthals really were able to self-medicate, the researchers conclude, these hominins must have had advanced cognitive abilities, similar to those of modern humans. One of the members of the research team, Alan Cooper, muses, “Apparently, Neandertals possessed a good knowledge of medicinal plants and their various anti-inflammatory and pain-relieving properties, and seem to be self-medicating. The use of antibiotics would be very surprising, as this is more than 40,000 years before we developed penicillin. Certainly, our findings contrast markedly with the rather simplistic view of our ancient relatives in popular imagination.”3

    Though intriguing, one could argue that the research team’s conclusion about Neanderthals self-medicating is a bit of an overreach, particularly the idea that Neanderthals were consuming a specific fungus as a source of antibiotics. Given that the El Sidrón Neanderthals were eating a vegetarian diet, it isn’t surprising that they occasionally consumed fungus because Penicillium grows naturally on plant material when it becomes moldy. This conclusion is based on a single Neanderthal specimen; thus, it could simply be a coincidence that the sick Neanderthal teenager consumed the fungus. In fact, it would be virtually impossible for Neanderthals to intentionally eat penicillin-producing fungi because, according to anthropologist Hannah O’Regan from the University of Nottingham, “It’s difficult to tell these specific moulds apart unless you have a hand lens.”4


    But even if Neanderthals were self-medicating, this behavior is not as remarkable as it might initially seem. Many animals self-medicate. In fact, this phenomenon is called zoopharmacognosy.5 For example, chimpanzees will consume the leaves of certain plants to make themselves vomit, in order to rid themselves of intestinal parasites. So, instead of viewing the consumption of poplar plants and fungus by Neanderthals as evidence for advanced behavior, perhaps, it would be better to regard it as one more instance of zoopharmacognosy.

    Medicine and Human Exceptionalism

    The difference between the development and use of medicine by modern humans and the use of medicinal plants by Neanderthals (assuming they did employ plants for medicinal purposes) is staggering. Neanderthals existed on Earth longer than modern humans have. And at the point of their extinction, the best these creatures could do was incorporate into their diets a few plants that produced compounds that were natural painkillers or antibiotics. On the other hand, though on Earth for only around 150,000 years, modern humans have created an industrial-pharmaceutical complex that routinely develops and dispenses medicines based on a detailed understanding of chemistry and biology.

    As paleoanthropologist Ian Tattersall and linguist Noam Chomsky (along with other collaborators) put it:

    “Our species was born in a technologically archaic context . . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”6

    And biomedical advance has yielded an unimaginably large number of drugs that improve the quality of our lives. In other words, comparing the trajectories of Neanderthal and modern human technologies highlights profound differences between us—differences that affirm modern humans really are exceptional, echoing the biblical view that human beings are truly made in God’s image.


    1. Laura S. Weyrich et al., “Neanderthal Behavior, Diet, and Disease Inferred from Ancient DNA in Dental Calculus,” Nature 544 (April 20, 2017): 357–61, doi:10.1038/nature21674.
    2. Karen Hardy et al., “Neanderthal Medics? Evidence for Food, Cooking, and Medicinal Plants Entrapped in Dental Calculus,” Naturwissenschaften 99 (August 2012): 617–26, doi:10.1007/s00114-012-0942-0.
    3. “Dental Plaque DNA Shows Neandertals Used ‘Aspirin,’” Phys.org, updated March 8, 2017, http://phys.org/print408199421.html.
    4. Colin Barras, “Neanderthals May Have Medicated with Penicillin and Painkillers,” New Scientist, March 8, 2017, http://www.newscientist.com/article/2123669-neanderthals-may-have-medicated-with-penicillin-and-painkillers/.
    5. Shrivastava Rounak et al., “Zoopharmacognosy (Animal Self Medication): A Review,” International Journal of Research in Ayurveda and Pharmacy 2 (2011): 1510–12.
    6. Johan J. Bolhuis et al., “How Could Language Have Evolved?,” PLoS Biology 12 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.
  • Does Development of Artificial Intelligence Undermine Human Exceptionalism?

    Jan 17, 2018

    In each case catalytic technologies, such as artificial wombs, the repair of brain injuries with prostheses and the enhancement of animal intelligence, will force us to choose between pre-modern human-racism and the cyborg citizenship implicit in the liberal democratic tradition.

    —James Hughes, Citizen Cyborg

    On one hand, it appeared to be nothing more than a harmless publicity stunt. On October 25, 2017, Saudi Arabia granted Sophia—a lifelike robot powered by artificial intelligence software—citizenship. This took place at the FII conference, held in Riyadh, providing a prime opportunity for Hanson Robotics to showcase its most advanced robotics system to date. And it also served as a chance for Saudi Arabia to establish itself as a world leader in AI technology.

    But, on the other hand, granting Sophia citizenship establishes a dangerous precedent, acting as a harbinger to a dystopian future where machines (and animals with enhanced intelligence) are afforded the same rights as human beings. Elevating machines to the same status as human beings threatens to undermine human dignity and worth and, along with it, the biblical conception of humanity.

    Still, the notion of granting citizenship to robots makes sense within a materialistic/naturalistic worldview. In this intellectual framework, human beings are largely regarded as biological machines and the human brain as an organic computer. If AI systems can be created with self-awareness and emotional capacity, what makes them any different from human beings? Is a silicon-based computer any different from one made up of organic matter?

    For many people, sentience or self-awareness is the key determinant of personhood. And persons are guaranteed rights, whether they are human beings, AI machines, or super-intelligent animals created by genetic engineering or implanting human brain organoids (grown in a lab) into the brains of animals.

    In other words, the way we regard AI technology has wide-ranging consequences for how we view and value human life. And while views of AI rooted in a materialistic/naturalistic worldview potentially threaten human dignity, a Christian worldview perspective of AI actually highlights human exceptionalism—in a way that aligns with the biblical concept of the image of God.

    Will AI Systems Ever Be Self-Aware?

    The linchpin for granting AI citizenship—and the same rights as human beings—is self-awareness.

    But are AI systems self-aware? And will they ever be?

    From my perspective, the answers to both questions are “no.” To be certain, AI systems are on a steep trajectory toward ever-increasing sophistication. But there is little prospect that they will ever truly be sentient. AI systems are becoming better and better at mimicking human cognitive abilities, emotions, and even self-awareness. But these systems do not inherently possess these capabilities—and I don’t think they ever will.

    Researchers are able to create AI systems with the capacity to mimic human qualities through the combination of natural-language processing and machine-learning algorithms. In effect, natural-language processing is pattern matching, in which the AI system employs prewritten scripts that are combined, spliced, and recombined to make the AI system’s comments and responses to questions seem natural. For example, Sophia performs really well responding to scripted questions. But when questions posed to her are off-script, she often provides nonsensical answers or responds with non sequiturs. These failings reflect limitations of the natural-language processing algorithms. Undoubtedly, Sophia’s responses will improve thanks to machine-learning protocols. These algorithms incorporate new information into the software inputs to generate improved outcomes. In fact, through machine-learning algorithms, Sophia is “learning” how to emote, by controlling mechanical hardware to produce appropriate facial expressions in response to the comments made by “her” conversation partner. But these improvements will just be a little bit more of the same, differing in degree, not kind. They will never propel Sophia, or any AI system, to genuine self-awareness.
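
    To illustrate the scripted pattern matching described above, here is a minimal rule-based responder. This sketch is my own and bears no relation to Hanson Robotics’ actual software; the patterns and replies are invented. On-script questions trigger canned replies, while anything else falls through to a generic fallback, which is why off-script questions tend to produce non-answers.

        import re

        SCRIPT = [  # hypothetical patterns paired with canned replies
            (re.compile(r"\bhow are you\b", re.I), "I'm doing well, thank you for asking."),
            (re.compile(r"\byour name\b", re.I), "My name is Sophia."),
            (re.compile(r"\bweather\b", re.I), "I hear it's lovely outside."),
        ]
        FALLBACK = "That's an interesting thought."

        def respond(utterance: str) -> str:
            for pattern, reply in SCRIPT:
                if pattern.search(utterance):
                    return reply            # scripted, canned response
            return FALLBACK                 # off-script input gets a generic non-answer

        print(respond("What is your name?"))          # sensible, because it is scripted
        print(respond("Explain quantum cosmology."))  # falls back to the generic reply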

    As the algorithms and hardware improve, Sophia (and other AI systems) are going to become better at mimicking human beings and, in doing so, will seem more and more like us. Even now, it is tempting to view Sophia as humanlike. But this tendency has little to do with AI technology. Instead, it has to do with our tendency to anthropomorphize animals and even inanimate objects. Often, we attribute human qualities to nonhuman, nonliving entities. And, undoubtedly, we will do the same for AI systems such as Sophia.

    Our tendency to anthropomorphize arises from our theory-of-mind capacity—unique to human beings. As human beings, we recognize that other people have minds just like ours. As a consequence of this capacity, we anticipate what others are thinking and feeling. But we can’t turn off our theory-of-mind abilities. And as a consequence, we attribute human qualities to animals and machines. To put it another way, AI systems seem to be self-aware, because we have an innate tendency to view them as such, even if they are not.

    Ironically, a quality unique to human beings—one that contributes to human exceptionalism and can be understood as a manifestation of the image of God—makes us susceptible to seeing AI systems as sentient “beings.” And because of this tendency, and because of our empathy (which relates to our theory-of-mind capacity), we want to grant AI systems the same rights afforded to us. But when we think carefully about our tendency to anthropomorphize, it should become evident that our proclivity to regard AI systems as humanlike stems from the fact that we are made in God’s image.

    AI Systems and the Case for Human Exceptionalism

    There is another way that research in AI systems evinces human exceptionalism. It is provocative to think that human beings are the only species that has ever existed with the capability to create machines that are like us—at least, in some sense. Clearly, this achievement is beyond the capabilities of the great apes, and no evidence exists to think that Neanderthals could ever have pulled off a feat such as creating AI systems. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet, our technology has progressed exponentially, while Neanderthal technology remained largely static.

    Our ability to create AI systems stems from the capacity for symbolism. As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

    Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God. No other creature, including the great apes or Neanderthals, possesses these two qualities. In short, we can create AI systems because we uniquely bear God’s image.

    AI Systems and the Case for Creation

    Our ability to create AI systems also provides evidence that we are the product of a Creator’s handiwork. The creation of AI systems requires the work of highly trained scientists and engineers who rely on several hundred years of scientific and technological advances. Creating AI systems requires designing and building highly advanced computer systems, engineering complex robotics systems, and writing sophisticated computer code. In other words, AI systems are intelligently designed. Or to put it another way, work in AI provides empirical evidence that a mind is required to create a mind—or, at least, a facsimile of a mind. And this conclusion means that the human mind must come from a Mind, as well. In light of this conclusion, is it reasonable to think that the human mind arose through unguided, undirected, historically contingent processes?

    Developments in AI will undoubtedly lead to important advances that will improve the quality of our lives. And while it is tempting to see AI systems in human terms, these devices are machines—and nothing more. No justification exists for AI systems to be granted the same rights as human beings. In fact, when we think carefully about the nature and origin of AI, these systems highlight our exceptional nature as human beings, evincing the biblical view of humanity.

    Only human beings deserve the rights of citizenship because these rights—justifiably called inalienable—are due us because we bear God’s image.


  • Did Neanderthals Make Glue?

    Jan 10, 2018

    Fun fact: each year, people around the world purchase 50 billion dollars’ (US) worth of adhesives. But perhaps this statistic isn’t all that surprising—because almost everything we make includes some type of bonding agent.

    In the context of human prehistory, anthropologists consider adhesives to have been a transformative technology. They would have provided the first humans the means to construct new types of complex devices and combine different types of materials (composites) into new technologies.

    Anthropologists also consider the production and use of adhesives to be a diagnostic of advanced cognitive capabilities, such as forward planning, abstraction, and understanding of materials. Production of adhesives from natural sources, even by the earliest modern humans, appears to have been a complex operation, requiring precise temperature control and the use of earthen mounds, or ceramic or metal kilns. The first large-scale production of adhesives usually centered around the dry distillation of birch and pine barks to produce tar and pitch.

    Even though modern humans perfected dry distillation methods for tar production, the archaeological record seemingly indicates that it wasn’t modern humans who first manufactured adhesives from tar, but, instead, Neanderthals. The oldest evidence for tar production and use dates to around 200,000 years ago, based on organic residues recovered from a site in Italy. It appears that Neanderthals were using the tar as glue for hafting flint spearheads to wooden spear shafts.1 Archaeologists have also unearthed spearheads with tar residue from two sites in Germany, dating to 120,000 years in age and between 40,000 and 80,000 years in age, respectively.2 Because these dates precede the arrival of modern humans in Europe, anthropologists assume the tar at these sites was deliberately produced and used by Neanderthals.

    For some anthropologists, this evidence indicates that Neanderthals possessed advanced cognitive ability, just like modern humans. If this is the case, then modern humans are not unique and exceptional. And, if human beings aren’t exceptional, then it becomes a challenge to defend one of the central concepts in Scripture—the idea that human beings are made in God’s image. Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again. (See Resources section below.) This, too, is the case when it comes to Neanderthal tar production.

    How Did Neanderthals Extract Tar from Birch Bark?

    Though it appears that Neanderthals were able to produce and use tar as an adhesive, anthropologists have no idea how they went about this task. Archaeologists have yet to unearth any evidence for ceramics at Neanderthal sites. To address this question, a team of researchers from the University of Leiden conducted a series of experiments, trying to learn how Neanderthals could dry distill tar from birch bark using the resources most reasonably available to them.3

    The research team devised and evaluated three dry distillation methods:

    • The Ash Mound Method: This technique entails burying rolled up birch bark in hot ash and embers. The heat from the ash and embers distills the tar away from the birch bark, but because the bark is curled and buried, oxygen can’t easily get to the tar, preventing combustion.
    • The Pit Roll Method: This approach involves digging a cylindrical pit and then placing a burning piece of rolled-up birch bark in the pit, followed by covering it with earthen materials.
    • The Raised Structure Method: This method involves placing a vessel made out of birch bark in a pit, igniting it, and covering it with sticks, pebbles, and mud.

    Of the three methods, the researchers learned that the Pit Roll technique produced the most tar and was the most efficient method. Still, the amount of tar that was produced was not enough for large-scale use, but just enough to haft one or two spears at best. The tar produced by all three methods was too fluid to be used for hafting.

    Still, the research team concluded that Neanderthals could have dry distilled tar from birch bark, using methods that were simple and without the need to precisely control the distillation temperature. They also concluded that Neanderthals must have had advanced cognitive abilities—on par with modern humans—to pull off this feat.

    Did Neanderthals Have Similar Cognitive Capacity to Modern Humans?

    Does the ability of Neanderthals to dry distill tar (using crude methods) and use it to haft spears reflect sophisticated cognitive abilities? From my vantage point, no.

    The recognition that the methods Neanderthals most likely used to dry distill tar from birch bark didn’t require temperature control and were simple and crude argues against Neanderthal sophistication, not for it. To this point, it is worth noting that birch bark naturally curls, a factor critical to the success of the three dry distillation methods explored by the University of Leiden archaeologists. In other words, curling the birch bark was not something Neanderthals would have had to discover.

    It is also worth pointing out that recent work indicates that Neanderthals did not master fire, but instead made opportunistic use of fire. These creatures could not create fire, but, instead, harvested wildfires. There were vast periods of time during Neanderthals’ tenure in Europe when wildfires were rare because of cold climatic conditions, meaning Neanderthals didn’t have access to fire. Because fire is central to the dry distillation methods, Neanderthals would have been unable to extract tar and use it for hafting for a significant portion of their time on Earth. Perhaps this explains why recovery of tar from Neanderthal sites is a rare occurrence.

    Still, no matter how crude the method, dry distilling tar from birch bark seems to be pretty remarkable behavior—until we compare Neanderthal behavior to that of chimpanzees.

    Comparing Neanderthal Behavior to Chimpanzee Behavior

    In recent years, primatologists have observed chimpanzees in the wild engaging in some remarkable behaviors. For example, chimpanzees:

    • manufacture spears from tree branches, using a six-step process. In turn, these creatures use these spears to hunt bush babies
    • make stone tools that they use to break open nuts
    • collect branches from specific trees with appropriate mechanical characteristics and insect-repellent properties to build beds in trees
    • collect and consume plants with medicinal properties
    • understand and exploit the behavior of wildfires

    In light of these remarkable chimpanzee behaviors, the manufacture and use of tar by Neanderthals doesn’t seem that impressive. No one would equate a chimpanzee’s cognitive capacity with that of a modern human. And, likewise, no one should equate the cognitive capacity of Neanderthals with modern humans. In terms of sophistication, complexity, and efficiency, the tar production methods of modern humans are categorically different from those of Neanderthals, reflecting cognitive superiority of modern humans.

    Do Anthropologists Display a Bias against Modern Humans?

    Recently, in a New York Times Magazine article, science writer Jon Mooallem called attention to paleoanthropologists’ prejudices when it comes to Neanderthals. He pointed out that the limited data available to these scientists from the archaeological record forces them to rely on speculation. And this speculation is inevitably influenced by their preconceptions. Mooallem states,

    “All sciences operate by trying to fit new data into existing theories. And this particular science [archaeology], for which the ‘data’ has always consisted of scant and somewhat inscrutable bits of rock and fossil, often has to lean on those metanarratives even more heavily. . . . Ultimately, a bottomless relativism can creep in: tenuous interpretations held up by webs of other interpretations, each strung from still more interpretations. Almost every archaeologist I interviewed complained that the field has become ‘overinterpreted’—that the ratio of physical evidence to speculation about that evidence is out of whack. Good stories can generate their own momentum.”4

    Mooallem’s critique applies to paleoanthropologists who are modern human supremacists and those with an anti-modern human bias that seeks to undermine the uniqueness and exceptionalism of modern humans. And, lately, reading the scientific literature in anthropology, I get the strong sense that there is a growing anti-modern human bias among anthropologists.

    In light of this anti-modern human bias, one could propose an alternate scenario for the association of tar with flint spearheads at a few Neanderthal sites that comports with the view that these creatures were cognitively inferior to modern humans. Perhaps Neanderthals threw birch or pine into a fire they harvested from a wildfire. And maybe a few pieces of bark or some pieces of branches near the edge of the fire naturally curled, leading to the “dry distillation” of small amounts of tar. Seeing the tar exude from the bark, perhaps a Neanderthal poked at it with his spear, coating the piece of flint with sticky tar.

    When we do our best to set aside our preconceptions, the collective body of evidence indicates that Neanderthals did not have the same cognitive capacity as modern humans.


    1. Paul Peter Anthony Mazza et al., “A New Palaeolithic Discovery: Tar-Hafted Stone Tool in a European Mid-Pleistocene Bone-Bearing Bed,” Journal of Archaeological Science 33 (September 2006): 1310–18, doi:10.1016/j.jas.2006.01.006.
    2. Johann Koller, Ursula Baumer, and Dietrich Mania, “High-Tech in the Middle Palaeolithic: Neandertal-Manufactured Pitch Identified,” European Journal of Archaeology 4 (December 1, 2001): 385–97, doi:10.1179/eja.2001.4.3.385; Alfred F. Pawlik and Jürgen P. Thissen, “Hafted Armatures and Multi-Component Tool Design at the Micoquian Site of Inden-Altdorf, Germany,” Journal of Archaeological Science 38 (July 2011): 1699–1708, doi:10.1016/j.jas.2011.03.001.
    3. P. R. B. Kozowyk et al., “Experimental Methods for the Palaeolithic Dry Distillation of Birch Bark: Implications for the Origin and Development of Neandertal Adhesive Technology,” Scientific Reports 7 (August 31, 2017): 8033, doi:10.1038/s41598-017-08106-7.
    4. Jon Mooallem, “Neanderthals Were People, Too,” New York Times Magazine, January 11, 2017, https://www.nytimes.com/2017/01/11/magazine/neanderthals-were-people-too.html.
  • New Research Douses Claim that Neanderthals Mastered Fire

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 03, 2018

    A few months ago, I posted a link on Twitter to a blog article I wrote challenging the claim that Neanderthals made jewelry and, therefore, possessed the capacity for symbolism.

    When I post articles about the cognitive abilities of Neanderthals, I expect them to generate a fair bit of discussion and opinions that differ from mine (and I expect this article about Neanderthals’ use of fire to be no exception). But one response I received was unexpectedly jarring. It came from a Christian who accused me of being “out of touch,” “wasting time discussing frivolous issues,” and “targeting the elite with a failed apologetic.” He admonished me to spend my time on real issues related to social justice concerns and chastised me for not focusing my efforts on reaching out to the “marginalized.”

    As part of my reply to my new “friend,” I pointed out that the identity and capability of Neanderthals have a direct bearing on the gospel and, consequently, on social injustices in our world, because they relate to the question of humanity’s origin and identity. And what we believe about where we come from really matters.

    Scripture teaches that human beings are uniquely made in God’s image. And, it is the image of God that renders human beings of infinite worth and value. Because we bear God’s image, Christ died to reconcile us to the Father. And, as Christians, the immeasurable value we place on all human life motivates us to battle against the injustices of the world—because the people who suffer these injustices are image bearers. According to Scripture, when we love and serve other human beings, it equates to loving and serving God.

    Yet, the biblical view of humanity has been supplanted in the scientific community by human evolution. According to this idea, human beings are not the product of a Creator’s handiwork—the crown of creation—but, like all life on Earth, we emerged through unguided, historically contingent processes. In the evolutionary paradigm, human beings hold no special status and possess no inherent worth or dignity. We possess no more value than any other creature that has ever existed throughout Earth’s history. And, within this framework, there can be no ultimate meaning or purpose to human life.

    Sadly, the evolutionary view of humanity is not confined to the halls of the academy. It permeates and influences cultures throughout the world. Once human life is rendered meaningless and stripped of its inherent value, there is no fundamental justification to stand against injustice. In fact, it becomes easier to excuse acts of injustice and becomes tolerable to look the other way when these acts occur. In the evolutionary framework, no genuine motivation exists to rescue the marginalized of our world. I would go one step further and argue that many of the social ills we face throughout the world have their etiology in the evolutionary view of humanity.

    I regard my work as a Christian apologist as an antidote to this toxic worldview. Toward this end, I strive to demonstrate the credibility of the biblical view of humanity—apart from biblical and theological appeals. In an increasingly secular world, we can’t simply adopt a theological stance, declaring that human beings bear God’s image, and leave it at that. Few nonbelievers will accept that approach. We must respond to the scientific challenge to the image of God with scientific evidence for human uniqueness and exceptionalism. This endeavor isn’t about challenging the elite with an obscure apologetic argument for the validity of Christianity. Ultimately, it is about establishing the foundation for the gospel and generating the impetus and justification to treat human beings as creatures with inherent worth and dignity. As Christian apologists, when we “target the elite” with apologetic arguments for the Christian worldview, we are serving the marginalized in our world.

    As described in Who Was Adam? a scientific case can be marshaled for human exceptionalism in a way that aligns with the biblical view of the image of God. Remarkably, a growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not just degree, from other creatures. The scientists who argue for this updated perspective have developed evidence for human exceptionalism within the context of the evolutionary paradigm in their attempts to understand how the human mind evolved. Yet, ironically, these new insights marshal support for the biblical conception of humanity.

    However, one potential challenge to human exceptionalism relates to the cognitive capabilities of Neanderthals. Based on archeological and fossil finds, some paleoanthropologists now argue that these hominids: (1) buried their dead; (2) made specialized tools; (3) used ochre; (4) produced jewelry; (5) created art; and (6) even had language capacities. These are behaviors one would naturally associate with image bearers.

    Yet, as discussed in Who Was Adam? (and articles listed in the Resource section), careful examination of the archeological and fossil evidence reveals just how speculative the claims about Neanderthal “exceptionalism” are. Recent insights on Neanderthal fire use illustrate this point.

    Did Neanderthals Use Fire?

    While controversy abounds among paleoanthropologists about fire use by hominins such as Homo erectus, most scientists working in this field believe Neanderthals mastered fire. This view finds its basis in the discovery of primitive hearths, burned bones, heated lithics, and charcoal at Neanderthal archeological sites. Frankly, fire use by Neanderthals bothers me. If these creatures could create and use fire—in short, if they mastered fire (called pyrotechnology), it makes them much more like us—but uncomfortably so.

    Yet, recent work raises questions about Neanderthal fire usage.1 Careful assessment of archeological sites in southern France occupied by Neanderthals from about 100,000 to 40,000 years ago indicates that Neanderthals could not create fire. Instead, they made opportunistic use of natural fire when available to them.

    The French sites show clear evidence of fire use by Neanderthals. However, when researchers correlated the archeological layers harboring evidence for fire use with paleoclimate data, they found an unexpected pattern. Neanderthals used fire during warm climate conditions and failed to use fire during cold periods, the opposite of what would be predicted if Neanderthals had mastery over fire.

    Instead, this unusual correlation indicates that Neanderthals made opportunistic use of fire. Lightning strikes that would generate natural fires are much more likely to occur during warm periods. Instead of creating fire, Neanderthals most likely collected natural fire and tended it as long as they could before it burned out.

    Such evidence shows that human beings are unique and exceptional in our capacity to create and curate fire, distinguishing us from Neanderthals.

    Chimpanzees Exploit Natural Fire

    Still, the capacity to make opportunistic use of fire seems pretty impressive. At least until Neanderthal behavior is compared to that of chimpanzees. Recent work by Jill Pruetz indicates that these great apes understand the behavior of natural fires and even exploit them.2 Pruetz and her collaborator observed the response of the Fongoli community of chimpanzees to two wildfires in the spring of 2006. The members of the community calmly monitored the fires at close range and then changed their behavior in anticipation of the fires’ movement. To put it another way, the chimpanzees’ behavior was predictive, not responsive. This capacity is impressive, because the behavior of natural fires is complex, depending on wind speed and direction and the amount and type of fuel sources.

    So, as impressive as Neanderthal behavior may seem, their opportunistic use of fire seems more closely in line with chimpanzee behavior than that of human beings, who create and control fire at will. In fact, Pruetz believes one reason chimpanzees don’t harvest natural fire relates to their lack of manual dexterity.

    How Did Neanderthals Survive Cold Climates without Fire?

    If Neanderthals were opportunistic exploiters of fire and it was only available to them when the climate was warm, how did they survive the cold? One possibility is that they simply migrated from cold climes to warmer ones.

    Another possibility is that the hominins made clothing. At least, this is the common narrative about Neanderthals. Yet, recent work indicates that this popular depiction is incorrect. These creatures did not make clothing from animal skins, but instead made use of animal hides as capes.3

    A team of paleoanthropologists reached this conclusion by studying the faunal remains at Neanderthal and modern human archeological sites and comparing them to a database of animals used to make cold weather clothing. While both modern humans and Neanderthals used deer, bison, and bear hides for body coverings, the remains of these creatures were found more frequently at modern human archeological sites. Additionally, the remains of smaller creatures, such as weasels, wolverines, and dogs were found at modern human sites but were absent from sites occupied by Neanderthals. These smaller animals have no food value. Instead, modern humans used the animal hides to trim clothing.

    This data indicates that modern humans made much more frequent use of animal hides for clothing than did Neanderthals. And when modern humans made clothing, it was more elaborate and well-fitted than the coverings made by Neanderthals. This conclusion finds added support from the discovery of bone needles at modern human archeological sites (and the absence of these artifacts at Neanderthal sites), and reflects cognitive differences between human beings and Neanderthals.

    Even though Neanderthals made poorly crafted body coverings and most likely made little use of fire during cold periods, they were aided in their survival of frigid conditions by the design of their bodies. Anthropologists describe Neanderthals as having a hyper-polar body design that made them well-adapted to live under frozen conditions. Neanderthal bodies were stout and compact, with barrel-shaped torsos and shorter limbs, which helped them retain body heat. Their noses were long and their sinus cavities extensive, which helped them warm the cold air they breathed before it reached their lungs. Neanderthals most likely survived the cold because of their body design, not because of their cognitive abilities.

    Even though many paleoanthropologists assert that Neanderthals possessed cognitive abilities on par with modern humans, careful evaluation finds these claims wanting, time and time again, as the latest insights about fire use by these hominins attest.

    Compared to the hominins, including Neanderthals, human beings do, indeed, appear to be exceptional in a way that aligns with the image of God. These are far from “frivolous issues.” The implications are profound.

    What we think about Neanderthals really matters.


    1. Dennis M. Sandgathe et al., “Timing of the Appearance of Habitual Fire Use,” Proceedings of the National Academy of Sciences, USA 108 (July 19, 2011): E298, doi:10.1073/pnas.1106759108; Paul Goldberg et al., “New Evidence on Neandertal Use of Fire: Examples from Roc de Marsal and Pech de l’Azé IV,” Quaternary International 247 (2012): 325–40, doi:10.1016/j.quaint.2010.11.015; Dennis M. Sandgathe et al., “On the Role of Fire in Neanderthal Adaptations in Western Europe: Evidence from Pech de l’Azé IV and Roc de Marsal, France,” PaleoAnthropology (2011): 216–42, doi:10.4207/PA.2011.ART54.
    2. Jill D. Pruetz and Thomas C. LaDuke, “Brief Communication: Reaction to Fire by Savanna Chimpanzees (Pan troglodytes verus) at Fongoli, Senegal: Conceptualization of ‘Fire Behavior’ and the Case for a Chimpanzee Model,” American Journal of Physical Anthropology 141 (April 2010): 646–50, doi:10.1002/ajpa.21245.
    3. Mark Collard et al., “Faunal Evidence for a Difference in Clothing Use between Neanderthals and Early Modern Humans in Europe,” Journal of Anthropological Archaeology 44, part B (December 2016): 235–46, doi:10.1016/j.jaa.2016.07.010.
  • Does the Recovery of Oils from a Fossilized Bird Evince a Young Earth?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 20, 2017

    Now the Berean Jews were of more noble character than those in Thessalonica, for they received the message with great eagerness and examined the Scriptures every day to see if what Paul said was true.

    –Acts 17:11

    Is there scientific evidence that the earth is only 6,000 years old?

    In spite of the valiant efforts of young-earth creationists (YECs), I have yet to come across any compelling scientific arguments that the earth is only a few thousand years old. The closest I have come is the numerous discoveries of soft-tissue remnants associated with fossils that date, in some instances, to several hundred million years in age. (For a detailed survey of the soft tissues recovered from the fossil record, check out my book, Dinosaur Blood and the Age of the Earth.) These discoveries gave me some pause about age-of-the-earth measurements.

    These types of discoveries generate a lot of excitement among paleontologists. Having access to soft-tissue materials provides the scientific community with inspiring new insights into the biology of these ancient creatures.

    They also create a lot of excitement for YECs, because the findings suggest to them that the geologists’ dating methods are unreliable. Before these discoveries, very few scientists would have ever thought that soft-tissue materials could survive in the geological layers for thousands—let alone hundreds of millions—of years because of unrelenting decomposition processes. And yet, the number of soft-tissue fossil discoveries continues to mount. For example, investigators from the UK, the US, and Germany recently reported on the recovery of endogenous oils from the fossilized uropygial gland of a bird specimen that dates to 48 million years in age.1 I will take a closer look at what they found after a bit of explanation to show why it is critical to understand such a discovery.

    For YECs, the isolation of soft-tissue materials from fossils indicates that the fossils cannot be millions of years old but, at best, only a few thousand years old—and most likely deposited by a catastrophic worldwide flood. They reason that if the fossils are only a few thousand years old, then the methods used to age-date the fossils must be faulty. And if those methods are faulty, the same methods used to date the earth must be flawed as well.

    As an old-earth creationist, I must admit the discovery of soft-tissue materials associated with fossils represents one of the most interesting arguments for a young earth I’ve encountered. On the surface, the argument seems reasonable. Perhaps it isn’t surprising that many YEC organizations (such as Answers in Genesis, Creation Ministries International, and the Institute for Creation Research) have elevated the existence of soft tissue materials in the fossil record to one of their central arguments for a young earth. I observe many well-meaning Christians following suit, using this same argument in their efforts to convince seekers and skeptics about the scientific reliability of the Genesis 1 creation account. Unfortunately, most people who are scientifically minded fail to find this argument persuasive because of the overwhelming amount of scientific evidence for the reliability of radiometric dating. And as a result, skeptics are often driven further away from the Christian faith.

    When using scientific discoveries to demonstrate God’s existence and to defend the reliability of the biblical creation accounts, it is critical to adopt a posture like that of the Bereans. It is incumbent on all of us to investigate or “examine” on our own to ensure the arguments we use are sound.

    That’s why I wrote Dinosaur Blood and the Age of the Earth. In this book, I make every effort to take the soft-tissue argument seriously. But, following the Bereans’ example, I thoroughly examine each premise of their argument to see if it holds up to scrutiny, including their central claim: soft-tissue materials cannot persist in fossils that are millions of years old.

    Though admittedly counterintuitive, after thorough investigation into this claim, I have come to believe that soft-tissue remnants can survive in the fossil record. To illustrate how this survival is possible, let’s use the preening oil recently recovered from a 48-million-year-old fossilized uropygial gland as a case study.

    Discovery of Preening Oil in a 48-Million-Year-Old Fossilized Gland

    The 48-million-year-old fossil bird specimen that possessed uropygial gland oils was recovered from the Messel Pit. Located in Darmstadt, Germany, this UNESCO World Heritage site has yielded a number of important vertebrate fossils throughout its history and still serves as a source of exciting new fossil discoveries today.

    While carefully examining this bird specimen (which still remains unclassified), the paleontologists noted the outline of the uropygial gland at the base of the tail region. To confirm this interpretation, the researchers attempted to extract remnants of preening oil from this putative gland. Motivated by previous soft-tissue finds and the discovery of lipids (a class of biomolecules that include oils) in other ancient geological deposits, the research team removed milligram amounts of the fossilized uropygial gland from the specimen and extracted material from the sample. Afterward, they subjected the extracts to chemical analysis, relying on a technique known as pyrolysis-gas chromatography-mass spectrometry. Analysis with this technique begins with a heating step that decomposes the analytes into small molecular fragments that, in turn, are separated by gas chromatography and then analyzed by mass spectrometry. This technique produces profiles of molecular fragments that serve as a fingerprint, helping scientists determine the identity of compounds in the sample.

    The research team detected C-8 to C-30 n-alkanes, n-alkenes, and alkylbenzenes in the uropygial gland extracts, as expected if the fossil contained remnants of preening oil. The profiles of the fossilized uropygial gland extracts differed from the profiles of extracts taken from shales that make up the geological layer that originally housed the fossil specimen. This result indicates that the uropygial gland extracts are not due to contamination from the surrounding geological layers. When the researchers compared the extracts of the fossilized glands to extracts of uropygial glands of extant birds (such as the common blackbird, the ringed teal, and the middle spotted woodpecker), they noted a difference in the profiles. This difference most likely reflects chemical alteration of the original preening oil during the fossilization process.
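
    For readers who want a feel for how such a “fingerprint” comparison works in principle, here is a minimal sketch in Python. It is purely illustrative: the peak intensities are invented, and the calculation is a generic similarity measure rather than the research team’s actual analysis pipeline. The point is simply that an extract’s fragment profile can be compared quantitatively against reference profiles to judge what it most resembles.

    # Minimal illustration of comparing pyrolysis-GC-MS fragment profiles.
    # All intensity values are invented for demonstration purposes only.
    import numpy as np

    def cosine_similarity(a, b):
        """Return the cosine similarity between two fragment-intensity profiles."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical relative intensities for a handful of fragment peaks
    fossil_gland = [0.9, 0.7, 0.3, 0.1, 0.0]   # extract from the fossilized gland
    modern_gland = [0.8, 0.6, 0.4, 0.2, 0.1]   # extract from an extant bird's gland
    host_shale   = [0.1, 0.2, 0.8, 0.9, 0.7]   # extract from the surrounding sediment

    # A profile that resembles the modern gland far more than the host rock argues
    # against contamination from the surrounding geological layer.
    print("fossil gland vs. modern gland:", round(cosine_similarity(fossil_gland, modern_gland), 2))
    print("fossil gland vs. host shale:  ", round(cosine_similarity(fossil_gland, host_shale), 2))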

    How the Preening Oil Was Preserved

    So how can soft tissue material, such as preening oil, persist in fossils for millions and millions of years?

    In Dinosaur Blood and the Age of the Earth, I point out that paleontologists believe that soft-tissue preservation reflects a race between two competing processes: decomposition and mineral entombment. If mineral entombment wins, then whatever soft tissue that has avoided decomposition remains behind—for millions and millions of years. Once encased in mineral deposits, soft-tissue materials become isolated and protected from the environment, arresting the decomposition processes that would otherwise destroy them.

    Anything that slows down the rate of decomposition will help soft-tissue materials to hang around long enough for mineral entombment to take place. One factor contributing to soft-tissue survival is the structural durability of the molecules that make up the soft tissues. In most instances, the soft tissues that survive are made up of highly durable materials. For example, some of the components of preening oil (such as long-chain alkanes) are chemically inert, making them resistant to chemical decomposition.

    Though usually destructive, in some instances chemical reactivity can contribute to soft-tissue survival. This reactivity likely contributed to the survival of the preening oil. The team of paleontologists believes that the alkene components of the preening oils reacted to form high-molecular-weight polymers that, in turn, became resistant to chemical decomposition.

    Though resistant to chemical decomposition, long-chain hydrocarbons would serve as an ideal food source for microbes in the environment, a process that would work against preservation. But microbial decomposition of preening oil is unlikely, because some of the components of the uropygial gland secretions possess antimicrobial activities.

    Also, the shale that harbored the fossil bird is oxygen-depleted. The absence of oxygen in this geological setting most likely contributed to soft-tissue survival, preventing oxidative decomposition of the preening oil.

    In other words, there are several collective mechanisms in play that would stave off the decomposition of the original preening oil, though it does look as if the original material did become chemically altered. The bottom line: There is no reason to think that soft-tissue materials derived from the original preening oil in the uropygial glands could not persist for 48 million years or longer in the fossil record.

    At first glance, the soft-tissue argument for a young earth seems so compelling. Yet, when carefully evaluated (“examined”), it simply doesn’t hold up.

    Becoming Bereans

    As Christians, we should expect that there will be scientific discoveries that affirm our faith by revealing God’s fingerprints in nature and by supporting the creation accounts found in Scripture. Key biblical passages (such as Psalm 19 and Romans 1:20) teach this much. Yet, we must also recognize that, for human beings, interpreting nature (through science) and interpreting Scripture can be complex undertakings. As such, we can make mistakes. We are fallen creatures; we have limited knowledge, insight, and understanding; and we have preconceived notions . . . all of which influence our interpretations. And it is for these reasons that we must all operate like the Bereans. We should respond to scientific arguments for the Christian faith with eagerness, but before we use them, we must rigorously evaluate them to ensure their validity and, if valid, to understand the arguments’ limitations. Sincere, well-meaning Christians can be wrong and can unintentionally mislead other Christians. But when that happens, it is our fault, not theirs, if we are misled, because we have failed to take the “noble,” Berean-like approach and do our homework.

    Resources to Dig Deeper

    1. Shane O’Reilly et al., “Preservation of Uropygial Gland Lipids in a 48-Million-Year-Old Bird,” Proceedings of the Royal Society B 284 (October 18, 2017): doi:10.1098/rspb.2017.1050.
  • Brain Synchronization Study Evinces the Image of God

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 13, 2017

    As I sit down at my computer to compose this post, the new Justice League movie has just hit the theaters. Even though it has received mixed reviews, I can’t wait to see this latest superhero flick. With several superheroes fighting side by side, the movie raises the question: “Who is the most powerful superhero in the DC universe?”

    I’m not sure how you would respond, but in my opinion, it’s not Superman or Wonder Woman. Instead, it’s a superhero that didn’t appear in the Justice League movie (but he is a longtime member of the Justice League in the comic books): the Martian Manhunter.

    Originally from Mars, J’onn J’onzz possesses superhuman strength and endurance, just like Superman. He can fly and shoot energy beams out of his eyes. But, he also has shapeshifting abilities and is a powerful telepath. It would be fun to see Superman and the Martian Manhunter tangle. My money would be on J’onn J’onzz because of his powerful telepathic abilities. As a telepath, he can read minds, control people’s thoughts and memories, create realistic illusions, and link minds together.


    Image credit: Fazale Rana

    Even though it is fun (and somewhat silly) to daydream about superhuman strength and telepathic abilities, recent work by Spanish neuroscientists from the Basque Center on Cognition, Brain, and Language indicates that mere mortals do indeed have an unusual ability that seems a bit like telepathy. When we engage in conversations with one another—even with strangers—the electrical activities of our brains synchronize.1 In part, this newfound ability may provide the neurological basis for the theory of mind and our capacity to form complex, hierarchical social relationships, properties uniquely displayed by human beings. In other words, this discovery provides more reasons to think that human beings are exceptional in a way that aligns with the biblical concept of the image of God.

    Brain Synchronization

    Most brain activity studies focus on individual subjects and their responses to single stimuli. For example, single-person studies have shown that oscillations in electrical activity in the brain couple with speech rhythms when the test subject is either listening or speaking. The Spanish neuroscientists wanted to go one step further. They wanted to learn what happens to brain activities when two people engage one another in a conversation.

    To find out, they assembled 15 dyads (14 men and 16 women) consisting of strangers who were 20–30 years in age. They asked the members of each dyad to exchange opinions on sports, movies, music, and travel. While the strangers conversed, the researchers monitored the electrical activities in their brains using EEG technology. As expected, they detected coupling of brain electrical activities with the speech rhythms in both speakers and listeners. But, to their surprise, they also detected pure brain entrainment in the electrical activities of the test subjects, independent of the physical properties of the sound waves associated with speaking and listening. To put it another way, the brain activities of the two people in the conversation became synchronized, establishing a deep connection between their minds.
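
    To give a sense of how this kind of entrainment can be quantified, the short Python sketch below computes one commonly used synchronization measure, the phase-locking value, for two simulated EEG signals. This is not the specific analysis reported in the study; it only illustrates what it means for two electrical signals to hold a consistent phase relationship.

    # Sketch of a common synchronization measure, the phase-locking value (PLV),
    # applied to two simulated EEG channels. Illustrative only; not the specific
    # analysis used in the study discussed above.
    import numpy as np
    from scipy.signal import hilbert

    fs = 250                        # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)    # 10 seconds of data
    rng = np.random.default_rng(0)

    # Two simulated ~10 Hz signals with a stable phase offset plus independent
    # noise, standing in for one channel from each conversation partner.
    brain_a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    brain_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

    # Instantaneous phase of each signal via the analytic (Hilbert) representation
    phase_a = np.angle(hilbert(brain_a))
    phase_b = np.angle(hilbert(brain_b))

    # PLV near 1 means the phases stay locked; near 0 means no consistent relationship
    plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
    print(f"phase-locking value: {plv:.2f}")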

    Brain Synchronization and the Image of God

    The notion that human beings differ in degree, not kind, from other creatures has been a mainstay concept in anthropology and primatology for over 150 years. And it has been the primary reason why so many people have abandoned the belief that human beings bear God’s image. Yet, this stalwart view in anthropology is losing its mooring, with the concept of human exceptionalism taking its place. A growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not merely degree, from other creatures, including Neanderthals. Ironically, the scientists who argue for this updated perspective have developed evidence for human exceptionalism in their attempts to understand how the human mind evolved. But, instead of buttressing human evolution, these new insights marshal support for the biblical conception of humanity.

    Anthropologists identify at least four interrelated qualities that make us exceptional: (1) symbolism, (2) open-ended generative capacity, (3) theory of mind, and (4) our capacity to form complex social networks.

    As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a countless number of ways to create alternate possibilities. Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

    But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together. And we can do this because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also have the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks.

    In effect, these qualities could be viewed as scientific descriptors of the image of God.

    It is noteworthy that all four of these qualities are on full display in the Spanish neuroscientists’ study. The capacity to offer opinions on a wide range of topics and to communicate our ideas with language reflects our symbolism and our open-ended generative capacity. I find it intriguing that the oscillations of our brain’s electrical activity couple with the rhythmic patterns created by speech—suggesting our brains are hard-wired to support our desire to communicate with one another symbolically. I also find it intriguing that our brains become coupled at an even deeper level when we converse, consistent with our theory of mind and our capacity to enter into complex social relationships.

    Even though many people in the scientific community promote a view of humanity that denigrates the image of God, everyday experience continually supports the notion that we are unique and exceptional as human beings. But I find it even more gratifying to learn that scientific investigations into our cognitive and behavioral capacities continue to affirm human exceptionalism and, with it, the image of God. Indeed, we are the crown of creation.

    Resources to Dig Deeper

    1. Alejandro Pérez et al., “Brain-to-Brain Entrainment: EEG Interbrain Synchronization While Speaking and Listening,” Scientific Reports 7 (June 23, 2017): 4190, doi:10.1038/s41598-017-04464-4.
  • Molecular Scale Robotics Build Case for Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 06, 2017

    Sometimes bigger is better, and other times, not so much—particularly for scientists working in the field of nanotechnology.

    Scientists and engineers working in this area are obsessed with miniaturization. And because of this obsession, they have developed techniques to manipulate matter at the molecular scale. Thanks to these advances, they can now produce novel materials (that could never be produced with macro-scale methods) with a host of applications. They also use these techniques to fabricate molecular-level devices—nanometer-sized machines—made up of complex arrangements of atoms and molecules. They hope that these machines will perform sophisticated tasks, giving researchers full control of the molecular domain.

    Recently, scientists from the University of Manchester in the UK achieved a milestone in nanotechnology when they designed the first-ever molecular robot that can be deployed to build molecules in the same way that robotic arms on assembly lines manufacture automobiles.1 These molecular robots can be used to improve the efficiency of chemical reactions and make it possible for organic chemists to design synthetic routes that, up to this point, were inconceivable.

    Undoubtedly, this advance will pave the way for more cost-effective, greener chemical reactions at the bench and plant scales. It will also grant organic chemists greater control over chemical reactions, paving the way for the synthesis of new types of compounds including drugs and other pharmaceutical agents.

    As exciting as these prospects are, perhaps the greater significance of this research lies in the intriguing theological implications. For example, comparison of the molecular robots to the biomolecular machines in the cell—machines that carry out similar assembly-line operations—highlights the elegant designs of biochemical systems, evincing a Creator’s handiwork. This research is theologically provocative in another way. It demonstrates human exceptionalism and, by doing so, supports the biblical claim that human beings are made in the image of God.

    Molecular Robotics

    University of Manchester chemists built molecular robots that consist of about 150 atoms of carbon, nitrogen, oxygen, and hydrogen. Though these robots consist of a relatively small number of atoms, the arrangement of these atoms makes the molecular robots structurally complex.

    The robots’ architecture is organized around a molecular-scale platform. Located in the middle of the platform is a molecular arm that extends upward and then bends at a 90-degree angle. This molecular prosthesis binds molecules at the end of the arm and then can be made to swivel between the two ends of the platform as researchers add different chemicals to the reaction. The swiveling action brings the bound molecule in juxtaposition to the chemical groups at the tip ends of the platform. When reactants are added to the solution, these compounds will react with the bound molecule differently depending on the placement of the arm, whether it is oriented toward one end of the platform or the other. In this way, the bound molecule—call it A—can react through two cycles of arm placement to form one of four possible compounds—B, C, D, and E. In this scheme, unwanted side reactions are kept to a minimum, because the bound molecule is precisely positioned next to either of the two ends of the molecular platform. This specificity improves the reaction efficiency, while at the same time making it possible for chemists to generate compounds that would be impossible to synthesize without the specificity granted by the molecular robots.
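
    The branching logic of this scheme can be sketched in a few lines of Python. The sketch below is a toy model only: the labels B through E are the placeholders used in the description above, not the actual compounds synthesized, and no chemistry is modeled, just the four outcomes created by two successive arm placements.

    # Toy model of the branching described above: two reaction cycles, each with
    # the arm swiveled to one of two platform ends, yield four distinct products
    # from the same starting molecule A. Labels are placeholders, not real compounds.
    from itertools import product

    ARM_POSITIONS = ("end 1", "end 2")

    # One placeholder product for each sequence of arm placements
    outcomes = {
        ("end 1", "end 1"): "B",
        ("end 1", "end 2"): "C",
        ("end 2", "end 1"): "D",
        ("end 2", "end 2"): "E",
    }

    for cycle1, cycle2 in product(ARM_POSITIONS, repeat=2):
        print(f"A --[arm at {cycle1}]--> intermediate --[arm at {cycle2}]--> {outcomes[(cycle1, cycle2)]}")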

    Molecular Robots Make the Case for Design

    Many researchers working in nanotechnology did not think that the University of Manchester scientists—or any scientists, for that matter—could design and build a molecular robot that could carry out high precision molecular assembly. In the abstract of their paper, the Manchester team writes, “It has been convincingly argued that molecular machines that manipulate individual atoms, or highly reactive clusters of atoms, with Ångstrom precision are unlikely to be realized.”2

    Yet, the researchers were motivated to try to achieve this goal because molecular machines with this capacity exist inside the cell. They continue, “However, biological molecular machines routinely position rather less reactive substrates in order to direct chemical reaction sequences.”3 To put it another way, the Manchester chemists derived insight and inspiration from the biomolecular machines inside the cell to design and build their molecular robot.

    As I have written about before, the use of designs in biochemistry to inspire advances in nanotechnology makes possible a new design argument, one I call the converse watchmaker argument. Namely, if biological designs are the work of a Creator, these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    Comparison of the molecular robots designed by the University of Manchester team with a typical biomolecular machine found in the cell illustrates this point. The newly synthesized molecular robot consists of around 150 atoms, yet it took an enormous amount of ingenuity and effort to design and make. Still, this molecular machine is far less efficient than the biomolecular machines found in the cell. The cell’s biomolecular machines consist of thousands of atoms and are much more elegant and sophisticated than the man-made molecular robots. Considering these differences, is it reasonable to think that the biomolecular machines in the cell resulted from unguided, undirected, contingent processes when they are so much more advanced than the molecular robots built by scientists—some of them among the best chemists in the world?

    The only reasonable explanation is that the biomolecular machines in the cell stem from the work of a mind—a divine mind with unlimited creative capacity.

    Molecular Robots Make the Case for Human Exceptionalism

    Though unimpressive when compared to the elegant biomolecular machines in the cell, molecular robots still stand as a noteworthy scientific accomplishment—one might even say they represent science at its very best. And this accomplishment stresses the fact that human beings are the only species that has ever existed that can create technologies as advanced as the molecular robots invented by the University of Manchester chemists. Our capacity to investigate and understand nature through science and then turn that insight into technologies is unique to human beings. No other creature that exists today, or that has ever existed, possesses this capability.

    Thomas Suddendorf puts it this way:

    “We reflect on and argue about our present situation, our history, and our destiny. We envision wonderful harmonious worlds as easily as we do dreadful tyrannies. Our powers are used for good as they are for bad, and we incessantly debate which is which. Our minds have spawned civilizations and technologies that have changed the face of the Earth, while our closest living animal relatives sit unobtrusively in their remaining forests. There appears to be a tremendous gap between human and animal minds.”4

    Anthropologists believe that symbolism accounts for the gap between humans and the great apes. As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

    Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God.

    There also appears to be a gap between human minds and the minds of the hominins, such as Neanderthals, who preceded us in the fossil record. It is true: claims abound about Neanderthals possessing the capacity for symbolism. Yet, as I discuss in Who Was Adam, those claims do not withstand scientific scrutiny. Recently, paleoanthropologist Ian Tattersall and linguist Noam Chomsky (along with other collaborators) argued that Neanderthals could not have possessed language and, hence, symbolism, because their crude “technology” remained stagnant for the duration of their time on Earth. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet, our technology has progressed exponentially, while Neanderthal technology remained largely static. According to Tattersall, Chomsky, and their coauthors:

    “Our species was born in a technologically archaic context, and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language. . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”5

    In effect, these researchers echo Suddendorf’s point. The gap between human beings and the great apes and hominins becomes most apparent when we consider the remarkable technological advances we have made during our tenure as a species. And this mind-boggling growth in technology points to our exceptionalism as a species, affirming the biblical view that, as human beings, we uniquely bear God’s image.

    Resources to Dig Deeper

    1. Salma Kassem et al., “Stereodivergent Synthesis with a Programmable Molecular Machine,” Nature 549 (September 21, 2017): 374–8, doi:10.1038/nature23677.
    2. Kassem et al., “Stereodivergent Synthesis,” 374.
    3. Kassem et al., “Stereodivergent Synthesis,” 374.
    4. Thomas Suddendorf, The Gap: The Science of What Separates Us from Other Animals (New York: Basic Books, 2013), 2.
    5. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.
  • Fatty Acids Are Beautiful

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 22, 2017

    Who says that fictions onely and false hair
    Become a verse? Is there in truth no beauty?
    Is all good structure in a winding stair?
    May no lines passe, except they do their dutie
    Not to a true, but painted chair?

    –George Herbert, “Jordan (I)”

    I doubt the typical person would ever think fatty acids are a thing of beauty. In fact, most people try to do everything they can to avoid them—at least in their diets. But, as a biochemist who specializes in lipids (a class of biomolecules that includes fatty acids) and cell membranes, I am fascinated by these molecules—and by the biochemical and cellular structures they form.

    I know, I know—I’m a science geek. But for me, the chemical structures and the physicochemical properties of lipids are as beautiful as an evening sunset. As an expert, I thought I knew most of what there is to know about fatty acids, so I was surprised to learn that researchers from Germany recently uncovered an elegant mathematical relationship that explains the structural makeup of fatty acids.1 From my vantage point, this newly revealed mathematical structure boggles my mind, providing new evidence for a Creator’s role in bringing life into existence.

    Fatty Acids

    To a first approximation, fatty acids are relatively simple compounds, consisting of a carboxylic acid head group and a long-chain hydrocarbon tail.


    Structure of two typical fatty acids
    Image credit: Edgar181/Wikimedia Commons

    Despite their structural simplicity, a bewildering number of fatty acid species exist. For example, the hydrocarbon chain of fatty acids can vary in length from 1 carbon atom to over 30. One or more double bonds can occur at varying positions along the chain, and the double bonds can be either cis or trans in geometry. The hydrocarbon tails can be branched and can be modified by carbonyl groups and by hydroxyl substituents at varying points along the chain. As the hydrocarbon chains become longer, the number of possible structural variants increases dramatically.

    How Many Fatty Acids Exist in Nature?

    This question takes on an urgency today because advances in analytical techniques now make it possible for researchers to identify and quantify the vast number of lipid species found in biological systems, birthing the discipline of lipidomics. Investigators are interested in understanding how lipid compositions vary spatially and temporally in biological systems and how these compositions change in response to altered physiological conditions and pathologies.

    To process and make sense of the vast amount of data generated in lipidomics studies, biochemists need to have some understanding of the number of lipid species that are theoretically possible. Recently, researchers from Friedrich Schiller University in Germany took on this challenge—at least, in part—by attempting to calculate the number of chemical species that exist for fatty acids varying in size from 1 to 30 carbon atoms.

    Fatty Acids and Fibonacci Numbers

    To accomplish this objective, the German researchers developed mathematical equations that relate the number of carbon atoms in fatty acids to the number of structural variants (isomers). They discovered that this relationship conforms to the Fibonacci series, with the number of possible fatty acid species increasing by a factor of 1.618—the golden mean—for each carbon atom added to the fatty acid. Though not immediately evident when first examining the wide array of fatty acids found in nature, deeper analysis reveals that a beautiful yet simple mathematical structure underlies the seemingly incomprehensible structural diversity of these biomolecules.
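
    For readers who want to see this growth law in action, the short Python sketch below generates the Fibonacci sequence and shows the ratio of successive terms converging on the golden mean. It does not reproduce the specific isomer-counting rules worked out by the Friedrich Schiller University team; it simply illustrates the Fibonacci relationship they report.

    # Generate Fibonacci numbers and show the ratio of successive terms converging
    # on the golden mean (~1.618). This illustrates the growth law only; it does
    # not reproduce the paper's specific fatty acid isomer-counting rules.
    def fibonacci(n):
        """Return the first n Fibonacci numbers (1, 1, 2, 3, 5, ...)."""
        seq = [1, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return seq[:n]

    counts = fibonacci(30)
    for i in (10, 20, 30):
        ratio = counts[i - 1] / counts[i - 2]
        print(f"term {i:2d}: {counts[i - 1]:>7}   ratio to previous term: {ratio:.4f}")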

    This discovery indicates it is unlikely that the fatty acid compositions found in nature reflect the haphazard outcome of an undirected, historically contingent evolutionary history, as many biochemists are prone to think. Instead, the fatty acids found throughout the biological realm appear to be fundamentally dictated by the tenets of nature. It is provocative to me that the fatty acid diversity produced by the laws of nature includes precisely the isomers needed for life to be possible: a fitness to purpose, if you will.

    Understanding this mathematical relationship and knowing the theoretical number of fatty acid species will certainly aid biochemists working in lipidomics. But for me, the real significance of these results lies in the philosophical and theological arenas.

    The Mathematical Beauty of Fatty Acids

    The golden mean occurs throughout nature, describing, for example, the spiral patterns found in snail shells and the arrangement of flowers and leaves on plants. Its recurrence highlights the pervasiveness of mathematical structures and patterns that describe many aspects of the world in which we live.

    But there is more. As it turns out, we perceive the golden mean to be a thing of beauty. In fact, architects and artists often make use of the golden mean in their work because of its deeply aesthetic qualities.

    Everywhere we look in nature—whether the spiral arms of galaxies, the shell of a snail, or the petals of a flower—we see a grandeur so great that we are often moved to our very core. This grandeur is not confined to the elements of nature we perceive with our senses; it also exists in the underlying mathematical structure of nature, such as the widespread occurrence of the Fibonacci sequence and the golden mean. And it is remarkable that this beautiful mathematical structure even extends to the relationship between the number of carbon atoms in a fatty acid and the number of possible isomers.

    As a Christian, nature’s beauty—including the elegance exemplified by the mathematically dictated composition of fatty acids—prompts me to worship the Creator. But this beauty also points to the reality of God’s existence and supports the biblical view of humanity. If God created the universe, then it is reasonable to expect it to be a beautiful universe. Yet, if the universe came into existence through mechanism alone, there is no reason to think it would display beauty. In other words, the beauty in the world around us signifies the Divine.

    Furthermore, if the universe originated through uncaused physical mechanisms, there is no reason to think that humans would possess an aesthetic sense. But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

    Resources to Dig Deeper

    1. Stefan Schuster, Maximilian Fichtner, and Severin Sasso, “Use of Fibonacci Numbers in Lipidomics—Enumerating Various Classes of Fatty Acids,” Scientific Reports 7 (January 2017): 39821, doi:10.1038/srep39821.
  • Ribosomes: Manufactured by Design, Part 2

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 08, 2017

    “I hope there are no creationists in the audience, but it would be a miracle if a strand of RNA ever appeared on the primitive Earth.”1

    Hugh Ross and I witnessed the late origin-of-life researcher, Leslie Orgel, make this shocking proclamation at the end of a lecture he presented at the 13th International Conference on the Origin of Life (ISSOL 2002).

    Orgel was one of the originators of the RNA world hypothesis. And because of his prominence in the origin-of-life research community, the conference organizers granted Orgel the honor of opening ISSOL 2002 with a plenary lecture on the status of the RNA world hypothesis. During his presentation, Orgel described problem after problem with the leading origin-of-life explanation, reaching the tongue-in-cheek conclusion that it would require a miracle for this evolutionary scenario to yield RNA, let alone the first life-forms. (For a detailed discussion of the problems with the RNA world hypothesis, see my book Creating Life in the Lab.)

    Despite these problems, many origin-of-life researchers—including Leslie Orgel (while he was alive)—remain convinced that the RNA world scenario must be the explanation for the emergence of life via chemical evolution. Why? For one key reason: the intermediary role RNA plays in protein synthesis.

    The RNA World Hypothesis

    The RNA world hypothesis posits that biochemistry was initially organized exclusively around RNA and only later did evolutionary processes transform the RNA world into the familiar DNA-protein world of contemporary organisms. If this model is correct, then the DNA-protein world represents the historically contingent outworking of evolutionary history. To put it another way, contemporary biochemistry has been cobbled together by unguided evolutionary forces and the role RNA plays in protein synthesis is an accidental outcome.

    The discovery of ribozymes in the 1980s provided initial support for the RNA world scenario. These RNA molecules possess functional capabilities, behaving just like enzymes. In other words, RNA not only harbors information like DNA, it also carries out cellular functions like proteins. Origin-of-life researchers take RNA’s dual capacities as evidence that life could have been organized around RNA biochemistry. These same researchers presume that evolutionary processes later apportioned RNA’s twofold capabilities between DNA (information storage) and proteins (function). Origin-of-life researchers often point to RNA’s intermediary role in protein synthesis as evidence for the RNA world hypothesis. Again, RNA’s reduced role in contemporary biochemical systems stands as a vestige of evolutionary history, with RNA viewed as a sort of molecular fossil.

    Ribosomes serve as a prime illustration of RNA’s role as a go-between in protein synthesis. As subcellular particles, ribosomes catalyze (assist) the chemical reactions that form the bonds between the amino acid subunits of the proteins. Two subunits of different sizes (comprised of proteins and RNA molecules) combine to form a functional ribosome. In organisms like bacteria, the large subunit (LSU) contains 2 ribosomal RNA (rRNA) molecules and about 30 different protein molecules. The small subunit (SSU) consists of a single rRNA molecule and about 20 proteins. In more complex organisms, the LSU is formed by 3 rRNA molecules that combine with around 50 distinct proteins, and the SSU consists of a single rRNA molecule and over 30 different proteins.

    The rRNA molecules function as scaffolding, organizing the myriad ribosomal proteins. They also catalyze the chain-forming reactions between amino acids. In other words, the ribosome is a ribozyme. At the ISSOL 2002 meeting, I heard Orgel adamantly insist that the RNA world hypothesis must be valid because rRNA catalyzes protein bond formation.

    Orgel’s perspective gains support from the inefficiency of ribozymes as catalysts. Protein enzymes are much more efficient than ribozymes. In other words, it would seemingly be better and more efficient to design ribosomes so that proteins, not rRNA, catalyzed bond formation between amino acids. This reasoning convinces origin-of-life researchers that the role rRNAs play in protein synthesis is a haphazard consequence of life’s historically contingent evolutionary history.

    But recent work by scientists from Harvard and Uppsala Universities paints a different picture of the compositional makeup of ribosomes, and in doing so, undermines what many origin-of-life researchers believe to be the most compelling evidence for the RNA world hypothesis. These researchers demonstrate that the compositional makeup of ribosomes does not appear to be the accidental outworking of an unguided, contingent process. Instead, an exquisite molecular logic accounts for the composition and structural properties of the protein and rRNA components of ribosomes.2

    Is There a Rationale for Ribosome Structure?

    As part of their research efforts, the Harvard and Uppsala University investigators were specifically trying to answer several questions related to the composition of ribosomes, including:

    1. Why are ribosomes made up of so many proteins?
    2. Why are ribosomal proteins nearly the same size?
    3. Why are ribosomal proteins smaller than typical proteins?
    4. Why are ribosomes made up of so few rRNA molecules?
    5. Why are rRNA molecules so large?
    6. Why do ribosomes employ rRNA as the catalyst to form bonds between amino acids, instead of proteins, which are much more efficient as enzymes?

    Ribosomes Make Ribosomes

    Before a cell can replicate, ribosomes must manufacture the proteins needed to form more ribosomes—in fact, ribosomes need to manufacture enough proteins to form a full complement of these subcellular complexes. This ensures that both daughter cells have the sufficient number of protein-manufacturing machines to thrive once the cell division process is completed. Because of this constraint, cell replication cannot proceed until a duplicate population of ribosomes is produced.

    Ribosome Composition Is Optimal for Efficient Production of Ribosomes

    As discussed in an earlier blog post, the Harvard and Uppsala University investigators discovered that if ribosomal proteins were larger, or if these biomolecules were variable in size, ribosome production would be slow and inefficient. Building ribosomes with smaller, uniform-size proteins represents the fastest way to duplicate the ribosome population, permitting cell replication to proceed in a timely manner. They also determined that the optimal number of ribosomal proteins is between 50 and 80—the number of ribosomal proteins found in nature. In short, the composition of these subcellular complexes appears to be undergirded by an elegant molecular rationale.

    As part of their mathematical modeling study, these researchers also provided an explanation for why ribosomes are made up of large RNA molecules. Because the number of steps involved in rRNA production is fewer than the steps required for protein manufacture, rRNA molecules can be made more rapidly than proteins. This being the case, ribosome production is more efficient when these organelles are built using fewer and larger rRNA molecules as opposed to smaller, more numerous ones.

    The research team learned that ribosomes containing more rRNA can be built faster than ribosomes made up of more proteins. This fact helps explain why rRNA operates as the catalytic portion of ribosomes (linking amino acids together to construct proteins), even though it is a less efficient catalyst than protein.
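
    To get a feel for why "more rRNA is faster," consider the toy comparison below. It is not the model published by Reuveni, Ehrenberg, and Paulsson; the transcription and translation rates are hypothetical placeholders in the general range often quoted for bacteria, and real complications (such as a single mRNA being translated many times) are ignored. The point is simply that delivering ribosome mass as protein requires an extra synthesis step (transcribing an mRNA and then translating it), while rRNA is usable as soon as it is transcribed.

    ```python
    # Toy comparison, not the published model. The rates are hypothetical
    # placeholders, and real complications (e.g., one mRNA being translated
    # many times, co-transcriptional translation) are ignored. The point:
    # building ribosome mass as protein takes an extra step compared with rRNA.

    TRANSCRIPTION_RATE_NT_PER_S = 50  # assumed RNA polymerase speed
    TRANSLATION_RATE_AA_PER_S = 20    # assumed ribosome elongation speed

    def time_as_rrna(length_nt):
        """rRNA is usable directly after transcription: one synthesis step."""
        return length_nt / TRANSCRIPTION_RATE_NT_PER_S

    def time_as_protein(length_aa):
        """Protein needs an mRNA (3 nt per codon) transcribed, then translated."""
        mrna_time = (length_aa * 3) / TRANSCRIPTION_RATE_NT_PER_S
        translation_time = length_aa / TRANSLATION_RATE_AA_PER_S
        return mrna_time + translation_time

    # Compare delivering ~1,500 nucleotides' worth of ribosome either way
    # (about 500 amino acids of protein are encoded by ~1,500 nucleotides).
    print(f"as rRNA:    {time_as_rrna(1500):.0f} s")
    print(f"as protein: {time_as_protein(500):.0f} s")
    ```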

    These insights also explain the compositional differences among ribosomes found in bacteria, eukaryotic cells, and mitochondria. Bacteria, which typically replicate faster than eukaryotic cells, possess ribosomes that contain proportionally more rRNA and fewer proteins than ribosomes found in eukaryotic cells. Mitochondria—organelles found in eukaryotic cells—possess ribosomes with a much greater ratio of proteins to rRNA than the ribosomes of the eukaryotic cytoplasm. This observation makes sense because mitochondrial ribosomes don’t produce themselves; their proteins are made by ribosomes in the cytoplasm.

    It Would Be a Miracle if a Strand of RNA Appeared on the Primitive Earth

    An exquisite molecular rationale undergirds the number and size of rRNA molecules in ribosomes and accounts for why the ribosome is a ribozyme. The work of the Harvard and Uppsala University scientists undermines the view that ribosomes were cobbled together as a result of the evolutionary transition from the RNA world to the DNA/protein world. If the presence and role of RNA molecules in ribosomes were simply vestiges of life’s origin out of an RNA world, then there should not be an elegant molecular logic that accounts for ribosome compositions in bacteria and eukaryotic organisms. In other words, it doesn’t appear as if ribosomes are the unintended outcome of an unguided evolutionary process.

    This conclusion gains support from earlier work by life scientist Ian S. Dunn. As I wrote about in a previous blog post, Dunn has uncovered a molecular rationale for the intermediary role messenger RNA (mRNA) plays in protein synthesis. This work likewise indicates that the intermediary role of RNA molecules in protein synthesis is a necessary design feature of a DNA/protein world, not a molecular vestige of life’s evolutionary origin through an RNA world.

    Given these new insights and the intractable problems with the RNA world scenario, I must agree with Leslie Orgel. It would be a miracle if a strand of RNA appeared on the primitive Earth—unless a Creator intervened.

    Resources to Dig Deeper

    1. Fazale Rana, Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator (Grand Rapids, MI: Baker Books, 2011), 161.
    2. Shlomi Reuveni, Måns Ehrenberg, and Johan Paulsson, “Ribosomes Are Optimized for Autocatalytic Production,” Nature 547 (July 20, 2017): 293–97, doi:10.1038/nature22998.
  • Ribosomes: Manufactured by Design, Part 1

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 01, 2017

    Before joining Reasons to Believe in 1999, I spent seven years working in R&D at a Fortune 500 company, which meant that I spent most of my time in a chemistry laboratory alongside my colleagues trying to develop new technologies with the hope that one day our ideas would become a reality, making their way onto store shelves.

    From time to time, my work would be interrupted by an urgent call from one of our manufacturing plants. Inevitably, there was some crisis requiring my expertise as a chemist to troubleshoot. Often, I could solve the plant’s problem over the phone, or by analyzing a few samples sent to my lab. But, occasionally, the crisis necessitated a trip to the plant. These trips weren’t much fun. They were high-pressure, stressful situations, because the longer the plant was offline, the more money it cost the company.

    But, once the crisis abated, we could breathe easier. And that usually afforded us an opportunity to tour the plant.

    It was a thrill to see working assembly lines manufacturing our products. These manufacturing operations were engineering marvels to behold, efficiently producing high-quality products at unimaginable speeds.

    The Cell as a Factory

    Inside each cell, an ensemble of manufacturing operations exists, more remarkable than any assembly line designed by human engineers. Perhaps one of the most astounding is the biochemical process that produces proteins—the workhorse molecules of life. These large complex molecules work collaboratively to carry out every cellular operation and contribute to the formation of all the structures within the cell.

    Subcellular particles called ribosomes produce proteins through an assembly-line-like operation, replete with sophisticated quality control checkpoints. (As discussed in The Cell’s Design, the similarity between the assembly-line production of proteins and human manufacturing operations bolsters the Watchmaker argument for God’s existence.)


    About 23 nanometers in diameter, ribosomes play a central role in protein synthesis by catalyzing (assisting) the chemical reactions that form the bonds between the amino acid subunits of proteins. A human cell may contain up to half a million ribosomes. A typical bacterium possesses about 20,000 of these subcellular structures, comprising one-fourth the total bacterial mass.

    Two subunits of different sizes (comprised of proteins and RNA molecules) combine to form a functional ribosome. In organisms like bacteria, the large subunit (LSU) contains 2 ribosomal RNA (rRNA) molecules and about 30 different protein molecules. The small subunit (SSU) consists of a single rRNA molecule and about 20 proteins. In more complex organisms, the LSU is formed by 3 rRNA molecules that combine with around 50 distinct proteins and the SSU consists of a single rRNA molecule and over 30 different proteins. The rRNAs act as scaffolding that organizes the myriad ribosomal proteins. They also catalyze the chain-forming reactions between amino acids.

    Ribosomes Make Ribosomes

    Before a cell can replicate, ribosomes must manufacture the proteins needed to form more ribosomes—in fact, the cell’s machinery needs to manufacture enough ribosomes to form a full complement of these subcellular complexes. This ensures that both daughter cells have the sufficient number of protein-manufacturing machines to thrive once the cell division process is completed. Because of this constraint, cell replication cannot proceed until a duplicate population of ribosomes is produced.

    Is There a Rationale for Ribosome Structure?

    Clearly, ribosomes are complex subcellular particles. But, is there any rhyme or reason for their structure? Or are ribosomes the product of a historically contingent evolutionary history?

    New work by researchers from Harvard University and Uppsala University in Sweden provides key insight into the compositional makeup of ribosomes and, in doing so, helps answer these questions.1

    As part of their research efforts, the Harvard and Uppsala University investigators were specifically trying to answer several questions related to the composition of ribosomes, including:

    1. Why are ribosomes made up of so many proteins?
    2. Why are ribosomal proteins nearly the same size?
    3. Why are ribosomal proteins smaller than typical proteins?
    4. Why are ribosomes made up of so few rRNA molecules?
    5. Why are rRNA molecules so large?
    6. Why do ribosomes employ rRNA as the catalyst to form bonds between amino acids, instead of proteins, which are much more efficient as enzymes?

    Ribosome Composition Is Optimal for Efficient Production of Ribosomes

    Using mathematical modeling, the Harvard and Uppsala University investigators discovered that if ribosomal proteins were larger, or if these biomolecules were variable in size, ribosome production would be slow and inefficient. Building ribosomes with smaller, uniform-size proteins represents the fastest way to duplicate the ribosome population, permitting cell replication to proceed in a timely manner.

    These researchers also learned that if the ribosomal proteins were any shorter, ribosome production would likewise become inefficient. This inefficiency stems from the biochemical events needed to initiate protein production. If proteins are too short, the initiation events take longer than the elongation processes that build the protein chains.

    The bottom line: The mathematical modeling work by the Harvard and Uppsala University research team indicates that the sizes of ribosomal proteins are optimal to ensure the most rapid and efficient production of ribosomes. The modeling also determined that the optimal number of ribosomal proteins is between 50 and 80—the number of ribosomal proteins found in nature.
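
    The trade-off the researchers describe can be caricatured with a few lines of code. The sketch below is not their model; the total protein content, elongation rate, and initiation overhead are hypothetical placeholders, and it simply assumes the protein chains are translated in parallel while each chain pays a fixed initiation cost. Even so, it captures the tension: a few large proteins make the slowest chain very long to finish, while very many tiny proteins squander translation capacity on repeated initiation.

    ```python
    # Toy trade-off sketch, not the published model. Hold the total ribosomal
    # protein content fixed and vary how many pieces it is divided into.
    # All numbers are hypothetical placeholders.

    TOTAL_AA = 7000            # assumed total amino acids across all r-proteins
    ELONGATION_AA_PER_S = 20   # assumed translation elongation rate
    INITIATION_S = 3.0         # assumed fixed initiation overhead per protein

    def wall_clock_time(n_proteins):
        """Time until the slowest chain finishes, if all chains run in parallel
        and the proteins are of uniform size (which minimizes the slowest chain)."""
        length = TOTAL_AA / n_proteins
        return INITIATION_S + length / ELONGATION_AA_PER_S

    def ribosome_seconds(n_proteins):
        """Total translation capacity consumed to make one full protein set,
        counting the initiation overhead paid once per protein."""
        return n_proteins * INITIATION_S + TOTAL_AA / ELONGATION_AA_PER_S

    for n in (1, 10, 55, 200, 1000):
        print(f"{n:4d} proteins: slowest chain {wall_clock_time(n):7.1f} s, "
              f"capacity used {ribosome_seconds(n):7.0f} ribosome-seconds")

    # Few, large proteins tie up the assembly process for a long time; very
    # many, tiny proteins burn translation capacity on repeated initiation.
    # The published modeling, which folds both effects into an autocatalytic
    # production rate, locates the optimum at roughly 50-80 similar-size proteins.
    ```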

    Ribosome Composition Is Optimal to Produce a Varied Population of Ribosomes

    The insights of this work have bearing on the recent discovery that cells harbor a heterogeneous population of ribosomes, not a homogeneous one as biochemists have long thought.2 Instead of every ribosome in the cell being identical, capable of producing each and every protein the cell needs, a diverse ensemble of distinct ribosomes exists in the cell. Each type of ribosome manufactures characteristically distinct types of proteins. Typically, ribosomes produce proteins that work in conjunction to carry out related cellular functions. The heterogeneous makeup of ribosomes contributes to the overall efficiency of protein production and also provides an important means to regulate protein synthesis. It wouldn’t make sense to use a single assembly line to make both consumer products, such as antiperspirant sticks, and automobiles. In the same manner, it doesn’t make sense to use the same ribosomes to make the myriad proteins that perform different functions for the cell.

    Because ribosomes consist of a large number of small proteins, the cell can efficiently produce heterogeneous populations of ribosomes by assembling a ribosomal core and then including and excluding specific ribosomal proteins to generate a diverse population of ribosomes.3 In other words, the protein composition of ribosomes is optimized to efficiently replicate a diverse population of these subcellular particles.

    The Case for Creation

    The ingenuity of biochemical systems was one of the features of the cell’s chemistry that most impressed me as a graduate student (and moved me toward the recognition that there was a Creator). And the latest work by researchers on ribosome composition from Harvard and Uppsala Universities provides another illustration of the clever way that biochemical systems are constructed. The composition of these subcellular structures doesn’t appear to be haphazard—a frozen accident of a historically contingent evolutionary process—but instead is undergirded by an elegant molecular rationale, consistent with the work of a mind.

    The case for intelligent design gains reinforcement from the optimal composition of ribosomal proteins. Quite often, designs produced by human beings have been optimized, making this property a telltale signature for intelligent design. In fact, optimality is most often associated with superior designs.

    As I pointed out in The Cell’s Design, ribosomes are chicken-and-egg systems. Because ribosomes are composed of proteins, proteins are needed to make proteins. As with ingenuity and optimality, this property also evinces the work of intelligent agency. Building a system that displays this unusual type of interdependency requires, and hence reflects, the work of a mind.

    On the other hand, the chicken-and-egg nature of ribosome biosynthesis serves as a potent challenge to evolutionary explanations for the origin of life.

    The Challenge to Evolution

    Because ribosomes are needed to make the proteins needed to make ribosomes, it becomes difficult to envision how this type of chicken-and-egg system could emerge via evolutionary processes. Protein synthesis would have to function optimally at the onset. If it did not, it would lead to a cycle of auto-destruction for the cell.

    Ribosomes couldn’t begin as crudely operating protein-manufacturing machines that gradually increased in efficiency—evolving step-by-step—toward the optimal systems characteristic of contemporary biochemistry. If error-prone, ribosomes will produce defective proteins—including ribosomal proteins. In turn, defective ribosomal proteins will form ribosomes even more prone to error, setting up the auto-destruct cycle. And in any evolutionary scheme, the first ribosomes would have been error-prone.

    The compositional requirement that ribosomal proteins be of the just-right size and uniform in length only exacerbates this chicken-and-egg problem. Even if ribosomes form functional, intact proteins, if these proteins aren’t produced in the correct number, size, and uniformity, then ribosomes couldn’t be replicated fast enough to support cellular reproduction.

    In short, the latest insights into the protein composition of ribosomes provide compelling reasons to think that life must stem from a Creator’s handiwork.

    So does the compositional makeup of ribosomal RNA molecules, which will be the topic of my next blog post.


    1. Shlomi Reuveni et al., “Ribosomes Are Optimized for Autocatalytic Production,” Nature 547 (July 20, 2017): 293–97, doi:10.1038/nature22998.
    2. Zhen Shi et al., “Heterogeneous Ribosomes Preferentially Translate Distinct Subpools of mRNAs Genome-Wide,” Molecular Cell 67 (July 6, 2017): 71–83, doi:10.1016/j.molcel.2017.05.021.
    3. Jeffrey A. Hussmann et al., “Ribosomal Architecture: Constraints Imposed by the Need for Self-Production,” Current Biology 27 (August 21, 2017): R798–R800, doi:10.1016/j.cub.2017.06.080.
  • Evolutionary Paradigm Lacks Explanation for Origin of Mitochondria and Eukaryotic Cells

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 03, 2017

    You carried the cross
    Of my shame
    Oh my shame
    You know I believe it
    But I still haven’t found
    What I’m looking for

    —Adam Clayton, Dave Evans, Larry Mullen, Paul David Hewson, Victor Reina

    One of my favorite U2 songs is “I Still Haven’t Found What I’m Looking For.” For me, it’s a reminder that because of Christ, my life has meaning, purpose, and a sense of destiny. Still, no matter how hard I search, I will never discover ultimate fulfillment in this world, but only in the world to come—the new heaven and new earth.

    Though their pursuit is scientific and not religious, many scientists have also failed to find what they have been looking for. Physicists are on a quest to find the Theory of Everything—a Grand Unified Theory (GUT) that can account for everything in physics. However, a GUT eludes them.

    On the other hand, life scientists appear to have found it. They claim to have discovered biology’s GUT: the theory of evolution. Many biologists assert that evolutionary mechanisms can fully account for the origin, history, and design of life. And they are happy to sing about their discovery any chance they get.

    Yet, despite this claim, the evolutionary paradigm seems to come up short time and time again when it comes to explaining key events in life’s history. And this failure serves as the basis for my skepticism regarding the evolutionary paradigm.

    Currently, evolutionary biologists lack explanations for the key transitions in life’s history, including these:

    • origin of life,
    • origin of eukaryotic cells,
    • origin of sexual reproduction,
    • origin of body plans,
    • origin of consciousness,
    • and the origin of human exceptionalism.

    To be certain, evolutionary biologists have proposed models to explain each of these transitions, but the models consistently fail to deliver, as a recent review article published by two prominent evolutionary biologists from the Hungarian Academy of Sciences illustrates.1 In this article, these researchers point out the insufficiency of the endosymbiont hypothesis—the leading evolutionary model for the origin of eukaryotic cells—to account for the origin of mitochondria and, hence, eukaryogenesis.

    The Endosymbiont Hypothesis

    Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis for the origin of eukaryotic cells in the 1960s, building on the ideas of Russian botanist, Konstantin Mereschkowski. Taught in introductory high school and college biology courses, Margulis’s work has become a cornerstone idea of the evolutionary paradigm. This classroom exposure explains why students often ask me about the endosymbiont hypothesis when I speak on university campuses. Many first-year biology students and professional life scientists alike find the evidence for this idea compelling and, consequently, view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to the hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

    Presumably, organelles such as mitochondria were once endosymbionts. Once engulfed, the endosymbionts took up permanent residency within the host, with the endosymbiont growing and dividing inside the host. Over time, the endosymbionts and the host became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved the machinery to produce the proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

    Evidence for the Endosymbiont Hypothesis

    The similarity between organelles and bacteria serves as the main line of evidence for the endosymbiont hypothesis. For example, mitochondria—which are believed to be descended from a group of alpha-proteobacteria—are about the same size and shape as a typical bacterium and have a double membrane structure like gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also exists for the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

    The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.2

    Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

    Despite the seemingly compelling evidence for the endosymbiont hypothesis, evolutionary biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells. In their recently published critical review, Zachar and Szathmary point out that evolutionary biologists have proposed over twenty different evolutionary scenarios for the origin of mitochondria that fall under the umbrella of the endosymbiont hypothesis. Of these, they identify eight that are reasonable, casting the others aside. Still, these eight hypotheses fail to fully account for the origin of mitochondria. The Hungarian biologists delineate twelve questions that any successful endosymbiogenesis model must answer. In turn, they demonstrate that none of these models answers all the questions. In doing so, the two researchers call for a new theory.

    In the article’s abstract, the authors state, “The origin of mitochondria is a unique and hard evolutionary problem, embedded within the origin of eukaryotes. . . . Contending theories widely disagree on ancestral partners, initial conditions and unfolding events. There are many open questions but there is no comparative examination of hypotheses. We have specified twelve questions about the observable facts and hidden processes leading to the establishment of the endosymbiont that a valid hypothesis must address. There is no single theory capable of answering all questions.”3

    Space doesn’t permit me to discuss each of the questions posed by the pair of biologists. Still, I would like to call attention to a few problems confronting the endosymbiont hypothesis, highlighted in their critical review.

    Lack of Transitional Intermediates. Biologists have yet to discover any single-celled organisms that represent transitional intermediates between prokaryotes and eukaryotic cells. (There are some eukaryotes that lack mitochondria, but they appear to have lost these organelles.) All complex cells display the eukaryotic hallmark features. In other words, it looks as if eukaryotic cells emerged in a short period of time, without any transitional forms. In fact, some biologists dub the transition the eukaryotic big bang.

    Chimeric Nature of Eukaryotic Cells. Eukaryotic cells possess an unusual combination of features. Their information-processing systems resemble those of archaea, but their membranes and energy metabolism are bacteria-like. There is no plausible evolutionary scenario to explain this blend of features. It would require the archaeon host to replace its membranes while retaining all its information-processing genes. Evolutionary biologists know of no instance in which this type of transition took place, nor do they know how it could have occurred.

    Absence of Membrane Bioenergetics in the Host. All prokaryotic organisms rely on their plasma membrane to produce energy. If eukaryotic cells emerged via endosymbiogenesis, then the plasma membranes of eukaryotic cells should possess vestiges of that past function. Yet, the plasma membranes of eukaryotic cells show no traces of this essential biochemical feature.

    Mechanism of Inclusion. The most plausible way for the endosymbiont to be taken up by the host cell is through a process called phagocytosis. But why wouldn’t the engulfed cell be digested by the host? How did the endosymbiont escape destruction? And, if it somehow survived, why don’t mitochondria possess a triple membrane system, with the outermost membrane derived from the phagosome?

    Early Selective Advantage. Once inside the host, why didn’t the endosymbiont simply reproduce, overrunning the host cell? What benefit would it be for the host cell to initially harbor the endosymbiont? Currently, evolutionary biologists don’t have answers to troubling questions such as these.

    The challenges delineated by the Hungarian biologists aren’t the only ones faced by evolutionary models for endosymbiogenesis. As I discuss in a previous article, mitochondrial protein biogenesis poses another difficult problem for the endosymbiont hypothesis.

    The authors of the critical review sum it up this way: “The integration of mitochondria was a major transition, and a hard one. It poses puzzles so complicated that new theories are still generated 100 years since endosymbiogenesis was first proposed by Konstantin Mereschkowsky and 50 years since Lynn Margulis cemented the endosymbiotic origin of mitochondria into evolutionary biology. . . . One would expect that by this time, there is a consensus about the transition, but far from that even the most fundamental points are still debated.”4

    Though evolutionary biologists claim to have life’s history all figured out, in reality they are like most of us—they still haven’t found what they are looking for.



    1. Istvan Zachar and Eors Szathmary, “Breath-Giving Cooperation: Critical Review of Origin of Mitochondria Hypotheses,” Biology Direct 12 (August 14, 2017): 19, doi:10.1186/s13062-017-0190-5.
    2. In previous posts (here, here, and here), I explain the rationale for mitochondrial DNA and the presence of cardiolipin in the inner mitochondrial membrane from a creation model/intelligent design vantage point and, in doing so, demonstrate that the two biochemical features aren’t uniquely explained by the endosymbiont hypothesis.
    3. Zachar and Szathmary, “Breath-Giving Cooperation.”
    4. Zachar and Szathmary, “Breath-Giving Cooperation.”
  • Whale Vocal Displays Make Beautiful Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 26, 2017

    There is the sea, vast and spacious,
    teeming with creatures beyond number—
    living things both large and small.
    There the ships go to and fro,
    and Leviathan, which you formed to frolic there.

    —Psalm 104:25–26

    A few weeks ago, I did something I always wanted to do. I listened to the uncut, live version of the Allman Brothers’ Mountain Jam from beginning to end. Thirty-four minutes in length, this song appears on The Allman Brothers’ live At Fillmore East album. Though The Allman Brothers are among my favorite groups, I have never had the time and motivation to listen to this song in its entirety. I like listening to jam bands, but a thirty-four-minute song . . . in any case, a cross-country flight finally afforded me the opportunity to give my undivided attention to this jam band masterpiece. What an incredible display of musicianship!

    Humpback Whale Acoustical Displays

    Rockers aren’t the only ones who can get a bit carried away when performing a song. Humpback whales are notorious for their jam-band-like acoustical displays. These creatures produce elaborate patterns of sounds that researchers dub songs. The whale songs can last for up to 30 minutes, and some whales will repeatedly perform the same song for up to 24 hours.

    Humpback whale songs display a complex hierarchical organization. The most basic element of the song consists of a single sound, called a unit. These creatures combine units together to form phrases. In turn, they combine phrases to form themes. Finally, they combine themes to form a song, with each theme connected by transitional phrasing.

    Researchers aren’t certain why humpback whales engage in these complex acoustical displays. Only the males sing. Perhaps their singing establishes dominance within the group. Most researchers think that the males sing to attract females. (Even for whales, the musicians get the girls.)

    Humpback whales in the same area perform the same song. But, their songs continually evolve. Researchers refer to the complete transformation of one whale song into another as a revolution. As the songs evolve, each member of the group learns the new variant. When one group of humpback whales encounters another group, the two groups exchange songs. This exchange accelerates the song revolution. As a result of this encounter, members of both groups develop and learn a new song.

    How Do Humpback Whales Learn Songs?

    Researchers from the UK and Australia wanted to understand how humpback whales learn new songs.1 Their query is part of a bigger question: How do animals transmit culture—learned information and behaviors—to other members of the group and to the next generation?

    To answer this question, the research team recorded 9,300 acoustical displays over the course of two complete song revolutions for the humpback whales of the South Pacific. Among these recordings, they discovered hybrid songs—vocal displays comprised of bits and pieces of both the old and the new songs. They concluded that these hybrid songs captured the transition from one song to the next.

    These song hybrids consisted of phrases and themes from the old and new songs spliced together. The structure of hybrid songs indicated to the research team that humpback whales must learn songs in the same way that humans learn languages, by learning bits and piecing them together.
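
    For readers who think in data structures, the sketch below mirrors the hierarchy and the splicing described above. The song labels are invented; the only things taken from the study are the nesting (units form phrases, phrases form themes, themes form a song) and the idea that a hybrid keeps some themes of the old song and continues with the new one.

    ```python
    # Illustrative only: the labels are invented, but the nesting mirrors the
    # hierarchy described above (units -> phrases -> themes -> songs), and the
    # hybrid splices themes of the old song onto the new one, as in the
    # transition songs the researchers recorded.

    old_song = [                        # a song is an ordered list of themes,
        [["a", "b"], ["a", "b", "c"]],  # each theme a list of phrases,
        [["d", "d"], ["e"]],            # each phrase a list of units (sounds)
        [["f", "g"], ["f", "g"]],
    ]

    new_song = [
        [["x", "y"], ["x"]],
        [["z", "z", "y"]],
    ]

    def make_hybrid(old, new, split_at):
        """Keep the opening themes of the old song, then continue with the new song."""
        return old[:split_at] + new

    hybrid = make_hybrid(old_song, new_song, split_at=1)
    for i, theme in enumerate(hybrid, start=1):
        print(f"theme {i}: {theme}")
    ```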

    Rock on!

    The Creator’s Artistry

    Sometimes, as Christian apologists, we tend to think of God solely as an Engineer who creates with only one specific purpose or function in mind. But, the insights researchers have gained into the vocal displays of the humpback whales remind me that the God I worship is also a Divine Artist—a God who creates for his enjoyment.

    Scripture supports this idea. Psalm 104:25 states that God formed the leviathan (which in this passage seems to refer to whales) on day five to frolic in the vast, spacious seas. In other words, God created the great sea mammals for no other purpose than to play!

    Artistry and engineering are not mutually exclusive. Engineers often design cars and buildings to be both functionally efficient and aesthetically pleasing. But sometimes, as humans, we create for no other reason than for our pleasure and for others to enjoy and be moved by our work.

    Nature’s Beauty and God’s Existence

    The humpback whale exemplifies the remarkable beauty of the natural world. Everywhere we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so great that we are often moved to our very core.

    Watching a humpback whale breach or hearing a recording of its vocal displays is more than sufficient to produce in us that sense of awe and wonder. And yet, our wonder and amazement only grow as we study these creatures using sophisticated scientific techniques.

    For Christians, nature’s beauty prompts us to worship the Creator. But it also points to the reality of God’s existence and supports the biblical view of humanity.

    As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman, he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”2 In other words, the beauty in the world around us signifies the Divine.

    But, as human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws…would bring about aesthetic sensibilities in humans.”3 But, if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

    In short, the humpback whales’ acoustical displays—a jam band masterpiece—sing of the Creator’s existence and his artistry.



    1. Ellen C. Garland et al., “Song Hybridization Events during Revolutionary Song Change Provide Insights into Cultural Transmission in Humpback Whales,” Proceedings of the National Academy of Sciences USA 114 (July 25, 2017): 7822–29, doi:10.1073/pnas.1621072114.
    2. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    3. Swinburne, Existence of God, 190–91.
  • The Human Genome: Copied by Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 19, 2017

    The days my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.

    But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.

    This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers around the question: Is the biochemical activity measured for the human genome merely biochemical noise or is it productive for the cell? Or to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?

    The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?

    The ENCODE Project

    The ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome—reported phase II results in the fall of 2012. To the surprise of many, the ENCODE project reported that around 80% of the human genome displays biochemical activity, and hence function, with the expectation that this percentage should increase with phase III of the project.

    If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).

    Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE results, citing technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints go here, here, and here.)

    Is Biochemical Activity the Same Thing As Function?

    One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE Project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.

    When I first learned how many evolutionary biologists interpreted the ENCODE results, I was skeptical. As a biochemist, I am well aware that living systems could not tolerate such high levels of transcriptional noise.

    Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.
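
    A back-of-the-envelope calculation illustrates the point. In the sketch below, the transcript length and the number of transcripts made per day are round, hypothetical numbers; the only biochemical figure assumed is the textbook estimate of roughly two high-energy phosphate bonds spent per ribonucleotide added to a growing RNA chain.

    ```python
    # Back-of-the-envelope sketch of the energetics argument. All quantities
    # are hypothetical, round numbers except the roughly two high-energy
    # phosphate bonds spent per ribonucleotide added, a standard textbook figure.

    ATP_EQUIV_PER_NT = 2           # high-energy bonds per nucleotide polymerized
    TRANSCRIPT_LENGTH_NT = 2000    # hypothetical average transcript length
    TRANSCRIPTS_PER_DAY = 100_000  # hypothetical transcripts made per cell per day

    def daily_cost(fraction_noise):
        """Return (useful, wasted) ATP equivalents spent on transcription per day."""
        total = TRANSCRIPTS_PER_DAY * TRANSCRIPT_LENGTH_NT * ATP_EQUIV_PER_NT
        return total * (1 - fraction_noise), total * fraction_noise

    for noise in (0.1, 0.5, 0.9):
        useful, wasted = daily_cost(noise)
        print(f"noise fraction {noise:.0%}: useful {useful:.1e}, wasted {wasted:.1e}")

    # If 90% of transcripts were noise, the cell would spend roughly nine times
    # as much transcriptional energy on junk as on useful messages, the sort of
    # waste the argument above says natural selection should eliminate.
    ```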

    Most RNA Transcripts Are Functional

    Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.

    For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece between June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2

    Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4

    Copied by Design

    Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.

    We have come a long way from the early days of the Human Genome Project. When it was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. And in doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.

    But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.



    1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
    2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
    3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
    4. RIKEN, “Improved Gene Expression Atlas Shows that Many Human Long Non-Coding RNAs May Actually Be Functional,” ScienceDaily, March 1, 2017, www.sciencedaily.com/releases/2017/03/170301132018.htm.
  • Dollo’s Law at Home with a Creation Model, Reprised*

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 12, 2017

    *This article is an expanded and updated version of an article published in 2011 on reasons.org.

    Published posthumously, Thomas Wolfe’s 1940 novel, You Can’t Go Home Again—considered by many to be his most significant work—explores how brutally unfair the passage of time can be. In the finale, George Webber (the story’s protagonist) concedes, “You can’t go back home” to family, childhood, familiar places, dreams, and old ways of life.

    In other words, there’s an irreversible quality to life. Call it the arrow of time.

    Like Wolfe, most evolutionary biologists believe there is an irreversibility to life’s history and the evolutionary process. In fact, this idea is codified in Dollo’s Law, which states that an organism cannot return, even partially, to a previous evolutionary stage occupied by one of its ancestors. Yet, several recent studies have uncovered what appear to be violations of Dollo’s Law. These violations call into question the sufficiency of the evolutionary paradigm to fully account for life’s history. On the other hand, the return to “ancestral states” finds an explanation in an intelligent design/creation model approach to life’s history.

    Dollo’s Law

    French paleontologist Louis Dollo formulated the law that bears his name in 1893 before the advent of modern-day genetics, basing it on patterns he unearthed from the fossil record. Today, his idea finds undergirding in contemporary understanding of genetics and developmental biology.

    Evolutionary biologist Richard Dawkins explains the modern-day conception of Dollo’s Law this way:

    “Dollo’s Law is really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice . . . in either direction. A single mutational step can easily be reversed. But for larger numbers of mutational steps . . . mathematical space of all possible trajectories is so vast that the chance of two trajectories ever arriving at the same point becomes vanishingly small.”1
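
    Dawkins’s point is easy to illustrate with a line or two of arithmetic. The per-step reversal probability below is a made-up placeholder; the takeaway is only that the chance of exactly retracing a trajectory shrinks geometrically with the number of steps.

    ```python
    # Illustrative arithmetic only; the per-step probability is a made-up
    # placeholder. The point is that the chance of retracing a multi-step
    # trajectory shrinks geometrically with the number of steps.

    P_STEP_REVERSAL = 0.01  # hypothetical chance a given step is exactly undone

    for n_steps in (1, 5, 10, 20):
        chance = P_STEP_REVERSAL ** n_steps
        print(f"{n_steps:2d} mutational steps: chance of exact reversal ~ {chance:.0e}")
    ```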

    If a biological trait is lost during the evolutionary process, then the genes and developmental pathways responsible for that feature will eventually degrade, because they are no longer under selective pressure. In 1994, using mathematical modeling, researchers from Indiana University determined that once a biological trait is lost, the corresponding genes can be “reactivated” with reasonable probability over time scales of five hundred thousand to six million years. But once a time span of ten million years has transpired, unexpressed genes and dormant developmental pathways become permanently lost.2

    In 2000, a scientific team from the University of Oregon offered a complementary perspective on the timescale for evolutionary reversals when they calculated how long it takes for a duplicated gene to lose function.3 (Duplicated genes serve as a proxy for dormant genes rendered useless because the trait they encode has been lost.) According to the evolutionary paradigm, once a gene becomes duplicated, it is no longer under the influence of natural selection. That is, it undergoes neutral evolution, and eventually becomes silenced as mutations accrue. As it turns out, the half-life for this process is approximately four million years. To put it another way, sixteen to twenty-four million years after the duplication event, the duplicated gene will have completely lost its function. Presumably, this result applies to dormant, unexpressed genes rendered unnecessary because the trait they specify is lost.
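
    The half-life figure translates into simple decay arithmetic. The sketch below just applies the standard half-life formula to the roughly four-million-year value cited above, showing why sixteen to twenty-four million years (four to six half-lives) leaves essentially no functional copies.

    ```python
    # Simple half-life arithmetic for the timescales quoted above.
    # Fraction remaining after time t with half-life h: 0.5 ** (t / h)

    HALF_LIFE_MY = 4.0  # approximate half-life for silencing a duplicated gene

    def fraction_still_functional(elapsed_my, half_life_my=HALF_LIFE_MY):
        return 0.5 ** (elapsed_my / half_life_my)

    for elapsed in (4, 8, 16, 24):
        print(f"after {elapsed:2d} million years: "
              f"{fraction_still_functional(elapsed):.1%} expected to remain functional")

    # After 16 My (about 4 half-lives) only ~6% of such genes would remain
    # functional, and after 24 My (about 6 half-lives) under 2%, which is why
    # the article treats ~20 million years as the practical ceiling for
    # evolutionary reversals.
    ```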

    Both scenarios assume neutral evolution and the accumulation of mutations in a clocklike manner. But what if the loss of gene function is advantageous? Collaborative work by researchers from Harvard University and NYU in 2007 demonstrated that loss of gene function can take place on the order of about one million years if natural selection influences gene loss.4 This research team studied the loss of eyes in the cave fish, the Mexican tetra. Because they live in a dark cave environment, eyes serve no benefit for these creatures. The team discovered that eye reduction offers an advantage for these fish, because of the high metabolic cost associated with maintaining eyes. The reduced metabolic cost associated with eye loss accelerates the loss of gene function through the operation of natural selection.

    Based on these three studies, it is reasonable to conclude that once a trait has been lost, the time limit for evolutionary reversals is on the order of about 20 million years.

    The very nature of evolutionary mechanisms and the constraints of genetic mutations make it extremely improbable that evolutionary processes would allow an organism to revert to an ancestral state or to recover a lost biological trait. You can’t go home again.

    Violations of Dollo’s Law

    Despite this expectation, over the course of the last several years, researchers have uncovered several instances in which Dollo’s Law has been violated. A brief description of a handful of these occurrences follows:

    The re-evolution of mandibular teeth in the frog genus Gastrotheca. This group is the only one that includes living frogs with true teeth on the lower jaw. When examined from an evolutionary framework, mandibular teeth were present in ancient frogs and then lost in the ancestor of all living frogs. It also looks as if teeth had been absent in frogs for about 225 million years before they reappeared in Gastrotheca.5

    The re-evolution of oviparity in sand boas. When viewed from an evolutionary perspective, it appears as if live birth (viviparity) evolved from egg-laying (oviparity) behaviors in reptiles several times. For example, estimates indicate that this evolutionary transition has occurred in snakes at least thirty times. As a case in point, there are 41 species of boas in the Old and New Worlds that give live birth. Yet, two recently described sand boas, the Arabian sand boa (Eryx jayakari) and the Saharan sand boa (Eryx muelleri), lay eggs. Phylogenetic analysis carried out by researchers from Yale University indicates that egg-laying in these two species of sand boas re-evolved 60 million years after the transition to viviparity took place.6

    The re-evolution of rotating sex combs in Drosophila. Sex combs are modified bristles unique to male fruit flies, used for courtship and mating. Rotating sex combs result when several rows of bristles undergo a rotation of ninety degrees relative to the transverse orientation. In the ananassae fruit fly group, most of the twenty or so species have simple transverse sex combs, with Drosophila bipectinata and Drosophila parabipectinata the two exceptions. These fruit fly species possess rotating sex combs. Phylogenetic analysis conducted by investigators from the University of California, Davis indicates that the rotating sex combs in these two species re-evolved twelve million years after being lost.7

    The re-evolution of sexuality in mites belonging to the taxon Crotoniidae. Mites exhibit a wide range of reproductive modes, including parthenogenesis. In fact, this means of reproduction is prominent in the group Oribatida, clustering into two subgroups that display parthenogenesis almost exclusively. However, residing within one of these clusters is the taxon Crotoniidae, which displays sexual reproduction. Based on an evolutionary analysis, a team of German researchers concluded that this group re-evolved the capacity for sexual reproduction.8

    The re-evolution of shell coiling in limpets. From an evolutionary perspective, the coiled shell has been lost in gastropod lineages numerous times, producing a limpet shape consisting of a cap-shaped shell and a large foot. Evolutionary biologists have long thought that the loss of the coiled shell represents an evolutionary dead end. However, researchers from Venezuela have shown that coiled shell morphology re-evolved, at least once, in calyptraeids, 20 to 100 million years after its loss.9

    This short list gives just a few recently discovered examples of Dollo’s Law violations. Surveying the scientific literature, evolutionary biologist J. J. Wiens identified an additional eight examples in which Dollo’s Law was violated and determined that in all cases the lost trait reappeared after at least 20 million years had passed and in some instances after 120 million years had transpired.10

    Violation of Dollo’s Law and the Theory of Evolution

    Given that the evolutionary paradigm predicts that re-evolution of traits should not occur after the trait has been lost for twenty million years, the numerous discoveries of Dollo’s Law violations provide a basis for skepticism about the capacity of the evolutionary paradigm to fully account for life’s history. The problem is likely worse than it initially appears. J. J. Wiens points out that Dollo’s Law violations may be more widespread than imagined, but difficult to detect for methodological reasons.11

    In response to this serious problem, evolutionary biologists have offered two ways to account for Dollo’s Law violations.12 The first is to question the validity of the evolutionary analysis that exposes the violations. To put it another way, these scientists claim that the recently identified Dollo’s Law violations are artifacts of the evolutionary analysis, and not real. However, this work-around is unconvincing. The evolutionary biologists who discovered the different examples of Dollo’s Law violations were aware of this complication and took painstaking efforts to ensure the validity of the evolutionary analysis they performed.

    Other evolutionary biologists argue that some genes and developmental modules serve more than one function. So, even though the trait specified by a gene or a developmental module is lost, the gene or the module remains intact because they serve other roles. This retention makes it possible for traits to re-evolve, even after a hundred million years. Though reasonable, this explanation still must be viewed as speculative. Evolutionary biologists have yet to apply the same mathematical rigor to this explanation as they have when estimating the timescale for loss of function in dormant genes. These calculations are critical given the expansive timescales involved in some of the Dollo’s Law violations.

    Considering the nature of evolutionary processes, this response neglects the fact that genes and developmental pathways will continue to evolve under the auspices of natural selection, once a trait is lost. Free from the constraints of the lost function, the genes and developmental modules experience new evolutionary possibilities, previously unavailable to them. The more functional roles a gene or developmental module assumes, the less likely it is that these systems can evolve. Shedding one of their roles increases the likelihood that these genes and developmental pathways will become modified as the evolutionary process explores new space now available to it. In this scenario, it is reasonable to think that natural selection could modify the genes and developmental modules to such an extent that the lost trait would be just as unlikely to re-evolve as it would if gene loss was a consequence of neutral evolution. In fact, the study of eye loss in the Mexican tetra suggests that the modification of these genes and developmental modules could occur at a faster rate if governed by natural selection rather than neutral evolution.

    Violation of Dollo’s Law and the Case for Creation

    While Dollo’s Law violations are problematic for the evolutionary paradigm, the re-evolution—or perhaps, more appropriately, the reappearance—of the same biological traits after their disappearance makes sense from a creation model/intelligent design perspective. The reappearance of biological systems could be understood as the work of the Creator. It is not unusual for engineers to reuse the same design or to revisit a previously used design feature in a new prototype. While there is an irreversibility to the evolutionary process, designers are not constrained in that way and can freely return to old designs.

    Dollo’s Law violations are at home in a creation model, highlighting the value of this approach to understanding life’s history.


    1. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (New York: W.W. Norton, 2015), 94.
    2. Charles R. Marshall, Elizabeth C. Raff, and Rudolf A. Raff, “Dollo’s Law and the Death and Resurrection of Genes,” Proceedings of the National Academy of Sciences USA 91 (December 6, 1994): 12283–87.
    3. Michael Lynch and John S. Conery, “The Evolutionary Fate and Consequences of Duplicate Genes,” Science 290 (November 10, 2000): 1151–54, doi:10.1126/science.290.5494.1151.
    4. Meredith Protas et al., “Regressive Evolution in the Mexican Cave Tetra, Astyanax mexicanus,” Current Biology 17 (March 6, 2007): 452–54, doi:10.1016/j.cub.2007.01.051.
    5. John J. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs after More than 200 Million Years, and Re-evaluating Dollo’s Law,” Evolution 65 (May 2011): 1283–96, doi:10.1111/j.1558-5646.2011.01221.x.
    6. Vincent J. Lynch and Günter P. Wagner, “Did Egg-Laying Boas Break Dollo’s Law? Phylogenetic Evidence for Reversal to Oviparity in Sand Boas (Eryx: Boidae),” Evolution 64 (January 2010): 207–16, doi:10.1111/j.1558-5646.2009.00790.x.
    7. Thaddeus D. Seher et al., “Genetic Basis of a Violation of Dollo’s Law: Re-Evolution of Rotating Sex Combs in Drosophila bipectinata,” Genetics 192 (December 1, 2012): 1465–75, doi:10.1534/genetics.112.145524.
    8. Katja Domes et al., “Reevolution of Sexuality Breaks Dollo’s Law,” Proceedings of the National Academy of Sciences USA 104 (April 24, 2007): 7139–44, doi:10.1073/pnas.0700034104.
    9. Rachel Collin and Roberto Cipriani, “Dollo’s Law and the Re-Evolution of Shell Coiling,” Proceedings of the Royal Society B 270 (December 22, 2003): 2551–55, doi:10.1098/rspb.2003.2517.
    10. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    11. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    12. Rachel Collin and Maria Pia Miglietta, “Reversing Opinions on Dollo’s Law,” Trends in Ecology and Evolution 23 (November 2008): 602–9, doi:10.1016/j.tree.2008.06.013.
  • Is 75% of the Human Genome Junk DNA?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 29, 2017

    By the rude bridge that arched the flood,
    Their flag to April’s breeze unfurled,
    Here once the embattled farmers stood,
    And fired the shot heard round the world.

    –Ralph Waldo Emerson, Concord Hymn

    Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

    While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists over how much of the human genome consists of functional DNA sequences.

    Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

    According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

    If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

    Junk DNA and the Creation vs. Evolution Battle

    Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

    When the draft human genome sequence was first reported in 2000, researchers thought that only around 2–5% of the genome consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists have claimed that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge to intelligent design/creationism.

    But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

    ENCODE Skeptics

    The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

    Calculating the Percentage of Functional DNA in the Human Genome

    Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

    Graur argues that junk DNA functions as a sponge absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

    Historically, the replacement-level fertility rate for human beings has been two to three children per couple. Based on Graur’s modeling, maintaining a constant population size at this fertility rate requires 85–90% of the human genome to be junk DNA that absorbs deleterious mutations, capping the functional fraction of the genome at an upper limit of 25%.

    Graur also calculated that if 80% of the human genome were functional, a minimum fertility rate of 15 children per couple would be required to maintain a constant population size. If 100% of the human genome displayed function, the minimum replacement-level fertility rate would climb to 24 children per couple.

    He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.
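    To make the logic of this kind of mutation-load argument concrete, here is a minimal sketch in Python (my own simplification, not Graur’s published model). The rate of roughly 100 new mutations per offspring and the assumption that 2.5% of mutations hitting functional DNA are deleterious are illustrative values I have chosen; with them, the toy calculation happens to land near the fertility figures quoted above.

        import math

        def required_fertility(functional_fraction,
                               mutations_per_generation=100,   # illustrative assumption
                               deleterious_fraction=0.025):    # illustrative assumption
            # U = expected number of new deleterious mutations per offspring.
            U = mutations_per_generation * functional_fraction * deleterious_fraction
            # Treating new mutations as Poisson-distributed, exp(-U) is the chance an
            # offspring carries no new deleterious mutation. To hold the population
            # constant, each couple needs about two such offspring on average, so the
            # required fertility is roughly 2 * exp(U).
            return 2 * math.exp(U)

        for fraction in (0.10, 0.25, 0.80, 1.00):
            print(f"functional fraction {fraction:.0%}: "
                  f"~{required_fertility(fraction):.0f} children per couple")

    The sketch simply shows that the required fertility grows exponentially with the assumed functional fraction, which is also why the conclusions are so sensitive to the deleterious mutation rate fed into the model (see point 2 below).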

    Response to Graur

    So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical. 

    1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

    An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results theoretically don’t make sense in light of the evolutionary paradigm, that is not a reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions. (Go here and here for two recent examples.)

    To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

    While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE Consortium determined that a significant fraction of the human genome is transcribed. It also measured high levels of protein binding across the genome.

    ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

    Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

    Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Unless the cell minimized these disruptive interactions, its biochemical processes would grind to a halt. It is reasonable to think that the same considerations apply to transcription factor binding to DNA.

    2. Graur’s model employs some questionable assumptions.

    Graur uses an unrealistically high rate for deleterious mutations in his calculations.

    Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other functional regions of the genome—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as transcription factor binding sites, and (3) serve as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

    3. The way Graur determines if DNA sequence elements are functional is questionable. 

    Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is functional only if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, functional sequences will resist evolutionary change because any alteration would compromise their function and endanger the organism; deleterious sequence variants would be eliminated from the population through the reduced survivability and reproductive success of the organisms carrying them. Hence, functional sequences are those under the effects of selection.

    In contrast, the ENCODE project employed a causal-role definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

    The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

    • transcription,
    • binding of transcription factors to DNA,
    • histone binding to DNA,
    • DNA binding by modified histones,
    • DNA methylation, and
    • three-dimensional interactions between enhancer sequences and genes.

    In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then that sequence must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.

    So why does Graur insist on the selected-effect definition of function? For no other reason than that the causal-role definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

    As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

    Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

    What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and increases the replacement-level fertility rate.

    4. Buffering against deleterious mutations is a function.

    As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified a function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertions. If insertion events are random, the offending DNA is much more likely to insert itself into “junk DNA” regions than into coding and regulatory sequences, thus protecting the information-harboring regions of the genome.
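    As a back-of-the-envelope illustration of this buffering idea (my own sketch, using an assumed functional fraction rather than a figure from Bandea’s work), a quick simulation shows that under a uniform-insertion assumption the chance that any given insertion disrupts functional sequence simply equals the functional fraction of the genome:

        import random

        random.seed(0)

        functional_fraction = 0.10   # illustrative assumption
        insertions = 100_000

        # If insertions land uniformly at random, each one hits functional DNA
        # with probability equal to the functional fraction of the genome.
        hits = sum(random.random() < functional_fraction for _ in range(insertions))
        print(f"{hits / insertions:.1%} of simulated insertions hit functional DNA")

    With 90% of the genome acting as “junk,” roughly nine out of ten random insertions would land outside coding and regulatory sequences.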

    If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be junk.

    In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

    Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

    Resources to Go Deeper


    1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.
