Where Science and Faith Converge
  • Brain Synchronization Study Evinces the Image of God

    by Fazale Rana | Dec 13, 2017

    As I sit down at my computer to compose this post, the new Justice League movie has just hit the theaters. Even though it has received mixed reviews, I can’t wait to see this latest superhero flick. With several superheroes fighting side-by-side, it raises the question: “Who is the most powerful superhero in the DC universe?”

    I’m not sure how you would respond, but in my opinion, it’s not Superman or Wonder Woman. Instead, it’s a superhero that didn’t appear in the Justice League movie (but he is a longtime member of the Justice League in the comic books): the Martian Manhunter.

    Originally from Mars, J’onn J’onzz possesses superhuman strength and endurance, just like Superman. He can fly and shoot energy beams out of his eyes. But, he also has shapeshifting abilities and is a powerful telepath. It would be fun to see Superman and the Martian Manhunter tangle. My money would be on J’onn J’onzz because of his powerful telepathic abilities. As a telepath, he can read minds, control people’s thoughts and memories, create realistic illusions, and link minds together.

    Image credit: Fazale Rana

    Even though it is fun (and somewhat silly) to daydream about superhuman strength and telepathic abilities, recent work by Spanish neuroscientists from the Basque Center on Cognition, Brain, and Language indicates that mere mortals do indeed have an unusual ability that seems a bit like telepathy. When we engage in conversations with one another—even with strangers—the electrical activities of our brains synchronize.1 In part, this newfound ability may provide the neurological basis for the theory of mind and our capacity to form complex, hierarchical social relationships, properties uniquely displayed by human beings. In other words, this discovery provides more reasons to think that human beings are exceptional in a way that aligns with the biblical concept of the image of God.

    Brain Synchronization

    Most brain activity studies focus on individual subjects and their responses to single stimuli. For example, single-person studies have shown that oscillations in electrical activity in the brain couple with speech rhythms when the test subject is either listening or speaking. The Spanish neuroscientists wanted to go one step further. They wanted to learn what happens to brain activities when two people engage one another in a conversation.

    To find out, they assembled 15 dyads (14 men and 16 women) consisting of strangers who were 20–30 years of age. They asked the members of each dyad to exchange opinions on sports, movies, music, and travel. While the strangers conversed, the researchers monitored the electrical activity in their brains using EEG technology. As expected, they detected coupling of brain electrical activities with the speech rhythms in both speakers and listeners. But, to their surprise, they also detected pure brain entrainment in the electrical activities of the test subjects, independent of the physical properties of the sound waves associated with speaking and listening. To put it another way, the brain activities of the two people in the conversation became synchronized, establishing a deep connection between their minds.
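    For readers curious how this kind of inter-brain coupling can be quantified, here is a minimal sketch of one standard measure, the phase-locking value (PLV), applied to two simulated signals. This is an illustration only: it is not the analysis pipeline Pérez and colleagues used, and every signal parameter in it is invented.

    ```python
    # Minimal sketch: quantifying synchronization between two signals with
    # the phase-locking value (PLV). Illustrative only; not the pipeline
    # of Perez et al., and all parameters below are invented.
    import numpy as np
    from scipy.signal import hilbert

    fs = 250                      # sampling rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)  # 10 seconds of simulated "EEG"

    # Two simulated channels sharing a common 10 Hz rhythm plus noise
    common = np.sin(2 * np.pi * 10 * t)
    speaker = common + 0.5 * np.random.randn(t.size)
    listener = common + 0.5 * np.random.randn(t.size)

    def plv(x, y):
        """Phase-locking value: 1 = perfect phase synchrony, 0 = none."""
        phase_x = np.angle(hilbert(x))
        phase_y = np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    print(f"PLV between the two channels: {plv(speaker, listener):.2f}")
    ```

    With the shared rhythm present, the PLV comes out near 1; replace `common` with independent noise in each channel and it falls toward 0.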

    Brain Synchronization and the Image of God

    The notion that human beings differ in degree, not kind, from other creatures has been a mainstay concept in anthropology and primatology for over 150 years. And it has been the primary reason why so many people have abandoned the belief that human beings bear God’s image. Yet, this stalwart view in anthropology is losing its mooring, with the concept of human exceptionalism taking its place. A growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not merely degree, from other creatures, including Neanderthals. Ironically, the scientists who argue for this updated perspective have developed evidence for human exceptionalism in their attempts to understand how the human mind evolved. But, instead of buttressing human evolution, these new insights marshal support for the biblical conception of humanity.

    Anthropologists identify at least four interrelated qualities that make us exceptional: (1) symbolism, (2) open-ended generative capacity, (3) theory of mind, and (4) our capacity to form complex social networks.

    As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a countless number of ways to create alternate possibilities. Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

    But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together. And we can do this because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also have the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks.

    In effect, these qualities could be viewed as scientific descriptors of the image of God.

    It is noteworthy that all four of these qualities are on full display in the Spanish neuroscientists’ study. The capacity to offer opinions on a wide range of topics and to communicate our ideas with language reflects our symbolism and our open-ended generative capacity. I find it intriguing that the oscillations of our brain’s electrical activity couple with the rhythmic patterns created by speech—suggesting our brains are hard-wired to support our desire to communicate with one another symbolically. I also find it intriguing that our brains become coupled at an even deeper level when we converse, consistent with our theory of mind and our capacity to enter into complex social relationships.

    Even though many people in the scientific community promote a view of humanity that denigrates the image of God, everyday experience continually supports the notion that we are unique and exceptional as human beings. But I find it even more gratifying to learn that scientific investigations into our cognitive and behavioral capacities continue to affirm human exceptionalism and, with it, the image of God. Indeed, we are the crown of creation.

    Endnotes
    1. Alejandro Pérez et al., “Brain-to-Brain Entrainment: EEG Interbrain Synchronization While Speaking and Listening,” Scientific Reports 7 (June 23, 2017): 4190, doi:10.1038/s41598-017-04464-4.
  • Molecular Scale Robotics Build Case for Design

    by Fazale Rana | Dec 06, 2017

    Sometimes bigger is better, and other times, not so much—particularly for scientists working in the field of nanotechnology.

    Scientists and engineers working in this area are obsessed with miniaturization. And because of this obsession, they have developed techniques to manipulate matter at the molecular scale. Thanks to these advances, they can now produce novel materials (that could never be produced with macro-scale methods) with a host of applications. They also use these techniques to fabricate molecular-level devices—nanometer-sized machines—made up of complex arrangements of atoms and molecules. They hope that these machines will perform sophisticated tasks, giving researchers full control of the molecular domain.

    Recently, scientists from the University of Manchester in the UK achieved a milestone in nanotechnology when they designed the first-ever molecular robot that can be deployed to build molecules in the same way that robotic arms on assembly lines manufacture automobiles.1 These molecular robots can be used to improve the efficiency of chemical reactions and make it possible for organic chemists to design synthetic routes that, up to this point, were inconceivable.

    Undoubtedly, this advance will pave the way for more cost-effective, greener chemical reactions at the bench and plant scales. It will also grant organic chemists greater control over chemical reactions, paving the way for the synthesis of new types of compounds including drugs and other pharmaceutical agents.

    As exciting as these prospects are, perhaps the greater significance of this research lies in the intriguing theological implications. For example, comparison of the molecular robots to the biomolecular machines in the cell—machines that carry out similar assembly-line operations—highlights the elegant designs of biochemical systems, evincing a Creator’s handiwork. This research is theologically provocative in another way. It demonstrates human exceptionalism and, by doing so, supports the biblical claim that human beings are made in the image of God.

    Molecular Robotics

    University of Manchester chemists built molecular robots that consist of about 150 atoms of carbon, nitrogen, oxygen, and hydrogen. Though these robots consist of a relatively small number of atoms, the arrangement of these atoms makes the molecular robots structurally complex.

    The robots’ architecture is organized around a molecular-scale platform. Located in the middle of the platform is a molecular arm that extends upward and then bends at a 90-degree angle. This molecular prosthesis binds molecules at the end of the arm and then can be made to swivel between the two ends of the platform as researchers add different chemicals to the reaction. The swiveling action brings the bound molecule in juxtaposition to the chemical groups at the tip ends of the platform. When reactants are added to the solution, these compounds will react with the bound molecule differently depending on the placement of the arm, whether it is oriented toward one end of the platform or the other. In this way, the bound molecule—call it A—can react through two cycles of arm placement to form one of four possible compounds—B, C, D, and E. In this scheme, unwanted side reactions are kept to a minimum, because the bound molecule is precisely positioned next to either of the two ends of the molecular platform. This specificity improves the reaction efficiency, while at the same time making it possible for chemists to generate compounds that would be impossible to synthesize without the specificity granted by the molecular robots.
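    To make the combinatorial logic of this scheme concrete, here is a toy sketch, my own illustration rather than the Manchester team’s chemistry, that enumerates how two successive arm placements map the starting molecule A onto the four possible products. The product labels simply follow the naming used above.

    ```python
    # Toy sketch of the two-cycle reaction logic described above. Each
    # cycle, the arm swivels to one end of the platform ("left" or
    # "right"), which determines how the bound molecule reacts. Two
    # cycles of binary choices give 2 x 2 = 4 distinct products (B-E,
    # matching the labels in the text).
    from itertools import product

    products = {
        ("left", "left"): "B",
        ("left", "right"): "C",
        ("right", "left"): "D",
        ("right", "right"): "E",
    }

    for first, second in product(["left", "right"], repeat=2):
        print(f"A --{first}--> intermediate --{second}--> {products[(first, second)]}")
    ```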

    Molecular Robots Make the Case for Design

    Many researchers working in nanotechnology did not think that the University of Manchester scientists—or any scientists, for that matter—could design and build a molecular robot that could carry out high precision molecular assembly. In the abstract of their paper, the Manchester team writes, “It has been convincingly argued that molecular machines that manipulate individual atoms, or highly reactive clusters of atoms, with Ångstrom precision are unlikely to be realized.”2

    Yet, the researchers were motivated to try to achieve this goal because molecular machines with this capacity exist inside the cell. They continue, “However, biological molecular machines routinely position rather less reactive substrates in order to direct chemical reaction sequences.”3 To put it another way, the Manchester chemists derived insight and inspiration from the biomolecular machines inside the cell to design and build their molecular robot.

    As I have written about before, the use of designs in biochemistry to inspire advances in nanotechnology makes possible a new design argument, one I call the converse watchmaker argument. Namely, if biological designs are the work of a Creator, these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    Comparison of the molecular robots designed by the University of Manchester team with a typical biomolecular machine found in the cell illustrates this point. The newly synthesized molecular robot consists of around 150 atoms, yet it took an enormous amount of ingenuity and effort to design and make. Still, this molecular machine is far less efficient than the biomolecular machines found in the cell. The cell’s biomolecular machines consist of thousands of atoms and are much more elegant and sophisticated than the man-made molecular robots. Considering these differences, is it reasonable to think that the biomolecular machines in the cell resulted from unguided, undirected, contingent processes when they are so much more advanced than the molecular robots built by scientists—some of them among the best chemists in the world?

    The only reasonable explanation is that the biomolecular machines in the cell stem from the work of a mind—a divine mind with unlimited creative capacity.

    Molecular Robots Make the Case for Human Exceptionalism

    Though unimpressive when compared to the elegant biomolecular machines in the cell, molecular robots still stand as a noteworthy scientific accomplishment—one might even say they represent science at its very best. And this accomplishment stresses the fact that human beings are the only species that has ever existed that can create technologies as advanced as the molecular robots invented by the University of Manchester chemists. Our capacity to investigate and understand nature through science and then turn that insight into technologies is unique to human beings. No other creature that exists today, or that has ever existed, possesses this capability.

    Thomas Suddendorf puts it this way:

    “We reflect on and argue about our present situation, our history, and our destiny. We envision wonderful harmonious worlds as easily as we do dreadful tyrannies. Our powers are used for good as they are for bad, and we incessantly debate which is which. Our minds have spawned civilizations and technologies that have changed the face of the Earth, while our closest living animal relatives sit unobtrusively in their remaining forests. There appears to be a tremendous gap between human and animal minds.”4

    Anthropologists believe that symbolism accounts for the gap between humans and the great apes. As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a nearly infinite number of ways to create alternate possibilities.

    Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings. In a sense, symbolism and our open-ended capacity to generate alternative hypotheses are scientific descriptors of the image of God.

    There also appears to be a gap between human minds and the minds of the hominins, such as Neanderthals, who preceded us in the fossil record. It is true: claims abound about Neanderthals possessing the capacity for symbolism. Yet, as I discuss in Who Was Adam, those claims do not withstand scientific scrutiny. Recently, paleoanthropologist Ian Tattersall and linguist Noam Chomsky (along with other collaborators) argued that Neanderthals could not have possessed language and, hence, symbolism, because their crude “technology” remained stagnant for the duration of their time on Earth. Neanderthals—who first appear in the fossil record around 250,000 to 200,000 years ago and disappear around 40,000 years ago—existed on Earth longer than modern humans have. Yet, our technology has progressed exponentially, while Neanderthal technology remained largely static. According to Tattersall, Chomsky, and their coauthors:

    “Our species was born in a technologically archaic context, and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language. . . . Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon.”5

    In effect, these researchers echo Suddendorf’s point. The gap between human beings and the great apes and hominins becomes most apparent when we consider the remarkable technological advances we have made during our tenure as a species. And this mind-boggling growth in technology points to our exceptionalism as a species, affirming the biblical view that, as human beings, we uniquely bear God’s image.

    Endnotes
    1. Salma Kassem et al., “Stereodivergent Synthesis with a Programmable Molecular Machine,” Nature 549 (September 21, 2017): 374–78, doi:10.1038/nature23677.
    2. Kassem et al., “Stereodivergent Synthesis,” 374.
    3. Kassem et al., “Stereodivergent Synthesis,” 374.
    4. Thomas Suddendorf, The Gap: The Science of What Separates Us from Other Animals (New York: Basic Books, 2013), 2.
    5. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12 (August 26, 2014): e1001934, doi:10.1371/journal.pbio.1001934.
  • Fatty Acids Are Beautiful

    by Fazale Rana | Nov 22, 2017

    Who says that fictions onely and false hair
    Become a verse? Is there in truth no beauty?
    Is all good structure in a winding stair?
    May no lines passe, except they do their dutie
    Not to a true, but painted chair?

    —George Herbert, “Jordan (I)”

    I doubt the typical person would ever think fatty acids are a thing of beauty. In fact, most people try to do everything they can to avoid them—at least in their diets. But, as a biochemist who specializes in lipids (a class of biomolecules that includes fatty acids) and cell membranes, I am fascinated by these molecules—and by the biochemical and cellular structures they form.

    I know, I know—I’m a science geek. But for me, the chemical structures and the physicochemical properties of lipids are as beautiful as an evening sunset. As an expert, I thought I knew most of what there is to know about fatty acids, so I was surprised to learn that researchers from Germany recently uncovered an elegant mathematical relationship that explains the structural makeup of fatty acids.1 From my vantage point, this newly revealed mathematical structure boggles my mind, providing new evidence for a Creator’s role in bringing life into existence.

    Fatty Acids

    To a first approximation, fatty acids are relatively simple compounds, consisting of a carboxylic acid head group and a long-chain hydrocarbon tail.

    Structure of two typical fatty acids
    Image credit: Edgar181/Wikimedia Commons

    Despite their structural simplicity, a bewildering number of fatty acid species exist. For example, the hydrocarbon chain of fatty acids can vary in length from 1 carbon atom to over 30. One or more double bonds can occur at varying positions along the chain, and the double bonds can be either cis or trans in geometry. The hydrocarbon tails can be branched and can be modified by carbonyl groups and by hydroxyl substituents at varying points along the chain. As the hydrocarbon chains become longer, the number of possible structural variants increases dramatically.

    How Many Fatty Acids Exist in Nature?

    This question takes on an urgency today because advances in analytical techniques now make it possible for researchers to identify and quantify the vast number of lipid species found in biological systems, birthing the discipline of lipidomics. Investigators are interested in understanding how lipid compositions vary spatially and temporally in biological systems and how these compositions change in response to altered physiological conditions and pathologies.

    To process and make sense of the vast amount of data generated in lipidomics studies, biochemists need to have some understanding of the number of lipid species that are theoretically possible. Recently, researchers from Friedrich Schiller University in Germany took on this challenge—at least, in part—by attempting to calculate the number of chemical species that exist for fatty acids varying in size from 1 to 30 carbon atoms.

    Fatty Acids and Fibonacci Numbers

    To accomplish this objective, the German researchers developed mathematical equations that relate the number of carbon atoms in fatty acids to the number of structural variants (isomers). They discovered that this relationship conforms to the Fibonacci series, with the number of possible fatty acid species increasing by a factor of 1.618—the golden mean—for each carbon atom added to the fatty acid. Though not immediately evident when first examining the wide array of fatty acids found in nature, deeper analysis reveals that a beautiful yet simple mathematical structure underlies the seemingly incomprehensible structural diversity of these biomolecules.
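    To see this growth law at work, consider the sketch below. The paper’s actual enumeration is more involved (it distinguishes several classes of fatty acids), so this only illustrates the underlying pattern: a Fibonacci-type count whose growth factor approaches the golden mean as the chain lengthens.

    ```python
    # Illustrates the Fibonacci growth pattern described above. The ratio
    # of consecutive Fibonacci numbers approaches the golden mean,
    # (1 + sqrt(5)) / 2 ~ 1.618, which is the per-carbon growth factor
    # the researchers report.
    def fib(n):
        """n-th Fibonacci number (1, 1, 2, 3, 5, ...)."""
        a, b = 1, 1
        for _ in range(n - 1):
            a, b = b, a + b
        return a

    for n in range(2, 13):
        print(f"n = {n:2d}  F(n) = {fib(n):4d}  ratio = {fib(n + 1) / fib(n):.3f}")
    ```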

    This discovery indicates it is unlikely that the fatty acid compositions found in nature reflect the haphazard outcome of an undirected, historically contingent evolutionary history, as many biochemists are prone to think. Instead, the fatty acids found throughout the biological realm appear to be fundamentally dictated by the laws of nature. It is provocative to me that the fatty acid diversity these laws produce is precisely the set of isomers needed for life to be possible—a fitness to purpose, if you will.

    Understanding this mathematical relationship and knowing the theoretical number of fatty acid species will certainly aid biochemists working in lipidomics. But for me, the real significance of these results lies in the philosophical and theological arenas.

    The Mathematical Beauty of Fatty Acids

    The golden mean occurs throughout nature. It describes, for example, the spiral patterns found in snail shells and in the flowers and leaves of plants, highlighting the pervasiveness of mathematical structures and patterns that describe many aspects of the world in which we live.

    But there is more. As it turns out, we perceive the golden mean to be a thing of beauty. In fact, architects and artists often make use of the golden mean in their work because of its deeply aesthetic qualities.

    Everywhere we look in nature—whether the spiral arms of galaxies, the shell of a snail, or the petals of a flower—we see a grandeur so great that we are often moved to our very core. This grandeur is not confined to the elements of nature we perceive with our senses; it also exists in the underlying mathematical structure of nature, such as the widespread occurrence of the Fibonacci sequence and the golden mean. And it is remarkable that this beautiful mathematical structure even extends to the relationship between the number of carbon atoms in a fatty acid and the number of isomers.

    As a Christian, nature’s beauty—including the elegance exemplified by the mathematically dictated composition of fatty acids—prompts me to worship the Creator. But this beauty also points to the reality of God’s existence and supports the biblical view of humanity. If God created the universe, then it is reasonable to expect it to be a beautiful universe. Yet, if the universe came into existence through mechanism alone, there is no reason to think it would display beauty. In other words, the beauty in the world around us signifies the Divine.

    Furthermore, if the universe originated through uncaused physical mechanisms, there is no reason to think that humans would possess an aesthetic sense. But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

    Endnotes
    1. Stefan Schuster, Maximilian Fichtner, and Severin Sasso, “Use of Fibonacci Numbers in Lipidomics—Enumerating Various Classes of Fatty Acids,” Scientific Reports 7 (January 2017): 39821, doi:10.1038/srep39821.
  • Ribosomes: Manufactured by Design, Part 2

    by Fazale Rana | Nov 08, 2017

    “I hope there are no creationists in the audience, but it would be a miracle if a strand of RNA ever appeared on the primitive Earth.”1

    Hugh Ross and I witnessed the late origin-of-life researcher Leslie Orgel make this shocking proclamation at the end of a lecture he presented at the 13th International Conference on the Origin of Life (ISSOL 2002).

    Orgel was one of the originators of the RNA world hypothesis. And because of his prominence in the origin-of-life research community, the conference organizers granted Orgel the honor of opening ISSOL 2002 with a plenary lecture on the status of the RNA world hypothesis. During his presentation, Orgel described problem after problem with the leading origin-of-life explanation, reaching the tongue-in-cheek conclusion that it would require a miracle for this evolutionary scenario to yield RNA, let alone the first life-forms. (For a detailed discussion of the problems with the RNA world hypothesis, see my book Creating Life in the Lab.)

    Despite these problems, many origin-of-life researchers—including Leslie Orgel (while he was alive)—remain convinced that the RNA world scenario must be the explanation for the emergence of life via chemical evolution. Why? For one key reason: the intermediary role RNA plays in protein synthesis.

    The RNA World Hypothesis

    The RNA world hypothesis posits that biochemistry was initially organized exclusively around RNA and only later did evolutionary processes transform the RNA world into the familiar DNA-protein world of contemporary organisms. If this model is correct, then the DNA-protein world represents the historically contingent outworking of evolutionary history. To put it another way, contemporary biochemistry has been cobbled together by unguided evolutionary forces and the role RNA plays in protein synthesis is an accidental outcome.

    The discovery of ribozymes in the 1980s provided initial support for the RNA world scenario. These RNA molecules possess functional capabilities, behaving just like enzymes. In other words, RNA not only harbors information like DNA, it also carries out cellular functions like proteins. Origin-of-life researchers take RNA’s dual capacities as evidence that life could have been organized around RNA biochemistry. These same researchers presume that evolutionary processes later apportioned RNA’s twofold capabilities between DNA (information storage) and proteins (function). Origin-of-life researchers often point to RNA’s intermediary role in protein synthesis as evidence for the RNA world hypothesis. Again, RNA’s reduced role in contemporary biochemical systems stands as a vestige of evolutionary history, with RNA viewed as a sort of molecular fossil.

    Ribosomes serve as a prime illustration of RNA’s role as a go-between in protein synthesis. As subcellular particles, ribosomes catalyze (assist) the chemical reactions that form the bonds between the amino acid subunits of the proteins. Two subunits of different sizes (composed of proteins and RNA molecules) combine to form a functional ribosome. In organisms like bacteria, the large subunit (LSU) contains 2 ribosomal RNA (rRNA) molecules and about 30 different protein molecules. The small subunit (SSU) consists of a single rRNA molecule and about 20 proteins. In more complex organisms, the LSU is formed by 3 rRNA molecules that combine with around 50 distinct proteins, and the SSU consists of a single rRNA molecule and over 30 different proteins.

    The rRNA molecules function as scaffolding, organizing the myriad ribosomal proteins. They also catalyze the chain-forming reactions between amino acids. In other words, the ribosome is a ribozyme. At the ISSOL 2002 meeting, I heard Orgel adamantly insist that the RNA world hypothesis must be valid because rRNA catalyzes protein bond formation.

    Orgel’s perspective gains support from the inefficiency of ribozymes as catalysts. Protein enzymes are much more efficient than ribozymes. In other words, it seemingly would be better and more efficient to design ribosomes so that proteins catalyzed bond formation between amino acids, not rRNA. This reasoning convinces origin-of-life researchers that the role rRNAs play in protein synthesis is a haphazard consequence of life’s historically contingent evolutionary history.

    But recent work by scientists from Harvard and Uppsala Universities paints a different picture of the compositional makeup of ribosomes, and in doing so, undermines what many origin-of-life researchers believe to be the most compelling evidence for the RNA world hypothesis. These researchers demonstrate that the compositional makeup of ribosomes does not appear to be the accidental outworking of an unguided, contingent process. Instead, an exquisite molecular logic accounts for the composition and structural properties of the protein and rRNA components of ribosomes.2

    Is There a Rationale for Ribosome Structure?

    As part of their research efforts, the Harvard and Uppsala University investigators were specifically trying to answer several questions related to the composition of ribosomes, including:

    1. Why are ribosomes made up of so many proteins?
    2. Why are ribosomal proteins nearly the same size?
    3. Why are ribosomal proteins smaller than typical proteins?
    4. Why are ribosomes made up of so few rRNA molecules?
    5. Why are rRNA molecules so large?
    6. Why do ribosomes employ rRNA as the catalyst to form bonds between amino acids, instead of proteins, which are much more efficient as enzymes?

    Ribosomes Make Ribosomes

    Before a cell can replicate, ribosomes must manufacture the proteins needed to form more ribosomes—in fact, ribosomes need to manufacture enough proteins to form a full complement of these subcellular complexes. This ensures that both daughter cells have a sufficient number of protein-manufacturing machines to thrive once the cell division process is completed. Because of this constraint, cell replication cannot proceed until a duplicate population of ribosomes is produced.

    Ribosome Composition is Optimal for Efficient Production of Ribosomes

    As discussed in an earlier blog post, the Harvard and Uppsala University investigators discovered that if ribosomal proteins were larger, or if these biomolecules were variable in size, ribosome production would be slow and inefficient. Building ribosomes with smaller, uniform-size proteins represents the faster way to duplicate the ribosome population, permitting cell replication to proceed in a timely manner. They also determined that the optimal number of ribosomal proteins is between 50 and 80—the number of ribosomal proteins found in nature. In short, the composition of these subcellular complexes appears to be undergirded by an elegant molecular rationale.

    As part of their mathematical modeling study, these researchers also provided an explanation for why ribosomes are made up of large RNA molecules. Because the number of steps involved in rRNA production is fewer than the steps required for protein manufacture, rRNA molecules can be made more rapidly than proteins. This being the case, ribosome production is more efficient when these organelles are built using fewer and larger rRNA molecules as opposed to smaller, more numerous ones.

    The research team learned that ribosomes containing more rRNA can be built faster than ribosomes made up of more proteins. This fact helps explain why rRNA operates as the catalytic portion of ribosomes (linking amino acids together to construct proteins), even though rRNA is less efficient as a catalyst than protein enzymes.

    These insights also explain the compositional differences among ribosomes found in bacteria, eukaryotic cells, and mitochondria. Bacteria, which typically replicate faster than eukaryotic cells, possess ribosomes that contain proportionally more rRNA and fewer proteins than ribosomes found in eukaryotic cells. Mitochondria—organelles found in eukaryotic cells—possess ribosomes with a much greater ratio of proteins to rRNA than eukaryotic cells. This observation makes sense because ribosomes in mitochondria don’t produce themselves.

    It Would Be a Miracle if a Strand of RNA Appeared on the Primitive Earth

    An exquisite molecular rationale undergirds the number and size of rRNA molecules in ribosomes and accounts for why the ribosome is a ribozyme. The work of the Harvard and Uppsala University scientists undermines the view that ribosomes were cobbled together as a result of the evolutionary transition from the RNA world to the DNA/protein world. If the presence and role of RNA molecules in ribosomes were simply vestiges of life’s origin out of an RNA world, then there should not be an elegant molecular logic that accounts for ribosome compositions in bacteria and eukaryotic organisms. In other words, it doesn’t appear as if ribosomes are the unintended outcome of an unguided evolutionary process.

    This conclusion gains support from earlier work by life scientist Ian S. Dunn. As I wrote about in a previous blog post, Dunn has uncovered a molecular rationale for the intermediary role messenger RNA (mRNA) plays in protein synthesis. Again, it indicates that the intermediary role of RNA molecules in protein synthesis is a necessary design of a DNA/protein world, not a molecular vestige of life’s evolutionary origin proceeding through an RNA world.

    Given these new insights and the intractable problems with the RNA world scenario, I must agree with Leslie Orgel. It would be a miracle if a strand of RNA appeared on the primitive Earth—unless a Creator intervened.

    Endnotes
    1. Fazale Rana, Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator (Grand Rapids, MI: Baker Books, 2011), 161.
    2. Shlomi Reuveni, Måns Ehrenberg, and Johan Paulsson, “Ribosomes Are Optimized for Autocatalytic Production,” Nature 547 (July 20, 2017): 293–97, doi:10.1038/nature22998.
  • Ribosomes: Manufactured by Design, Part 1

    by Fazale Rana | Nov 01, 2017

    Before joining Reasons to Believe in 1999, I spent seven years working in R&D at a Fortune 500 company, which meant that I spent most of my time in a chemistry laboratory alongside my colleagues trying to develop new technologies with the hope that one day our ideas would become a reality, making their way onto store shelves.

    From time to time, my work would be interrupted by an urgent call from one of our manufacturing plants. Inevitably, there was some crisis requiring my expertise as a chemist to troubleshoot. Often, I could solve the plant’s problem over the phone, or by analyzing a few samples sent to my lab. But, occasionally, the crisis necessitated a trip to the plant. These trips weren’t much fun. They were high pressure, stressful situations, because the longer the plant was offline, the more money it cost the company.

    But, once the crisis abated, we could breathe easier. And that usually afforded us an opportunity to tour the plant.

    It was a thrill to see working assembly lines manufacturing our products. These manufacturing operations were engineering marvels to behold, efficiently producing high-quality products at unimaginable speeds.

    The Cell as a Factory

    Inside each cell, an ensemble of manufacturing operations exists, more remarkable than any assembly line designed by human engineers. Perhaps one of the most astounding is the biochemical process that produces proteins—the workhorse molecules of life. These large complex molecules work collaboratively to carry out every cellular operation and contribute to the formation of all the structures within the cell.

    Subcellular particles called ribosomes produce proteins through an assembly-line-like operation, replete with sophisticated quality control checkpoints. (As discussed in The Cell’s Design, the similarity between the assembly-line production of proteins and human manufacturing operations bolsters the Watchmaker argument for God’s existence.)

    Ribosomes

    About 23 nanometers in diameter, ribosomes play a central role in protein synthesis by catalyzing (assisting) the chemical reactions that form the bonds between the amino acid subunits of proteins. A human cell may contain up to half a million ribosomes. A typical bacterium possesses about 20,000 of these subcellular structures, comprising one-fourth the total bacterial mass.

    Two subunits of different sizes (composed of proteins and RNA molecules) combine to form a functional ribosome. In organisms like bacteria, the large subunit (LSU) contains 2 ribosomal RNA (rRNA) molecules and about 30 different protein molecules. The small subunit (SSU) consists of a single rRNA molecule and about 20 proteins. In more complex organisms, the LSU is formed by 3 rRNA molecules that combine with around 50 distinct proteins, and the SSU consists of a single rRNA molecule and over 30 different proteins. The rRNAs act as scaffolding that organizes the myriad ribosomal proteins. They also catalyze the chain-forming reactions between amino acids.

    Ribosomes Make Ribosomes

    Before a cell can replicate, ribosomes must manufacture the proteins needed to form more ribosomes—in fact, the cell’s machinery needs to manufacture enough ribosomes to form a full complement of these subcellular complexes. This ensures that both daughter cells have a sufficient number of protein-manufacturing machines to thrive once the cell division process is completed. Because of this constraint, cell replication cannot proceed until a duplicate population of ribosomes is produced.

    Is There a Rationale for Ribosome Structure?

    Clearly, ribosomes are complex subcellular particles. But, is there any rhyme or reason for their structure? Or are ribosomes the product of a historically contingent evolutionary history?

    New work by researchers from Harvard University and Uppsala University in Sweden provides key insight into the compositional makeup of ribosomes and, in doing so, helps answer these questions.1

    As part of their research efforts, the Harvard and Uppsala University investigators were specifically trying to answer several questions related to the composition of ribosomes, including:

    1. Why are ribosomes made up of so many proteins?
    2. Why are ribosomal proteins nearly the same size?
    3. Why are ribosomal proteins smaller than typical proteins?
    4. Why are ribosomes made up of so few rRNA molecules?
    5. Why are rRNA molecules so large?
    6. Why do ribosomes employ rRNA as the catalyst to form bonds between amino acids, instead of proteins, which are much more efficient as enzymes?

    Ribosome Composition Is Optimal for Efficient Production of Ribosomes

    Using mathematical modeling, the Harvard and Uppsala University investigators discovered that if ribosomal proteins were larger, or if these biomolecules were variable in size, ribosome production would be slow and inefficient. Building ribosomes with smaller, uniform-size proteins represents the faster way to duplicate the ribosome population, permitting cell replication to proceed in a timely manner.

    These researchers also learned that if the ribosomal proteins were any shorter, ribosome production would also become inefficient. This inefficiency stems from the biochemical events needed to initiate protein production. If proteins are too short, then the initiation events take longer than the elongation processes that build the protein chains.

    The bottom line: The mathematical modeling work by the Harvard and Uppsala University research team indicates that the sizes of ribosomal proteins are optimal to ensure the most rapid and efficient production of ribosomes. The mathematical modeling also determined that the optimal number of ribosomal proteins is between 50 and 80—the number of ribosomal proteins found in nature.
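    The tradeoff at work here can be caricatured in a few lines of code. The sketch below is my own toy model, not the model Reuveni and coworkers published, and all of its parameters are invented. It assumes a new ribosome starts working only once its slowest protein chain is finished, so the population obeys a delayed growth law R'(t) = a * R(t - latency), whose exponential rate mu satisfies mu * exp(mu * latency) = a. Splitting the protein mass into more pieces raises the total initiation overhead (lowering a) but shortens the latency, so the growth rate peaks at an intermediate number of proteins.

    ```python
    # Toy model of the size/number tradeoff described above. All
    # parameters are invented; this is not the published model.
    from math import exp

    M = 7000.0     # total amino acids in one set of ribosomal proteins
    k = 20.0       # elongation rate, amino acids per second
    t_init = 5.0   # initiation overhead per protein, seconds

    def growth_rate(n):
        """Exponential growth rate when the protein mass M is split
        into n equal-size proteins (solves mu*exp(mu*latency) = a)."""
        a = 1.0 / (n * t_init + M / k)  # protein sets per ribosome-second
        latency = t_init + (M / n) / k  # time to finish one protein chain
        lo, hi = 0.0, a                 # the root lies in (0, a)
        for _ in range(60):             # bisection
            mu = (lo + hi) / 2
            if mu * exp(mu * latency) < a:
                lo = mu
            else:
                hi = mu
        return mu

    best = max(range(1, 201), key=growth_rate)
    print(f"growth rate peaks at n = {best} proteins")
    ```

    With these made-up numbers the optimum lands near ten proteins; the published model, which treats the underlying biochemistry far more carefully, places it between 50 and 80.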

    Ribosome Composition Is Optimal to Produce a Varied Population of Ribosomes

    The insights of this work have bearing on the recent discovery that within cells a heterogeneous population of ribosomes exists, not a homogeneous one as biochemists have long thought.2 Instead of every ribosome in the cell being identical, capable of producing each and every protein the cell needs, a diverse ensemble of distinct ribosomes exists in the cell. Each type of ribosome manufactures characteristically distinct types of proteins. Typically, ribosomes produce proteins that work in conjunction to carry out related cellular functions. The heterogeneous makeup of ribosomes contributes to the overall efficiency of protein production, and also provides an important means to regulate protein synthesis. It wouldn’t make sense to use an assembly line to make both consumer products, such as antiperspirant sticks, and automobiles. In the same manner, it doesn’t make sense to use the same ribosomes to make the myriad proteins that perform different functions for the cell.

    Because ribosomes consist of a large number of small proteins, the cell can efficiently produce heterogeneous populations of ribosomes by assembling a ribosomal core and then including and excluding specific ribosomal proteins to generate a diverse population of ribosomes.3 In other words, the protein composition of ribosomes is optimized to efficiently replicate a diverse population of these subcellular particles.

    The Case for Creation

    The ingenuity of biochemical systems was one of the features of the cell’s chemistry that most impressed me as a graduate student (and moved me toward the recognition that there was a Creator). And the latest work by researchers on ribosome composition from Harvard and Uppsala Universities provides another illustration of the clever way that biochemical systems are constructed. The composition of these subcellular structures doesn’t appear to be haphazard—a frozen accident of a historically contingent evolutionary process—but instead is undergirded by an elegant molecular rationale, consistent with the work of a mind.

    The case for intelligent design gains reinforcement from the optimal composition of ribosomal proteins. Quite often, designs produced by human beings have been optimized, making this property a telltale signature for intelligent design. In fact, optimality is most often associated with superior designs.

    As I pointed out in The Cell’s Design, ribosomes are chicken-and-egg systems. Because ribosomes are composed of proteins, proteins are needed to make proteins. As with ingenuity and optimality, this property also evinces the work of intelligent agency. Building a system that displays this unusual type of interdependency requires, and hence reflects, the work of a mind.

    On the other hand, the chicken-and-egg nature of ribosome biosynthesis serves as a potent challenge to evolutionary explanations for the origin of life.

    The Challenge to Evolution

    Because ribosomes are needed to make the proteins needed to make ribosomes, it becomes difficult to envision how this type of chicken-and-egg system could emerge via evolutionary processes. Protein synthesis would have to function optimally at the onset. If it did not, it would lead to a cycle of auto-destruction for the cell.

    Ribosomes couldn’t begin as crudely operating protein-manufacturing machines that gradually increased in efficiency—evolving step-by-step—toward the optimal systems, characteristic of contemporary biochemistry. If error-prone, ribosomes will produce defective proteins—including ribosomal proteins. In turn, defective ribosomal proteins will form ribosomes even more prone to error, setting up the auto-destruct cycle. And in any evolutionary scheme, the first ribosomes would have been error-prone.

    The compositional requirement that ribosomal proteins be of the just-right size and uniform in length only exacerbates this chicken-and-egg problem. Even if ribosomes form functional, intact proteins, if these proteins aren’t produced in the right number, size, and uniformity, then ribosomes couldn’t be replicated fast enough to support cellular reproduction.

    In short, the latest insights into the protein composition of ribosomes provide compelling reasons to think that life must stem from a Creator’s handiwork.

    So does the compositional makeup of ribosomal RNA molecules, which will be the topic of my next blog post.

    Endnotes
    1. Shlomi Reuveni, Måns Ehrenberg, and Johan Paulsson, “Ribosomes Are Optimized for Autocatalytic Production,” Nature 547 (July 20, 2017): 293–97, doi:10.1038/nature22998.
    2. Zhen Shi et al., “Heterogeneous Ribosomes Preferentially Translate Distinct Subpools of mRNAs Genome-Wide,” Molecular Cell 67 (July 6, 2017): 71–83, doi:10.1016/j.molcel.2017.05.021.
    3. Jeffrey A. Hussmann et al., “Ribosomal Architecture: Constraints Imposed by the Need for Self-Production,” Current Biology 27 (August 21, 2017): R798–R800, doi:10.1016/j.cub.2017.06.080.
  • Evolutionary Paradigm Lacks Explanation for Origin of Mitochondria and Eukaryotic Cells

    by Fazale Rana | Oct 03, 2017

    You carried the cross
    Of my shame
    Oh my shame
    You know I believe it
    But I still haven’t found
    What I’m looking for

    —Adam Clayton, Dave Evans, Larry Mullen, Paul David Hewson, Victor Reina

    One of my favorite U2 songs is “I Still Haven’t Found What I’m Looking For.” For me, it’s a reminder that because of Christ, my life has meaning, purpose, and a sense of destiny. Still, I will never discover ultimate fulfillment in this world, no matter how hard I search; that fulfillment awaits the world to come—the new heaven and new earth.

    Though their pursuit is scientific and not religious, many scientists have also failed to find what they have been looking for. Physicists are on a quest to find the Theory of Everything—a Grand Unified Theory (GUT) that can account for everything in physics. However, a GUT eludes them.

    On the other hand, life scientists appear to have found it. They claim to have discovered biology’s GUT: the theory of evolution. Many biologists assert that evolutionary mechanisms can fully account for the origin, history, and design of life. And they are happy to sing about their discovery any chance they get.

    Yet, despite this claim, the evolutionary paradigm seems to come up short time and time again when it comes to explaining key events in life’s history. And this failure serves as the basis for my skepticism regarding the evolutionary paradigm.

    Currently, evolutionary biologists lack explanations for key transitions in life’s history, including the:

    • origin of life,
    • origin of eukaryotic cells,
    • origin of sexual reproduction,
    • origin of body plans,
    • origin of consciousness,
    • origin of human exceptionalism.

    To be certain, evolutionary biologists have proposed models to explain each of these transitions, but the models consistently fail to deliver, as a recent review article published by two prominent evolutionary biologists from the Hungarian Academy of Sciences illustrates.1 In this article, these researchers point out the insufficiency of the endosymbiont hypothesis—the leading evolutionary model for the origin of eukaryotic cells—to account for the origin of mitochondria and, hence, eukaryogenesis.

    The Endosymbiont Hypothesis

    Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis for the origin of eukaryotic cells in the 1960s, building on the ideas of Russian botanist Konstantin Mereschkowsky. Taught in introductory high school and college biology courses, Margulis’s work has become a cornerstone idea of the evolutionary paradigm. This classroom exposure explains why students often ask me about the endosymbiont hypothesis when I speak on university campuses. Many first-year biology students and professional life scientists alike find the evidence for this idea compelling and, consequently, view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to the hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

    Presumably, organelles such as mitochondria were once endosymbionts. Once engulfed, the endosymbionts took up permanent residency within the host, with the endosymbiont growing and dividing inside the host. Over time, the endosymbionts and the host became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved the machinery to produce the proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

    Evidence for the Endosymbiont Hypothesis

    The similarity between organelles and bacteria serves as the main line of evidence for the endosymbiont hypothesis. For example, mitochondria—which are believed to be descended from a group of alpha-proteobacteria—are about the same size and shape as a typical bacterium and have a double membrane structure like gram-negative cells. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also exists for the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

    The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.2

    Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

    Despite the seemingly compelling evidence for the endosymbiont hypothesis, evolutionary biologists lack a genuine explanation for the origin of mitochondria, and, in a broader context, the origin of eukaryotic cells. In their recently published critical review, Zachar and Szathmáry point out that evolutionary biologists have proposed over twenty different evolutionary scenarios for the origin of mitochondria, all falling under the umbrella of the endosymbiont hypothesis. Of these, they identify eight that are reasonable, casting the others aside. Still, these eight hypotheses fail to fully account for the origin of mitochondria. The Hungarian biologists delineate twelve questions that any successful endosymbiogenesis model must answer. In turn, they demonstrate that none of these models answers all the questions. In doing so, the two researchers call for a new theory.

    In the article’s abstract, the authors state, “The origin of mitochondria is a unique and hard evolutionary problem, embedded within the origin of eukaryotes. . . . Contending theories widely disagree on ancestral partners, initial conditions and unfolding events. There are many open questions but there is no comparative examination of hypotheses. We have specified twelve questions about the observable facts and hidden processes leading to the establishment of the endosymbiont that a valid hypothesis must address. There is no single theory capable of answering all questions.”3

    Space doesn’t permit me to discuss each of the questions posed by the pair of biologists. Still, I would like to call attention to a few problems confronting the endosymbiont hypothesis, highlighted in their critical review.

    Lack of Transitional Intermediates. Biologists have yet to discover any single-celled organisms that represent transitional intermediates between prokaryotes and eukaryotic cells. (There are some eukaryotes that lack mitochondria, but they appear to have lost these organelles.) All complex cells display the eukaryotic hallmark features. In other words, it looks as if eukaryotic cells emerged in a short period of time, without any transitional forms. In fact, some biologists dub the transition the eukaryotic big bang.

    Chimeric Nature of Eukaryotic Cells. Eukaryotic cells possess an unusual combination of features. Their information-processing systems resemble those of archaea, but their membranes and energy metabolism are bacteria-like. There is no plausible evolutionary scenario to explain this blend of features. It would require the archaeon host to replace its membranes while retaining all its information-processing genes. Evolutionary biologists know of no instance in which this type of transition took place, nor do they know how it could have occurred.

    Absence of Membrane Bioenergetics in the Host. All prokaryotic organisms rely on their plasma membrane to produce energy. If eukaryotic cells emerged via endosymbiogenesis, then the plasma membranes of eukaryotic cells should possess vestiges of that past function. Yet, the plasma membranes of eukaryotic cells show no traces of this essential biochemical feature.

    Mechanism of Inclusion. The most plausible way for the endosymbiont to be taken up by the host cell is through a process called phagocytosis. But why wouldn’t the engulfed cell be digested by the host? How did the endosymbiont escape destruction? And, if it somehow survived, why don’t mitochondria possess a triple membrane system, with the outermost membrane derived from the phagosome?

    Early Selective Advantage. Once inside the host, why didn’t the endosymbiont simply reproduce, overrunning the host cell? What benefit would it be for the host cell to initially harbor the endosymbiont? Currently, evolutionary biologists don’t have answers to troubling questions such as these.

    The challenges delineated by the Hungarian biologists aren’t the only ones faced by evolutionary models for endosymbiogenesis. As I discuss in a previous article, mitochondrial protein biogenesis poses another difficult problem for the endosymbiont hypothesis.

    The authors of the critical review sum it up this way: “The integration of mitochondria was a major transition, and a hard one. It poses puzzles so complicated that new theories are still generated 100 years since endosymbiogenesis was first proposed by Konstantin Mereschkowsky and 50 years since Lynn Margulis cemented the endosymbiotic origin of mitochondria into evolutionary biology. . . . One would expect that by this time, there is a consensus about the transition, but far from that even the most fundamental points are still debated.”4

    Though evolutionary biologists claim to have life’s history all figured out, in reality they are like most of us—they still haven’t found what they are looking for.

    Endnotes

    1. István Zachar and Eörs Szathmáry, “Breath-Giving Cooperation: Critical Review of Origin of Mitochondria Hypotheses,” Biology Direct 12 (August 14, 2017): 19, doi:10.1186/s13062-017-0190-5.
    2. In previous posts (here, here, and here), I explain the rationale for mitochondrial DNA and the presence of cardiolipin in the inner mitochondrial membrane from a creation model/intelligent design vantage point and, in doing so, demonstrate that the two biochemical features aren’t uniquely explained by the endosymbiont hypothesis.
    3. Zachar and Szathmary, “Breath-Giving Cooperation.”
    4. Zachar and Szathmary, “Breath-Giving Cooperation.”
  • Whale Vocal Displays Make Beautiful Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 26, 2017

    There is the sea, vast and spacious,
    teeming with creatures beyond number—
    living things both large and small.
    There the ships go to and fro,
    and Leviathan, which you formed to frolic there.

    —Psalm 104:25–26

    A few weeks ago, I did something I always wanted to do. I listened to the uncut, live version of the Allman Brothers’ Mountain Jam from beginning to end. Thirty-four minutes in length, this song appears on the band’s live At Fillmore East album. Though the Allman Brothers are among my favorite groups, I had never had the time and motivation to listen to this song in its entirety. I like listening to jam bands, but a thirty-four-minute song . . . In any case, a cross-country flight finally afforded me the opportunity to give my undivided attention to this jam band masterpiece. What an incredible display of musicianship!

    Humpback Whale Acoustical Displays

    Rockers aren’t the only ones who can get a bit carried away when performing a song. Humpback whales are notorious for their jam-band-like acoustical displays. These creatures produce elaborate patterns of sounds that researchers dub songs. The whale songs can last for up to 30 minutes, and some whales will repeatedly perform the same song for up to 24 hours.

    Humpback whale songs display a complex hierarchical organization. The most basic element of the song consists of a single sound, called a unit. These creatures combine units together to form phrases. In turn, they combine phrases to form themes. Finally, they combine themes to form a song, with each theme connected by transitional phrasing.
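
    To make the hierarchy concrete, here is a minimal sketch in Python that models a song as nested lists. The unit labels are hypothetical stand-ins; real analyses classify units acoustically rather than by name.

    ```python
    # A toy representation of the unit -> phrase -> theme -> song hierarchy.
    phrase_a = ["moan", "cry", "moan"]        # a phrase: a sequence of units
    phrase_b = ["chirp", "chirp", "groan"]

    theme_1 = [phrase_a, phrase_a, phrase_b]  # a theme: repeated phrases
    theme_2 = [phrase_b, phrase_b]

    song = [theme_1, theme_2]                 # a song: themes in order

    # Flatten the hierarchy to recover the raw unit sequence of the song.
    units = [unit for theme in song for phrase in theme for unit in phrase]
    print(units)
    ```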

    Researchers aren’t certain why humpback whales engage in these complex acoustical displays. Only the males sing. Perhaps their singing establishes dominance within the group. Most researchers think that the males sing to attract females. (Even for whales, the musicians get the girls.)

    Humpback whales in the same area perform the same song. But, their songs continually evolve. Researchers refer to the complete transformation of one whale song into another as a revolution. As the songs evolve, each member of the group learns the new variant. When one group of humpback whales encounters another group, the two groups exchange songs. This exchange accelerates the song revolution. As a result of this encounter, members of both groups develop and learn a new song.

    How Do Humpback Whales Learn Songs?

    Researchers from the UK and Australia wanted to understand how humpback whales learn new songs.1 Their query is part of a bigger question: How do animals transmit culture—learned information and behaviors—to other members of the group and to the next generation?

    To answer this question, the research team recorded 9,300 acoustical displays over the course of two complete song revolutions for the humpback whales of the South Pacific. Among these recordings, they discovered hybrid songs—vocal displays composed of bits and pieces of both the old and the new songs. They concluded that these hybrid songs captured the transition from one song to the next.

    These song hybrids consisted of phrases and themes from the old and new songs spliced together. The structure of hybrid songs indicated to the research team that humpback whales must learn songs in the same way that humans learn languages, by learning bits and piecing them together.

    Rock on!

    The Creator’s Artistry

    Sometimes, as Christian apologists, we tend to think of God solely as an Engineer who creates with only one specific purpose or function in mind. But the insights researchers have gained into the vocal displays of humpback whales remind me that the God I worship is also a Divine Artist—a God who creates for his enjoyment.

    Scripture supports this idea. Psalm 104:25–26 states that God formed the leviathan (which in this passage seems to refer to whales) on day five to frolic in the vast, spacious seas. In other words, God created the great sea mammals for no other purpose than to play!

    Artistry and engineering are not mutually exclusive. Engineers often design cars and buildings to be both functionally efficient and aesthetically pleasing. But sometimes, as humans, we create for no other reason than for our pleasure and for others to enjoy and be moved by our work.

    Nature’s Beauty and God’s Existence

    The humpback whale exemplifies the remarkable beauty of the natural world. Everywhere we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so great that we are often moved to our very core.

    Watching a humpback whale breach or hearing a recording of its vocal displays is more than sufficient to produce in us that sense of awe and wonder. And yet, our wonder and amazement only grow as we study these creatures using sophisticated scientific techniques.

    For Christians, nature’s beauty prompts us to worship the Creator. But it also points to the reality of God’s existence and supports the biblical view of humanity.

    As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman, he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”2 In other words, the beauty in the world around us signifies the Divine.

    But, as human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws . . . would bring about aesthetic sensibilities in humans.”3 But, if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

    In short, the humpback whales’ acoustical displays, a jam band masterpiece, sing of the Creator’s existence and his artistry.

    Endnotes

    1. Ellen C. Garland et al., “Song Hybridization Events during Revolutionary Song Change Provide Insights into Cultural Transmission in Humpback Whales,” Proceedings of the National Academy of Sciences USA 114 (July 25, 2017): 7822–29, doi:10.1073/pnas.1621072114.
    2. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    3. Swinburne, Existence of God, 190–91.
  • The Human Genome: Copied by Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 19, 2017

    The days my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.

    But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.

    This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers on the question: Is the biochemical activity measured for the human genome merely biochemical noise or is it productive for the cell? Or to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?

    The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?

    The ENCODE Project

    The ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome—reported phase II results in the fall of 2012. To the surprise of many, the ENCODE project reported that around 80% of the human genome displays biochemical activity, and hence function, with the expectation that this percentage should increase with phase III of the project.

    If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).

    Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE results, citing technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints go here, here, and here.)

    Is Biochemical Activity the Same Thing As Function?

    One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.

    When I first learned how many evolutionary biologists interpreted the ENCODE results, I was skeptical. As a biochemist, I am well aware that living systems could not tolerate such high levels of transcriptional noise.

    Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

    Most RNA Transcripts Are Functional

    Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.

    For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece between June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2

    Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4

    Copied by Design

    Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.

    We have come a long way from the early days of the human genome project. When the project was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. And in doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.

    But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.

    Endnotes

    1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
    2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
    3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
    4. RIKEN, “Improved Gene Expression Atlas Shows that Many Human Long Non-Coding RNAs May Actually Be Functional,” ScienceDaily, March 1, 2017, www.sciencedaily.com/releases/2017/03/170301132018.htm.
  • Dollo’s Law at Home with a Creation Model, Reprised*

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 12, 2017

    *This article is an expanded and updated version of an article published in 2011 on reasons.org.

    Published posthumously, Thomas Wolfe’s 1940 novel, You Can’t Go Home Again, considered by many to be his most significant work, explores how brutally unfair the passage of time can be. In the finale, George Webber (the story’s protagonist) concedes, “You can’t go back home” to family, childhood, familiar places, dreams, and old ways of life.

    In other words, there’s an irreversible quality to life. Call it the arrow of time.

    Like Wolfe, most evolutionary biologists believe there is an irreversibility to life’s history and the evolutionary process. In fact, this idea is codified in Dollo’s Law, which states that an organism cannot return, even partially, to a previous evolutionary stage occupied by one of its ancestors. Yet, several recent studies have uncovered what appear to be violations of Dollo’s Law. These violations call into question the sufficiency of the evolutionary paradigm to fully account for life’s history. On the other hand, the return to “ancestral states” finds an explanation in an intelligent design/creation model approach to life’s history.

    Dollo’s Law

    The French-born Belgian paleontologist Louis Dollo formulated the law that bears his name in 1893, before the advent of modern-day genetics, basing it on patterns he unearthed from the fossil record. Today, his idea finds undergirding in the contemporary understanding of genetics and developmental biology.

    Evolutionary biologist Richard Dawkins explains the modern-day conception of Dollo’s Law this way:

    “Dollo’s Law is really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice . . . in either direction. A single mutational step can easily be reversed. But for larger numbers of mutational steps . . . mathematical space of all possible trajectories is so vast that the chance of two trajectories ever arriving at the same point becomes vanishingly small.”1

    If a biological trait is lost during the evolutionary process, then the genes and developmental pathways responsible for that feature will eventually degrade, because they are no longer under selective pressure. In 1994, using mathematical modeling, researchers from Indiana University determined that once a biological trait is lost, the corresponding genes can be “reactivated” with reasonable probability over time scales of five hundred thousand to six million years. But once a time span of ten million years has transpired, unexpressed genes and dormant developmental pathways become permanently lost.2

    In 2000, a scientific team from the University of Oregon offered a complementary perspective on the timescale for evolutionary reversals when they calculated how long it takes for a duplicated gene to lose function.3 (Duplicated genes serve as a proxy for dormant genes rendered useless because the trait they encode has been lost.) According to the evolutionary paradigm, once a gene becomes duplicated, it is no longer under the influence of natural selection. That is, it undergoes neutral evolution, and eventually becomes silenced as mutations accrue. As it turns out, the half-life for this process is approximately four million years. To put it another way, sixteen to twenty-four million years after the duplication event, the duplicated gene will have completely lost its function. Presumably, this result applies to dormant, unexpressed genes rendered unnecessary because the trait they specify is lost.
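
    The sixteen-to-twenty-four-million-year figure is just the half-life compounding: four to six half-lives leave roughly 6% down to under 2% of duplicates functional. A minimal sketch of that arithmetic, assuming simple exponential decay:

    ```python
    half_life_years = 4e6  # Lynch and Conery's estimated half-life

    def fraction_functional(t_years):
        """Fraction of duplicated genes still functional after t_years."""
        return 0.5 ** (t_years / half_life_years)

    for t in (4e6, 16e6, 24e6):
        print(f"after {t / 1e6:4.0f} My: {fraction_functional(t):.1%} retain function")
    # after 16 My about 6% remain; after 24 My under 2%, effectively lost
    ```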

    Both scenarios assume neutral evolution and the accumulation of mutations in a clocklike manner. But what if the loss of gene function is advantageous? Collaborative work by researchers from Harvard University and NYU in 2007 demonstrated that loss of gene function can take place on the order of about one million years if natural selection influences gene loss.4 This research team studied the loss of eyes in the cave fish, the Mexican tetra. Because they live in a dark cave environment, eyes serve no benefit for these creatures. The team discovered that eye reduction offers an advantage for these fish because of the high metabolic cost associated with maintaining eyes. The reduced metabolic cost associated with eye loss accelerates the loss of gene function through the operation of natural selection.

    Based on these three studies, it is reasonable to conclude that once a trait has been lost, the time limit for evolutionary reversals is on the order of about 20 million years.

    The very nature of evolutionary mechanisms and the constraints of genetic mutations make it extremely improbable that evolutionary processes would allow an organism to revert to an ancestral state or to recover a lost biological trait. You can’t go home again.

    Violations of Dollo’s Law

    Despite this expectation, over the course of the last several years, researchers have uncovered several instances in which Dollo’s Law has been violated. A brief description of a handful of these occurrences follows:

    The re-evolution of mandibular teeth in the frog genus Gastrotheca. This group is the only one that includes living frogs with true teeth on the lower jaw. When examined from an evolutionary framework, mandibular teeth were present in ancient frogs and then lost in the ancestor of all living frogs. It also looks as if teeth had been absent in frogs for 225 million years before they reappeared in Gastrotheca.5

    The re-evolution of oviparity in sand boas. When viewed from an evolutionary perspective, it appears as if live birth (viviparity) evolved from egg-laying (oviparity) behaviors in reptiles several times. For example, estimates indicate that this evolutionary transition has occurred in snakes at least thirty times. As a case in point, there are 41 species of boas in the Old and New Worlds that give live birth. Yet, two recently described sand boas, the Arabian sand boa (Eryx jayakari) and the Saharan sand boa (Eryx muelleri), lay eggs. Phylogenetic analysis carried out by researchers from Yale University indicates that egg-laying in these two species of sand boas re-evolved 60 million years after the transition to viviparity took place.6

    The re-evolution of rotating sex combs in Drosophila. Sex combs are modified bristles unique to male fruit flies, used for courtship and mating. Compared to transverse sex combs, rotating sex combs result when several rows of bristles undergo a rotation of ninety degrees. In the ananassae fruit fly group, most of the twenty or so species have simple transverse sex combs, with Drosophila bipectinata and Drosophila parabipectinata being the two exceptions. These fruit fly species possess rotating sex combs. Phylogenetic analysis conducted by investigators from the University of California, Davis indicates that the rotating sex combs in these two species re-evolved twelve million years after being lost.7

    The re-evolution of sexuality in mites belonging to the taxon Crotoniidae. Mites exhibit a wide range of reproductive modes, including parthenogenesis. In fact, this means of reproduction is prominent in the group Oribatida, clustering into two subgroups that display parthenogenesis almost exclusively. However, residing within one of these clusters is the taxon Crotoniidae, which displays sexual reproduction. Based on an evolutionary analysis, a team of German researchers concluded that this group re-evolved the capacity for sexual reproduction.8

    The re-evolution of shell coiling in limpets. From an evolutionary perspective, the coiled shell has been lost in gastropod lineages numerous times, producing a limpet shape, consisting of a cap-shaped shell and a large foot. Evolutionary biologists have long thought that the loss of the coiled shell represents an evolutionary dead end. However, researchers from Venezuela have shown that coiled shell morphology re-evolved, at least one time, in calyptraeids, 20 to 100 million years after its loss.9

    This short list gives just a few recently discovered examples of Dollo’s Law violations. Surveying the scientific literature, evolutionary biologist J. J. Wiens identified an additional eight examples in which Dollo’s Law was violated and determined that in all cases the lost trait reappeared after at least 20 million years had passed and in some instances after 120 million years had transpired.10

    Violation of Dollo’s Law and the Theory of Evolution

    Given that the evolutionary paradigm predicts that re-evolution of traits should not occur after the trait has been lost for twenty million years, the numerous discoveries of Dollo’s Law violations provide a basis for skepticism about the capacity of the evolutionary paradigm to fully account for life’s history. The problem is likely worse than it initially appears. J. J. Wiens points out that Dollo’s Law violations may be more widespread than imagined, but difficult to detect for methodological reasons.11

    In response to this serious problem, evolutionary biologists have offered two ways to account for Dollo’s Law violations.12 The first is to question the validity of the evolutionary analysis that exposes the violations. To put it another way, these scientists claim that the recently identified Dollo’s Law violations are artifacts of the evolutionary analysis, and not real. However, this work-around is unconvincing. The evolutionary biologists who discovered the different examples of Dollo’s Law violations were aware of this complication and took painstaking efforts to ensure the validity of the evolutionary analysis they performed.

    Other evolutionary biologists argue that some genes and developmental modules serve more than one function. So, even though the trait specified by a gene or a developmental module is lost, the gene or the module remains intact because they serve other roles. This retention makes it possible for traits to re-evolve, even after a hundred million years. Though reasonable, this explanation still must be viewed as speculative. Evolutionary biologists have yet to apply the same mathematical rigor to this explanation as they have when estimating the timescale for loss of function in dormant genes. These calculations are critical given the expansive timescales involved in some of the Dollo’s Law violations.

    Considering the nature of evolutionary processes, this response neglects the fact that genes and developmental pathways will continue to evolve under the auspices of natural selection, once a trait is lost. Free from the constraints of the lost function, the genes and developmental modules experience new evolutionary possibilities, previously unavailable to them. The more functional roles a gene or developmental module assumes, the less likely it is that these systems can evolve. Shedding one of their roles increases the likelihood that these genes and developmental pathways will become modified as the evolutionary process explores new space now available to it. In this scenario, it is reasonable to think that natural selection could modify the genes and developmental modules to such an extent that the lost trait would be just as unlikely to re-evolve as it would if gene loss was a consequence of neutral evolution. In fact, the study of eye loss in the Mexican tetra suggests that the modification of these genes and developmental modules could occur at a faster rate if governed by natural selection rather than neutral evolution.

    Violation of Dollo’s Law and the Case for Creation

    While Dollo’s Law violations are problematic for the evolutionary paradigm, the re-evolution—or perhaps, more appropriately, the reappearance—of the same biological traits after their disappearance makes sense from a creation model/intelligent design perspective. The reappearance of biological systems could be understood as the work of the Creator. It is not unusual for engineers to reuse the same design or to revisit a previously used design feature in a new prototype. While there is an irreversibility to the evolutionary process, designers are not constrained in that way and can freely return to old designs.

    Dollo’s Law violations are at home in a creation model, highlighting the value of this approach to understanding life’s history.

    Endnotes

    1. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (New York: W.W. Norton, 2015), 94.
    2. Charles R. Marshall, Elizabeth C. Raff, and Rudolf A. Raff, “Dollo’s Law and the Death and Resurrection of Genes,” Proceedings of the National Academy of Sciences USA 91 (December 6, 1994): 12283–87.
    3. Michael Lynch and John S. Conery, “The Evolutionary Fate and Consequences of Duplicate Genes,” Science 290 (November 10, 2000): 1151–54, doi:10.1126/science.290.5494.1151.
    4. Meredith Protas et al., “Regressive Evolution in the Mexican Cave Tetra, Astyanax mexicanus,” Current Biology 17 (March 6, 2007): 452–54, doi:10.1016/j.cub.2007.01.051.
    5. John J. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs after More than 200 Million Years, and Re-evaluating Dollo’s Law,” Evolution 65 (May 2011): 1283–96, doi:10.1111/j.1558-5646.2011.01221.x.
    6. Vincent J. Lynch and Günter P. Wagner, “Did Egg-Laying Boas Break Dollo’s Law? Phylogenetic Evidence for Reversal to Oviparity in Sand Boas (Eryx: Boidae),” Evolution 64 (January 2010): 207–16, doi:10.1111/j.1558-5646.2009.00790.x.
    7. Thaddeus D. Seher et al., “Genetic Basis of a Violation of Dollo’s Law: Re-Evolution of Rotating Sex Combs in Drosophila bipectinata,” Genetics 192 (December 1, 2012): 1465–75, doi:10.1534/genetics.112.145524.
    8. Katja Domes et al., “Reevolution of Sexuality Breaks Dollo’s Law,” Proceedings of the National Academy of Sciences USA 104 (April 24, 2007): 7139–44, doi:10.1073/pnas.0700034104.
    9. Rachel Collin and Roberto Cipriani, “Dollo’s Law and the Re-Evolution of Shell Coiling,” Proceedings of the Royal Society B 270 (December 22, 2003): 2551–55, doi:10.1098/rspb.2003.2517.
    10. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    11. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    12. Rachel Collin and Maria Pia Miglietta, “Reversing Opinions on Dollo’s Law,” Trends in Ecology and Evolution 23 (November 2008): 602–9, doi:10.1016/j.tree.2008.06.013.
  • Is 75% of the Human Genome Junk DNA?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 29, 2017

    By the rude bridge that arched the flood,
    Their flag to April’s breeze unfurled,
    Here once the embattled farmers stood,
    And fired the shot heard round the world.

    –Ralph Waldo Emerson, Concord Hymn

    Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

    While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists about the amount of functional DNA sequences in the human genome.

    Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

    According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

    If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

    Junk DNA and the Creation vs. Evolution Battle

    Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

    When the draft sequence was first published in 2000, researchers thought only around 2–5% of the human genome consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists claim that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge against intelligent design/creationism.

    But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

    ENCODE Skeptics

    The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

    Calculating the Percentage of Functional DNA in the Human Genome

    Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

    Graur argues that junk DNA functions as a sponge absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

    Historically, the replacement level fertility rates for human beings have been two to three children per couple. Based on Graur’s modeling, this fertility rate requires 85–90% of the human genome to be composed of junk DNA in order to absorb deleterious mutations—ensuring a constant population size, with the upper limit of functional DNA capped at 25%.

    Graur also calculated a fertility rate of 15 children per couple, at minimum, to maintain a constant population size, assuming 80% of the human genome is functional. According to Graur’s calculations, if 100% of the human genome displayed function, the minimum replacement level fertility rate would have to be 24 children per couple.

    He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.
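
    To see the shape of this argument, consider a hedged sketch of the mutational-load logic. It uses a standard population-genetics simplification, not Graur’s exact model, and the two parameter values are illustrative assumptions chosen only to land near his published figures.

    ```python
    import math

    # Illustrative assumptions, not Graur's published parameter values.
    MUTATIONS_PER_BIRTH = 70       # rough count of de novo mutations per offspring
    FRACTION_DELETERIOUS = 0.035   # assumed share of functional-site hits that harm fitness

    def required_fertility(functional_fraction):
        """Children per couple needed to hold population size constant.

        U is the expected number of new deleterious mutations per offspring.
        Under the classic load approximation, exp(-U) is the chance an
        offspring carries none, so a couple needs 2 / exp(-U) births for
        two unburdened offspring on average.
        """
        U = MUTATIONS_PER_BIRTH * functional_fraction * FRACTION_DELETERIOUS
        return 2 * math.exp(U)

    for f in (0.1, 0.25, 0.8, 1.0):
        print(f"functional fraction {f:.0%}: ~{required_fertility(f):.0f} children per couple")
    ```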

    Response to Graur

    So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical. 

    1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

    An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results theoretically don’t make sense in light of the evolutionary paradigm, that is not a reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions. (Go here and here for two recent examples.)

    To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

    While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE project determined that a significant fraction of the human genome is transcribed. They also measured high levels of protein binding.

    ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

    Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

    Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Without minimizing these disruptive interactions, biochemical processes in the cell would grind to a halt. It is reasonable to think that the same considerations would apply to transcription factor binding with DNA.

    2. Graur’s model employs some questionable assumptions.

    Graur uses an unrealistically high rate for deleterious mutations in his calculations.

    Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other regions of the genome that display function—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as transcription factor binding sites, and (3) act as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

    3. The way Graur determines if DNA sequence elements are functional is questionable. 

    Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is only functional if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, these sequences, if they are functional, will resist evolutionary change (due to natural selection) because any alteration would compromise the function of the sequence and endanger the organism. If deleterious, the sequence variations would be eliminated from the population due to the reduced survivability and reproductive success of organisms possessing those variants. Hence, functional sequences are those under the effects of selection.

    In contrast, the ENCODE project employed a causal definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

    The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

    • transcription,
    • binding of transcription factors to DNA,
    • histone binding to DNA,
    • DNA binding by modified histones,
    • DNA methylation, and
    • three-dimensional interactions between enhancer sequences and genes.

    In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequence must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.
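
    The causal-role logic can be made explicit with a toy sketch. The assay names follow the list above; the function itself is hypothetical and stands in for ENCODE’s far more involved analysis pipeline.

    ```python
    # Assays the ENCODE project used to detect biochemical activity.
    ASSAYED_ACTIVITIES = {
        "transcription",
        "transcription factor binding",
        "histone binding",
        "modified histone binding",
        "DNA methylation",
        "enhancer-gene interaction",
    }

    def is_functional_causal_role(observed):
        """Causal-role call: any assayed activity at a sequence counts as function."""
        return bool(ASSAYED_ACTIVITIES & set(observed))

    print(is_functional_causal_role({"transcription"}))         # True
    print(is_functional_causal_role({"no detected activity"}))  # False
    ```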

    So why does Graur insist on a selected-effect definition of function? For no other reason than a causal definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

    As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

    Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

    What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and increases the replacement level fertility rate.

    4. Buffering against deleterious mutations is a function.

    As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertion events in the genome. If insertion events are random, then the offending DNA is much more likely to insert itself into “junk DNA” regions instead of coding and regulatory sequences, thus protecting information-harboring regions of the genome.
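
    The arithmetic behind the buffering idea is straightforward: if insertions land at random, the chance that a given insertion hits functional DNA equals the functional fraction of the genome. A quick sketch with illustrative numbers:

    ```python
    def p_any_harmful(functional_fraction, n_insertions):
        """Probability that at least one random insertion lands in functional DNA."""
        return 1 - (1 - functional_fraction) ** n_insertions

    for f in (0.05, 0.25, 0.80):
        print(f"functional fraction {f:.0%}: "
              f"{p_any_harmful(f, n_insertions=5):.0%} chance of a harmful hit in 5 insertions")
    ```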

    If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be junk.

    In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

    Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

    Resources to Go Deeper

    Endnotes

    1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.
  • DNA Replication Winds Up the Case for Intelligent Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 08, 2017

    One of my classmates and friends in high school was a kid we nicknamed “Radar.” He was a cool kid who had special needs. He was mentally challenged. He was also funny and as good-hearted as they come, never causing any real problems—other than playing hooky from school, for days on end. Radar hated going to school.

    When he eventually showed up, he would be sent to the principal’s office to explain his unexcused absences to Mr. Reynolds. And each time, Radar would offer the same excuse: his grandmother died. But Mr. Reynolds didn’t buy it—for obvious reasons. It didn’t require much investigation on the principal’s part to know that Radar was lying.

    Skeptics have something in common with my friend Radar. They use the same tired excuse when presented with compelling evidence for design from biochemistry. Inevitably, they dismiss the case for a Creator by pointing out all the “flawed” designs in biochemical systems. But this excuse never sticks. Upon further investigation, claimed instances of bad designs turn out to be elegant, in virtually every instance, as recent work by scientists from UC Davis illustrates.

    These researchers accomplished an important scientific milestone by using single molecule techniques to observe the replication of a single molecule of DNA.1 Their unexpected insights have bearing on how we understand this key biochemical operation. The work also has important implications for the case for biochemical design.

    For those familiar with DNA’s structure and replication process, you can skip the next two sections. But for those of you who are not, a little background information is necessary to appreciate the research team’s findings and their relevance to the creation-evolution debate.

    DNA’s Structure

    DNA consists of two molecular chains (called “polynucleotides”) aligned in an antiparallel fashion. (The two strands are arranged parallel to one another with the starting point of one strand of the polynucleotide duplex located next to the ending point of the other strand and vice versa.) The paired molecular chains twist around each other forming the well-known DNA double helix. The cell’s machinery generates the polynucleotide chains using four different nucleotides: adenosine, guanosine, cytidine, and thymidine, abbreviated as A, G, C, and T, respectively.

    A special relationship exists between the nucleotide sequences of the two DNA strands. Biochemists say the DNA sequences of the two strands are complementary. When the DNA strands align, the adenine (A) side chains of one strand always pair with thymine (T) side chains from the other strand. Likewise, the guanine (G) side chains from one DNA strand always pair with cytosine (C) side chains from the other strand. Biochemists refer to these relationships as “base-pairing rules.” Consequently, if biochemists know the sequence of one DNA strand, they can readily determine the sequence of the other strand. Base-pairing plays a critical role in DNA replication.

    Image 1: DNA’s Structure
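
    The base-pairing rules translate directly into code. Here is a minimal sketch that derives one strand from the other; reversing the result reflects the antiparallel arrangement of the two strands.

    ```python
    PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}  # base-pairing rules

    def complementary_strand(strand):
        """Return the antiparallel complement of a DNA sequence."""
        return "".join(PAIRING[base] for base in reversed(strand))

    print(complementary_strand("ATGCCGTA"))  # prints TACGGCAT
    ```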

    DNA Replication

    Biochemists refer to DNA replication as a “template-directed, semiconservative process.” By “template-directed,” biochemists mean that the nucleotide sequences of the “parent” DNA molecule function as a template, directing the assembly of the DNA strands of the two “daughter” molecules using the base-pairing rules. By “semiconservative,” biochemists mean that after replication, each daughter DNA molecule contains one newly formed DNA strand and one strand from the parent molecule.

    Image 2: Semiconservative DNA Replication

    Conceptually, template-directed, semiconservative DNA replication entails the separation of the parent DNA double helix into two single strands. By using the base-pairing rules, each strand serves as a template for the cell’s machinery to use when it forms a new DNA strand with a nucleotide sequence complementary to the parent strand. Because each strand of the parent DNA molecule directs the production of a new DNA strand, two daughter molecules result. Each one possesses an original strand from the parent molecule and a newly formed DNA strand produced by a template-directed synthetic process.
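
    A cartoon of the semiconservative scheme in code: each parent strand templates a new complement, so each daughter duplex keeps exactly one parent strand. The eight-base sequence is arbitrary.

    ```python
    PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand):
        return "".join(PAIRING[b] for b in reversed(strand))

    def replicate(duplex):
        """Semiconservative copy: each daughter keeps one parent strand."""
        top, bottom = duplex
        return (top, complement(top)), (complement(bottom), bottom)

    parent = ("ATGCCGTA", complement("ATGCCGTA"))
    for daughter in replicate(parent):
        print(daughter)  # each daughter pairs one old strand with one new one
    ```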

    DNA replication begins at specific sites along the DNA double helix, called “replication origins.” Typically, prokaryotic cells have only a single origin of replication. More complex eukaryotic cells have multiple origins of replication.

    The DNA double helix unwinds locally at the origin of replication to produce what biochemists call a “replication bubble.” During the course of replication, the bubble expands in both directions from the origin. Once the individual strands of the DNA double helix unwind and are exposed within the replication bubble, they are available to direct the production of the daughter strand. The site where the DNA double helix continuously unwinds is called the “replication fork.” Because DNA replication proceeds in both directions away from the origin, there are two replication forks within each bubble.

    Image 3: DNA Replication Bubble

    DNA replication can only proceed in a single direction, from the top of the DNA strand to the bottom. Because the strands that form the DNA double helix align in an antiparallel fashion with the top of one strand juxtaposed with the bottom of the other strand, only one strand at each replication fork has the proper orientation (bottom-to-top) to direct the assembly of a new strand, in the top-to-bottom direction. For this strand—referred to as the “leading strand”—DNA replication proceeds rapidly and continuously in the direction of the advancing replication fork.

    DNA replication cannot proceed along the strand with the top-to-bottom orientation until the replication bubble has expanded enough to expose a sizable stretch of DNA. When this happens, DNA replication moves away from the advancing replication fork. DNA replication can only proceed a short distance for the top-to-bottom-oriented strand before the replication process has to stop and wait for more of the parent DNA strand to be exposed. When a sufficient length of the parent DNA template is exposed a second time, DNA replication can proceed again, but only briefly before it has to stop again and wait for more DNA to be exposed. The process of discontinuous DNA replication takes place repeatedly until the entire strand is replicated. Each time DNA replication starts and stops, a small fragment of DNA is produced.

    Biochemists refer to these pieces of DNA (that will eventually compose the daughter strand) as “Okazaki fragments”—after the biochemist who discovered them. Biochemists call the strand produced discontinuously the “lagging strand” because DNA replication for this strand lags behind the more rapidly produced leading strand. One additional point: the leading strand at one replication fork is the lagging strand at the other replication fork since the replication forks at the two ends of the replication bubble advance in opposite directions.
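
    The stop-and-wait character of lagging-strand synthesis can be caricatured in a few lines: copy the template only as each new stretch is exposed, collecting the short products that a ligase would later join. The four-base fragment length is arbitrary; real Okazaki fragments run to hundreds or thousands of nucleotides.

    ```python
    PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def okazaki_fragments(template, exposed_chunk=4):
        """Copy the template chunk by chunk, mimicking discontinuous synthesis."""
        fragments = []
        for start in range(0, len(template), exposed_chunk):
            chunk = template[start:start + exposed_chunk]
            fragments.append("".join(PAIRING[b] for b in chunk))
        return fragments

    fragments = okazaki_fragments("ATGCCGTAATCG")
    print(fragments)           # the individual Okazaki fragments
    print("".join(fragments))  # the lagging strand after "ligation"
    ```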

    An ensemble of proteins is needed to carry out DNA replication. Once the origin recognition complex (which consists of several different proteins) identifies the replication origin, a protein called “helicase” unwinds the DNA double helix to form the replication fork.

    Image 4: DNA Replication Proteins

    Once the replication fork is established and stabilized, DNA replication can begin. Before the newly formed daughter strands can be produced, a small RNA primer must be produced. The protein that synthesizes new DNA by reading the parent DNA template strand—DNA polymerase—can’t start production from scratch. It must be primed. A massive protein complex, called the “primosome,” which consists of over 15 different proteins, produces the RNA primer needed by DNA polymerase.

    Once primed, DNA polymerase will continuously produce DNA along the leading strand. However, for the lagging strand, DNA polymerase can only generate DNA in spurts to produce Okazaki fragments. Each time DNA polymerase generates an Okazaki fragment, the primosome complex must produce a new RNA primer.

    Once DNA replication is completed, the RNA primers are removed from the continuous DNA of the leading strand and from the Okazaki fragments that make up the lagging strand. A protein with 5’-3’ exonuclease activity removes the RNA primers. A different DNA polymerase fills in the gaps created by the removal of the RNA primers. Finally, a protein called a “ligase” connects all the Okazaki fragments together to form a continuous piece of DNA out of the lagging strand.

    Are Leading and Lagging Strand Polymerases Coordinated?

    Biochemists had long assumed that the activities of the leading and lagging strand DNA polymerase enzymes were coordinated. If not, then DNA replication of one strand would get too far ahead of the other, increasing the likelihood of mutations.

    As it turns out, the research team from UC Davis discovered that the activities of the two polymerases are not coordinated. Instead, the leading and lagging strand DNA polymerase enzymes replicate DNA autonomously. To the researchers’ surprise, they learned that the leading strand DNA polymerase replicated DNA in bursts, suddenly stopping and starting. And when it did replicate DNA, the rate of production varied by a factor of ten. On the other hand, the researchers discovered that the rate of DNA replication on the lagging strand depended on the rate of RNA primer formation.

    The researchers point out that if not for single molecule techniques—in which replication is characterized for individual DNA molecules—the autonomous behavior of leading and lagging strand DNA polymerases would not have been detected. Up to this point, biochemists have studied the replication process using a relatively large number of DNA molecules. These samples yield average replication rates for leading and lagging strand replication, giving the sense that replication of both strands is coordinated.
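
    A small simulation illustrates why ensemble measurements missed the bursts: average enough stop-and-go single-molecule traces and the mean rate looks smooth and steady. The burst probability and rate range below are invented for illustration.

    ```python
    import random

    random.seed(1)

    def single_molecule_trace(steps=20):
        """One polymerase: stalled (rate 0) or bursting at a rate that
        varies roughly tenfold, echoing the single-molecule observations."""
        return [0 if random.random() < 0.4 else random.uniform(100, 1000)
                for _ in range(steps)]

    traces = [single_molecule_trace() for _ in range(10_000)]

    # Averaging across molecules at each time step smooths the bursts away.
    ensemble_mean = [sum(rates) / len(rates) for rates in zip(*traces)]
    print(f"ensemble mean varies only from {min(ensemble_mean):.0f} "
          f"to {max(ensemble_mean):.0f}")
    ```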

    According to the researchers, this discovery is a “real paradigm shift, and undermines a great deal of what’s in the textbooks.”2 Because the DNA polymerase activity is not coordinated but autonomous, they conclude that the DNA replication process is a flawed design, driven by stochastic (random) events. Also, the lack of coordination between the leading and lagging strands means that leading strand replication can get ahead of the lagging strand, yielding long stretches of vulnerable single-stranded DNA.

    Diminished Design or Displaced Design?

    Even though this latest insight appears to undermine the elegance of the DNA replication process, other observations made by the UC Davis research team indicate that the evidence for design isn’t diminished, just displaced.

    These investigators discovered that the activity of helicase—the enzyme that unwinds the double helix at the replication fork—somehow senses the activity of the DNA polymerase on the leading strand. When the DNA polymerase stalls, the activity of the helicase slows down by a factor of five until the DNA polymerase catches up. The researchers believe that another protein (called the “tau protein”) mediates the interaction between the helicase and DNA polymerase molecules. In other words, the interaction between DNA polymerase and the helicase compensates for the stochastic behavior of the leading strand polymerase, pointing to a well-designed process.

    As already noted, the research team also learned that the rate of lagging strand replication depends on primer production. They determined that the rate of primer production exceeds the rate of DNA replication on the leading strand. This fortuitous coincidence ensures that as soon as enough of the bubble opens for lagging strand replication to continue, the primase can immediately lay down the RNA primer, restarting the process. It turns out that the rate of primer production is controlled by the primosome concentration in the cell, with primer production increasing as the number of primosome copies increases. The primosome concentration appears to be fine-tuned. If the concentration of this protein complex is too large, the replication process becomes “gummed up”; if too small, the disparity between leading and lagging strand replication becomes too great, exposing single-stranded DNA. Again, the fine-tuning of primosome concentration highlights the design of this cellular operation.

    It is remarkable how two people can see things so differently. For scientists influenced by the evolutionary paradigm, the tendency is to dismiss evidence for design and, instead of seeing elegance, become conditioned to see flaws. Though DNA replication takes place in a haphazard manner, other features of the replication process appear to be engineered to compensate for the stochastic behavior of the DNA polymerases and, in the process, elevate the evidence for design.

    And, that’s no lie.

    Endnotes

    1. James E. Graham et al., “Independent and Stochastic Action of DNA Polymerases in the Replisome,” Cell 169 (June 2017): 1201–13, doi:10.1016/j.cell.2017.05.041.
    2. Bec Crew, “DNA Replication Has Been Filmed for the First Time, and It’s Not What We Expected,” ScienceAlert, June 19, 2017, https://sciencealert.com/dna-replication-has-been-filmed-for-the-first-time-and-it-s-stranger-than-we-thought.
  • How Are Sea Slugs a Failed Prediction of the Evolutionary Paradigm?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 18, 2017

    Test them all; hold on to what is good.

    –1 Thessalonians 5:21

    What is your definition of success?

    The answer to this question most likely depends on the person you ask. People view success differently.

    However, subjectivity is not the case when it comes to scientific theories. Success in science is based on a singular criterion: how well does the theory perform at predicting future scientific outcomes?

    Scientific predictions arise as the logical entailments of the theory at hand. In turn, scientists use these predictions to assess the theory’s validity. If experimental results and observations fulfill the theory’s predictions, then scientists consider it sound. If observations and results don’t match the predictions, then scientists are forced to revise, or even discard, the theory under evaluation. In short, successful scientific theories have explanatory and predictive power.

    It is for this reason that many biologists view the theory of evolution as a valid paradigm for interpreting the origin, history, and design of life, and why many regard it as biology’s grand unifying theory.

    However, the evolutionary paradigm has yet to adequately explain key events in life’s history, such as (1) the origin of life, (2) the origin of body plans, (3) the origin of sexual reproduction, (4) the trigger for the sociocultural big bang and human exceptionalism, and (5) the origin of consciousness. The evolutionary paradigm also suffers from failed predictions, as recent work by a team of neuroscientists from Georgia State University attests.1

    Swimming Sea Slugs

    The Georgia State University researchers characterized the neural circuits involved in the swimming behavior of a group of sea slugs called the nudibranchs. These creatures serve as an ideal model system for studying neural circuits because their neural systems are made up of relatively large neurons. The sea slugs’ neural circuits are simple and straightforward to map. On top of that, the sea slugs’ neural circuits regulate simple behaviors. These properties make it easy to characterize and then manipulate the neural circuitry of these creatures.

    Biologists have identified about 2,000 species of nudibranchs. Of this number, about 50 swim with a characteristic left-right motion.

    The Georgia State scientists investigated the neural mechanism associated with the left-right swimming behavior of two sea slug species: the giant nudibranch and the hooded nudibranch. From an evolutionary perspective, these two sea slugs share an evolutionary ancestor. In fact, all 50 left-right swimming sea slugs belong to the same branch of the evolutionary tree. (In technical terms, they are monophyletic.)

    Predictions of the Evolutionary Model

    Given that the left-right swimming nudibranchs are monophyletic, the evolutionary model predicts that the morphology, genetics, and behavior originated in the common ancestor of this group. And, given that the swimming behavior of this group is shared among all members (homologous), the expectation is that the neurons and neural circuitry that control this behavior should also be shared among all members.

    The Georgia State scientists say, “. . . Behavioral morphology is often assumed to involve similarity in underlying neuronal mechanisms. . . . Behaviors that are homologous and similar in form would naturally be assumed to be produced by similar neural mechanisms.”2

    Sea Slug Neural Circuitry

    Consistent with the predictions of the evolutionary paradigm, the researchers discovered that the neurons of the giant and hooded nudibranchs were homologous. But, to their surprise, they discovered that the underlying neural mechanisms that controlled the swimming behavior of the two sea slugs were distinct.

    In fact, using a technique called dynamic clamping, the Georgia State scientists could rewire the neural circuitry of one sea slug species to match that of the other while still producing the same swimming behavior.
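    For readers unfamiliar with the technique: a dynamic clamp injects a computed current into a living neuron in real time, mimicking a synaptic connection that is not physically there (or canceling one that is). The core feedback loop is simple enough to sketch in a few lines; the conductance and reversal-potential values below are placeholders, not parameters from the study.

```python
# Minimal sketch of the dynamic-clamp feedback loop (placeholder values).
# Each time step, the measured membrane voltage is used to compute an
# artificial synaptic current, I_syn = g_syn * (V_m - E_rev), which is
# injected back into the neuron to mimic a synapse in software.

G_SYN = 5.0    # artificial synaptic conductance, nS (placeholder)
E_REV = -70.0  # reversal potential, mV (inhibitory synapse, placeholder)

def dynamic_clamp_step(v_membrane_mv: float) -> float:
    """Return the current (pA) to inject for the measured voltage."""
    return G_SYN * (v_membrane_mv - E_REV)

# As the cell depolarizes, the injected current grows, just as the current
# through a real inhibitory synapse would.
for v in (-70.0, -60.0, -50.0, -40.0):
    print(f"V_m = {v:6.1f} mV -> inject {dynamic_clamp_step(v):6.1f} pA")
```

    By adding and subtracting such computed currents, the researchers could, in effect, impose one species’ circuit configuration on the other’s neurons.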

    Masking the Failure of the Evolutionary Paradigm

    The unexpected discovery of distinct neural circuitry in the giant and hooded nudibranchs stands as a failed prediction of the evolutionary paradigm. So how do the Georgia State scientists respond to this discovery?

    First, they point out that their findings support the notion of neural plasticity, with the same neurons supporting multiple neural circuits and varying neural circuits producing the same behavior. But, neural plasticity doesn’t fully account for this finding. If the two sea slugs weren’t part of the same branch on the evolutionary tree, one could argue that the difference in neural circuits represents an example of convergence.

    The researchers suggest that perhaps the divergence of the neural circuitry from the mechanism displayed by the shared ancestor of the nudibranchs is due to a phenomenon they dub neural drift. This doesn’t seem plausible given the importance of the swimming behavior for sea slug survival. Altering the neural circuitry would alter this behavior, compromising the sea slug’s fitness.

    In fact, there is no independent evidence whatsoever for neural drift. It is a made-up, ad hoc phenomenon that creates a diversion, masking the fact that the results from this study represent a failed prediction of the evolutionary paradigm.

    While this failed prediction is not sufficient to overthrow the evolutionary paradigm, it does justify skepticism about the capacity of evolutionary theory—as currently conceived—to fully explain life’s design and diversity.

    Resources

    Endnotes

    1. Akira Sakurai and Paul S. Katz, “Artificial Synaptic Rewiring Demonstrates that Distinct Neural Circuit Configurations Underlie Homologous Behaviors,” Current Biology 27 (June 19, 2017): 1–14, doi:10.1016/j.cub.2017.05.016.
    2. Ibid.
  • Why Did God Create the Thai Liver Fluke?

    by Fazale Rana | Jul 11, 2017

    The Thai liver fluke causes quite a bit of human misery. This parasite infects fish living in the rivers of Southeast Asia, which, in turn, infect the people who eat them.

    Raw and fermented fish make up a big part of the diet of people in Southeast Asia. For example, in Thailand, a popular culinary item is called sour fish. This “delicacy” is prepared by mixing raw fish with garlic, salt, seasoning, and rice. The mixture is rolled into a ball, placed in a plastic bag, and left to ferment in the hot sun for several days.

    The fermentation process isn’t sufficient to kill the cysts of the Thai liver fluke embedded in the muscles of the infected fish. So, when people eat sour fish (or raw fish), they risk ingesting the parasite.

    The Thai Liver Fluke Life Cycle

    After ingestion, the cysts open in the digestive tract of the human host, releasing the fluke. This parasite travels through the bile duct, making its way into the liver, where it takes up residence.

    Once in the liver, the fluke lays eggs that are carried into the host’s digestive tract by bile secreted by the liver. In turn, the eggs are released into the environment with human excrement. After being ingested by snails, the eggs hatch, producing larvae that escape from the snail. The free-living larvae infect fish, forming cysts in their skin, fins, and muscle.

    Image: Life cycle of Opisthorchis viverrini. Image source: Wikipedia

    The Thai liver fluke is a master of disguise, evading the immune system of the human host and living for decades in the liver. Unless the infestation is extreme, people infected with the fluke are completely unaware that they harbor this parasite.

    Estimates indicate that 10% of the Thai population is infected with the Thai liver fluke. But in the villages of northern Thailand, where the consumption of raw and fermented fish is higher than in other areas of the country, 45% of the people carry the parasite.

    The Thai Liver Fluke and Cancer

    The Thai liver fluke can live for several decades in the host’s liver without much consequence. But eventually, the burden of the infection catches up with the human host, leading to an aggressive and deadly form of liver cancer that claims about 26,000 Thai lives each year. Once the cancer is detected, most patients die within a year.

    Biomedical researchers think the liver cancer is triggered by the Thai liver fluke, which munches on the host’s liver. Interestingly, the fluke’s saliva contains a protein (called granulin-like protein) that stimulates cell growth and division. These processes help the liver to repair itself after being damaged by the fluke. In effect, the parasite eats part of the liver, supercharges the liver to repair itself, and then eats the new tissue, repeating the cycle for decades. The repeated wounding and repairing of the liver tissue accompanied by rapid cell division eventually leads to the onset of cancer.

    The Thai Liver Fluke and God’s Goodness

    The problems caused by the Thai liver fluke are not limited to the biomedical arena. This parasite causes theological issues, as well. Why would a good God create the Thai liver fluke? Questions like this one fall under the problem of evil.

    Philosophers and theologians recognize two kinds of evil: moral and natural. Moral evil stems from human action (or inaction in some cases). Natural evil proceeds from nature itself—earthquakes, tornadoes, floods, diseases, and the like.

    Natural evil seems to present a greater theological challenge than moral evil does. Skeptics could agree that God can be excused for the free-will actions of human beings who violate his standard of goodness, but they reason that natural disasters and disease don’t result from human activity. Therefore, this type of “evil” must be attributed solely to God.

    Are Some Forms of Natural Evil Actually Moral Evil?

    As I have previously argued, many times natural evil is moral evil in disguise. (See the Resources section below.) In other words, the suffering humans experience stems from human moral failing and poor judgment, not the actual natural phenomenon.

    This most certainly seems to be the case when it comes to the Thai liver fluke. Liver cancer caused by parasite infestations would plummet if people stopped eating raw fish and developed better public sanitation systems and practices.

    So, is it God’s fault that humans become infected with the Thai liver fluke? Or is it because the people of northern Thailand suffer from poverty and a lack of sanitation—ultimately, conditions caused by human moral failing? Is it God’s fault that people of Southeast Asia develop liver cancer from fluke infestations, when they eat raw and fermented fish instead of properly cooking the meat, knowing the adverse health effects?

    Parasites Play a Critical Role in Ecological Systems

    Still, the question remains: Why would God create parasites at all?

    As it turns out, parasites play an indispensable role in ecosystem health.1 Though these creatures make minor contributions to the biomass of ecosystems, they have a significant effect on several ecosystem parameters, including biodiversity. In fact, some ecologists believe that an ecosystem becomes more robust and functions better as parasite diversity increases.

    Considering this insight, a rationale exists as to why God would create the Thai liver fluke to be a member of the ecosystems of the rivers in Southeast Asia. This parasite infects any carnivore (dogs, cats, rats, and pigs) that eats fish from these rivers, not just humans. Undoubtedly, infecting these carnivores influences a variety of ecosystem processes, such as species competition and energy flow through the ecosystem. The harm this parasite causes humans is an unintended consequence of imprudent human activities—not the inherent design of nature.

    Parasites and God’s Providence

    Remarkably, recent work by scientists from the Australian Institute of Tropical Health and Medicine (AITHM) indicates that the suffering caused by the Thai liver fluke may fulfill a higher purpose: a greater good.

    These researchers believe that the Thai liver fluke may hold the key to effectively treating slow- and non-healing wounds caused by diabetes.2

    High blood glucose levels associated with diabetes compromise the circulatory and immune systems. This compromised condition inhibits wound repair due to restricted blood flow to the site of the injury. It also makes the wound much more prone to infection.

    The AITHM researchers realized that the granulin-like protein produced by the Thai liver fluke could be used to promote healing of chronic wounds because it promotes rapid cell proliferation in the liver. If incorporated into a cream, this protein could be topically applied to the wounds, stimulating wound repair. This treatment would dramatically reduce the cost of treating chronic wounds and significantly improve the treatment outcomes.

    Ironically, the properties of the granulin-like protein that make this biomolecule so insidious are exactly the properties that make it useful to treat diabetics’ wounds. To put it another way, the Thai liver fluke is beneficial to humanity.

    The idea that God designed nature to be useful for humanity is a facet of divine providence. In Christian theology, this idea refers to God’s continual role in (1) preserving his creation; (2) ensuring that everything happens; and (3) guiding the universe. The concept of divine providence also posits that when God created the world he built into the creation everything humans (and other living organisms) would need. Accordingly, every good thing that people possess has been provided and preserved by God, either directly or indirectly.

    On this basis, as counterintuitive as this may initially seem, it could be argued that as part of his providence, God created the Thai liver fluke for humanity’s use and benefit.

    And we know that in all things God works for the good of those who love him, who have been called according to his purpose.

    –Romans 8:28

    Resources

    Endnotes

    1. Peter J. Hudson, Andrew P. Dobson, and Kevin D. Lafferty, “Is a Healthy Ecosystem One that Is Rich in Parasites?” Trends in Ecology and Evolution 21 (July 2006): 381–85, doi:10.1016/j.tree.2006.04.007.
    2. Paramjit S. Bansal et al., “Development of a Potent Wound Healing Agent Based on the Liver Fluke Granulin Structural Fold,” Journal of Medicinal Chemistry 60 (April 20, 2017): 4258–66, doi:10.1021/acs.jmedchem.7b00047.
  • Can Intelligent Design Be Part of the Construct of Science?

    by Fazale Rana | Jun 27, 2017

    “If this result stands up to scrutiny, it does indeed change everything we thought we knew about the earliest human occupation of the Americas.”1

    This was the response of Christopher Stringer—a highly regarded paleoanthropologist at the Natural History Museum in London—to the recent scientific claim that Neanderthals made their way to the Americas 100,000 years before the first modern humans.2

    At this point, many anthropologists have expressed skepticism about this claim, because it requires them to abandon long-held ideas about the way the Americas were populated by modern humans. As Stringer cautions, “Many of us will want to see supporting evidence of this ancient occupation from other sites before we abandon the conventional model.”3

    Yet, the archaeologists making the claim have amassed an impressive cache of evidence that points to Neanderthal occupation of North America.

    As Stringer points out, this work has radical implications for anthropology. But, in my view, the importance of the work extends beyond questions relating to human migrations around the world. It demonstrates that intelligent design/creation models have a legitimate place in science.

    The Case for Neanderthal Occupation of North America

    In the early 1990s, road construction crews working near San Diego, CA, uncovered the remains of a single mastodon. Though the site was excavated from 1992 to 1993, scientists were unable to date the remains. Both radiocarbon and luminescence dating techniques failed.

    Recently, researchers turned failure into success, age-dating the site to be about 130,000 years old using uranium-series disequilibrium methods. This result shocked them because analysis at the site indicated that the mastodon remains were deliberately processed by hominids, most likely Neanderthals.

    The researchers discovered that the mastodon bones displayed spiral fracture patterns that looked as if a creature, such as a Neanderthal, struck the bone with a rock—most likely to extract nutrient-rich marrow from the bones. The team also found rocks (called cobbles) among the mastodon bones; these stones bear markings consistent with having been used to strike bones and other rocks.

    To confirm this scenario, the archaeologists took elephant and cow bones and broke them open with a hammerstone. In doing so, they produced the same type of spiral fracture patterns in the bones and the same type of markings on the hammerstone as those found at the archaeological site. The researchers also ruled out other possible explanations, such as wild animals creating the fracture patterns on the bones while scavenging the mastodon carcass.

    Despite this compelling evidence, some anthropologists remain skeptical that Neanderthals—or any other hominid—modified the mastodon remains. Why? Not only does this claim fly in the face of the conventional explanation for the populating of the Americas by humans, but the sophistication of the tool kit does not match that of the tool kits produced by Neanderthals 130,000 years ago at archaeological sites in Europe and Asia.

    So, did Neanderthals make their way to the Americas 100,000 years before modern humans? An interesting debate will most certainly ensue in the years to come.

    But, this work does make one thing clear: intelligent design/creation is a legitimate part of the construct of science.

    A Common Skeptical Response to the Case for a Creator

    Based on my experience, when confronted with scientific evidence for a Creator, skeptics will often summarily dismiss the argument by asserting that intelligent design/creation isn’t science and, therefore, it is not legitimate to draw the conclusion that a Creator exists from scientific advances.

    Undergirding this objection is the conviction that science is the best, and perhaps the only, way to discover truth. By dismissing the evidence for God’s existence—insisting that it is nonscientific—they hope to undermine the argument, thereby sidestepping the case for a Creator.

    There are several ways to respond to this objection. One way is to highlight the fact that intelligent design is part of the construct of science. This response is not motivated by a desire to reform science, but by a desire to move the scientific evidence into a category that forces skeptics to interact with it properly.

    The Case for a Creator’s Role in the Origin of Life

    It is interesting to me that the line of reasoning the archaeologists use to establish the presence of Neanderthals in North America parallels the line of reasoning I use to make the case that the origin of life reflects the product of a Creator’s handiwork, as presented in my three books: The Cell’s Design, Origins of Life, and Creating Life in the Lab. There are three facets to this line of reasoning.

    The Appearance of Design

    The archaeologists argued that: (1) the arrangement of the bones and the cobble and (2) the markings on the cobble and the fracture patterns on the bones appear to result from the intentional activity of a hominid. To put it another way, the archaeological site shows the appearance of design.

    In The Cell’s Design I argue that the analogies between biochemical systems and human designs evince the work of a Mind, serving to revitalize Paley’s Watchmaker argument for God’s existence. In other words, biochemical systems display the appearance of design.

    Failure to Explain the Evidence through Natural Processes

    The archaeologists explored and rejected alternative explanations—such as scavenging by wild animals—for the arrangement, fracture patterns, and markings of the bones and stones.

    In Origins of Life, Hugh Ross (my coauthor) and I explore and demonstrate the deficiency of natural-process, mechanistic explanations (such as replicator-first, metabolism-first, and membrane-first scenarios) for the origin of life and, hence, biological systems.

    Reproduction of the Design Patterns

    The archaeologists confirmed—by striking elephant and cow bones with a rock—that the markings on the cobble and the fracture patterns on the bone were made by a hominid. That is, through experimental work in the laboratory, they demonstrated that the design features were, indeed, produced by intelligent agency.

    In Creating Life in the Lab, I describe how work in synthetic biology and prebiotic chemistry empirically demonstrates the necessary role intelligent agency plays in transforming chemicals into living cells. In other words, when scientists go into the lab and create protocells, they are demonstrating that the design of biochemical systems is intelligent design.

    So, is it legitimate for skeptics to reject the scientific case for a Creator by dismissing it as nonscientific?

    Work in archaeology illustrates that intelligent design is an integral part of science, and it highlights the fact that the same scientific reasoning used to interpret the mastodon remains discovered near San Diego, likewise, undergirds the case for a Creator.

    Resources

    Endnotes

    1. Colin Barras, “First Americans May Have Been Neanderthals 130,000 Years Ago,” New Scientist, April 26, 2017, https://www.newscientist.com/article/2129042-first-americans-may-have-been-neanderthals-130000-years-ago/.
    2. Steven R. Holen et al., “A 130,000-Year-Old Archaeological Site in Southern California, USA,” Nature 544 (April 27, 2017): 479–83, doi:10.1038/nature22065.
    3. Barras, “First Americans.”
  • DNA Wired for Design

    by Fazale Rana | Jun 20, 2017

    Though this be madness, yet there is method in’t.

    Hamlet (Act II, scene II)

    Was Hamlet crazy? Or was he feigning madness so he could investigate the murder of his father without raising suspicion?

    In my senior year of high school, Mrs. Hodges assigned our class these questions as the topic for the first essay we wrote for honors English. I made the case that Hamlet was perfectly sane. Indeed, there was method to his madness.

    I wound up with a B- on the assignment. Mrs. Hodges wasn’t impressed with my reasoning, writing on my paper in red ink, “You aren’t qualified to comment on Hamlet’s sanity. You are not a psychologist!” When she returned my paper, I muttered, “Of course, I’m not a psychologist. I’m a high school student. You were the one who asked me to speculate on his sanity. And then when I do . . .”

    I was reminded of this high school memory a few days ago while contemplating the structure and function of DNA. This biomolecule’s design is “crazy.” Yet every detail of DNA’s structure is crucial for the role it plays as an information storage system in the cell. You might say there is biochemical method to DNA’s madness when it comes to its properties. One of DNA’s “insane” features is its capacity to conduct electrical current through the interior of the double helix.

    DNA Wires

    Caltech chemist Jacqueline Barton discovered this phenomenon in the early 1990s. Barton and her collaborators attached different chemical groups to the two ends of the DNA double helix. Both compounds possessed redox centers (metal atoms that can give off and take up electrons). When they blasted one of the redox centers with a pulse of light, it ejected an electron that was taken up by the redox center attached to the opposite end of the DNA molecule, causing the compound to emit a flash of light. The researchers concluded that the ejected electron must have travelled through the interior of the double helix from one redox center to the other.

    Shortly after this discovery, Barton and her team learned that electrical charges move through DNA only when the double helix is intact. Electrical current won’t flow through single-stranded DNA, nor will it flow if the DNA double helix is distorted due to damage or misincorporation of DNA subunits during replication.

    These (and other) observations indicate that the conductance of electrical charge through the DNA molecule stems from π-π stacking interactions of the nucleobases in the double helix interior. These interactions produce a molecular orbital that spans the length of the double helix. In effect, the molecular orbital functions like a wire running through DNA’s interior.

    DNA Wires and Nanoelectronics

    Charge conductance through the DNA double helix occurs more rapidly than it does through “standard” molecular wires made from inorganic materials. These “insane” transport speeds have inspired researchers to explore the possibility of using DNA as molecular scale wiring in nanoelectronic devices. In fact, some researchers think that DNA wires might become an integral feature for the next generation of medical diagnostic equipment.

    Does DNA Function as a Wire in the Cell?

    While the charge conductance through the DNA double helix is an interesting and potentially useful property, biochemists have long wondered if DNA functions as a nanowire in the cell.

    In 2009, Barton and her team discovered the answer to this question. DNA’s capacity to transmit electrical charges along the length of the double helix plays a key role in the DNA repair process, and recently Barton’s collaborators have demonstrated that DNA’s wire property plays an important role in the initiation of DNA replication. Both processes are important for DNA to function as an information storage system. Repairing damage to DNA ensures the integrity of the information it houses. And DNA replication makes it possible to pass this information on to the next generation. There is a purpose to every aspect of DNA’s properties—a method to the madness.

    Detecting Damage to DNA

    Damage to DNA distorts the double helix. In a process called base excision repair, the cell’s machinery recognizes and removes the damaged portion of the DNA molecule, replacing it with the correct DNA subunits.

    For some time, biochemists puzzled over how the DNA repair enzymes located the damaged regions. In the bacterium E. coli, two repair enzymes, dubbed EndoIII and MutY, occur at low levels. (E. coli is a model organism often used by biochemists to study cellular processes.) Biochemists estimate that fewer than 500 copies of EndoIII exist in the cell and around 30 copies of MutY. These are low numbers considering the task at hand. These repair enzymes bear the responsibility of surveying the E. coli genome for damage—a genome that consists of over 4.6 million base pairs (genetic letters).
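    A few lines of arithmetic, using the figures just quoted, put the surveillance burden in perspective:

```python
# Surveillance burden per enzyme copy, from the figures quoted above.
GENOME_BP = 4_600_000    # E. coli genome size
ENDOIII_COPIES = 500     # upper estimate for EndoIII
MUTY_COPIES = 30         # estimate for MutY

print(f"EndoIII: ~{GENOME_BP // ENDOIII_COPIES:,} bp to survey per copy")
print(f"MutY:    ~{GENOME_BP // MUTY_COPIES:,} bp to survey per copy")
# ~9,200 bp and ~153,333 bp, respectively; a lot of ground to cover by
# sliding along the double helix alone.
```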

    Barton and her team discovered that the two repair enzymes possess a redox center consisting of an iron-sulfur cluster (4Fe4S) that has no enzymatic activity.1 They speculated and then demonstrated that the 4Fe4S cluster functions just like the compounds they attached to the DNA double helix in their original experiment in the 1990s.

    It turns out Barton and her team were right. These repair proteins bind to DNA. Once bound, they send an electron from the 4Fe4S redox center through the interior of the double helix, which establishes a current through the DNA molecule. Once the repair protein loses an electron, it cannot dissociate from the DNA double helix. Other repair proteins bound to the DNA pick up the electrons from the DNA’s interior at their iron-sulfur redox center. When they do, they dissociate from the DNA and resume their migration along the double helix. Eventually, the migrating repair protein will bind to the DNA again, sending an electron through the DNA’s interior.

    This process repeats over and over. However, if the DNA becomes damaged and the double helix distorted, then the DNA wire breaks, interrupting the flow of electrons. When this happens, the repair proteins remain attached to the DNA close to the location of the damage—thus initiating the repair process.
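    This search strategy lends itself to a toy simulation. The sketch below is my deliberate caricature of the mechanism, not the researchers’ model: pairs of proteins bind and run a charge-transport (CT) test, intact segments release their proteins to search elsewhere, and a segment containing the lesion keeps its proteins bound, bracketing the damage. For simplicity, later pairs bind inside the last flagged bracket, a stand-in for the processive sliding described above.

```python
import random

# Toy 1-D model of lesion detection by DNA charge transport (illustrative).
DNA_LEN = 4_600_000               # E. coli genome size quoted earlier
LESION = random.randrange(1, DNA_LEN)

lo, hi = 0, DNA_LEN               # current bracket around the lesion
for _ in range(200):
    if hi - lo < 2:               # bracket can't shrink further
        break
    a, b = sorted(random.sample(range(lo, hi), 2))
    if a < LESION <= b:           # broken wire: this pair stays bound
        lo, hi = a, b             # the bracket tightens around the damage

print(f"lesion at {LESION:,}; stuck proteins bracket "
      f"[{lo:,}, {hi:,}] (width {hi - lo:,} bp)")
```

    Even this crude version shows why the mechanism is efficient: each failed CT test localizes the damage further, without inspecting the genome base by base.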

    Initiating DNA Replication

    Recently, Barton and her team discovered that charge conductance through DNA also plays a critical role in the early stages of DNA replication.2 DNA replication—the process of generating two daughter molecules identical to the parent molecule—serves an essential life function.

    DNA replication begins at specific sites along the double helix, called replication origins. Typically, prokaryotic cells, such as E. coli, have only a single origin of replication.

    The replication machinery locally unwinds the DNA double helix at the origin of replication to produce a replication bubble. Once the individual strands of the DNA double helix unwind and are exposed within the replication bubble, they are available to direct the production of the daughter strand.

    Before the newly formed daughter strands can be produced, a small RNA primer must be produced. DNA polymerase—the protein that synthesizes new DNA by reading the parent template strand—can’t start production from scratch. It must be primed. The primosome, a massive protein complex that consists of over 15 different proteins (including the enzyme primase), produces the RNA primer. From there, DNA polymerase takes over and begins synthesizing the daughter DNA strand.

    Barton and her team discovered that the handoff between primase and DNA polymerase relies on DNA’s wire property. Both primase and DNA polymerase possess 4Fe4S redox clusters. When primase’s 4Fe4S redox center loses an electron, this protein binds to DNA to produce the RNA primer. When primase’s 4Fe4S redox center picks up an electron, the protein detaches from the DNA to end the production of the RNA primer.

    When DNA polymerase binds to the DNA to begin the process of daughter strand synthesis, it sends an electron from its 4Fe4S redox center along the double helix formed by the parent DNA-RNA primer. When the electron reaches the 4Fe4S redox center of primase, it brings the production of the RNA primer to a halt.
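    The handoff just described behaves like a redox-controlled switch, and its logic can be restated as a small state machine. This is a schematic, not a biochemical model; it simply encodes the oxidized-binds, reduced-releases rule given above.

```python
# Schematic state machine for the primase-to-polymerase handoff described
# above. Oxidized primase (electron lost) stays bound and extends the RNA
# primer; when an electron arrives via DNA charge transport, primase is
# reduced and detaches, ending primer synthesis.

class Primase:
    def __init__(self):
        self.oxidized = True    # lost an electron: binds DNA, makes primer
        self.primer_len = 0

    def synthesize(self):
        if self.oxidized:
            self.primer_len += 1

    def receive_electron(self):
        self.oxidized = False   # reduced: detaches from the DNA


primase = Primase()
for step in range(12):
    primase.synthesize()
    if step == 8:
        # DNA polymerase binds and sends an electron through the
        # parent-DNA/RNA-primer duplex to primase's 4Fe4S center.
        primase.receive_electron()

print(f"primer length at handoff: {primase.primer_len} nt")  # stops at 9
```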

    DNA Wires and the Case for a Creator

    The work by Barton and her colleagues highlights the elegant and sophisticated design of biochemical systems. DNA’s wire property is so remarkable that it serves as inspiration for the design of the next generation of electronic devices—at the nanoscale. The use of biological designs to drive technological advance is one of the most exciting areas in engineering. This area of study—called biomimetics and bioinspiration—presents us with new reasons to believe that life stems from a Creator. It paves the way for a new type of design argument I dub the converse Watchmaker argument: If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    The converse Watchmaker argument complements William Paley’s classical Watchmaker argument for God’s existence. In my book The Cell’s Design, I describe how recent advances in biochemistry revitalize this classical argument. Over the last few decades, one of the most astounding insights from biochemistry is the recognition that many biochemical systems display the same properties as human designs. This similarity can be used to argue that life must come from the work of a Mind.

    The Watchmaker Prediction

    In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed. That is, the Watchmaker argument may well become stronger in the future, and its conclusion more certain, as human technology advances.

    The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction: As human designers develop new technologies, examples of these technologies, which previously went unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker analogy truly serves as evidence for the Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

    The Watchmaker Prediction, Satisfied

    The discovery that DNA’s wire properties are critical for DNA repair and the initiation of DNA replication fulfills the Watchmaker prediction. Barton and her team recognized the physiological importance of DNA charge conductance a year after The Cell’s Design was published.

    Nanoscientists have been working to develop molecular-scale nanowires for the last couple of decades. The discovery of DNA’s wire properties occurred in this context. In other words, as new technology emerged—in this case, nanoelectronics—we discovered its existence inside the cell.

    Considering the wire properties of DNA, it is not madness to think that a Creator exists and played a role in life’s genesis.

    Resources

    Endnotes

    1. Amie K. Boal et al., “Redox Signaling between DNA Repair Proteins for Efficient Lesion Detection,” Proceedings of the National Academy of Sciences, USA 106 (September 8, 2009): 15237–42, doi:10.1073/pnas.0908059106; Pamela A. Sontz et al., “DNA Charge Transport as a First Step in Coordinating the Detection of Lesions by Repair Proteins,” Proceedings of the National Academy of Sciences, USA 109 (February 7, 2012): 1856–61, doi:10.1073/pnas.1120063109; Michael A. Grodick, Natalie B. Muren, and Jacqueline K. Barton, “DNA Charge Transport within the Cell,” Biochemistry 54 (February 3, 2015): 962–73, doi:10.1021/bi501520w.
    2. Elizabeth O’Brien et al., “The [4Fe4S] Cluster of Human DNA Primase Functions as a Redox Switch Using DNA Charge Transport,” Science 355 (February 24, 2017): doi:10.1126/science.aag1789.
  • DNA: Digitally Designed

    by Fazale Rana | May 24, 2017

    We live in uncertain and frightening times.

    There seems to be no end to the serious risks confronting humanity. In fact, in 2014, USA Today published an article identifying the 10 greatest threats facing our world:

    • Fiscal crises in key economies
    • Structurally high unemployment/underemployment
    • Water crises
    • Severe income disparity
    • Failure of climate change mitigation and adaptation
    • Greater incidence of extreme weather events (e.g., floods, storms, fires)
    • Global governance failure
    • Food crises
    • Failure of a major financial mechanism/institution
    • Profound political and social instability

    If this list isn’t bad enough, another crisis looms in our near future: a data storage crisis.

    Thanks to the huge volume of scientific data generated by disciplines such as genomics and the explosion of YouTube videos, 44 trillion gigabytes of digital data currently exist in the world. To put this in context, each person in a worldwide population of 10 billion people would have to store over 6,000 CDs to house this data. Estimates are that if we keep generating data at this pace, we will run out of high-quality silicon needed to make data storage devices by 2040.1
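    Those figures are easy to sanity-check. The short calculation below simply redoes the article’s arithmetic, assuming a standard 700 MB CD:

```python
# Sanity check of the figures quoted above (700 MB CD capacity assumed).
WORLD_DATA_GB = 44e12    # 44 trillion gigabytes
POPULATION = 10e9        # worldwide population of 10 billion
CD_GB = 0.7              # ~700 MB per CD

per_person_gb = WORLD_DATA_GB / POPULATION
print(f"data per person: {per_person_gb:,.0f} GB")         # 4,400 GB
print(f"CDs per person:  {per_person_gb / CD_GB:,.0f}")    # about 6,286
```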

    Compounding this problem are the limitations of current data storage technology. Because of degradative processes, hard disks have a lifetime of about 3 years and magnetic tapes about 10 years. These storage systems must be kept in controlled environments—which makes data storage an expensive proposition.

    Digital Data Storage in DNA

    Because of DNA’s role as a biochemical data storage system (in which the data is digitized), researchers are exploring the use of this biomolecule as the next-generation digital data storage technology. As proof of principle, a team of researchers from Harvard University headed up by George Church coded the entire contents of a 54,000-word book (including 11 JPEG images) into DNA fragments.

    The researchers chose to encode the book’s contents into small DNA fragments—devoting roughly two-thirds of the sequence for data and the remainder for information that can be used to locate the content within the entire data block. In this sense, their approach is analogous to using page numbers to order and locate the contents of a book.
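    To make the page-number analogy concrete, here is a toy encoder in Python. It is not the Harvard team’s published scheme (their bit-to-base convention and block sizes differed); this sketch simply prepends an address field to each payload chunk, mapping two bits per nucleotide. With the field sizes chosen here, the payload occupies two-thirds of each fragment, echoing the ratio mentioned above.

```python
# Toy DNA encoder illustrating the address + payload fragment layout.
# Not the Harvard scheme; the mapping and field sizes are arbitrary choices.

BASES = "ACGT"   # two bits per nucleotide: 00 -> A, 01 -> C, 10 -> G, 11 -> T

def bits_to_dna(bits: str) -> str:
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def encode(message: bytes, payload_bits: int = 32, addr_bits: int = 16):
    bitstream = "".join(f"{byte:08b}" for byte in message)
    fragments = []
    for addr, start in enumerate(range(0, len(bitstream), payload_bits)):
        payload = bitstream[start:start + payload_bits].ljust(payload_bits, "0")
        header = f"{addr:0{addr_bits}b}"   # the "page number" for this chunk
        fragments.append(bits_to_dna(header + payload))
    return fragments

for fragment in encode(b"What hath God wrought?"):
    print(fragment)   # each fragment: 8-nt address followed by 16-nt payload
```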

    Since then, researchers have encoded computer programs, operating systems, and even movies into DNA.

    Because DNA is so highly optimized to store information, it is an ideal data storage medium. (For details regarding the optimal nature of DNA’s structure, see The Cell’s Design.) Researchers think that DNA has the capacity to store data near the theoretical maximum. About one-half pound of DNA can store all the data that exists in the world today.
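    That capacity claim can also be checked from first principles, assuming an average base-pair mass of about 650 g/mol and the theoretical maximum of two bits per base pair:

```python
# Rough check of DNA's storage density (assumed textbook values).
AVOGADRO = 6.022e23
BP_GRAMS_PER_MOL = 650    # average mass of one DNA base pair (g/mol)
BITS_PER_BP = 2           # four possible bases = 2 bits per base pair

grams = 227               # about half a pound of DNA
base_pairs = grams / BP_GRAMS_PER_MOL * AVOGADRO
zettabytes = base_pairs * BITS_PER_BP / 8 / 1e21
print(f"theoretical capacity: {zettabytes:.0f} ZB")
# Roughly 53 ZB, comfortably above the ~44 ZB of data quoted above.
```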

    Limitations of DNA Data Storage

    Despite its promise, there are some significant technical hurdles to overcome before DNA can serve as a data storage system. Cost and time are two limitations. It is expensive and time-consuming to produce and read the synthetic DNA used to store information. As technology advances, the cost and time requirements associated with DNA data storage will likely improve. Still, because of these limitations, most technologists think that the best use of DNA will be for archival storage of data.

    Another concern is the long-term stability of DNA. Over time, DNA degrades. Researchers believe that redundancy may be one way around this problem. By encoding the same data in multiple pieces of DNA, data lost because of DNA degradation can be recovered.
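    A minimal illustration of the redundancy idea: store several copies of each fragment and take a per-base majority vote when reading them back. The error model and copy count below are arbitrary choices for the sketch, not a protocol from the researchers.

```python
import random
from collections import Counter

# Redundancy sketch: recover a fragment by per-base majority vote across
# several degraded copies (error rate and copy count are arbitrary).

def degrade(strand: str, error_rate: float = 0.05) -> str:
    return "".join(random.choice("ACGT") if random.random() < error_rate else base
                   for base in strand)

def majority_read(copies):
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*copies))

original = "ACGTACGTACGTACGTACGT"
copies = [degrade(original) for _ in range(7)]
recovered = majority_read(copies)
print(recovered == original)   # usually True at this error rate and copy count
```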

    The processes of making and reading synthetic DNA also suffer from error. Current technology has an error rate of 1 in 100. Recently, researchers from Columbia University achieved a breakthrough that allows them to elegantly address loss of information from DNA due to degradation or miscoding that takes place when DNA is made and read. These researchers successfully applied techniques used for “noisy communication” operations to DNA data storage.2

    With these types of advances, the prospects of using DNA to store digital data may soon become a reality. And unlike other data storage technologies, DNA will never become obsolete.

    Biomimetics and Bioinspiration

    The use of biological designs to drive technological advance is one of the most exciting areas in engineering. This area of study—called biomimetics and bioinspiration—presents us with new reasons to believe that life stems from a Creator. As the names imply, biomimetics involves direct copying (or mimicry) of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise. DNA’s capacity to inspire engineering efforts to develop new data storage technology highlights this biomolecule’s elegant, sophisticated design and, at the same time, raises a troubling question for the evolutionary paradigm.

    The Converse Watchmaker Argument

    Biomimetics and bioinspiration pave the way for a new type of design argument I dub the converse Watchmaker argument: If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    At some level, I find the converse Watchmaker argument more compelling than the classical Watchmaker analogy. It is remarkable to me that biological designs can inspire engineering efforts.

    It is even more astounding to think that biomimetics and bioinspiration programs could be so successful if biological systems were truly generated by an unguided, historically contingent process, as evolutionary biologists claim.

    Biomimetics and Bioinspiration: The Challenge to the Evolutionary Paradigm

    To appreciate why work in biomimetics and bioinspiration challenges the evolutionary paradigm, we need to discuss the nature of the evolutionary process.

    Evolutionary biologists view biological systems as the outworking of unguided, historically contingent processes that co-opt preexisting designs to cobble together new systems. Once these designs are in place, evolutionary mechanisms can optimize them, but still, these systems remain—in essence—kludges.

    Most evolutionary biologists are quick to emphasize that evolutionary processes and pathways seldom yield perfect designs. Instead, most biological designs are flawed in some way. To be certain, most biologists would concede that natural selection has produced biological designs that are well-adapted, but they would maintain that biological systems are not well-designed. Why? Because evolutionary processes do not produce biological systems from scratch, but from preexisting systems that are co-opted through a process dubbed exaptation and then modified by natural selection to produce new designs. Once formed, these new structures can be fine-tuned and optimized through natural selection to produce well-adapted designs, but not well-designed systems.

    If biological systems are, in effect, kludged together, why would engineers and technologists turn to them for inspiration? If produced by evolutionary processes—even if these processes operated over the course of millions of years—biological systems should make unreliable muses for technology development. Does it make sense for engineers to rely on biological systems—historically contingent and exapted in their origin—to solve problems and inspire new technologies, much less build an entire subdiscipline of engineering around mimicking biological designs?

    Using biological designs to guide engineering efforts seems to be fundamentally incompatible with an evolutionary explanation for life’s origin and history. On the other hand, biomimetics and bioinspiration naturally flow out of an intelligent design/creation approach to biology. Using biological systems to inspire engineering makes better sense if the designs in nature arise from a Mind.

    Resources

    The Cell’s Design: How Chemistry Reveals the Creator’s Artistry by Fazale Rana (book)
    “iDNA: The Next Generation of iPods?” by Fazale Rana (article)
    “Harvard Scientists Write the Book on Intelligent Design—in DNA” by Fazale Rana (article)
    “Digital and Analog Information Housed in DNA” by Fazale Rana (article)
    “Engineer’s Muse: The Design of Biochemical Systems” by Fazale Rana (article)

    Endnotes
    1. Andy Extance, “How DNA Could Store All the World’s Data,” Nature 537 (September 2, 2016): 22–24, doi:10.1038/537022a.
    2. Yaniv Erlich and Dina Zielinski, “DNA Fountain Enables a Robust and Efficient Storage Architecture,” Science 355 (March 3, 2017): 950–54, doi:10.1126/science.aaj2038.
  • A Critical Reflection on Adam and the Genome, Part 2

    by Fazale Rana | May 17, 2017

    When I began college, I signed up for a premed major but quickly changed my course of study after my first biology class. Biology 101 introduced me to the fascinating molecular world inside the cell. At that point, I was hooked. All I wanted was to become a biochemist.

    But there was another reason why I gave up on the prospects of becoming a physician. I didn’t think I had the mental wherewithal to make decisions with life and death consequences for patients. And to this day, I deeply admire men and women who do possess that mental fortitude.

    The problem is that once someone dies, they don’t come back to life. I knew this reality would loom large for every decision I would make as a physician. Over 100,000 years of human experience teaches that when people die, they remain dead. And this experience is borne out by centuries of scientific study into human biology.

    When It Comes to the Virgin Birth and the Resurrection, Christianity is Anti-Scientific

    Yet at the heart of the Christian faith is the idea that Jesus Christ was raised from the dead. To be clear: This idea is counter to human experience and thoroughly anti-scientific.

    On the other hand, a strong circumstantial case based on historical facts can be marshaled for the life, death, and resurrection of Jesus Christ. The historical evidence for the resurrection combined with the fact that this event transcends the laws of nature is clear evidence for Christians that God intervened in human history to perform a miracle—to act in a way that contravenes the laws of nature.

    Even though alternative explanations for the facts surrounding the resurrection fall short, many skeptics remain unconvinced that the resurrection happened. Why? Because it defies scientific explanation—dead people don’t come back to life.

    Yet, I don’t know of any evangelical or conservative Christian who would deny the resurrection. Nor would these same Christians deny the virgin birth—another event that also defies scientific explanation. As Christians, we readily embrace anti-scientific ideas when they are central to Christianity. We don’t view them as allegorical or as literary constructs that teach theological truths so that they can be accommodated to scientific truth. We regard them as real events in space and time, in which God discernibly acted in a miraculous way.

    Not only are the resurrection and the virgin birth anti-scientific, but the explanations for these two events completely fly in the face of methodological naturalism—the philosophical idea undergirding contemporary science. According to this philosophical system, scientific explanations must rely on material causes—natural process mechanisms. Any explanation that appeals to the work of a supernatural agent—a Creator—or processes that defy known laws of nature can’t be part of the scientific construct. By definition, these types of explanations are forbidden. Yet when it comes to the resurrection and the virgin birth, Christians reject methodological naturalism without apology. We don’t try to force these events within the framework of methodological naturalism by arguing that God used the laws of nature to effect the virgin birth or the resurrection. Why? Because the explanations for these events go beyond nature’s laws—these events are transcendent miracles.

    Adam and Eve’s Creation and Importance to the Christian Faith

    Should we not be willing to adopt the same posture when it comes to the question of origins, including the historicity of Adam and Eve?

    Like the virgin birth and the resurrection, Adam and Eve’s existence and role as humanity’s founding couple impacts key doctrines of the Christian faith, such as inerrancy, the image of God, the fall, original sin, marriage, and the atonement.

    Venema and McKnight’s Adam and the Genome

    The importance of a historical Adam and Eve to the Christian faith explains why New Testament scholar Scot McKnight (Northern Seminary) spent four chapters—half of the book—in Adam and the Genome trying to convince the reader that the existence of this primordial couple is not critical to the Christian faith. McKnight felt this exercise necessary because he concedes that comparative genomics and population genetics demonstrate the truth of human evolution and the impossibility that humanity arose from a primordial pair—an Adam and an Eve.

    Coauthored with biologist Dennis Venema (Trinity Western University), Adam and the Genome presents a scientific and theological case for evolutionary creationism—the idea that God employed evolutionary processes to bring about the design, origin, and history of life, including humanity.1

    The case Venema presents for human evolution serves as the motivation for McKnight’s contribution to the book. In fact, McKnight’s portion of Adam and the Genome is just the latest in a growing list of responses by evangelical and conservative Christian theologians to the specter of human evolution. Though this idea has been in play since the late 1800s with the publication of Darwin’s The Descent of Man, recently, Christian scholars, such as McKnight, feel compelled to sort through the theological fallout of this scientific explanation for human origins because of the emergence of genomics. Now that we have the capability to efficiently sequence and compare the entire genetic makeup of humans and other creatures, such as the great apes, the sense is that the case for human evolution has become undeniable.

    So, have Venema and McKnight made their case? Is human evolution a fact? Are Adam and Eve merely theological constructs?

    Having left the theological response to McKnight in the hands of scholars such as Gavin Ortlund and Ken Keathley, in part 1 of this review I offered my reflections on Venema’s intellectual journey from an antievolutionary intelligent design proponent to someone who embraces and now advocates for evolutionary creationism. I concluded that it wasn’t scientific evidence alone that motivated Venema and many other evolutionary creationists to adopt this view. I contend that many evolutionary creationists adopt this position, in part, as a reaction to the disappointment they felt upon realizing that they had been unintentionally misled (when they were young and scientifically naïve) by well-meaning Christians who taught them young-earth creationism. I argue that in abandoning young-earth creationism, many evolutionary creationists have moved to the opposite extreme, rejecting any science-faith model that doesn’t fully embrace mainstream scientific ideas—even if those ideas challenge key biblical doctrines.

    In this second part of my review, I offer my thoughts on the core of Venema’s case for human evolution: namely, work in comparative genomics and population genetics, found in chapters two and three, respectively, of Adam and the Genome.

    Venema’s goal in his contribution to Adam and the Genome is to communicate the “undeniable” evidence for human evolution. Specifically, Venema discusses recent work in comparative genomics with the hope of explaining to the motivated layperson why many biologists regard the shared features in genomes as evidence for common ancestry. Applying that insight to whole genome comparisons of humans, chimpanzees, and other great apes, Venema explains why biologists think humanity shares an evolutionary history with the great apes—in fact, with all life on Earth. Focusing on pseudogenes, Venema concludes the case for common descent by discussing the widespread occurrence of nonfunctional DNA sequences located throughout the genomes of humans and the great apes—usually in corresponding locations in these genomes. Venema argues that these one-time functional DNA sequence elements were rendered nonfunctional through mutational events and are retained in genomes as vestiges of evolutionary history.

    Role of Methodological Naturalism in Venema’s Argument

    Admittedly, the scientific case Venema presents for common descent is strong—at least at first glance. (Though, in making his case, he does overlook some significant scientific issues confronting evolutionary biologists, such as the incongruence of evolutionary trees. In other words, evolutionary biologists wind up with different evolutionary trees depending on the region of the genome they use to build the trees. This is certainly the case when the human genome is compared to the genomes of chimpanzees and gorillas. One-third of the human genome more closely aligns to the gorilla genome than to the chimpanzee genome, indicating that gorillas, not chimpanzees, are our closest evolutionary relative.)

    Having acknowledged the strong case Venema makes for human evolution, I want to make sure that the reader recognizes the powerful, yet often unrecognized, role methodological naturalism plays, propping up the case for common descent, and, hence, human evolution. Because of the influence of methodological naturalism, the only permissible way to interpret shared genetic features within the mainstream scientific enterprise is from an evolutionary framework. Any explanation invoking a Creator’s involvement is off the table—even if a creation model can account for the data, and it can. However, this approach will never receive a hearing in the scientific community today because it violates the tenets of methodological naturalism. In other words, because of methodological naturalism’s sway, common descent, and consequently human evolution, must be true by default. No other option is allowed. No other explanation, no matter how valid, is permitted.

    Like most evolutionary creationists, Venema and McKnight embrace methodological naturalism when it comes to the question of human origins. Yet they readily reject this idea when it comes to the virgin birth and the resurrection. As a result, their approach to science is inconsistent. Why apply the principles of methodological naturalism to human origins but not to questions surrounding the resurrection or the virgin birth?

    It is true that methodological naturalism has a demonstrated track record of success—when it guides investigation of secondary, proximal causes. But this scientific approach often comes up short when scientific questions focus on primary or ultimate causes, such as the origin of the universe or the origin of life.

    In fact, I wonder if Christians should embrace methodological naturalism at all. At its essence, this philosophical approach to science is inherently atheistic. A Christian could justify embracing a limited or weak form of methodological naturalism because Scripture teaches that God has providentially instituted processes that operate within the creation to sustain it. When studying these types of phenomena, application of methodological naturalism appears to be justified because the focus is on identifying and characterizing secondary, proximal causes.

    But what about the question of origins? Given the descriptions of God’s creative work in the creation accounts, it looks as if God intervened in a direct personal way when it comes to the origin of the universe and the origin and history of life—particularly when it comes to humanity’s beginnings. If so, then methodological naturalism becomes an impotent guide for scientific study because it insists that these events must have mechanistic causes—even if they may not. By default, an atheistic worldview is imposed on the scientific enterprise. Within the framework of methodological naturalism, science is no longer a quest for truth but a game, the goal of which is to produce a material-cause explanation for the universe and the phenomena within it, even if material causes aren’t the true explanation—and even if the explanations leave something to be desired.

    Adherents of methodological naturalism defend its restrictions by arguing that science can’t put God in a test tube. Yet it is a straightforward exercise to show that science does have the tool kit to detect the work of intelligent agents within nature and to characterize their capabilities. By extension, science should have no problem detecting a Creator’s handiwork—and even determining the Creator’s identity.

    So, what happens if we relax the restrictive requirements of methodological naturalism when we investigate the question of human origins? If we do, it becomes evident that human evolution isn’t unique in its capacity to explain shared genetic features. It becomes conceivable that the shared genetic features in the genomes of humans and the great apes could reflect similar designs employed by a Creator. To put it another way, the shared genetic features could reflect common design, not common descent.

    Though this approach to the data is forbidden by contemporary mainstream science, this interpretative approach is not anti-scientific. In fact, there is a historical precedent for viewing shared genetic features as evidence for common design, not common descent. Prior to Darwin, distinguished biologist Sir Richard Owen interpreted shared (homologous) biological structures (and, consequently, related organisms) as manifestations of an archetype that originated in the mind of the First Cause, not the products of descent with modification. Darwin later replaced Owen’s archetype with a common ancestor. Again, the key point is that it is possible to conceive of an alternative interpretation of shared biological features, if one is willing to allow for the operation of a Creator within the history of life.

    If the action of an intelligent agent becomes part of the construct of science, and hence, biology, then the shared molecular fossils in the genomes of humans and the great apes (such as pseudogenes) could be seen as shared design features. These sequence elements point to common descent only if certain assumptions are true:

    1. the genomes’ shared structures and sequences are nonfunctional;
    2. the events that created these features are rare, random, and nonrepeatable;
    3. no mechanisms other than common descent (vertical gene transfer) can generate shared features in genomes.

    However, recent studies raise questions about the validity of these assumptions. For example, in the last decade or so, molecular biologists and molecular geneticists have discovered that most classes of “junk DNA,” including pseudogenes, have function. (Interested readers can find references to the original scientific papers in the expanded second edition of Who Was Adam? and The Cell’s Design.) In fact, the recently proposed competitive endogenous RNA hypothesis explains why pseudogenes must display similar sequences to their functional counterparts in order to carry out their cellular function.

    Moreover, as discussed in Who Was Adam?, researchers are now learning that many of the events that alter genomes’ structures and DNA sequences are not necessarily rare and random. For example, biochemists have known for quite some time that mutations occur at hotspots in genomes. Recent work also indicates that transposon insertion and intron insertion occur at hotspots, and gene loss is repeatable. New studies also reveal that horizontal gene transfer can mimic common descent. This phenomenon is not confined to bacteria and archaea but has been observed in higher plants and animals as well, via a vector-mediated pathway or organelle capture.

    These advances serve to undermine key assumptions needed for a common descent argument. Considering these discoveries, is it possible to make sense of the shared genomic architecture and DNA sequences within the framework of a creation model?

    A Scientific Creation Model for Common Design

    What follows is a brief abstract of the RTB genomics model. A more detailed description and defense of our model can be found in the second expanded edition of Who Was Adam?

    A key tenet of the model is the idea that organisms—and hence, their genomes—are the products of God’s direct creative activity. But once created, genomes are subjected to microevolutionary processes.

    In brief, our model explains the similarities among organisms’ genomes in one of two ways:

    1. Reflecting the work of a Creator who deliberately designed similar features in genomes according to (a) a common function or (b) a common blueprint.
    2. Reflecting the outworking of physical, chemical, or biochemical processes that (a) occur frequently, (b) are nonrandom, and (c) are reproducible. These processes cause the independent origin of the same features in the genomes of different organisms. These features can be either functional or nonfunctional.

    Our model also explains genomes’ differences in one of two ways:

    1. Reflecting the work of a Creator who deliberately designed differences in genomes with distinct functions.
    2. Reflecting the outworking of physical, chemical, or biochemical processes that produce microevolutionary changes.

    In principle, our model can account for similarities and differences in the genomes of organisms as either the deliberate work of a Creator or via natural-process mechanisms that alter the genomes after creation.

    Were Adam and Eve Real?

    Having argued for the reality of human evolution, Venema focuses attention on Adam and Eve’s historicity. He rightly points out that if humanity arose through an evolutionary process, then it must have begun as a population, not a primordial couple—by definition. According to evolutionary biologists, evolution is a population-level phenomenon. That being the case, if humanity arose via evolutionary processes, then there could never have been an Adam and an Eve. In support of this idea, Venema discusses population genetics studies that indicate humanity began as an initial group of around 10,000 individuals. Based on these methods, the genetic diversity among humans today is too great to have come from just two individuals. Venema then goes on to explain how evolutionary biologists reconcile the existence of mitochondrial Eve and Y-chromosomal Adam (understood to be an actual woman and man, respectively) with the idea that humanity began as a population.

    Some Thoughts on Methods Used to Estimate Humanity’s Initial Population Size

    Did humanity originate from a primordial pair?

    One point Venema fails to acknowledge is that, at best, the population sizes generated from genetic diversity data are merely estimates, not hard-and-fast values. The reason: the mathematical models these methods are based on are highly idealized, and they generate differing estimates depending on several factors.

    More significantly, recent studies focusing on birds and mammals raise questions as to whether these models accurately predict population size. As the author of one study states, “Analyses of mitochondrial DNA (mtDNA) have challenged the concept that genetic diversity within populations is governed by effective population size and mutation rate . . . the variation in the rate of mutation rather than in population size is the main explanation for variations in mtDNA diversity observed among bird species.”2

    In fact, in several studies—involving white-tailed deer, mouflon sheep, Przewalski’s horses, white-tailed eagles, the copper redhorse, and gray whales—the original population size was known, yet the genetic diversity measured generations later was much greater than the models predicted. In turn, if these diversity data were used to estimate initial population size, the resulting numbers would be much greater than the actual founding populations.

    Did humanity originate from a single pair? Even though estimates based on mathematical models indicate humanity originated from several hundred to several thousand individuals, it could well be that these figures overestimate the size of the first human population. And given how poorly these population size models perform, it is hard to argue that science has falsified the notion that humanity descended from a primordial pair.

    Final Thoughts

    In Adam and the Genome, Venema makes a compelling case for human evolution, but he fails to tell the entire story. Venema overlooks a serious problem facing the evolutionary paradigm: namely, the incongruence of evolutionary trees built from genetic data. He also neglects to communicate a legion of exciting discoveries made since the human genome sequence was completed—discoveries indicating that virtually every class of junk DNA has function. These discoveries undermine evolution’s case and make it apparent that we are in our infancy when it comes to understanding the structure and function of the human genome. The more we learn, the more evident its elegant and ingenious design becomes.

    At the end of the day, the case for human evolution is propped up by the restrictions of methodological naturalism. As we have demonstrated in Who Was Adam?, when these restrictions are relaxed, it is possible to advance a competing creation model that can account for the data from comparative genomics.

    One thing has become clear to me after reading Adam and the Genome. It is no longer effective for creationists and intelligent design proponents to focus our efforts on taking potshots at human evolution. We must move beyond that type of critique, develop a philosophically robust framework for science that can compete with methodological naturalism, and advance, within that new framework, scientific models capable of explaining the data from comparative genomics and population genetics.

    I am confident we can. We simply must roll up our sleeves and get to work.

    Resources—Theological Reflections on Adam and the Genome

    Resources—An Old-Earth Creationist Perspective on the Scientific Case for a Traditional Biblical View of Human Origins

    Resources—The Problem of Incongruent Evolutionary Trees

    Resources—Science Can Detect the Creator’s Handiwork in Nature

    Resources—Common Design as a Valid Scientific Model

    Resources—Junk DNA is Functional

    Resources—Pseudogenes are Functional

    Resources—Mutational Hot Spots in Genomes

    Resources—Adam and Eve’s Historicity

    Endnotes
    1. Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017).
    2. Hans Ellegren, “Is Genetic Diversity Really Higher in Large Populations?” Journal of Biology 8 (April 2009): 41, doi:10.1186/jbiol135.
  • A Critical Reflection on Adam and the Genome, Part 1

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 10, 2017

    Who doesn’t like a bargain? I sure do. And I am a sucker for 2-for-1 specials.

    For those interested in science-faith discussions, the recent book by biologist Dennis Venema (Trinity Western University) and New Testament scholar Scot McKnight (Northern Seminary) is quite the deal. Two books in one, Adam and the Genome presents a scientific and theological case for evolutionary creationism—the idea that God employed evolutionary processes to bring about the design, origin, and history of life, including humanity.1

    The first half of the book, written by Venema, presents a case for human evolution from recent work in comparative genomics and population genetics. As part of his case for human evolution, Venema makes it clear that the genetic diversity of humanity is too extensive to have come from a primordial couple—Adam and Eve.

    As an author who works in the science-faith arena, I am impressed with the writing of Venema’s portion of the book. He does a masterful job of communicating complex ideas in genomics and population genetics in an accessible way. He makes it easy for the uninitiated to understand why a growing number of evangelical Christians feel compelled to embrace evolutionary creationism.

    The author of the book’s second half, Scot McKnight, assumes the reality of human evolution along with the inevitable requirement that humanity emerged as a population, not a primordial pair. Making these two concessions, McKnight explains why he doesn’t think the Christian faith depends on a historical Adam and Eve as the sole progenitors of all humanity. Instead, he argues that Adam and Eve should be viewed as archetypal—as literary and theological concepts.

    So, have Venema and McKnight made their case?

    Even though I can’t resist 2-for-1 deals, I am not going to offer the reader a 2-for-1 review. Instead, I am limiting my critical reflections to Dennis Venema’s portion of the book. Because I’m not a biblical scholar or a theologian, I will refrain from sharing my thoughts on McKnight’s contribution to Adam and the Genome. I encourage the curious reader instead to take a look at articles by theologians Ken Keathley and Gavin Ortlund. Both scholars offer insightful commentary on McKnight’s analysis of the historical Adam—a much better bargain than anything I could hope to offer.

    Venema’s Case for Human Evolution

    Venema opens his case for human evolution by maintaining that the theory of biological evolution is well evidenced—the real deal. He argues that the theory of evolution has broad explanatory and predictive power.

    He then turns to recent work in comparative genomics, explaining why many biologists regard the shared features in genomes as evidence for common ancestry. Applying that insight to whole genome comparisons of humans, chimpanzees, and other great apes, Venema explains why biologists think humanity shares an evolutionary history with the great apes—and, in fact, with all life on Earth. Focusing on pseudogenes, Venema concludes the case for common descent by discussing the widespread occurrence of nonfunctional DNA sequences located throughout the genomes of humans and the great apes—usually in corresponding locations in these genomes. Venema argues that these once-functional DNA sequence elements were rendered nonfunctional through mutational events and are retained in genomes as vestiges of evolutionary history.

    Venema then turns his attention to the question of Adam and Eve. He rightly points out that if humanity arose through an evolutionary process, then it must have begun as a population, not a primordial couple—by definition. According to evolutionary biologists, evolution is a population-level phenomenon. That being the case, if humanity arose via evolutionary processes, then there could never have been an Adam and an Eve. In support of this idea, Venema discusses population genetics studies that indicate humanity began as an initial group of around 10,000 individuals. Based on these methods, the genetic diversity among humans today is too great to have come from just two individuals. Venema then goes on to explain how evolutionary biologists reconcile the existence of mitochondrial Eve and Y-chromosomal Adam (understood to be an actual woman and man, respectively) with the idea that humanity began as a population.

    Finally, Venema closes out his portion of the book by offering a critique of the two most common challenges to biological evolution raised by the intelligent design movement: (1) irreducible complexity, and (2) the improbability of biological information arising by chance. Venema does a nice job of explaining why most biologists are not impressed with these challenges to biological evolution, and hence, the case for intelligent design.

    Venema’s Story

    One of the things Venema does exceptionally well in Adam and the Genome is interweave throughout his four chapters the story of his intellectual conversion—from intelligent design to evolutionary creationism.

    Venema recounts growing up in a conservative Christian home and attending a private Christian school where he learned that “‘Darwin’ and ‘evolution’ were evil, of course—things that atheist scientists believed despite their overwhelming flaws, because those scientists had purposefully blinded their eyes to the truth.”2

    Venema tells how, at an early age, he was fascinated with the natural world and wanted to be a scientist. His frustration evident, Venema describes how his dreams of becoming a scientist were waylaid because of the influence of the young-earth creationism that perfused his home, school, and church community.

    Unable to afford a private Christian college, Venema headed off to a secular university, sure that his faith would be challenged by his course work. Enrolled in a premed program (because he felt it safer than pursuing a science major), Venema describes how biology failed to capture his interest, until he began to do research in a university lab as an undergraduate student. That experience transformed him from a lackluster student to one who was highly motivated. It also inspired him to give up on medicine (even though he had the grades to get into medical school) and pursue a career in science. After completing his undergraduate education, Venema earned a PhD in genetics. Venema recounts how his antievolutionary views remained intact throughout his undergraduate and graduate training. In fact, he recalls how deeply impacted he was by the challenges biochemist Michael Behe leveled against Darwinian evolution in his book Darwin’s Black Box. In this book, and elsewhere, Behe argues that biochemical systems are irreducibly complex, and because of this property cannot arise in a stepwise evolutionary process, but must originate at once, with all components simultaneously coming together.

    It was only later that Venema realized the deficiency of Behe’s case and other intelligent design arguments. According to Venema, he eventually concluded that intelligent design was based on god-of-the-gaps reasoning. Venema states, “Over the course of my personal journey away from ID, I came to an uncomfortable conclusion: ID seemed strong only where there was a lack of relevant evidence.”3

    Is Evolutionary Creationism an Overreaction to Ill-Conceived Science-Faith Models?

    Venema does a masterful job of explaining why so many biologists are convinced that life’s design, origin, and history—including humanity’s origin—are best explained by the theory of evolution. Reading through Venema’s chapters, it becomes clear that strong evidential support exists for the theory of evolution, and along with it, human evolution. But, in my view, Venema doesn’t tell the full story. There are also significant events in life’s history that evolutionary theory fails to explain—for example, the origin and design of biochemical systems. In fact, Venema readily acknowledges the scientific community’s failure to explain the origin of life through evolutionary means. It was this failure, combined with the elegant, sophisticated, and ingenious designs of biochemical systems, that convinced me that life’s origin and design at a molecular level must be the handiwork of a Creator. Despite Venema’s assertion, when it comes to the origin and design of biochemical systems, the case for intelligent design, and hence, a Creator’s role in life’s origin, has become stronger over the last three decades—not because of our ignorance, but because of what we have learned about the origin-of-life problem and the structure and function of biochemical systems.

    Yet having staked out and defended this claim in Origins of Life, The Cell’s Design, and Creating Life in the Lab, I am sympathetic to the critique Venema levels against: (1) Behe’s idea of irreducible complexity, and (2) the popular claim made by many Christian apologists that evolutionary mechanisms cannot generate biological information. Like Venema, at one time I found both arguments compelling. But as I carefully listened to the rebuttals to these arguments from origin-of-life researchers and evolutionary biologists over the years, I found myself less convinced that these specific arguments represent valid critiques of abiogenesis and evolutionary theory. (For more details, see the Resources section of this review.)

    Unlike Venema, I didn’t abandon progressive creationism for evolutionary creationism when I soured on these two popular design arguments. Why? In spite of the limitations of these two arguments, I am more convinced than ever that the origin of life and the design of biochemical systems can’t be explained by evolutionary mechanisms. The case for a Creator doesn’t rise and fall on the validity of the arguments from irreducible complexity and the improbability of evolutionary mechanisms generating information. Instead, as I outline in a recently released video, How to Make a Case for Biochemical Design, the case for God’s role in the genesis of life and design of biochemical systems finds its basis in several different lines of evidence that collectively form a powerful weight-of-evidence case for biochemical design.

    Yet Venema doesn’t see it that way, even though he acknowledges the challenges facing an evolutionary explanation for life’s origin. Why?

    I am sure Venema would answer that his reluctance to embrace any form of intelligent design/creationism stems from the overwhelming evidence for common descent and human evolution. But given his story, I can’t help but wonder if there is more to it. I can’t help but wonder if Venema’s move away from intelligent design to evolutionary creationism isn’t, in part, an overreaction to feeling duped by well-meaning Christians who authoritatively taught flawed scientific ideas as truth. I can’t help but wonder if Venema’s embrace of mainstream scientific ideas about evolution finds some motivation in the safety of this approach. By embracing evolutionary creationism, he will never be at odds with mainstream scientific thinking again. Those of us who espouse ideas about the design, origin, and history of life outside of the scientific mainstream know the cost of adopting these views. All of us have been ridiculed and dismissed by skeptics and people in the scientific community simply because we have the impertinence to challenge mainstream scientific ideas regarding origins and the temerity to claim that the evidence points to God’s role in the origin and design of the universe and life.

    Over the years, I have gotten to know several evolutionary creationists whose stories are similar to Venema’s. I have often heard evolutionary creationists express disappointment at having been unintentionally misled, when they were young and scientifically naïve, by well-meaning Christians who taught them young-earth creationism, only to later discover the scientific deficiencies of that idea. It seems to me that in abandoning young-earth creationism, they, like Venema, have moved to the opposite extreme, rejecting any science-faith model that doesn’t fully embrace mainstream scientific ideas—even if those ideas challenge key biblical doctrines.

    In fact, I have had many evolutionary creationists say to me both publicly and privately: If evangelical Christians don’t accept the evolutionary paradigm, we will lose all credibility with the scientific community. I have heard evolutionary creationists argue that evangelical Christianity must adapt to the reality of evolution if the Christian faith is to remain relevant.

    I will address these concerns more fully in part 2 of this review. For now, Venema’s story serves as a cautionary tale for all of us involved in science-faith discussions. We need to make sure that our ideas are scientifically credible, even if they lie outside the scientific mainstream. It is important that we faithfully communicate the scientific consensus and why the scientific community holds to it before we offer alternative models. We also need to be willing to acknowledge the shortcomings of our approach and models, whether our ideas fall within or outside the mainstream. Young-earth, old-earth, and evolutionary creationists alike need to exercise humility when it comes to advocating for their views. Perhaps if these practices were more commonplace, extreme views such as evolutionary creationism (and young-earth creationism) wouldn’t hold such sway.

    What Motivations Influence My Views?

    Venema’s story has caused me to reflect on my own intellectual journey: from an agnostic to a theist; from a theist to a Christian, who embraced theistic evolution; and, finally, from an adherent of theistic evolution to one who now espouses progressive creationism. Do I hold my views based on evidence alone? Or are there other motivating factors? Like Venema, I would like to think that I hold my views because they best account for all the evidence, both scientific and biblical. But maybe I have a deep-seated skepticism of biological evolution because I, too, felt duped by well-meaning biology professors who taught me that the case for the evolutionary paradigm was airtight when, in fact, I later learned it was not. I feel as if my journey to faith in Christ was waylaid because of my wholehearted embrace of the evolutionary paradigm, again based on a simplistic treatment of biological evolution. At one time in my life, I reasoned that if evolution can account for everything, then why is a Creator needed? God becomes superfluous in the evolutionary paradigm.

    My point is this: a complex interplay of several factors determines the views that each of us holds, including the relationship between science and the Christian faith. Sincere, thoughtful, highly educated Christians can look at the same scientific and biblical data and come to rather different conclusions. It is for this reason that when we discuss science-faith issues with others (both inside and outside the Church) we need to move past the evidence and learn about one another’s experiences and control beliefs. In doing so, hopefully we realize that no one position uniquely holds the scientific or biblical high ground.

    Unfortunately, like many evolutionary creationists, Venema writes as if evolutionary creationism is the only scientifically credible view. And McKnight, like other evolutionary creationists, adopts the posture that it is exegetically unreasonable to embrace a traditional biblical view of human origins. But, what if one reaches a different conclusion? Namely, that Scripture teaches humanity was created in God’s image through direct and personal Divine action and that all humanity comes from Adam and Eve? Does that mean, as Christians, we must abandon the scientific high ground?

    In part 2, I will argue that the answer is no. I maintain that it is possible to hold to a scientifically credible view of human origins, while at the same time embracing the traditional biblical view of human origins. However, to do so, we must abandon methodological naturalism as the philosophical framework for science. If we do so, we will find the theory of evolution doesn’t uniquely account for the data from comparative genomics and population genetics. It is possible to present a robust scientific model (see Who Was Adam?) that explains the shared similarities and differences found in the genomes of humans and the great apes as shared design features—manifestations of an archetypal design.

    Resources—Theological Reflections on Adam and the Genome

    Resources—An Old-Earth Creationist Perspective on the Scientific Case for a Traditional Biblical View of Human Origins

    Resources—The Challenges to Two Popular Design Arguments

    Endnotes
    1. Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017).
    2. Ibid., 1.
    3. Ibid., 90.
  • Conservation Biology Studies Elicit Doubts about the First Human Population Size

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 26, 2017

    Adam named his wife Eve, because she would become the mother of all the living.

    –Genesis 3:20

    Prior to joining Reasons to Believe in June of 1999, I spent seven years working in research and development for a Fortune 500 company. Part of my responsibilities included method development. My lab worked on developing analytical methods to measure the active ingredients in our products. But more interesting to me was the work we did designing methods to predict consumer responses to our product prototypes.

    Before we could deploy either type of method, it was critical for us to ensure that the techniques we developed would generate reliable, accurate data that could be used to make sound business decisions.

    Method Validation

    Researchers assess the soundness of scientific methods through a process called method validation. A key part of this process involves applying the method to “known” samples. If the method produces the expected result, it passes the test. To validate the analytical methods my team developed to measure active ingredients, for example, we would carefully weigh and add specified amounts of the actives to prepared samples and then use the newly developed method to measure the ingredient levels. If we got the right results, it gave us the confidence to apply the method to real-world samples.
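
    To make the spike-and-recovery logic concrete, here is a minimal sketch in Python. The function names, data layout, and 5 percent tolerance are my illustrative choices, not details from any actual lab protocol:

        # A minimal sketch of spike-and-recovery validation (names, data
        # layout, and the 5% tolerance are illustrative choices, not
        # details from any actual lab protocol).

        def validate_method(measure, known_samples, tolerance=0.05):
            """known_samples: (sample, true_value) pairs prepared by
            spiking known amounts of the active ingredient."""
            for sample, true_value in known_samples:
                result = measure(sample)
                # Fail if the measurement misses the known value by more
                # than the allowed fractional error.
                if abs(result - true_value) / true_value > tolerance:
                    return False
            return True  # the method recovers every known value

    The same pass/fail logic applies whether the “method” is a chemical assay or a statistical model: feed it cases where the answer is already known, and check whether it returns that answer.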

    A Controversy about the Size of the First Human Population

    Currently, a set of scientific methods resides at the center of an important controversy among conservative and evangelical Christians about the historicity of Adam and Eve. The methods in question are designed to measure the population size of the first humans. Even though the traditional reading of the biblical creation accounts indicates humanity began as a primordial pair—an Adam and Eve—all three classes of methods (described below) indicate that the initial human population consisted of several thousand individuals, not two, raising serious questions about the traditional Christian understanding of humanity’s origin. Some evangelical Christians argue that we must accept these findings and reinterpret the biblical creation accounts, regardless of the theological consequences. Others (myself included) question the validity of these methods. It is important to make sure that these techniques perform as intended before abandoning the traditional biblical view of humanity’s beginnings.

    The Importance of Adam and Eve’s Historicity

    The finding that humanity began as a population, not a pair, causes quite a bit of consternation for me and many other evangelical and conservative Christians. Adam and Eve’s existence and role as humanity’s founding couple are not merely academic concerns. For the Christian faith, the question of Adam and Eve’s historicity is more significant than any business decision that relied on the analytical methods my lab developed. (Data from my lab was used to make some decisions that involved millions of dollars.) The historicity of Adam and Eve impacts key doctrines of the Christian faith, such as inerrancy, the image of God, the fall, original sin, marriage, and the atonement.

    Again, given the profound implications of abandoning Adam and Eve’s historicity, it is important to know if these population size methods perform as intended. They are a big part of the reason evolutionary biologists and geneticists reject Adam and Eve’s existence. To put it another way, are these methods valid, yielding accurate, reliable results?

    Measuring the Initial Human Population Size

    Currently, geneticists use three approaches to estimate the size of the initial human population.1

    1. The most prominent method finds its basis in mathematical expressions relating the genetic variability among humans today to mutation rate and initial population size. Using these relationships, geneticists develop mathematical models that allow them to calculate the initial population size of the first humans after measuring the genetic variability of contemporary human population groups (and assuming a constant mutation rate). (A simple sketch of this relationship appears after this list.)
    2. A more recently developed approach relies on a phenomenon called linkage disequilibrium to measure the initial population size of the first humans.
    3. The final approach (also relatively new on the scene) makes use of a process called incomplete lineage sorting to estimate humanity’s initial population size.
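
    To give a feel for the first approach, here is a minimal sketch in Python built on the standard neutral-theory relationship for diploid, autosomal loci (expected nucleotide diversity π ≈ 4·Ne·μ), rearranged to estimate effective population size. The diversity and mutation-rate values are illustrative placeholders, not figures from the studies discussed here:

        # Under the standard neutral model, expected nucleotide diversity
        # for diploid autosomal loci is pi = 4 * Ne * mu, so observed
        # diversity plus an assumed per-site, per-generation mutation
        # rate yields an effective population size estimate.
        # (Illustrative values; real analyses use far richer models.)

        def effective_population_size(pi, mu):
            return pi / (4.0 * mu)

        pi = 1.0e-3                    # observed pairwise diversity per site
        for mu in (1.0e-8, 2.5e-8):    # two plausible assumed mutation rates
            print(mu, effective_population_size(pi, mu))
        # Prints ~25,000 and ~10,000: the estimate scales inversely
        # with the assumed mutation rate.

    Even in this stripped-down form, the estimate is only as good as the assumed mutation rate and the idealizations baked into the model, which is why validation matters.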

    Are Population Size Methods Valid?

    So are these methods valid? When I have asked evolutionary creationists this question, they usually hem and haw, and then reply: These methods are based on sound, well-understood phenomena, and therefore should be considered reliable.

    I believe that to be true. The methods do appear to be based on sound principles. But that is not enough—not if we are to draw rigorous scientific conclusions. Scientific methods can only be considered reliable if they have been validated. When I worked in R&D, if I had insisted that my bosses accept the results of methods I developed because they were based on sound principles but lacked validation data, I would have been fired.

    So given the importance of the historical Adam and Eve, why should we accept anything less for population size measurements?

    To my surprise, when I survey the scientific literature, I can’t find any studies that demonstrate successful validation of any of these three population size methods. For me, this is a monumental concern, particularly given the importance of Adam and Eve’s historicity. The fact that these methods haven’t been validated provides justification for Christians to hold the results of these studies at arm’s length.

    In fact, when it comes to the first category of methods, I find something even more troubling: studies in conservation biology raise serious questions about the validity of these methods. Of course, we can’t directly validate methods designed to measure the numbers of the first humans because we don’t have access to that initial population. But we can gain insight into the validity of these methods by turning to work in conservation biology. When a species is on the verge of extinction, conservationists often know how many individuals remain. And because genetic variability is critical for recovery and survival, conservation biologists monitor the genetic diversity of endangered species. In other words, conservation biologists have the means to validate population size methods that rely on genetic diversity.

    In my book Who Was Adam? I discuss three separate studies (involving mouflon sheep, Przewalski’s horses, and gray whales) in which the initial populations were known. When the researchers measured the genetic diversity generations after the initial populations were established, the genetic diversity was much greater than expected—again, based on the models relating genetic diversity and population size.2 In other words, this method failed validation in each of these cases. If researchers had used the genetic variability to estimate the original population sizes, the estimates would have come out larger than the populations actually were.

    In Who Was Adam? I also cite studies that raise doubts about the ability of linkage disequilibrium methods to measure population sizes accurately.3 Not only has this method never been validated; it, too, has failed validation.

    Recently, I conducted another survey of the scientific literature to see if I had missed any important studies involving population size and genetic diversity. Again, I was unable to find any studies that demonstrated the validity of any of the three approaches used to measure population size. Instead, I found three more studies indicating that when genetic diversity was measured for animal populations on the verge of extinction it was much greater than expected, based on the predictions derived from the mathematical models.4

    The Surprisingly High Genetic Diversity of White-Tailed Deer in Finland

    Of specific interest is a study published in 2012 by researchers from Finland. These scientists monitored the genetic diversity (focusing on 14 microsatellite DNA loci in the genome) of a population of white-tailed deer introduced into Finland from North America in 1934.5 The initial population consisted of three females and one male, and it has since grown to between 40,000 and 50,000 individuals. This population has remained isolated from all other deer populations since its introduction.

    Though the researchers found that the genetic diversity of this population was lower than for a comparable population in Oklahoma (reflecting the genetic bottleneck that occurred when the original members of the population were relocated), it was still surprisingly high. Because of this unexpectedly high genetic diversity, size estimates for the initial population would be much greater than four individuals. To put it another way, this population size method fails validation—one more time.

    Why is this approach to measuring population sizes so beleaguered, when the method is based on sound, well-understood principles? In Who Was Adam? (and elsewhere), I point out that the equations undergirding this method are simplified, idealized mathematical relationships that do not take into account several relevant factors that are difficult to mathematically model, such as population dynamics through time and across geography.
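
    To illustrate how much those idealizations matter, consider the textbook constant-size Wright-Fisher expectation, under which heterozygosity decays as H_t = H_0(1 − 1/(2N))^t. The sketch below uses made-up numbers (not values from the Finnish study) to show how higher-than-predicted diversity inflates a back-solved population size:

        # Idealized Wright-Fisher expectation: heterozygosity decays as
        # H_t = H_0 * (1 - 1/(2N))**t in a population of constant size N.
        # Back-solving for N shows how observed diversity that exceeds
        # the prediction inflates the inferred population size.
        # (Toy numbers, not data from any cited study.)

        def expected_heterozygosity(h0, n, t):
            return h0 * (1.0 - 1.0 / (2.0 * n)) ** t

        def implied_population_size(h0, ht, t):
            return 1.0 / (2.0 * (1.0 - (ht / h0) ** (1.0 / t)))

        h0, t = 0.60, 25  # illustrative founding diversity, generation count
        print(expected_heterozygosity(h0, 4, t))     # ~0.02 predicted for 4 founders
        print(implied_population_size(h0, 0.45, t))  # ~44 if diversity stays at 0.45

    Under these toy numbers, a population that actually began with four founders but retained most of its diversity would be “measured” at more than ten times its true founding size, which is exactly the direction of error the validation studies report.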

    Recently, conservation biologists have identified another factor influencing genetic diversity that confounds the straightforward application of the equations used to calculate initial population size: long generation times. That is, animals with long generation times display greater-than-anticipated genetic diversity, even when the population begins with a limited number of individuals.6

    This finding is significant when it comes to discussions about Adam and Eve’s historicity. Human beings have long generation times—longer than white-tailed deer. From a creation model perspective, these long generation times help to explain why humanity displays such relatively large genetic diversity, even though we come from a primordial pair. And it suggests that the initial population size estimates for modern humans are likely exaggerated.

    So did humanity originate as a population or a primordial pair?

    The claims of some geneticists and evolutionary biologists notwithstanding, it’s hard to maintain that humanity began as a population of thousands of individuals, because the methods used to generate these numbers haven’t been validated—in fact, work in conservation biology makes me wonder if these methods are trustworthy at all. Given their track record, I would never have used these methods when I worked in R&D to make a business decision.

    Resources

    Endnotes
    1. For a recent and accessible discussion of these methods see Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017), 45–48.
    2. Fazale Rana with Hugh Ross, Who Was Adam? A Creation Model Approach to the Origin of Humanity (Covina, CA: RTB Press, 2015), 349–353.
    3. Ibid.
    4. Catherine Lippé, Pierre Dumont, and Louis Bernatchez, “High Genetic Diversity and No Inbreeding in the Endangered Copper Redhorse, Moxostoma hubbsi (Catostomidae, Pisces): The Positive Sides of a Long Generation Time,” Molecular Ecology 15 (June 2006): 1769–1780, doi:10.1111/j.1365-294X.2006.02902.x; Frank Hailer et al., “Bottlenecked But Long-Lived: High Genetic Diversity Retained in White-Tailed Eagles upon Recovery from Population Decline,” Biology Letters 2 (June 2006): 316–319, doi:10.1098/rsbl.2006.0453; Jaana Kekkonen, Mikael Wikström, and Jon E. Brommer, “Heterozygosity in an Isolated Population of a Large Mammal Founded by Four Individuals Is Predicted by an Individual-Based Genetic Model,” PLoS ONE 7 (September 2012): e43482, doi:10.1371/journal.pone.0043482.
    5. Kekkonen, Wikström, and Brommer, “Heterozygosity in an Isolated Population.”
    6. Lippé, Dumont, and Bernatchez, “High Genetic Diversity and No Inbreeding”; Hailer et al., “Bottlenecked but Long-Lived.”
  • Does Radiocarbon Dating Prove a Young Earth? A Response to Vernon R. Cupps

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 19, 2017

    In my experience, one of the most persuasive scientific claims for a young Earth is the detection of carbon-14 in geological samples such as coal and fossilized dinosaur remains.1 According to young-earth creationists (YECs), if the coal samples and fossils are truly millions of years old (as the scientific community claims), then there shouldn’t be any trace of carbon-14 in these samples. Why? It’s because the half-life of carbon-14 is about 5,700 years, meaning that all the detectable carbon-14 should have disappeared from the samples long before they reach even 100,000 years of age.

    In Dinosaur Blood and the Age of the Earth, I respond to this young-earth argument, suggesting three mechanisms that can account for carbon-14 in fossil remains (and by extension, in geological materials) from an old-earth perspective.

    When YECs detect carbon-14, they find it at low levels, corresponding to age dates older than 30,000 years (not 3,000 to 6,000 years old, as their model predicts, by the way). These low levels make it reasonable to think that some of the carbon-14 signal comes from contamination of the sample by, say, microorganisms picked up from the environment.

    These low levels also make it conceivable that some of the detected carbon-14 is due to a ubiquitous carbon-14 background. Cosmic rays are continuously producing radiocarbon from nitrogen-14. Because of this nonstop production, carbon-14 is everywhere and will show up at extremely low levels in any measurement that is made, even if it isn’t present in the actual sample.

    It is also possible that some of the carbon-14 in the fossil and coal samples arises from the in situ conversion of nitrogen-14 to carbon-14 driven by the decay of radioactive elements in the environment. Because fossils and coal derive from once-living organisms, there will be plenty of nitrogen-14 contained in these specimens. For example, environmental uranium and thorium would readily infuse into the interiors of fossils, and as these elements decay, the high energy they release will convert nitrogen-14 to carbon-14.

    Employing a “back-of-the-envelope” flux analysis, Vernon Cupps—a YEC affiliated with the Institute for Creation Research—has challenged my assessment, concluding that neither (1) the production of carbon-14 from cosmic radiation nor (2) the decay of radioactive isotopes in the environment is sufficient to account for the carbon-14 detected in fossil and geological samples.2

    Though I think his analysis may be overly simplistic, let’s assume Cupps’s calculations are correct. He still misses my point. In Dinosaur Blood and the Age of the Earth, I argue that all three possible sources simultaneously contribute to the detectable carbon-14. In other words, while no single source may fully account for the detectable carbon-14, when combined, all three can. Cupps’s analysis neglects the contribution of the ubiquitous background carbon-14 and possible sources of contamination from the environment.

    Ironically, the low levels of carbon-14 detected in fossils and geological specimens by YECs actually argue against a young Earth, not an old Earth.

    How can that be?

    If fossil and geological specimens are between 3,000 and 6,000 years old, then somewhere between 50 and 70 percent of the original carbon-14 should remain in the samples. This amount of material should generate a strong carbon-14 signal. The fact that these specimens all age-date to 30,000 to 45,000 years old means that only about 3 percent or less of the original carbon-14 remains in these samples—if the results of this measurement are taken at face value. It becomes difficult to explain this result if these samples are less than 6,000 years old. On the other hand, the weak carbon-14 signal measured by YECs does make sense if the carbon-14 does not reflect the material originally in the sample, but instead stems from a combination of (1) contamination from the environment, (2) ubiquitous background radiocarbon, and/or (3) irradiation of the samples by isotopes such as uranium or thorium in the environment.
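
    The arithmetic behind these percentages is simple exponential decay, sketched below in Python (assuming the commonly cited carbon-14 half-life of about 5,730 years):

        import math

        HALF_LIFE = 5730.0  # carbon-14 half-life in years (commonly cited value)

        def fraction_remaining(t_years):
            # Fraction of the original carbon-14 left after t years.
            return 0.5 ** (t_years / HALF_LIFE)

        def apparent_age(fraction):
            # Age implied by a measured fraction of the original carbon-14.
            return -HALF_LIFE * math.log2(fraction)

        print(fraction_remaining(3000))   # ~0.70: about 70% left at 3,000 years
        print(fraction_remaining(6000))   # ~0.48: about 48% left at 6,000 years
        print(fraction_remaining(30000))  # ~0.027: under 3% at 30,000 years
        print(fraction_remaining(45000))  # ~0.004: under 0.5% at 45,000 years

    A sample only 3,000 to 6,000 years old should therefore yield a strong signal, while the weak signals YECs report correspond, via the apparent-age calculation, to tens of thousands of years.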

    To put it plainly, it is difficult to reconcile the carbon-14 measurements made by YECs with fossil and geological samples that are 3,000 to 6,000 years old, Cupps’s analysis notwithstanding.

    On the other hand, an old-earth perspective has the explanatory power to account for the low levels of carbon-14 associated with fossils and other geological samples.

    Resources

    Endnotes
    1. Vernon R. Cupps, “Radiocarbon Dating Can’t Prove an Old Earth,” Acts & Facts, April 2017, http://www.icr.org/article/9937.
    2. Ibid.
