Where Science and Faith Converge
  • Molecular Logic of the Electron Transport Chain Supports Creation

    Feb 27, 2019

    “It was said that some scientists attended the oxidative phosphorylation sessions of the Federation meetings because they knew a good punch up was on the cards.”

    —John Prebble, Department of Biological Sciences, University of London

    It has been described as one of the most “heated and acrimonious debates in biochemistry during the twentieth century,”1 and its resolution carries implications for a different ideological conflict—that of the origin of life.

    This battle royale (dubbed the Ox Phos Wars) took place in the 1960s and early 1970s. At that time, biochemists were trying to decipher the mechanism used by mitochondria to produce the high-energy compound called ATP (adenosine triphosphate) through a process called oxidative phosphorylation (Ox Phos for short). Many components of the cell’s machinery use ATP to power their operations.


    Figure 1: A schematic of the synthesis and breakdown cycle of ATP and ADP. Image credit: Shutterstock

    So acrimonious was the debate that scientists involved in this controversy often came close to blows when publicly debating the mechanism of oxidative phosphorylation. Much of the controversy centered around an idea known as the chemiosmotic theory, proposed by biochemist Peter Mitchell. He argued that the electron transport chain generates a proton gradient across the mitochondrial inner membrane and, in turn, exploits that gradient through a coupling process to drive the synthesis of ATP from ADP (adenosine diphosphate) and inorganic phosphate (see figures 1 and 2). (The reverse reaction liberates chemical energy that drives many biochemical processes.)


    Figure 2: A schematic of the chemiosmotic theory. Image credit: Shutterstock

    At the time, this idea was met with a large measure of skepticism by biochemists. It didn’t fit with the orthodoxy of classical biochemistry. Biochemists found Mitchell’s ideas hard to understand and his personality abrasive, both of which fueled the acrimony.

    Origin-of-life researcher Leslie Orgel referred to the chemiosmotic theory as one of the most counterintuitive ideas to ever come out of biology, comparing it to the ideas that formed the foundations of quantum mechanics and relativity.2

    Many biochemists preferred the chemical theory of oxidative phosphorylation over Mitchell’s chemiosmotic theory. Researchers thought that the phosphate group added to ADP was transferred from one of the components of the electron transport chain. In an attempt to support this idea, many biochemists frantically searched for a chemical intermediate with a high-energy phosphate moiety that could power the synthesis of ATP.

    The chemical theory was based on a process called substrate-level phosphorylation, exemplified by two reactions that form ATP during glycolysis. In one reaction, 1,3-diphosphoglycerate transfers one of its phosphate groups to ADP to form ATP. (In this case, 1,3-diphosphoglycerate serves as the intermediate with a high-energy phosphate moiety.) In the second reaction, phosphoenolpyruvate transfers a phosphate group to ADP to make ATP, with phosphoenolpyruvate functioning as the intermediate bearing a high-energy phosphate residue. (See figure 3.)

    As it turns out, the elusive intermediate was never found, forcing adherents of the chemical theory to abandon their model. Peter Mitchell’s idea won the day. In fact, Mitchell was awarded the Nobel Prize in Chemistry in 1978 for his contribution to understanding the mechanism of oxidative phosphorylation.

    Today, biochemists readily recognize the importance of proton gradients and the chemiosmotic process. Proton gradients are pervasive in living systems. Mitochondria are not alone. Chloroplasts rely on proton gradients during the process of photosynthesis. Bacteria and archaea also use proton gradients across their plasma membranes to harvest energy. Cells use proton gradients to transport material across cell membranes. And proton gradients even power the bacterial flagellum.

    Now that oxidative phosphorylation is understood, some evolutionary biologists and origin-of-life researchers have turned their attention to two questions: (1) How did chemiosmosis originate? and (2) Why are proton gradients so central to biochemical operations?

    Oxidative Phosphorylation and the Evolutionary Paradigm

    For many evolutionary biologists, understanding the origin of oxidative phosphorylation (and the use of proton gradients, in general) assumes a position of unique prominence because of the central role this process plays in harvesting energy in both prokaryotic and eukaryotic organisms. In other words, understanding the origin of oxidative phosphorylation (and use of proton gradients) is central to understanding the origin of life and the fundamental design of biochemical systems.

    Because the use of proton gradients in living systems is odd and counterintuitive, it is tempting for many origin-of-life researchers and evolutionary biologists to conclude that chemiosmosis reflects the outworking of a historically contingent evolutionary process that relied on existing systems and designs that were co-opted and, in turn, modified. This notion is reinforced by the work of origin-of-life researcher Nick Lane.

    Lane and his collaborators conclude that proton gradients must have been integral to the biochemistry of LUCA (the last universal common ancestor) because proton gradients are a near-universal feature of living systems. If so, then the use of proton gradients must have emerged during the origin-of-life process before LUCA originated. Lane and his team go so far as to propose that the first proto-cells emerged near hydrothermal vents and made use of naturally occurring proton gradients found in these environments as their energy source.3

    Once this system was in place, the strategy was retained in the cell lines that diverged from these early proto-cellular entities as the electron transport chain evolved from a simple, naturally occurring vent process to the complex process found in both prokaryotic and eukaryotic organisms. In other words, it would seem that the odd, counterintuitive nature of proton gradients reflects the happenstance outworking of chemical evolution that began when the naturally occurring proton gradients were co-opted in the early stages of chemical evolution.

    But Lane’s recent insight indicates that, though counterintuitive, the use of proton gradients to harvest the energy required to make ATP makes sense, displaying an exquisite molecular rationale.4 And if so, it forces a rethink of the explanation for the origin of chemiosmosis. To appreciate this shift in perspective, it is helpful to understand the process of oxidative phosphorylation, beginning with glycolysis and the Krebs cycle.

    Glycolysis and the Krebs Cycle

    The glycolytic pathway converts the fuel molecule glucose (a 6-carbon sugar) into two pyruvate molecules (3-carbon). This process proceeds through eleven chemical intermediates and nets 2 molecules of ATP (generated through substrate-level phosphorylation) and 2 molecules of NADH (nicotinamide adenine dinucleotide). NADH harbors high-energy electrons captured from the energy liberated as glucose is broken down and oxidized. As it turns out, the NADH molecules play a central role in generating most of the ATP produced when a sugar molecule breaks down.


    Figure 3: Glycolysis. Image credit: Shutterstock

    The pyruvate generated by glycolysis is transported across the mitochondrial inner membrane into the matrix of the organelle. Here pyruvate is transformed into a molecule of carbon dioxide and a 2-carbon intermediate called acetyl CoA. This process generates 2 additional molecules of NADH.

    In turn, the Krebs cycle converts each acetyl CoA molecule into two molecules of carbon dioxide. (The net reaction: a 6-carbon glucose molecule breaks down into 6 carbon dioxide molecules.) During the process, the breakdown of each acetyl CoA molecule generates 1 ATP molecule (via substrate-level phosphorylation) and 3 molecules of NADH. Additionally, 1 molecule of FADH2 is formed. Like NADH, this molecule possesses high-energy electrons. (See figure 4.)


    Figure 4: Krebs cycle. Image credit: Shutterstock
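
    For readers who like to keep score, the counts given in the last few paragraphs can be tallied on a per-glucose basis. Here is a minimal Python sketch that does nothing more than add up the numbers stated above:

    # Per-glucose bookkeeping, using only the counts given in the text above.
    # Glycolysis: glucose -> 2 pyruvate, netting 2 ATP and 2 NADH.
    # Pyruvate -> acetyl CoA: 1 NADH and 1 CO2 per pyruvate (x2 per glucose).
    # Krebs cycle: per acetyl CoA, 1 ATP, 3 NADH, 1 FADH2, and 2 CO2 (x2 per glucose).

    tally = {"ATP_substrate_level": 0, "NADH": 0, "FADH2": 0, "CO2": 0}

    # glycolysis
    tally["ATP_substrate_level"] += 2
    tally["NADH"] += 2

    # pyruvate oxidation (2 pyruvate per glucose)
    tally["NADH"] += 2 * 1
    tally["CO2"] += 2 * 1

    # Krebs cycle (2 acetyl CoA per glucose)
    tally["ATP_substrate_level"] += 2 * 1
    tally["NADH"] += 2 * 3
    tally["FADH2"] += 2 * 1
    tally["CO2"] += 2 * 2

    print(tally)
    # {'ATP_substrate_level': 4, 'NADH': 10, 'FADH2': 2, 'CO2': 6}
    # The 6 CO2 match the net reaction noted above; the 10 NADH and 2 FADH2 carry
    # the high-energy electrons that feed the electron transport chain (next section).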

    The Electron Transport Chain and Oxidative Phosphorylation

    The high-energy electrons of NADH and FADH2 are transferred to the electron transport chain, which is embedded in the inner membrane.

    Four protein complexes (dubbed I, II, III, and IV) make up the electron transport chain. The high-energy electrons from NADH and FADH2 are shuffled from one protein complex to the next, and several of these transfers release energy that complexes I, III, and IV use to transport protons from the mitochondrial matrix across the inner membrane, establishing the proton gradient. (See figure 5.) Oxygen is the final electron acceptor in the electron transport chain. The electrons transferred to oxygen combine with protons to form water.

    Because protons are positively charged, the exterior region outside the inner membrane is positively charged and the interior region is negatively charged. The charge differential created by the proton gradient is analogous to a battery and the inner membrane is like a capacitor.


    Figure 5: Electron Transport Chain. Image credit: Shutterstock
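
    To put a rough number on the “battery” analogy, the strength of the gradient is usually expressed as a proton-motive force. The formula and the typical values in the short Python sketch below are standard textbook figures, not numbers drawn from this article, so treat it as an illustrative back-of-the-envelope calculation only:

    # Illustrative only: standard textbook formula and typical values (not from this article).
    # The proton-motive force combines the electrical potential across the inner
    # membrane with the pH difference:
    #   pmf = delta_psi - (2.303 * R * T / F) * delta_pH

    R = 8.314      # gas constant, J / (mol K)
    T = 310.0      # temperature in kelvin (~37 C)
    F = 96485.0    # Faraday constant, C / mol

    delta_psi = -0.160   # volts; matrix negative relative to the intermembrane space
    delta_pH = 0.5       # matrix more alkaline than the intermembrane space

    pmf = delta_psi - (2.303 * R * T / F) * delta_pH
    print(f"proton-motive force: about {pmf * 1000:.0f} mV")   # roughly -190 mV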

    The coupling of the proton gradient to ATP synthesis occurs as a result of the flow of positively charged protons through the F0 component of a protein complex called the F1-F0 ATPase (also embedded in the mitochondrial inner membrane). The F1-F0 ATPase uses this flux to convert electrochemical energy into mechanical energy that, in turn, is used to drive the formation of ATP from ADP and inorganic phosphate.

    The Molecular Logic of Proton Gradients

    So, why are chemiosmosis and proton gradients universal features of living systems? Are they an outworking of a historically contingent evolutionary process? Or is there something more at work?

    Even though proton gradients seem counterintuitive at first glance, the use of proton gradients to power the production of ATP and other cellular processes reflects an underlying ingenuity and exquisite molecular logic. Research shows that proton gradients allow the cell to extract as much energy as possible from the breakdown of glucose (and other biochemical foodstuffs).5 On the other hand, if ATP were produced exclusively by substrate-level phosphorylation, using a high-energy chemical intermediate, much of the energy liberated from the breakdown of glucose would be lost as heat.

    To understand why this is so, consider this analogy. Suppose people in a particular community receive their daily allotment of water in a 10-gallon bucket. The water they receive each day is retrieved from a reservoir with a 12-gallon bucket and then transferred to their bucket. In the process, two gallons of water is lost. Now, suppose the water from the reservoir is retrieved with a 12-gallon bucket but dumped into a secondary reservoir that has a tap. The tap allows each 10-gallon bucket to be filled without losing two gallons. Though the procedure is indirect and more complicated, using the secondary reservoir to distribute water is more efficient in the long run. In the first scenario, it takes 60 gallons of water (transferred from the reservoir in five 12-gallon buckets) to fill up five 10-gallon buckets. In the second scenario, the same amount of water transferred from the reservoir can fill six 10-gallon buckets. With each transfer, the additional two gallons accumulate in the reservoir until there is enough water to fill another 10-gallon bucket.

    With substrate-level phosphorylation, when the phosphate group is transferred from the high-energy intermediate to ADP to form ATP, excess energy released during the transfer is lost as heat. It takes 7 kcal/mole of energy to add a phosphate group to ADP to form ATP. Let’s say that the hypothetical chemical intermediate releases 10 kcal/mole when its high-energy phosphate bond is broken. Three kcal/mole of energy is lost.

    On the other hand, using the electron transport chain to build up a proton gradient is like the secondary reservoir in our analogy. It allows that extra 3 kcal/mole to be stored in the proton gradient. We can think of the F1-F0 ATPase as analogous to the tap. It uses 7 kcal/mole of the energy released when protons flow through its channels to drive the formation of ATP from ADP and inorganic phosphate. The unused energy from the proton gradient continues to accumulate until enough energy is available to form another ATP molecule. So, in our hypothetical scenario, if the cell used substrate-level phosphorylation to make ATP, 70 molecules of the high-energy intermediate would yield 70 molecules of ATP, with the equivalent of 210 kcal/mole of energy released as heat. But using the electron transport chain to generate proton gradients yields 100 ATP molecules with no energy lost as heat.
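
    The arithmetic in the last two paragraphs can be captured in a few lines of Python. This is only a sketch of the article’s illustrative numbers (the 7 and 10 kcal/mole figures are hypothetical round values, not measured ones):

    # Sketch of the energy accounting above, using the article's illustrative numbers.
    ENERGY_PER_INTERMEDIATE = 10.0   # kcal/mole released by the hypothetical intermediate
    COST_PER_ATP = 7.0               # kcal/mole needed to phosphorylate ADP to ATP
    N_INTERMEDIATES = 70

    # Substrate-level phosphorylation: one ATP per intermediate; the excess is heat.
    atp_direct = N_INTERMEDIATES
    heat_lost = N_INTERMEDIATES * (ENERGY_PER_INTERMEDIATE - COST_PER_ATP)

    # Chemiosmosis: all of the released energy is banked in the proton gradient
    # first, then withdrawn in 7 kcal/mole increments by the F1-F0 ATPase.
    total_energy = N_INTERMEDIATES * ENERGY_PER_INTERMEDIATE
    atp_via_gradient = int(total_energy // COST_PER_ATP)

    print(atp_direct, heat_lost)      # 70 ATP, 210 kcal lost as heat
    print(atp_via_gradient)           # 100 ATP, nothing lost in this idealized picture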

    Chemiosmotic Theory and the Case for Creation

    The elegant molecular rationale that undergirds the use of proton gradients to harvest energy and to power certain cellular processes makes it unlikely that this biochemical feature reflects the outcome of a historically contingent process that just happened upon proton gradients. Instead, it points to a set of principles that underlie the structure and function of biochemical systems—principles that appear to have been set in place from the beginning of the universe.

    The most obvious and direct way for the first protocells to harvest energy would seemingly involve some type of mechanism that resembled substrate-level phosphorylation, not an indirect and more complicated mechanism that relies on proton gradients. If the origin of chemiosmosis and the use of proton gradients was, indeed, a historically contingent outcome—predicated on the fact that the first protocells just happened to employ a natural proton gradient—it seems almost eerie to think that evolutionary processes blindly stumbled upon what would later become such an elegant and efficient energy-harvesting process, a process necessary for advanced life to be possible on Earth.

    If not for chemiosmosis, it is unlikely that eukaryotic cells and, hence, complex life such as animals, plants, and fungi, could have ever existed. Substrate-level phosphorylation just isn’t efficient enough to support the energy demands of eukaryotic organisms.

    It is also difficult to imagine how the natural proton gradients exploited by the first protocells could have been co-opted and then evolved so quickly into the complex components of the electron transport chain and the F1-F0 ATPase coupling mechanism found in cells that preceded LUCA. Not only are the components of the electron transport chain complex, but they have to work together in an integrated manner to establish the proton gradient across mitochondrial membranes (and the plasma membranes of bacteria and archaea). Without the existence of the F1-F0 ATPase (or some other mechanism) to couple proton gradients to the synthesis of ATP, the generation of proton gradients would be for naught. The origins of the electron transport chain and the F1-F0 ATPase have to coincide.

    On the other hand, the ingenious use of proton gradients and the elegant molecular logic that accounts for their universal use by living systems are exactly the features I would expect if life stems from the work of a Mind. Moreover, the architecture and operation of complex I and the F1-F0 ATPase add to the case for creation. These two complexes are molecular motors that bear a startling similarity to man-made machines, revitalizing the Watchmaker argument for God’s existence.

    As noted, the use of proton gradients points to a set of deep, underlying principles that arise from the very nature of the universe itself and dictate how life must be. The molecular rationale that undergirds the use of proton gradients and their near-universal occurrence in living organisms suggests that proton gradients are an indispensable feature of living organisms. In other words, without the use of proton gradients to harvest energy and drive cellular processes, advanced life would not be possible. Or another way to say it: if life was discovered elsewhere in the universe, it would have to employ proton gradients to harvest energy.

    It is remarkable to think that proton gradients, which are a manifestation of the laws of nature, are, at the same time, precisely the type of system advanced life needs to exist. One way to interpret this “coincidence” is that it serves as evidence that our universe has been designed for a purpose.

    And as a Christian, I find that notion to resonate powerfully with the idea that life manifests from an intelligent Agent—namely, God.

    Resources

    Endnotes
    1. John Prebble, “Peter Mitchell and the Ox Phos Wars,” Trends in Biochemical Sciences 27 (April 1, 2002): 209–12, doi:10.1016/S0968-0004(02)02059-5.
    2. Leslie E. Orgel, “Are You Serious, Dr. Mitchell?” Nature 402 (November 4, 1999): 17, doi:10.1038/46903.
    3. Nick Lane, John F. Allen, and William Martin, “How Did LUCA Make a Living? Chemiosmosis in the Origin of Life,” Bioessays 32 (2010): 271–80, doi:10.1002/bies.200900131.
    4. Nick Lane, “Why Are Cells Powered by Proton Gradients?” Nature Education 3 (2010): 18.
    5. Nick Lane, “Bioenergetic Constraints on the Evolution of Complex Life,” Cold Spring Harbor Perspectives in Biology 6 (2014): a015982, doi:10.1101/cshperspect.a015982.
  • Electron Transport Chain Protein Complexes Rev Up the Case for a Creator

    Feb 06, 2019

    As a little kid, I spent many a Saturday afternoon “helping” my dad work on our family car. What a clunker.

    We didn’t have a garage, so we parked our car on the street in front of the house. Our home was built into a hillside and the only way to get to our house was to climb a long flight of stairs from the street.

    I wasn’t very old at the time—maybe 6 or 7—so my job was to serve as my dad’s gofer. Instead of asking me to carry his toolbox up and down the flight of stairs, he would send me back and forth when he needed a particular tool. It usually went like this: “Fuz, go get me a screwdriver.” Up and down the stairs I would go. And, when I returned: “That’s the wrong screwdriver. Get me the one with the flat head.” Again, after I returned from another roundtrip on the stairs: “No, the one with the flat head and the blue handle.” Up and down the stairs I went, but again: “Why did you bring all of the screwdrivers? Take the rest of them back up the stairs and put them in the toolbox.” By the time he finished working on our car I was frustrated and exhausted.

    Even though I didn’t have a lot of fun helping my dad, I did enjoy peering under the hood of our car. I was fascinated by the engine. From my vantage point as a little kid, the car’s engine seemed to be bewilderingly complex. And somehow my dad knew what to do to make the car run. Clearly, he understood how it was designed and assembled.

    As a graduate student, when I began studying biochemistry in earnest, I was taken aback by the bewildering complexity of the cell’s chemical systems. Like an automobile engine, the cell’s complexity isn’t haphazard, but instead displays a remarkable degree of order and organization. There is an underlying ingenuity to the way biochemical systems are put together and the way they operate. And, for the most part, biochemists have acquired a good understanding of how these systems are designed.

    Along these lines, one of the most remarkable and provocative insights into biochemical systems has been the discovery of protein complexes that serve the cell as molecular-scale machines and motors—many of which bear an eerie similarity to man-made machines. Two recent studies illustrate this stunning similarity by revealing new information about the structure and function of two protein complexes that are part of the electron transport chain: the F1-F0 ATPase and respiratory complex I. These ubiquitous protein complexes are two of the most important enzymes in biology because of the central role they play in energy-harvesting reactions.

    F1-F0 ATPase

    This well-studied protein complex plays a key role in harvesting energy for the cell to use. The F1-F0 ATPase is a molecular-scale rotary motor (see figure 1). The F1 portion of the complex is mushroom-shaped and extends above the membrane’s surface. The mushroom’s “button,” or cap, functions as an engine turbine. The F1-F0 ATPase turbine interacts with the part of the complex that looks like a “mushroom stalk.” This stalk-like component functions as a rotor.



    Figure 1: A cartoon of the F1-F0 ATPase rotary motor. Image credit: Reasons to Believe

    Located in the inner membrane of mitochondria, F1-F0 ATPase makes use of a proton gradient across the inner membrane to drive the production of ATP (adenosine triphosphate), a high-energy compound used by the cell to power many of its operations. Because protons are positively charged, the exterior region outside the inner membrane is positively charged and the interior region is negatively charged. The charge differential created by the proton gradient is analogous to a battery and the inner membrane is like a capacitor.

    The flow of positively charged hydrogen ions through the F0 component, embedded in the inner membrane, drives the rotation of the rotor. A rod-shaped protein structure that also extends above the membrane surface serves as a stator. This protein rod interacts with the turbine, holding it stationary as the rotor rotates.

    The electrical current that flows through the channels of the F0 complex is transformed into mechanical energy that drives the rotor’s movement. A cam that extends at a right angle from the rotor’s surface causes cyclical changes in the shape of the turbine subunits, and these shape changes are used to produce ATP.

    Even though biochemists have learned a lot about this protein complex, they still don’t understand some things. Recently, a team of collaborators from the US determined the path that protons take as they move through the F0 component embedded in the inner membrane.1

    To accomplish this feat, the research team trapped the enzyme complex in a single conformation by fusing the stator to the rotor. This procedure exposed the channels in the F0 complex and revealed the precise path taken by the protons as they move across the inner membrane. As protons shuttle through these channels, they trigger conformational changes that advance the rotor in discrete steps, one step for each proton that moves through the channel.

    Respiratory Complex I

    Respiratory complex I serves as the first enzyme complex of the electron transport chain. This complex transfers high-energy electrons from a compound called nicotinamide adenine dinucleotide (NADH) to a small molecule associated with the inner membrane of mitochondria called coenzyme Q. The high-energy electrons of NADH are captured during glycolysis and the Krebs cycle, two metabolic pathways involved in the breakdown of the sugar glucose.

    During the electron-transfer process, respiratory complex I also transports four protons from the mitochondria’s interior across the inner membrane to the exterior space (figure 2). In other words, respiratory complex I helps to generate the proton gradient F1-F0 ATPase uses to generate ATP. By some estimates, respiratory complex I is responsible for establishing about 40 percent of the proton gradient across the inner membrane.


    Figure 2: A cartoon of the electron transport chain. Image credit: Shutterstock
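
    The “about 40 percent” estimate follows from simple proton bookkeeping. The article supplies the four protons pumped by complex I per NADH; the counts for the other complexes in the Python sketch below are standard textbook values added here only for illustration:

    # Rough bookkeeping behind the "about 40 percent" estimate. Only the complex I
    # count (4 protons per NADH) comes from the article; the rest are textbook values.
    protons_per_NADH = {
        "complex I": 4,
        "complex II": 0,    # transfers electrons but pumps no protons
        "complex III": 4,
        "complex IV": 2,
    }

    total = sum(protons_per_NADH.values())
    share = protons_per_NADH["complex I"] / total
    print(f"complex I's share of the proton gradient: {share:.0%}")   # 40%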

    Respiratory complex I is massive: 45 individual protein subunits make up the complex. The subunits interact to form two arms, one embedded in the inner membrane and one extending into the mitochondrial matrix. The two arms are arranged to form an L-shaped geometry.


    Figure 3: A cartoon of respiratory complex I. Image credit: Wikipedia

    The electron transfer process occurs in the peripheral arm that extends into the mitochondrial matrix (upward in figure 3). The proton transport mechanism, by contrast, takes place in the membrane-embedded arm (to the right).

    The mechanism of proton translocation across the inner membrane served as the focus of a study conducted by a research team from Oxford University in the UK.2 These researchers discovered that proton transport across the inner membrane is driven by the machine-like behavior of respiratory complex I. The process of transferring electrons through the peripheral arm results in conformational changes (changes in shape) in this part of the complex. This conformational change drives the motion of an alpha-helix cylinder like a piston in the membrane arm of the complex. The pumping motion of the alpha-helix causes three other cylinders to tilt and, in doing so, opens up channels for protons to move through the membrane arm of the complex.

    Revitalized Watchmaker Argument

    Biochemists’ discovery of enzymes with machine-like domains, as exemplified by the F1-F0 ATPase and respiratory complex I, revitalizes the Watchmaker argument. Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so, too, does life require a Creator.

    This simple yet powerful analogy has been challenged by skeptics such as David Hume, who argued that the conclusion of a Creator, based on analogical reasoning, is only compelling if there is a high degree of similarity between the objects that form the analogy. Skeptics have long argued that nature and a watch are sufficiently dissimilar that the conclusion drawn from the Watchmaker argument is unsound.

    But the discovery of enzymes with domains that are direct analogs to man-made devices addresses this concern: the similarity between the machine parts of these enzymes and human designs is striking. Toward this end, it is provocative that the more we learn about enzyme complexes such as the F1-F0 ATPase, the more apparent their machine-like character becomes. It is also thought-provoking that as biochemists study the structure and function of protein complexes, new examples of analogs to man-made machines emerge. In both cases, the Watchmaker argument receives new vitality.

    As a little kid, peering under the hood of our family car and watching my father work on the engine convinced me that some really smart people who knew what they were doing designed and built that machine. In like manner, the remarkable machine-like properties displayed by many protein complexes in the cell make it rational to conclude that life comes from the work of a Mind.

    Resources

    Endnotes
    1. Anurag P. Srivastava et al., “High-Resolution Cryo-EM Analysis of the Yeast ATP Synthase in a Lipid Membrane,” Science 360, no. 6389 (May 11, 2018), doi:10.1126/science.aas9699.
    2. Rouslan G. Efremov et al., “The Architecture of Respiratory Complex I,” Nature 465 (May 27, 2010): 441–45, doi:10.1038/nature09066.
  • Early Cave Art Supports the Image of God

    Jan 30, 2019

    The J. Paul Getty Museum in Los Angeles houses one of my favorite paintings: Édouard Manet’s The Rue Mosnier with Flags.


    Figure 1: The Getty Center. Image credit: Shutterstock

    This masterpiece depicts the Rue Mosnier (an urban street) as seen from the window of Manet’s art studio on June 30, 1878, a national holiday in France. Flags line both sides of the Rue Mosnier, decorations that are part of the day’s celebration.

    Photos of this piece of art simply don’t do it justice. When viewing the original in person, the bright colors of the flags—the whites, blues, and reds—leap off the canvas. And yet, there is an element of darkness to the painting. Meant to be viewed from left to right, the first thing the viewer sees in the corner of the painting is a disabled veteran struggling to make his way up the street. The flags on the left side of the street—though brilliantly colored—hang limp. Yet, as the viewer’s gaze moves down and across the street, the flags are depicted as flapping in the breeze. The focal point of the painting is found near the center of the piece, where two women in brilliantly white dresses disembark from a carriage.


    Figure 2: The Rue Mosnier with Flags by Édouard Manet. Image credit: WikiArt

    Some scholars believe that through The Rue Mosnier with Flags, Manet was portraying the inequities of French society in his day. Others see the painting as communicating a sense of optimism and hope for the future as France recovered from the Franco-Prussian War (1870–71).

    Art and Symbolic Capacity

    For many people, our ability to create and contemplate art serves as a defining feature of humanity—a quality that reflects our capacity for sophisticated cognitive processes. Art is a manifestation of symbolism. Through art (as well as music and language), we express and communicate complex ideas and emotions. We accomplish this feat by representing the world—and even ideas—with symbols. And, we can manipulate symbols, embedding one within the other to create alternate possibilities.

    Because artwork reflects the capacity for symbolism and open-ended generative capacity, it has become the focal point of some very big questions in anthropology. The earliest humans produced impressive artistic displays on the cave walls of Europe that date to around 40,000 years in age—the time when humans first made their way to this part of the world.

    But when did art first appear? Did it arise after humans made their way into Europe? Did it arise in Africa before humanity began to migrate around the world? Did art emerge suddenly? Did it appear gradually? Is artistic expression unique to human beings, or did other hominins, such as Neanderthals, produce art? The answers to these questions have important implications for how we understand humanity’s origin and place in the cosmos.

    As a Christian, I view these questions as vitally important for establishing the credibility of the biblical accounts of human origins and the biblical perspective on human beings. I believe that our capacity to make art is a manifestation of the image of God. As such, the appearance of art (as well as other artifacts that reflect our capacity for symbolism) serves as a diagnostic in the archaeological record for the image of God. The archaeological record provides the means to characterize the mode and tempo of the appearance of behavior that reflects the image of God. If the biblical account of human origins is true, then I would expect that artistic expression should be unique to modern humans and should appear at the same time that we made our first appearance as a species.

    So, is artistic expression unique to modern humans? This question has generated quite a bit of controversy. Some scientific evidence indicates that Neanderthals displayed the capacity for artistic expression (and hence, the capacity for symbolism). On the other hand, a number of studies question Neanderthal capacity for art (and, consequently, symbolism). In fact, when taken as a whole, the evidence indicates that Neanderthals were cognitively inferior to modern humans. (For more details, check out the articles listed in the Resources section.)

    When did artistic expression in humans appear? Some evidence indicates that artistic expression appeared well after anatomically modern humans first appeared. To put it another way: there is evidence that anatomically modern humans appeared before behaviorally modern humans.

    On the other hand, the most recent evidence indicates that the capacity for symbolism and advanced cognition appeared much earlier than many anthropologists thought. And that time of origin is close to the time that anatomically modern humans made their first appearance, as three recent discoveries attest.

    Oldest Animal Drawings in Asia

    Cave art in Europe has been well-known and carefully investigated by archaeologists and anthropologists for nearly a century. This work gives the impression that artistic capacity appeared only after anatomically modern humans made their way into Europe—about 100,000 years after anatomically modern humans appeared on Earth. To say it another way, anatomically modern humans appeared before our advanced cognitive abilities.

    Yet, in recent years archaeologists have gained access to a growing archaeological record in Asia—and characterizing these archaeological sites changes everything. In 2014, a large team of collaborators from Australia dated hand stencils discovered on the walls of a cave in Sulawesi, Indonesia, to between 35,000 and 40,000 years in age.1 Originally discovered in the 1950s, this artwork was initially dated to be about 10,000 years old. The team redated the art using a newly developed technique that measures the age of calcite deposits—left behind by water flowing down the cave walls—overlaying the art. (Trace amounts of radioactive uranium and thorium isotopes associated with the calcite can be used to date the mineral deposits and, thus, provide a minimum age for the artwork.)2


    Figure 3: A modern-day re-creation of hand stencils found in the caves of Europe and Asia. Image credit: Shutterstock
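
    For readers curious how calcite overlying the art yields a minimum age, the short Python sketch below shows the idea in its most simplified form. It assumes the calcite started with uranium but no thorium-230 and that the uranium isotopes are in secular equilibrium; a real study corrects for both complications, and the measured ratio here is a made-up number, not a value from the study:

    import math

    # Simplified uranium-thorium dating: water deposits uranium (but essentially no
    # thorium) in the calcite, and thorium-230 grows in over time as the uranium decays.
    # Assuming no initial 230Th and 234U/238U in secular equilibrium, the activity
    # ratio follows (230Th/238U) = 1 - exp(-lambda_230 * t).

    HALF_LIFE_230TH = 75_690.0                   # years (a commonly used value)
    LAMBDA_230 = math.log(2) / HALF_LIFE_230TH   # decay constant, per year

    measured_ratio = 0.31    # hypothetical 230Th/238U activity ratio, for illustration

    age = -math.log(1.0 - measured_ratio) / LAMBDA_230
    print(f"minimum age of the art under the calcite: ~{age:,.0f} years")   # ~40,000 years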

    Recently, this same team applied the same technique to animal drawings in the caves of Borneo, dating one drawing to a minimum age of 40,000 years. They also dated hand stencils from this cave to between 37,000 and 52,000 years in age.3

    The hand stencils and art reflect the same quality and character as the cave art found in Europe. And it is older. This discovery means that modern humans most likely had the capacity to make art before beginning their migrations around the world from out of Africa.

    Oldest Abstract Drawings in Africa

    A recent discovery by a team of anthropologists and archaeologists from the University of Witwatersrand in South Africa also supports the notion that artistic expression emerged prior to the migration of humans around the world.4 These researchers recovered a silcrete flake from a layer in the Blombos Cave that dates to about 73,000 years in age. (The Blombos Cave is located in the coastal area of South Africa, around 150 miles east of Cape Town.)

    This silcrete flake looks as if it was broken off from a grindstone used to turn ochre into a powder. The silcrete flake has a crosshatch pattern on it that appears to have been intentionally drawn using an ochre crayon. Because the crosshatch markings end abruptly, it looks like they were part of a large, abstract drawing made on the grindstone. When the researchers tried to reproduce the pattern in the lab, it required a steady hand and a determined effort.

    The Blombos Cave has previously yielded other artifacts that evince the capacity for symbolism. Additionally, the crosshatch symbol has been found etched into ochre and ostrich eggshells from other sites in South Africa. But this recent find represents the first and oldest example of the symbol having been drawn on an artifact’s surface.

    Additional Evidence for Advanced Cognition

    In addition to art, anthropologists believe that another diagnostic of cognitive complexity in humans is the manufacture and use of specialized bone tools. In 2012, researchers unearthed a bone knife in a cave near the Atlantic coastline of Morocco. To manufacture this knife, modern humans had to remove a rib from a herbivore and then cut it in half, lengthwise. The tool manufacturers then had to scrape and chip away the bone to give it a knife-like shape. Anthropologists believe that the manufacture of bone tools, such as this rib knife, reflects the capacity for strategic planning for future survival.

    Recently, an international team of researchers provided a detailed characterization of this tool and dated it to be around 90,000 years in age.5 This insight indicates that the capacity for advanced cognition existed (at least minimally) around 90,000 years ago.

    A Convergence of Evidence

    These recent findings signify that advanced cognitive ability, including the capacity to make art, originated close to the same time that anatomically modern humans first appear in the fossil record. In fact (as I have written about earlier), linguist Shigeru Miyagawa believes that artistic expression emerged in Africa earlier than 125,000 years ago. Archaeologists have discovered rock art produced by the San people that dates to 72,000 years ago. This art shares certain elements with the cave art found in Europe. Because the San diverged from the modern human lineage around 125,000 years ago, the ancestral people groups that gave rise to both lines must have possessed the capacity for artistic expression before that time.

    It is also significant that the globular brain shape of modern humans first appears in the fossil record around 130,000 years ago. As I have written previously, globular brain shape allows for the expansion of the parietal lobe, which is responsible for these capacities:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for throwing spears and making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    In other words, the archaeological and fossil records increasingly indicate that anatomically and behaviorally modern humans emerged at the same time, as predicted by the biblical creation accounts.

    And, while these first humans didn’t have the luxury of spending an afternoon in an art museum contemplating artistic masterpieces, they displayed the image of God by producing art that, for them, apparently had profound meaning.

    Resources

    Endnotes
    1. M. Aubert et al., “Pleistocene Cave Art from Sulawesi, Indonesia,” Nature 514 (October 9, 2014): 223–27, doi:10.1038/nature13422.
    2. It should be noted that the dating method used by these researchers has been criticized by a number of different research teams as potentially yielding artificially high ages. Knowing this concern, the team deliberately took steps to ensure that the sampling of the art and application of the dating method took into account the dating technique’s limitations.
    3. M. Aubert et al., “Palaeolithic Cave Art in Borneo,” Nature 564 (November 7, 2018): 254–57, doi:10.1038/s41586-018-0679-9.
    4. Christopher S. Henshilwood et al., “An Abstract Drawing from the 73,000-Year-Old Levels at Blombos Cave, South Africa,” Nature 562 (2018): 115–18, doi:10.1038/s41586-018-0514-3.
    5. Abdeljalil Bouzouggar et al., “90,000 Year-Old Specialised Bone Technology in the Aterian Middle Stone Age of North Africa,” PLoS ONE 13 (October 3, 2018): e0202021, doi:10.1371/journal.pone.0202021.
  • Does Animal Planning Undermine the Image of God?

    Jan 23, 2019

    A few years ago, we had an all-white English Bulldog named Archie. He would lumber toward even complete strangers, eager to befriend them and earn their affections. And people happily obliged this playful pup.

    Archie wasn’t just an adorable dog. He was also well trained. We taught him to ring a bell hanging from a sliding glass door in our kitchen so he could let us know when he wanted to go out. He rarely would ring the bell. Instead, he would just sit by the door and wait . . . unless the neighbor’s cat was in the backyard. Then, Archie would repeatedly bang on the bell with great urgency. He had to get the cat at all costs. Clearly, he understood the bell’s purpose. He just chose to use it for his own devices.

    Anyone who has owned a cat or dog knows that these animals do remarkable things. Animals truly are intelligent creatures.

    But there are some people who go so far as to argue that animal intelligence is much more like human intelligence than we might initially believe. They base this claim, in part, on a handful of high-profile studies that indicate that some animals such as great apes and ravens can problem-solve and even plan for the future—behaviors that make them like us in some important ways.

    Great Apes Plan for the Future

    In 2006, two German anthropologists conducted a set of experiments on bonobos and orangutans in captivity that seemingly demonstrated that these creatures can plan for the future. Specifically, the test subjects selected, transported, and saved tools for use 1 hour and 14 hours later, respectively.1

    To begin the study, the researchers trained both bonobos and orangutans to use a tool to get a reward from an apparatus. In the first experiment, the researchers blocked access to the apparatus. They laid out eight tools for the apes to select—two were suitable for the task and six were unsuitable. After selecting the tools, the apes were ushered into another room where they were kept for 1 hour. The apes were then allowed back into the room and granted access to the apparatus. To gain the reward, the apes had to select the correct tool and transport it to and from the waiting area. The anthropologists observed that the apes successfully obtained the reward in 70 percent of the trials by selecting and hanging on to the correct tool as they moved from room to room.

    In the second experiment, the delay between tool selection and access to the apparatus was extended to 14 hours. This experiment focused on a single female individual. Instead of taking the test subject to the waiting room, the researchers took her to a sleeping room one floor above the waiting room before returning her to the room with the apparatus. She selected and held on to the tool for 14 hours while she moved from room to room in 11 of the 12 trials—each time successfully obtaining the reward.

    On the basis of this study, the researchers concluded that great apes have the ability to plan for the future. They also argued that this ability emerged in the common ancestor of humans and great apes around 14 million years ago. So, even though we like to think of planning for the future as one of the “most formidable human cognitive achievements,”2 it doesn’t appear to be unique to human beings.

    Ravens Plan for the Future

    In 2017, two researchers from Lund University in Sweden demonstrated that ravens are capable of flexible planning just like the great apes.3 These cognitive scientists conducted a series of experiments with ravens, demonstrating that the large black birds can plan for future events and exert self-control for up to 17 hours prior to using a tool or bartering with humans for a reward. (Self-control is crucial for successfully planning for the future.)

    The researchers taught ravens to use a tool to gain a reward from an apparatus. As part of the training phase, the test subjects also learned that other objects wouldn’t work on the apparatus.

    In the first experiment, the ravens were exposed to the apparatus without access to tools. As such, they couldn’t gain the reward. Then the researchers removed the apparatus. One hour later, the ravens were taken to a different location and offered tools. Then, the researchers presented them with the apparatus 15 minutes later. On average, the raven test subjects selected and used tools to gain the reward in approximately 80 percent of the trials.

    In the next experiment, the ravens were trained to barter by exchanging a token for a food reward. After the training, the ravens were taken to a different location and presented with a tray containing the token and three distractor objects by a researcher who had no history of bartering with the ravens. As with the results of the tool selection experiment, the ravens selected and used the token to successfully barter for food in approximately 80 percent of the trials.

    When the scientists modified the experimental design to increase the time delay from 15 minutes to 17 hours between tool or token selection and access to the reward, the ravens successfully completed the task in nearly 90 percent of the trials.

    Next, the researchers wanted to determine if the ravens could exercise self-control as part of their planning for the future. First, they presented the ravens with trays that contained a small food reward. Of course, all of the ravens took the reward. Next, the researchers offered the ravens trays that had the food reward and either tokens or tools and distractor items. By selecting the token or the tools, the ravens were ensured a larger food reward in the future. The researchers observed that the ravens selected the tool in 75 percent of the trials and the token in about 70 percent, instead of taking the small morsel of food. After selecting the tool or token, the ravens were given the opportunity to receive the reward about 15 minutes later.

    The researchers concluded that, like the great apes, ravens can plan for the future. Moreover, these researchers argue that this insight opens up greater possibilities for animal cognition because, from an evolutionary perspective, ravens are regarded as avian dinosaurs. And mammals (including the great apes) are thought to have shared an evolutionary ancestor with dinosaurs 320 million years ago.

    Are Humans Exceptional?

    In light of these studies (and others like them), it becomes difficult to maintain that human beings are exceptional. Self-control and the ability to flexibly plan for future events are considered by many to be cornerstones of human cognition. Planning for the future requires mental representation of temporally distant events, the ability to set aside current sensory inputs in favor of unobservable future events, and an understanding of which current actions will achieve a future goal.

    For many Christians, such as me, the loss of human exceptionalism is concerning because if this idea is untenable, so, too, is the biblical view of human nature. According to Scripture, human beings stand apart from all other creatures because we bear God’s image. And, because every human being possesses the image of God, every human being has intrinsic worth and value. But if, in essence, human beings are no different from animals, it is challenging to maintain that we are the crown of creation, as Scripture teaches.

    Yet recent work by biologist Johan Lind from Stockholm University (Sweden) indicates that the results of these two studies and others like them may be misleading. In effect, when properly interpreted, these studies pose no threat to human exceptionalism. According to Lind, animals can produce behavior that resembles flexible planning through a different mechanism: associative learning.4 If so, this insight preserves the case for human exceptionalism and the image of God, because it means that only humans engage in genuine flexible planning for the future through higher-order cognitive processes.

    Associative Learning and Planning for the Future

    Lind points out that researchers working in artificial intelligence (AI) have long known that associative learning can produce complex behaviors in AI systems that give the appearance of having the capacity for planning. (Associative learning is the process that animals [and AI systems] use to establish an association between two stimuli or events, usually by the use of punishments or rewards.)


    Figure 1: An illustration of associative learning in dogs. Image credit: Shutterstock

    Lind wonders why researchers studying animal cognition ignore the work in AI. Applying insights from the work on AI systems, Lind developed mathematical models based on associative learning and used them to simulate the results of the studies on the great apes and ravens. He discovered that associative learning reproduced the behaviors observed by the two research teams for the great apes and ravens. In other words, planning-like behavior can actually emerge through associative learning. That is, the same kinds of processes that give AI systems the capacity to beat humans in chess can, through associative learning, account for the planning-like behavior of animals.
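
    To see how planning-like behavior can fall out of associative learning, consider the toy Python sketch below. It is not Lind’s published model; it simply uses a generic value-learning update, and the stimulus names, learning rate, and reward sizes are invented for illustration. Because the tool repeatedly precedes the apparatus, and the apparatus precedes a large reward, the tool inherits value and eventually outcompetes a small immediate snack, with no representation of the future anywhere in the system:

    # A toy illustration (not Lind's actual model) of how chained associative learning
    # can mimic planning. Each stimulus inherits value from whatever follows it.

    ALPHA = 0.2                                            # learning rate (arbitrary)
    values = {"tool": 0.0, "snack": 0.0, "apparatus": 0.0}

    def experience_tool_episode():
        """Tool -> (delay) -> apparatus -> large food reward (worth 5)."""
        values["apparatus"] += ALPHA * (5.0 - values["apparatus"])
        values["tool"] += ALPHA * (values["apparatus"] - values["tool"])

    def experience_snack_episode():
        """Snack -> small immediate food reward (worth 1)."""
        values["snack"] += ALPHA * (1.0 - values["snack"])

    # Training phase: the animal repeatedly experiences both kinds of episodes.
    for _ in range(200):
        experience_tool_episode()
        experience_snack_episode()

    # Test phase: offered the snack and the tool side by side, the agent simply
    # picks whichever stimulus has acquired the higher learned value.
    choice = max(["tool", "snack"], key=lambda s: values[s])
    print(values)             # tool value approaches 5.0; snack value approaches 1.0
    print("choice:", choice)  # 'tool' -- looks like planning, but is only learned value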

    The results of Lind’s simulations mean that it is most likely that animals “plan” for the future in ways that are entirely different from humans. In effect, the planning-like behavior of animals is an outworking of associative learning. On the other hand, humans uniquely engage in bona fide flexible planning through advanced cognitive processes such as mental time travel, among others.

    Humans Are Exceptional

    Even though the idea of human exceptionalism is continually under assault, it remains intact, as the latest work by Johan Lind illustrates. When the entire body of evidence is carefully weighed, there really is only one reasonable conclusion: Human beings uniquely possess advanced cognitive abilities that make possible our capacity for symbolism, open-ended generative capacity, theory of mind, and complex social interactions—scientific descriptors of the image of God.

    Resources

    Endnotes
    1. Nicholas J. Mulcahy and Josep Call, “Apes Save Tools for Future Use,” Science 312 (May 19, 2006): 1038–40, doi:10.1126/science.1125456.
    2. Mulcahy and Call, “Apes Save Tools for Future Use.”
    3. Can Kabadayi and Mathias Osvath, “Ravens Parallel Great Apes in Flexible Planning for Tool-Use and Bartering,” Science 357 (July 14, 2017): 202–4, doi:10.1126/science.aam8138.
    4. Johan Lind, “What Can Associative Learning Do for Planning?” Royal Society Open Science 5 (November 28, 2018): 180778, doi:10.1098/rsos.180778.
  • Prebiotic Chemistry and the Hand of God

    Jan 16, 2019

    “Many of the experiments designed to explain one or other step in the origin of life are either of tenuous relevance to any believable prebiotic setting or involve an experimental rig in which the hand of the researcher becomes for all intents and purposes the hand of God.”

    Simon Conway Morris, Life’s Solution

    If you could time travel, would you? Would you travel to the past or the future?

    If asked this question, I bet many origin-of-life researchers would want to travel to the time in Earth’s history when life originated. Given the many scientifically impenetrable mysteries surrounding life’s genesis, I am certain many of the scientists working on these problems would love to see firsthand how life got its start.

    It is true, origin-of-life researchers have some access to the origin-of-life process through the fossil and geochemical records of the oldest rock formations on Earth—yet this evidence only affords them a glimpse through the glass, dimly.

    Because of these limitations, origin-of-life researchers have to carry out most of their work in laboratory settings, where they try to replicate the myriad steps they think contributed to the origin-of-life process. Pioneered by the late Stanley Miller in the early 1950s, this approach—dubbed prebiotic chemistry—has become a scientific subdiscipline in its own right.


    Figure 1: Chemist Stanley Miller, circa 1980. Image credit: Wikipedia

    Prebiotic Chemistry

    In effect, the goals of prebiotic chemistry are threefold.

    • Proof of principle. The objective of these types of experiments is to determine—in principle—if a chemical or physical process that could potentially contribute to one or more steps in the origin-of-life pathway even exists.
    • Mechanism studies. Once processes have been identified that could contribute to the emergence of life, researchers study them in detail to get at the mechanisms undergirding these physicochemical transformations.
    • Geochemical relevance. Perhaps the most important goal of prebiotic studies is to establish the geochemical relevance of the physicochemical processes believed to have played a role in life’s start. In other words, how well do the chemical and physical processes identified and studied in the laboratory translate to early Earth’s conditions?

    Without question, over the last 6 to 7 decades, origin-of-life researchers have been wildly successful with respect to the first two objectives. It is safe to say that origin-of-life investigators have demonstrated that—in principle—the chemical and physical processes needed to generate life through chemical evolutionary pathways exist.

    But when it comes to the third objective, origin-of-life researchers have experienced frustration—and, arguably, failure.

    Researcher Intervention and Prebiotic Chemistry

    In an ideal world, humans would not intervene at all in any prebiotic study. But this ideal isn’t possible. Researchers involve themselves in the experimental design out of necessity, but also to ensure that the results of the study are reproducible and interpretable. If researchers don’t set up the experimental apparatus, adjust the starting conditions, add the appropriate reactants, and analyze the product, then by definition the experiment would never happen. Utilizing carefully controlled conditions and chemically pure reagents is necessary for reproducibility and to make sense of the results. In fact, this level of control is essential for proof-of-principle and mechanistic prebiotic studies—and perfectly acceptable.

    However, when it comes to prebiotic chemistry’s third goal, geochemical relevance, the highly controlled conditions of the laboratory become a liability. Here researcher intervention becomes potentially unwarranted. It goes without saying that the conditions of early Earth were uncontrolled and chemically and physically complex. Chemically pristine and physically controlled conditions didn’t exist. And, of course, origin-of-life researchers weren’t present to oversee the processes and guide them to their desired end. Yet, it is rare for prebiotic simulation studies to fully take the actual conditions of early Earth into account in the experimental design. It is rarer still for origin-of-life investigators to acknowledge this limitation.


    Figure 2: Laboratory technician. Image credit: Shutterstock

    This complication means that prebiotic studies designed to simulate processes on early Earth seldom accomplish anything of the sort, owing to excessive researcher intervention. Yet, it isn’t always clear when examining an experimental design whether researcher involvement is legitimate or unwarranted.

    As I point out in my book Creating Life in the Lab (Baker, 2011), one main reason for the lack of progress relates to the researcher’s role in the experimental design—a role not often recognized when experimental results are reported. Origin-of-life investigator Clemens Richert from the University of Stuttgart in Germany now acknowledges this very concern in a recent comment piece published by Nature Communications.1

    As Richert points out, the role of researcher intervention and a clear assessment of geochemical relevance is rarely acknowledged or properly explored in prebiotic simulation studies. To remedy this problem, Richert calls for origin-of-life investigators to do three things when they report the results of prebiotic studies.

    • State explicitly the number of instances in which researchers engaged in manual intervention.
    • Describe precisely the prebiotic scenario a particular prebiotic simulation study seeks to model.
    • Reduce the number of steps involving manual intervention in whatever way possible.

    Still, as Richert points out, it is not possible to provide a quantitative measure (a score) of geochemical relevance. And, hence, there will always be legitimate disagreement about the geochemical relevance of any prebiotic experiment.

    Yet, Richert’s commentary represents an important first step toward encouraging more realistic prebiotic simulation studies and a more cautious approach to interpreting the results of these studies. Hopefully, it will also lead to a more circumspect assessment of the importance of these types of studies in accounting for the various steps of the origin-of-life process.

    Researcher Intervention and the Hand of God

    One concern not addressed by Richert in his commentary piece is the fastidiousness of many of the physicochemical transformations origin-of-life researchers deem central to chemical evolution. As I discuss in Creating Life in the Lab, mechanistic studies indicate that these processes often depend on exacting conditions in the laboratory. To put it another way, these processes take place—even under the most ideal laboratory conditions—only because of human intervention. As a corollary, these processes would be unproductive on early Earth. They often require chemically pristine conditions, unrealistically high concentrations of reactants, a carefully controlled order of additions, and carefully regulated temperature, pH, and salinity levels.

    As Richert states, “It’s not easy to see what replaced the flasks, pipettes, and stir bars of a chemistry lab during prebiotic evolution, let alone the hands of the chemist who performed the manipulations. (And yes, most of us are not comfortable with the idea of divine intervention.)”2

    Sadly, since I made the point about researcher intervention nearly a decade ago, it has often been ignored, dismissed, and even ridiculed by many in the scientific community—simply because I have the temerity to think that a Creator brought life into existence.

    Even though Richert and his many colleagues in the origin-of-life research community do whatever they can to eschew a Creator’s role in the origin of life, could it be that abiogenesis (life from nonlife) required the hand of God—divine intervention?

    I would argue that this conclusion follows from nearly seven decades of work in prebiotic chemistry and the consistent demonstration of the central role that origin-of-life researchers play in the success of prebiotic simulation studies. It is becoming increasingly evident to anyone who will “see” that the hand of the researcher serves as the analog for the hand of God.

    Resources

    Endnotes
    1. Clemens Richert, “Prebiotic Chemistry and Human Intervention,” Nature Communications 9 (December 12, 2018): 5177, doi:10.1038/s41467-018-07219-5.
    2. Richert, “Prebiotic Chemistry.”
  • Long Noncoding RNAs Extend the Case for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 09, 2019

    I don’t like to think of myself as technology-challenged, but I am beginning to wonder if I just might be. As a case in point, I have no clue about all the things my iPhone can do. It isn’t uncommon for someone (usually much younger than me) to point out features of my iPhone that I didn’t even know existed. (And, of course, there is the TV remote—but that will have to serve as material for another lead.)

    The human genome is a lot like my iPhone. The more the scientific community learns about it, the more complex it becomes and the more functionality it displays—functionality about which no one in the scientific community had a clue. It has become commonplace for scientists to discover that features of the human genome—long thought to be useless vestiges of an evolutionary history—actually serve a critical role in the structure and function of the genome.

    Long noncoding RNAs (lncRNAs) illustrate this point nicely. This broad category of RNA molecules consists of transcripts (RNA molecules copied from DNA during transcription) that are more than 200 nucleotides long but are not translated into proteins.

    Though numbers vary from source to source, estimates indicate that somewhere between 60 and 90 percent of the human genome is transcribed. Yet only 2 percent of the genome consists of transcripts that are directly used to produce proteins. Of the untranslated transcripts, researchers estimate that somewhere between 60,000 and 120,000 are noncoding RNAs. Researchers categorize these transcripts as microRNAs (miRNAs), piwi-interacting RNAs (piRNAs), small interfering RNAs (siRNAs), and lncRNAs. The first three types of RNAs are relatively small in size and play a role in regulating gene expression.
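    To put those percentages in perspective, here is a minimal back-of-the-envelope sketch. The genome size of roughly 3.1 billion base pairs is an assumed round figure on my part, not a number drawn from the sources discussed here.

    ```python
    # Rough scale of the percentages quoted above, assuming a human genome of
    # about 3.1 billion base pairs (an assumed round figure, not a value taken
    # from the studies discussed in this article).
    GENOME_BP = 3.1e9

    transcribed_low = 0.60 * GENOME_BP   # lower estimate: 60% of the genome transcribed
    transcribed_high = 0.90 * GENOME_BP  # upper estimate: 90% of the genome transcribed
    protein_coding = 0.02 * GENOME_BP    # ~2% corresponds to protein-coding transcripts

    print(f"Transcribed DNA: ~{transcribed_low / 1e9:.1f} to {transcribed_high / 1e9:.1f} billion bases")
    print(f"Protein-coding portion: ~{protein_coding / 1e6:.0f} million bases")
    ```

    In other words, on these estimates the transcribed portion of the genome dwarfs the protein-coding portion by a factor of roughly 30 to 45.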


    Figure 1: Transcription and Translation. Image credit: Shutterstock

    Initially, researchers largely regarded lncRNAs as transcriptional noise—junk. But this view has changed in recent years. Evidence continues to accrue demonstrating that lncRNAs play a wide range of roles in the cell.1 And as evidence for the utility of lncRNAs mounts, the case for the design of the human genome expands.

    The Functional Utility of Long Noncoding RNAs

    As it turns out, lncRNAs are extremely versatile molecules that can interact with (1) other RNA molecules, (2) DNA, (3) proteins, and (4) cell membranes. This versatility opens up the possibility that these molecules play diverse roles in cellular metabolism.

    Recently, Henry Krause, a molecular geneticist from the University of Toronto, published two review articles summarizing the latest insights into lncRNA function. These insights, including the four to follow, demonstrate the functional pervasiveness of these transcripts.

    lncRNAs regulate gene expression. lncRNAs influence gene expression by a variety of mechanisms. One is through interactions with other transcripts forming RNA-RNA duplexes that typically interfere with translation of protein-coding messenger RNAs.

    Researchers have recently learned that lncRNAs can also influence gene expression by interacting with DNA. These interactions result in either (1) a triple helix, made up of two DNA strands intertwined with one RNA strand, or (2) a double helix with the lncRNA intertwined with one of the DNA strands, leaving the other exposed as a single strand. When these duplexes form, the lncRNA forms a hairpin loop that can either indiscriminately or selectively attract transcription factors.


    Figure 2: A Hairpin Loop. Image credit: Wikipedia

    Though researchers are still learning about the role lncRNAs play in gene regulation, these varied interactions with DNA and proteins suggest that lncRNAs may influence gene expression through a variety of mechanisms.

    lncRNAs form microbodies within the nucleus and cytoplasm. A second function involves lncRNAs interacting with proteins to form hydrogel-like structures in the nucleus and cytoplasm. These dense, heavily cross-linked subcellular structures serve as functionally specific regions without a surrounding membrane. (In a sense, the microbodies could be viewed as somewhat analogous to ribosomes, the protein-RNA complexes that synthesize proteins.) In the nucleus, microbodies play a role in transcriptional processing, storage, and stress response. In the cytoplasm, microbodies play a role in storage, processing, and trafficking.

    lncRNAs interact with cell membranes. A third role stems from laboratory studies in which lncRNAs have been shown to interact with model cell membranes. Such interactions suggest that lncRNAs may play a role in mediating biochemical processes that take place at cell membranes. Consistent with this idea, researchers have recently observed certain lncRNA species interacting with phosphatidylinositol (3,4,5)-trisphosphate. This cell membrane component plays a central role in signal transduction inside cells.

    lncRNAs are associated with exosomes. Finally, lncRNAs have been found inside membrane-bound vesicles that are secreted by cells (called exosomes). These vesicles mediate cell-cell communication.

    In short, the eyes of the scientific community have been opened. And they now see the functional importance and functional diversity of lncRNAs. Given the trend line, it seems reasonable to think that the functional range of lncRNAs will only expand as researchers continue to study the human genome (and genomes of other organisms).

    The growing recognition of the functional versatility of lncRNAs aligns with studies demonstrating that other regions of the genome—long thought to be nonfunctional—do, in fact, play key roles in gene expression and other facets of cellular metabolism. Most significantly, the functional versatility of lncRNAs supports the conclusions of the ENCODE Project—conclusions that some people in the scientific community have challenged.

    The ENCODE Project

    The ENCODE Project, a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome, reported phase II results in the fall of 2012. (Currently, ENCODE is in phase IV.) To the surprise of many, the ENCODE Project reported that around 80 percent of the human genome displays biochemical activity—hence, function—with the expectation that this percentage should increase as results from phases III and IV of the project are reported.

    The ENCODE results have generated quite a bit of controversy. One of the most prominent complaints about the ENCODE conclusions relates to the way the consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. As a case in point, the critics argue that most of the transcripts produced by the human genome (which include lncRNAs) must be biochemical noise. This challenge flows out of predictions of the evolutionary paradigm. Yet, it is clear that the transcripts produced by the human genome are functional, as numerous studies on the functional significance of lncRNAs attest. In other words, the biochemical activity detected by ENCODE equates to biochemical function—at least with respect to transcription.

    A New View of Genomes

    These types of insights are radically changing scientists’ view of the human genome. Rather than a wasteland of junk DNA sequences stemming from the vestiges of an evolutionary history, genomes appear to be incredibly complex, sophisticated biochemical systems, with most of the genome serving useful and necessary functions.

    We have come a long way from the early days of the Human Genome Project. When the project was completed in 2003, many scientists estimated that around 95 percent of the human genome consisted of junk DNA. That assessment seemingly provided compelling evidence that humans must be the product of an evolutionary history.

    Nearly 15 years later the evidence suggests that the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. It is quite possible that most of the human genome is functional.

    For creationists and intelligent design proponents, this changing view of the human genome—similar to discovering exciting new features of an iPhone—provides reasons to think that it is the handiwork of our Creator. A skeptic might ask, Why would a Creator make genomes littered with so much junk? But if a vast proportion of genomes consists of functional sequences, this challenge no longer carries weight and it becomes more and more reasonable to interpret genomes from within a creation model/intelligent design framework.

    Resources

    Endnotes
    1. Allison Jandura and Henry M. Krause, “The New RNA World: Growing Evidence for Long Noncoding RNA Functionality,” Trends in Genetics 33 (October 1, 2017): 665–76, doi:10.1016/j.tig.2017.08.002; Henry M. Krause, “New and Prospective Roles for lncRNAs in Organelle Formation and Function,” Trends in Genetics 34 (October 1, 2018): 736–45, doi:10.1016/j.tig.2018.06.005.
  • Soft Tissue Preservation Mechanism Stabilizes the Case for Earth’s Antiquity

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 19, 2018

    One of the highlights of the year at Reasons to Believe (well, it’s a highlight for some of us, anyway) is the white elephant gift exchange at our staff Christmas party. It is great fun to laugh together as a staff as we take turns unwrapping gifts—some cheesy, some useless, and others highly prized—and then “stealing” from one another those two or three gifts that everyone seems to want.

    Over the years, I have learned a few lessons about choosing a white elephant gift to unwrap. Avoid large gifts. If the gift is a dud, large items are more difficult to find a use for than small ones. Also, more often than not, the most beautifully wrapped gifts turn out to be the biggest letdowns of all.

    Giving and receiving gifts isn’t just limited to Christmas. People exchange all types of gifts with one another for all sorts of reasons.

    Gifting is even part of the scientific enterprise—with the gifts taking on the form of scientific discoveries and advances. Many times, discoveries lead to new beneficial insights and technologies—gifts for humanity. Other times, these breakthroughs are gifts for scientists, signaling a new way to approach a scientific problem or opening up new vistas of investigation.

    Soft Tissue Remnants Preserved in Fossils

    One such gift was given to the scientific community over a decade ago by Mary Schweitzer, a paleontologist at North Carolina State University. Schweitzer and her team of collaborators recovered flexible, hollow, and transparent blood vessels from the remains of a T. rex specimen after removing the mineral component of the fossil.1 These blood vessels harbored microstructures with a cell-like morphology (form and structure) that she and her collaborators interpreted to be the remnants of red blood cells. This work showed conclusively that soft tissue materials could be preserved in fossil remains.

    Though unexpected, the discovery was a landmark achievement for paleontology. Since Schweitzer’s discovery, paleontologists have unearthed the remnants of all sorts of soft tissue materials from fossils representing a wide range of organisms. (For a catalog of some of these finds, see my book Dinosaur Blood and the Age of the Earth.)

    With access to soft tissue materials in fossils, paleontologists have a new window into the biology of Earth’s ancient life.

    The Scientific Case for a Young Earth

    Some Christians also saw Schweitzer’s discovery as a gift. But for them the value of this scientific present wasn’t the insight it provides about past life on Earth. Instead, they viewed this discovery (and others like it) as evidence that the earth must be no more than a few thousand years old. From a young-earth creationist (YEC) perspective, the survival of soft tissue materials in fossils indicates that these remains can’t be millions of years old. As a case in point, at the time Schweitzer reported her findings, John Morris, a young-earth proponent from the Institute for Creation Research, wrote:

    Indeed, it is hard to imagine how soft tissue could have lasted even 5,000 years or so since the Flood of Noah’s day when creationists propose the dinosaur was buried. Such a thing could hardly happen today, for soft tissue decays rather quickly under any condition.2

    In other words, from a YEC perspective, it is impossible for fossils to contain soft tissue remnants and be millions of years old. Soft tissues shouldn’t survive that long; they should readily degrade in a few thousand years. From a YEC view, soft tissue discoveries challenge the reliability of radiometric dating methods used to determine the fossils’ ages and, consequently, Earth’s antiquity. Furthermore, these breakthrough discoveries provide compelling scientific evidence for a young earth and support the idea that the fossil record results from a recent global (worldwide) flood.

    Admittedly, on the surface the argument carries some weight. At first glance, it is hard to envision how soft tissue materials could survive for vast periods of time, given the wide range of mechanisms that drive the degradation of biological materials.

    Preservation of Soft Tissues in Fossil Remains

    Despite this first impression, over the last decade or so paleontologists have identified a number of mechanisms that can delay the degradation of soft tissues long enough for them to become entombed within a mineral shell. When this entombment happens, the soft tissue materials escape further degradation (for the most part). In other words, it is a race against time. Can mineral entombment take place before the soft tissue materials fully decompose? If so, then soft tissue remnants can survive for hundreds of millions of years. And any chemical or physical process that can delay the degradation will contribute to soft tissue survival by giving the entombment process time to take place.

    In Dinosaur Blood and the Age of the Earth, I describe several mechanisms that likely promote soft tissue survival. Since the book’s publication (2016), researchers have deepened their understanding of the processes that make it possible for soft tissues to survive. The recent work of an international team of collaborators headed by researchers from Yale University provides an example of this growing insight.3

    These researchers discovered that the deposition environment during the fossilization process plays a significant role in soft tissue preservation, and they have identified the chemical reactions that contribute to this preservation. The team examined 24 specimens of biomineralized vertebrate tissues ranging in age from modern to Late Jurassic (approximately 163–145 million years ago). These specimens were taken from both chemically oxidative and reductive environments.

    After demineralizing the samples, the researchers discovered that all modern specimens yielded soft tissues. However, demineralization only yielded soft tissues for fossils formed under oxidative conditions. Fossils formed under reductive conditions failed to yield any soft tissue material whatsoever. The soft tissues from the oxidative settings (which included extracellular matrices, cell remnants, blood vessel remnants, and nerve materials) were stained brown. Researchers noted that the brown color of the soft tissue materials increased in intensity as a function of the fossil’s age, with older specimens displaying greater browning than younger specimens.

    The team was able to reproduce this brown color in soft tissues taken from modern-day specimens by heating the samples and exposing them to air. This process converted the soft tissues from translucent white to brown in appearance.

    Using Raman spectroscopy, the researchers detected spectral signatures for proteins and N-heterocycle pyridine rings in the soft tissue materials. They believe that the N-heterocycle pyridine rings arise from the formation of advanced glycoxidation end-products (AGEs) and advanced lipoxidation end-products (ALEs). AGEs and ALEs are the by-products of the reactions that take place between proteins and sugars (AGEs) and proteins and lipids or fats (ALEs). (As an aside, AGEs and ALEs form when foods are cooked, and they occur at high levels when food is burnt, giving overly cooked foods their brownish color.) The researchers noted that spectral features for N-heterocycle pyridine rings become more prominent for soft tissues isolated from older fossil specimens, with the spectral features for the proteins becoming less pronounced.

    AGEs and ALEs are heavily cross-linked compounds. This chemical property makes them extremely difficult to break down once they form. In other words, the formation of AGEs and ALEs in soft tissue remnants delays their decomposition long enough for mineral entombment to take place.

    Iron from the environment or released from red blood cells promotes the formation of AGEs and ALEs. So do alkaline conditions.

    In addition to stabilizing soft tissues against degradation through cross-linking, AGEs and ALEs protect adjacent proteins from breakdown because of their hydrophobic (water-repellent) nature. Water promotes soft tissue breakdown through a chemical process called hydrolysis. But because AGEs and ALEs are hydrophobic, they inhibit the hydrolytic reactions that would otherwise break down proteins that escape glycoxidation and lipoxidation reactions.

    Finally, AGEs and ALEs are also resistant to microbial attack, further adding to the stability of the soft tissue materials. This means that the soft tissue materials recovered from fossil specimens are not the original, intact material; they have undergone extensive chemical alteration. As it turns out, this alteration stabilized the soft tissue remnants long enough for mineral entombment to occur.

    In short, this research team has made significant strides toward understanding the process by which soft tissue materials become preserved in fossil remains. The recovery of soft tissue materials from the ancient fossil remains makes perfect sense within an old-earth framework. These insights also undermine what many people believe to be one of the most compelling scientific arguments for a young earth.

    Why Does It Matter?

    In my experience, many skeptics and seekers alike reject Christian truth claims because of the misperception that Genesis 1 teaches that the earth is only 6,000 years old. This misperception becomes reinforced by vocal (and well-meaning) YECs who not only claim the only valid interpretation of Genesis 1 is the calendar-day view, but also maintain that ample scientific evidence—such as the recovery of soft tissue remnants in fossils—exists for a young earth.

    Yet, as the latest work headed by scientists from Yale University demonstrates, soft tissue remnants associated with fossils find a ready explanation from an old-earth standpoint. Indeed, the discovery of these remnants has been a gift to science, one that advances our understanding of a sophisticated preservation process.

    Unfortunately, for YECs the fossil-associated soft tissues have turned out to be little more than a bad white elephant gift.

    Resources

    Endnotes
    1. Mary H. Schweitzer et al., “Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex,” Science 307 (March 25, 2005): 1952–55, doi:10.1126/science.1108397.
    2. John D. Morris, “Dinosaur Soft Parts,” Acts & Facts (June 1, 2005), icr.org/article/2032/.
    3. Jasmina Wiemann et al., “Fossilization Transforms Vertebrate Hard Tissue Proteins into N-Heterocyclic Polymers,” Nature Communications 9 (November 9, 2018): 4741, doi:10.1038/s41467-018-07013-3.
  • Endosymbiont Hypothesis and the Ironic Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 12, 2018

    i·ro·ny

    1. The use of words to express something different from and often opposite to their literal meaning.
    2. Incongruity between what might be expected and what actually occurs.

    —The Free Dictionary

    People often use irony in humor, rhetoric, and literature, but few would think it has a place in science. Wryly enough, though, it does. Recent work in synthetic biology has created a real sense of irony among the scientific community—particularly for those who view life’s origin and design from an evolutionary framework.

    Increasingly, life scientists are turning to synthetic biology to help them understand how life could have originated and evolved. But, they have achieved the opposite of what they intended. Instead of developing insights into key evolutionary transitions in life’s history, they have, ironically, demonstrated the central role intelligent agency must play in any scientific explanation for the origin, design, and history of life.

    This paradoxical situation is nicely illustrated by recent work undertaken by researchers from Scripps Research (La Jolla, CA). Through genetic engineering, the scientific investigators created a non-natural version of the bacterium E. coli. This microbe is designed to take up permanent residence in yeast cells. (Cells that take up permanent residence within other cells are referred to as endosymbionts.) They hope that by studying these genetically engineered endosymbionts, they can gain a better understanding of how the first eukaryotic cells evolved. Along the way, they hope to find added support for the endosymbiont hypothesis.1

    The Endosymbiont Hypothesis

    Most biologists believe that the endosymbiont hypothesis (symbiogenesis) best explains one of the key transitions in life’s history; namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis in the 1960s to explain the origin of eukaryotic cells.

    Margulis’s work has become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this idea compelling and consequently view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once engulfed by the host cell, the endosymbionts took up permanent residency, with the endosymbiont growing and dividing inside the host.

    Over time, the endosymbionts and the host became mutually interdependent. Endosymbionts provided a metabolic benefit for the host cell—such as an added source of ATP—while the host cell provided nutrients to the endosymbionts. Presumably, the endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism.


    Figure 1: Endosymbiont hypothesis. Image credit: Wikipedia.

    Life scientists point to a number of similarities between mitochondria and alphaproteobacteria as evidence for the endosymbiont hypothesis. (For a description of the evidence, see the articles listed in the Resources section.) Nevertheless, they don’t understand how symbiogenesis actually occurred. To gain this insight, scientists from Scripps Research sought to experimentally replicate the earliest stages of mitochondrial evolution by engineering E. coli and brewer’s yeast (S. cerevisiae) to yield an endosymbiotic relationship.

    Engineering Endosymbiosis

    First, the research team generated a strain of E. coli that no longer has the capacity to produce the essential cofactor thiamin. They achieved this by disabling one of the genes involved in the biosynthesis of the compound. Without this metabolic capacity, this strain becomes dependent on an exogenous source of thiamin in order to survive. (Because the E. coli genome encodes for a transporter protein that can pump thiamin into the cell from the exterior environment, it can grow if an external supply of thiamin is available.) When incorporated into yeast cells, the thiamin in the yeast cytoplasm becomes the source of the exogenous thiamin, rendering E. coli dependent on the yeast cell’s metabolic processes.

    Next, they transferred the gene that encodes a protein called ADP/ATP translocase into the E. coli strain. This gene was harbored on a plasmid (which is a small circular piece of DNA). Normally, the gene is found in the genome of an endosymbiotic bacterium that infects amoeba. This protein pumps ATP from the interior of the bacterial cell to the exterior environment.2

    The team then exposed yeast cells (which were deficient in ATP production) to polyethylene glycol, which creates a passageway for E. coli cells to make their way into the yeast cells. In doing so, the E. coli cells become established as endosymbionts within the yeast cells’ interior, with the E. coli providing ATP to the yeast cell and the yeast cell providing thiamin to the bacterial cells.

    Researchers discovered that once taken up by the yeast cells, the E. coli did not persist inside the cell’s interior. They reasoned that the bacterial cells were being destroyed by the lysosomal degradation pathway. To prevent their destruction, the research team had to introduce three additional genes into the E. coli from three separate endosymbiotic bacteria. Each of these genes encodes proteins—called SNARE-like proteins—that interfere with the lysosomal destruction pathway.

    Finally, to establish a mutualistic relationship between the genetically engineered strain of E. coli and the yeast cell, the researchers used a yeast strain with defective mitochondria. This defect prevented the yeast cells from producing an adequate supply of ATP. Because of this limitation, the yeast cells grow slowly and would benefit from the E. coli endosymbionts, which had the engineered capacity to transport ATP from their cellular interior to the exterior environment (the yeast cytoplasm).

    The researchers observed that the yeast cells with E. coli endosymbionts appeared to be stable for 40 rounds of cell doublings. To demonstrate the potential utility of this system to study symbiogenesis, the research team then began the process of genome reduction for the E. coli endosymbionts. They successively eliminated the capacity of the bacterial endosymbiont to make the key metabolic intermediate NAD and the amino acid serine. These triply deficient E. coli strains survived in the yeast cells by taking up these nutrients from the yeast cytoplasm.

    Evolution or Intentional Design?

    The Scripps Research scientific team’s work is impressive, exemplifying science at its very best. They hope that their landmark accomplishment will lead to a better understanding of how eukaryotic cells appeared on Earth by providing the research community with a model system that allows them to probe the process of symbiogenesis. It will also allow them to test the various facets of the endosymbiont hypothesis.

    In fact, I would argue that this study already has made important strides in explaining the genesis of eukaryotic cells. But ironically, instead of proffering support for an evolutionary origin of eukaryotic cells (even though the investigators operated within the confines of the evolutionary paradigm), their work points to the necessary role intelligent agency must have played in one of the most important events in life’s history.

    This research was executed by some of the best minds in the world, who relied on a detailed and comprehensive understanding of biochemical and cellular systems. Such knowledge took a couple of centuries to accumulate. Furthermore, establishing mutualistic interactions between the two organisms required a significant amount of ingenuity—genius that is reflected in the experimental strategy and design of their study. And even at that point, execution of their experimental protocols necessitated the use of sophisticated laboratory techniques carried out under highly controlled, carefully orchestrated conditions. To sum it up: intelligent agency was required to establish the endosymbiotic relationship between the two microbes.


    Figure 2: Lab researcher. Image credit: Shutterstock.

    Or, to put it differently, the endosymbiotic relationship between these two organisms was intelligently designed. (All this work was necessary to recapitulate only the presumed first step in the process of symbiogenesis.) This conclusion gains added support given some of the significant problems confronting the endosymbiotic hypothesis. (For more details, see the Resources section.) By analogy, it seems reasonable to conclude that eukaryotic cells, too, must reflect the handiwork of a Divine Mind—a Creator.

    Resources

    Endnotes
    1. Angad P. Mehta et al., “Engineering Yeast Endosymbionts as a Step toward the Evolution of Mitochondria,” Proceedings of the National Academy of Sciences, USA 115 (November 13, 2018), doi:10.1073/pnas.1813143115.
    2. ATP is a biochemical that stores energy used to power the cell’s operation. Produced by mitochondria, ATP is one of the end products of energy harvesting pathways in the cell. The ATP produced in mitochondria is pumped into the cell’s cytoplasm from within the interior of this organelle by an ADP/ATP transporter.
  • Did Neanderthals Start Fires?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Dec 05, 2018

    It is one of the most iconic Christmas songs of all time.

    Written by Bob Wells and Mel Torme in the summer of 1945, “The Christmas Song” (subtitled “Chestnuts Roasting on an Open Fire”) was crafted in less than an hour. As the story goes, Wells and Torme were trying to stay cool during the blistering summer heat by thinking cool thoughts and then jotting them down on paper. And, in the process, “The Christmas Song” was born.

    Many of the song’s lyrics evoke images of winter, particularly around Christmastime. But none has come to exemplify the quiet peace of a Christmas evening more than the song’s first line, “Chestnuts roasting on an open fire . . . ”

    Gathering around the fire to stay warm, to cook food, and to share in a community has been an integral part of the human experience throughout history—including human prehistory. Most certainly our ability to master fire played a role in our survival as a species and in our ability as human beings to occupy and thrive in some of the world’s coldest, harshest climates.

    But fire use is not limited to modern humans. There is strong evidence that Neanderthals made use of fire. But, did these creatures have control over fire in the same way we do? In other words, did Neanderthals master fire? Or, did they merely make opportunistic use of natural fires? These questions are hotly debated by anthropologists today and they contribute to a broader discussion about the cognitive capacity of Neanderthals. Part of that discussion includes whether these creatures were cognitively inferior to us or whether they were our intellectual equals.

    In an attempt to answer these questions, a team of researchers from the Netherlands and France characterized the microwear patterns on bifacial (having opposite sides that have been worked on to form an edge) tools made from flint recovered from Neanderthal sites, and concluded that the wear patterns suggest that these hominins used pyrite to repeatedly strike the flint. This process generates sparks that can be used to start fires.1 To put it another way, the researchers concluded that Neanderthals had mastery over fire because they knew how to start fires.


    Figure 1: Biface tools for cutting or scraping. Image credit: Shutterstock

    However, a closer examination of the evidence along with results of other studies, including recent insight into the cause of Neanderthal extinction, raises significant doubts about this conclusion.

    What Do the Microwear Patterns on Flint Say?

    The investigators focused on the microwear patterns of flint bifaces recovered from Neanderthal sites as a marker for fire mastery because of the well-known practice among hunter-gatherers and pastoralists of striking flint with pyrite (an iron disulfide mineral) to generate sparks to start fires. Presumably, the first modern humans also used this technique to start fires.


    Figure 2: Starting a fire with pyrite and flint. Image credit: Shutterstock

    The research team reasoned that if Neanderthals started fires, they would use a similar tactic. Careful examination of the microwear patterns on the bifaces led the research team to conclude that these tools were repeatedly struck by hard materials, with the strikes all occurring in the same direction along the bifaces’ long axis.

    The researchers then tried to experimentally recreate the microwear pattern in a laboratory setting. To do so, they struck biface replicas with a number of different types of materials, including pyrites, and concluded that the patterns produced by the pyrite strikes most closely matched the patterns on the bifaces recovered from Neanderthal sites. On this basis, the researchers claim that they have found evidence that Neanderthals deliberately started fires.

    Did Neanderthals Master Fire?

    While this conclusion is possible, at best this study provides circumstantial, not direct, evidence for Neanderthal mastery of fire. In fact, other evidence counts against this conclusion. For example, bifaces with the same type of microwear patterns have been found at other Neanderthal sites, locales that show no evidence of fire use. These bifaces would have had a range of uses, including butchering the remains of dead animals. So, it is possible that these tools were never used to start fires—even at sites with evidence for fire use.

    Another challenge to the conclusion comes from the failure to detect any pyrite on the bifaces recovered from the Neanderthal sites. Flint recovered from modern human sites shows visible evidence of pyrite. And yet the research team failed to detect even trace amounts of pyrite on the Neanderthal bifaces during the course of their microanalysis.

    This observation raises further doubt about whether the flint from the Neanderthal sites was used as a fire starter tool. Rather, it points to the possibility that Neanderthals struck the bifaces with materials other than pyrite for reasons not yet understood.

    The conclusion that Neanderthals mastered fire also does not square with results from other studies. For example, a careful assessment of archaeological sites in southern France occupied by Neanderthals from about 100,000 to 40,000 years ago indicates that Neanderthals could not create fire. Instead, these hominins made opportunistic use of natural fire when it was available to them.2

    These French sites do show clear evidence of Neanderthal fire use, but when researchers correlated the archaeological layers displaying evidence for fire use with the paleoclimate data, they found an unexpected pattern. Neanderthals used fire during warm climate conditions and failed to use fire during cold periods—the opposite of what would be predicted if Neanderthals had mastered fire.

    Lightning strikes that would generate natural fires are much more likely to occur during warm periods. Instead of creating fire, Neanderthals most likely harnessed natural fire and cultivated it as long as they could before it extinguished.

    Another study also raises questions about the ability of Neanderthals to start fires.3 This research indicates that cold climates triggered Neanderthal extinctions. By studying the chemical composition of stalagmites in two Romanian caves, an international research team concluded that there were two prolonged and extremely cold periods between 44,000 and 40,000 years ago. (The chemical composition of stalagmites varies with temperature.)

    The researchers also noted that during these cold periods, the archaeological record for Neanderthals disappears. They interpret this disappearance to reflect a dramatic reduction in Neanderthal population numbers. Researchers speculate that when this population downturn took place during the first cold period, modern humans made their way into Europe. Being better suited for survival in the cold climate, modern human numbers increased. When the cold climate abated, Neanderthals were unable to recover their numbers because of the growing populations of modern humans in Europe. Presumably, after the second cold period, Neanderthal numbers dropped to the point that they couldn’t recover, and hence, became extinct.

    But why would modern humans be more capable than Neanderthals of surviving under extremely cold conditions? It seems as if it should be the other way around. Neanderthals had a hyper-polar body design that made them ideally suited to withstand cold conditions. Neanderthal bodies were stout and compact, with barrel-shaped torsos and shorter limbs, which helped them retain body heat. Their noses were long and their sinus cavities extensive, which helped them warm the cold air they breathed before it reached their lungs. But, despite this advantage, Neanderthals died out and modern humans thrived.

    Some anthropologists believe that the survival discrepancy could be due to dietary differences. Some data indicates that modern humans had a more varied diet than Neanderthals. Presumably, these creatures primarily consumed large herbivores—animals that disappeared when the climatic conditions turned cold, thereby threatening Neanderthal survival. On the other hand, modern humans were able to adjust to the cold conditions by shifting their diets.

    But could there be a different explanation? Could it be that with their mastery of fire, modern humans were able to survive cold conditions? And did Neanderthals die out because they could not start fires?

    Taken in its entirety, the data seems to indicate that Neanderthals lacked mastery of fire but could use it opportunistically. And, in a broader context, the data indicates that Neanderthals were cognitively inferior to humans.

    What Difference Does It Make?

    One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

    However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

    Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again. Now it’s time to light a fire in my fireplace and enjoy a few contemplative moments thinking about the real meaning of Christmas.

    Resources

    Endnotes
    1. A. C. Sorensen, E. Claud, and M. Soressi, “Neanderthal Fire-Making Technology Inferred from Microwear Analysis,” Scientific Reports 8 (July 19, 2018): 10065, doi:10.1038/s41598-018-28342-9.
    2. Dennis M. Sandgathe et al., “Timing of the Appearance of Habitual Fire Use,” Proceedings of the National Academy of Sciences, USA 108 (July 19, 2011), E298, doi:10.1073/pnas.1106759108; Paul Goldberg et al., “New Evidence on Neandertal Use of Fire: Examples from Roc de Marsal and Pech de l’Azé IV,” Quaternary International 247 (2012): 325–40, doi:10.1016/j.quaint.2010.11.015; Dennis M. Sandgathe et al., “On the Role of Fire in Neandertal Adaptations in Western Europe: Evidence from Pech de l’Azé IV and Roc de Marsal, France,” PaleoAnthropology (2011): 216–42, doi:10.4207/PA.2011.ART54.
    3. Michael Staubwasser et al., “Impact of Climate Change on the Transition of Neanderthals to Modern Humans in Europe,” Proceedings of the National Academy of Sciences, USA 115 (September 11, 2018): 9116–21, doi:10.1073/pnas.1808647115.
  • Spider Silk Inspires New Technology and the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 28, 2018

    Mark your calendars!

    On December 14th (2018), Columbia Pictures—in collaboration with Sony Pictures Animation—will release a full-length animated feature: Spider-Man: Into the Spider-Verse. The story features Miles Morales, an Afro-Latino teenager, as Spider-Man.

    Morales accidentally becomes transported from his universe to ours, where Peter Parker is Spider-Man. Parker meets Morales and teaches him how to be Spider-Man. Along the way, they encounter different versions of Spider-Man from alternate dimensions. All of them team up to save the multiverse and to find a way to return to their own versions of reality.

    What could be better than that?

    In 1962, Spider-Man’s creators, Stan Lee and Steve Ditko, drew inspiration for their superhero from the amazing abilities of spiders. And today, engineers find similar inspiration, particularly when it comes to spider silk. The remarkable properties of spider silk are leading to the creation of new technologies.

    Synthetic Spider Silk

    Engineers are fascinated by spider silk because this material displays astonishingly high tensile strength and ductility (pliability), properties that allow it to absorb huge amounts of energy before breaking. With only one-sixth the density of steel, spider silk can be up to four times stronger on a per-weight basis.
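    The “per-weight” comparison refers to specific strength, that is, tensile strength divided by density. A minimal sketch of that arithmetic follows; the material values are my own assumed, representative figures for high-strength steel and dragline silk, not numbers taken from this article, and real values vary by material and study.

    ```python
    # Illustrative strength-to-weight comparison (specific strength = tensile strength / density).
    # The material values below are assumed, representative figures, not data from the article.

    def specific_strength(tensile_strength_pa, density_kg_per_m3):
        """Return specific strength in J/kg (Pa divided by kg/m^3)."""
        return tensile_strength_pa / density_kg_per_m3

    steel = specific_strength(1.5e9, 7850)  # assumed high-strength steel: ~1.5 GPa, ~7850 kg/m^3
    silk = specific_strength(1.1e9, 1300)   # assumed dragline spider silk: ~1.1 GPa, ~1300 kg/m^3

    print(f"Silk vs. steel, per unit weight: roughly {silk / steel:.1f}x")  # roughly 4x
    ```

    With these assumed inputs the ratio comes out at roughly four to one in silk’s favor, which is where the “up to four times stronger” comparison comes from.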

    By studying this remarkable substance, engineers hope that they can gain insight and inspiration to engineer next-generation materials. According to Northwestern University researcher Nathan C. Gianneschi, who is attempting to produce synthetic versions of spider silk, “One cannot overstate the potential impact on materials and engineering if we can synthetically replicate the natural process to produce artificial fibers at scale. Simply put, it would be transformative.”1

    Gregory P. Holland of San Diego State University, one of Gianneschi’s collaborators, states, “The practical applications for materials like this are essentially limitless.”2 As a case in point, synthetic versions of spider silk could be used to make textiles for military personnel and first responders and to make construction materials such as cables. They would also have biomedical utility and could be used to produce environmentally friendly plastics.

    The Quest to Create Synthetic Spider Silk

    But things aren’t that simple. Even though life scientists and engineers understand the chemical structure of spider silk and how its structural features influence its mechanical properties, they have not been able to create synthetic versions of it with the same set of desired properties.

    Figure 1: The Molecular Architecture of Spider Silk. Fibers of spider silk consist of proteins that contain crystalline regions separated by amorphous regions. The crystals form from regions of the protein chain that fold into structures called beta-sheets. These beta-sheets stack together to give the spider silk its tensile strength. The amorphous regions give the silk fibers ductility. Image credit: Chen-Pan Liao.

    Researchers working to create synthetic spider silk speculate that the process by which the spider spins the silk may play a critical role in establishing the biomaterial’s tensile strength and ductility. Before it is extruded, silk exists in a precursor form in the silk gland. Researchers think that the key to generating synthetic spider silk with the same properties as naturally formed spider silk may be found by mimicking the structure of the silk proteins in precursor form.

    Previous work suggests that the proteins that make up spider silk exist as simple micelles in the silk gland and that when spun from this form, fibers with greater-than-steel strength are formed. But researchers’ attempts to apply this insight in a laboratory setting failed to yield synthetic silk with the desired properties.

    The Structure of Spider Silk Precursors

    Hoping to help unravel this problem, a team of American collaborators led by Gianneschi and Holland recently provided a detailed characterization of the structure of the silk protein precursors in spider glands.3 They discovered that the silk proteins form micelles, but the micelles aren’t simple. Instead, they assemble into a complex structure comprised of a hierarchy of subdomains. Researchers also learned that when they sheared these nanoassemblies of precursor proteins, fibers formed. If they can replicate these hierarchical nanostructures in the lab, researchers believe they may be able to construct synthetic spider silk with the long-sought-after tensile strength and ductility.

    Biomimetics and Bioinspiration

    Attempts to find inspiration for new technology are not limited to spider silk. It has become rather commonplace for engineers to employ insights from arthropod biology (which includes spiders and insects) to solve engineering problems and to inspire the invention of new technologies—even technologies unlike anything found in nature. In fact, I discuss this practice in an essay I contributed for the book God and the World of Insects.

    This activity falls under the domain of two relatively new and exciting areas of engineering known as biomimetics and bioinspiration. As the names imply, biomimetics involves direct mimicry of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise.

    The Converse Watchmaker Argument for God’s Existence

    The idea that biological designs can inspire engineering and technology advances is highly provocative. It highlights the elegant designs found throughout the living realm. In the case of spider silk, design elegance is not limited to the structure of spider silk but extends to its manufacturing process as well—one that still can’t be duplicated by engineers.

    The elegance of these designs makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument. (For a detailed discussion see the essay I contributed to the book Building Bridges, entitled, “The Inspirational Design of DNA.”)

    The argument can be stated like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models for inspiring the development of new technologies. Indeed, this scenario is what scientists observe in nature. Therefore, it becomes reasonable to think that biological designs are the work of a Creator.

    Biomimetics and the Challenge to the Evolutionary Paradigm

    From my perspective, the use of biological designs to guide engineering efforts seems fundamentally at odds with evolutionary theory. Generally speaking, evolutionary biologists view biological systems as the products of an unguided, historically contingent process that co-opts preexisting systems to cobble together new ones. Evolutionary mechanisms can optimize these systems, but even then they are, in essence, still kludges.

    Given the unguided nature of evolutionary mechanisms, does it make sense for engineers to rely on biological systems to solve problems and inspire new technologies? Is it in alignment with evolutionary beliefs to build an entire subdiscipline of engineering upon mimicking biological designs? I would argue that these engineering subdisciplines do not fit with the evolutionary paradigm.

    On the other hand, biomimetics and bioinspiration naturally flow out of a creation model approach to biology. Using designs in nature to inspire engineering only makes sense if these designs arose from an intelligent Mind, whether in this universe or in any of the dimensions of the Spider-Verse.

    Resources

    Endnotes
    1. Northwestern University, “Mystery of How Black Widow Spiders Create Steel-Strength Silk Webs further Unravelled,” Phys.org, Science X, October 22, 2018, https://phys.org/news/2018-10-mystery-black-widow-spiders-steel-strength.html.
    2. Northwestern University, “Mystery of How Black Widow Spiders Create.”
    3. Lucas R. Parent et al., “Hierarchical Spidroin Micellar Nanoparticles as the Fundamental Precursors of Spider Silks,” Proceedings of the National Academy of Sciences USA (October 2018), doi:10.1073/pnas.1810203115.
  • Vocal Signals Smile on the Case for Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 21, 2018

    Before Thanksgiving each year, those of us who work at Reasons to Believe (RTB) headquarters take part in an annual custom. We put our work on pause and use that time to call donors, thanking them for supporting RTB’s mission. (It’s a tradition we have all come to love, by the way.)

    Before we start making our calls, our ministry advancement team leads a staff meeting to organize our efforts. And each year at these meetings, they remind us to smile when we talk to donors. I always found this to be an odd piece of advice, but they insist that when we talk to people, our smiles come across over the phone.

    Well, it turns out that the helpful advice of our ministry advancement team has scientific merit, based on a recent study from a team of neuroscientists and psychologists from France and the UK.1 This research highlights the importance of vocal signaling for communicating emotions between people. And from my perspective, the work also supports the notion of human exceptionalism and the biblical concept of the image of God.

    We Can Hear Smiles

    The research team was motivated to perform this study in order to learn the role vocal signaling plays in social cognition. They chose to focus on auditory “smiles,” because, as these researchers point out, smiles are among the most powerful facial expressions and one of the earliest to develop in children. As I am sure we all know, smiles express positive feelings and are contagious.

    When we smile, our zygomaticus major muscle contracts bilaterally and causes our lips to stretch. This stretching alters the sounds of our voices. So, the question becomes: Can we hear other people when they smile?


    Figure 1: Zygomaticus major. Image credit: Wikipedia

    To determine if people can “hear” smiles, the researchers recorded actors who spoke a range of French phonemes, with and without smiling. Then, they modeled the changes in the spectral patterns that occurred in the actors’ voices when they smiled while they spoke.

    The researchers used this model to manipulate recordings of spoken sentences so that they would sound like they were spoken by someone who was smiling (while keeping other features such as pitch, content, speed, gender, etc., unchanged). Then, they asked volunteers to rate the “smiley-ness” of voices before and after manipulation of the recordings. They found that the volunteers could distinguish the transformed phonemes from those that weren’t altered.

    Next, they asked the volunteers to mimic the sounds of the “smiley” phonemes. The researchers noted that for the volunteers to do so, they had to smile.

    Following these preliminary experiments, the researchers asked volunteers to describe their emotions when listening to transformed phonemes compared to those that weren’t transformed. They found that when volunteers heard the altered phonemes, they expressed a heightened sense of joy and irony.

    Lastly, the researchers used electromyography to monitor the volunteers’ facial muscles so that they could detect smiling and frowning as the volunteers listened to a set of 60 sentences—some manipulated (to sound as if they were spoken by someone who was smiling) and some unaltered. They found that when the volunteers judged speech to be “smiley,” they were more likely to smile and less likely to frown.

    In other words, people can detect auditory smiles and respond by mimicking them with smiles of their own.

    Auditory Signaling and Human Exceptionalism

    This research demonstrates that both the visual and auditory clues we receive from other people help us to understand their emotional state and to become influenced by it. Our ability to see and hear smiles helps us develop empathy toward others. Undoubtedly, this trait plays an important role in our ability to link our minds together and to form complex social structures—two characteristics that some anthropologists believe contribute to human exceptionalism.

    The notion that human beings differ in degree, not kind, from other creatures has been a mainstay concept in anthropology and primatology for over 150 years. And it has been the primary reason why so many people have abandoned the belief that human beings bear God’s image.

    Yet, this stalwart view in anthropology is losing its mooring, with the concept of human exceptionalism taking its place. A growing minority of anthropologists and primatologists now believe that human beings really are exceptional. They contend that human beings do, indeed, differ in kind, not merely degree, from other creatures—including Neanderthals. Ironically, the scientists who argue for this updated perspective have developed evidence for human exceptionalism in their attempts to understand how the human mind evolved. And, yet, these new insights can be used to marshal support for the biblical conception of humanity.

    Anthropologists identify at least four interrelated qualities that make us exceptional: (1) symbolism, (2) open-ended generative capacity, (3) theory of mind, and (4) our capacity to form complex social networks.

    Human beings effortlessly represent the world with discrete symbols and use those symbols to denote abstract concepts. Our ability to represent the world symbolically and to combine and recombine those symbols in a countless number of ways to create alternate possibilities has interesting consequences. Human capacity for symbolism manifests in the form of language, art, music, and body ornamentation. And humans alone desire to communicate the scenarios we construct in our minds with other people.

    But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together and we can do so because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also possess the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks.

    Thus, I would contend that our ability to hear people’s smiles plays a role in theory of mind and our sophisticated social capacities. It contributes to human exceptionalism.

    In effect, these four qualities could be viewed as scientific descriptors of the image of God. In other words, evidence for human exceptionalism is evidence that human beings bear God’s image.

    So, even though many people in the scientific community promote a view of humanity that denigrates the image of God, scientific evidence and everyday experience continually support the notion that we are unique and exceptional as human beings. It makes me grin from ear to ear to know that scientific investigations into our cognitive and behavioral capacities continue to affirm human exceptionalism and, with it, the image of God.

    Indeed, we are the crown of creation. And that makes me thankful!

    Resources

    Endnotes
    1. Pablo Arias, Pascal Belin, and Jean-Julien Aucouturier, “Auditory Smiles Trigger Unconscious Facial Imitation,” Current Biology 28 (July 23, 2018): R782–R783, doi:10.1016/j.cub.2018.05.084.
  • When Did Modern Human Brains—and the Image of God—Appear?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 14, 2018

    When I was a kid, I enjoyed reading Ripley’s Believe It or Not! I couldn’t get enough of the bizarre facts described in the pages of this comic.

    I was especially drawn to the panels depicting people who had oddly shaped heads. I found it fascinating to learn about people whose skulls were purposely forced into unnatural shapes by a practice known as intentional cranial deformation.

    For the most part, this practice is a thing of the past. It is rarely performed today (though there are still a few people groups who carry out this procedure). But for much of human history, cultures all over the world have artificially deformed people’s crania (often for reasons yet to be fully understood). They accomplished this feat by binding the heads of infants, which distorts the normal growth of the skull. Through this practice, the shape of the human head can be readily altered to be abnormally flat, elongated, rounded, or conical.

    blog__inline--when-did-modern-human-brains-and-the-image-of-god-appear-1

    Figure 1: Deformed ancient Peruvian skull. Image credit: Shutterstock.

    It is remarkable that the human skull is so malleable. Believe it, or not!

    blog__inline--when-did-modern-human-brains-and-the-image-of-god-appear-2

    Figure 2: Parts of the human skull. Image credit: Shutterstock.

    For physical anthropologists, the normal shape of the modern human skull is just as bizarre as the conical-shaped skulls found among the remains of the Nazca culture of Peru. Compared to other hominins (such as Neanderthals and Homo erectus), modern humans have oddly shaped skulls. The skulls of these other hominins were elongated along the anterior-posterior axis, whereas the skull of modern humans is globular, with bulging, enlarged parietal and cerebellar areas. The modern human skull also has another distinctive feature: the face is retracted and relatively small.

    blog__inline--when-did-modern-human-brains-and-the-image-of-god-appear-3

    Figure 3: Comparison of modern human and Neanderthal skulls. Image credit: Wikipedia.

    Anthropologists believe that the difference in skull shape (and hence, brain shape) has profound significance and helps explain the advanced cognitive abilities of modern humans. The parietal lobe of the brain is responsible for:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for throwing spears and making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    Human beings seem to uniquely possess these capabilities. They make us exceptional compared to other hominins. Thus, for paleoanthropologists, two key questions are: when and how did the globular human skull appear?

    Recently, a team of researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, addressed these questions. And their answers add evidence for human exceptionalism while unwittingly providing support for the RTB human origins model.1

    The Appearance of the Modern Human Brain

    To characterize the mode and tempo for the origin of the unusual morphology (shape) of the modern human skull, the German researchers generated and analyzed the CT scans of 20 fossil specimens representing three windows of time: (1) 300,000 to 200,000 years ago; (2) 130,000 to 100,000 years ago; and (3) 35,000 to 10,000 years ago. They also included 89 cranially diverse skulls from present-day modern humans, 8 Neanderthal skulls, and 8 from Homo erectus in their analysis.

    The first group consisted of three specimens: (1) Jebel Irhoud 1 (dating to 315,000 years in age); (2) Jebel Irhoud 2 (also dating to 315,000 years in age); and (3) Omo Kibish (dating to 195,000 years in age). The specimens that comprise this group are variously referred to as near anatomically modern humans or archaic Homo sapiens.

    The second group consisted of four specimens: (1) LH 18 (dating to 120,000 years in age); (2) Skhul (dating to 115,000 years in age); (3) Qafzeh 6; and (4) Qafzeh 9 (both dating to about 115,000 years in age). This group consists of specimens typically considered to be anatomically modern humans. The third group consisted of thirteen specimens that are all considered to be anatomically and behaviorally modern humans.

    Researchers discovered that the group one specimens had facial features like those of modern humans. They also had brain sizes similar to those of Neanderthals and modern humans. But their endocranial shape was unlike that of modern humans and appeared to be intermediate between H. erectus and Neanderthals.

    On the other hand, the specimens from group two displayed endocranial shapes that clustered with the group three specimens and the present-day samples. In short, modern human skull morphology (and brain shape) appeared between 130,000 and 100,000 years ago.

    Confluence of Evidence Locates Humanity’s Origin

    This result aligns with several recent archaeological finds that place the origin of symbolism in the same window of time represented by the group two specimens. (See the Resources section for articles detailing some of these finds.) Symbolism—the capacity to represent the world and abstract ideas with symbols—appears to be an ability that is unique to modern humans and is most likely a manifestation of the modern human brain shape, specifically an enlarged parietal lobe.

    Likewise, this result coheres with the most recent dates for mitochondrial Eve and Y-chromosomal Adam around 120,000 to 150,000 years ago. (Again, see the Resources section for articles detailing some of these finds.) In other words, the confluence of evidence (anatomical, behavioral, and genetic) pinpoints the origin of modern humans (us) between 150,000 and 100,000 years ago, with the appearance of modern human anatomy coinciding with the appearance of modern human behavior.

    What Does This Finding Mean for the RTB Human Origins Model?

    To be clear, the researchers carrying out this work interpret their results within the confines of the evolutionary framework. Therefore, they conclude that the globular skulls—characteristic of modern humans—evolved recently, only after the modern human facial structure had already appeared in archaic Homo sapiens around 300,000 years ago. They also conclude that the globular skull of modern humans had fully emerged by the time humans began to migrate around the world (around 40,000 to 50,000 years ago).

    Yet, the fossil evidence doesn’t show the gradual emergence of skull globularity. Instead, modern human specimens form a distinct cluster isolated from the distinct clusters formed by H. erectus, Neanderthals, and archaic H. sapiens. There are no intermediate globular specimens between archaic and modern humans, as would be expected if this trait evolved. Alternatively, the distinct clusters are exactly as expected if modern humans were created.

    It appears that the globularity of our skull distinguishes modern humans from H. erectus, Neanderthals, and archaic Homo sapiens (near anatomically modern humans). This globularity of the modern human skull has implications for when modern human behavior and advanced cognitive abilities emerged.

    For this reason, I see this work as offering support for the RTB human origins creation model (and, consequently, the biblical account of human origins and the biblical conception of human nature). RTB's model (1) views human beings as cognitively superior to and distinct from other hominins, and (2) posits that human beings uniquely possess a quality called the image of God that I believe manifests as human exceptionalism.

    This work supports both predictions by highlighting the uniqueness and exceptional qualities of modern humans compared to H. erectus, Neanderthals, and archaic H. sapiens, calling specific attention to our unusual skull and brain morphology. As noted, anthropologists believe that this unusual brain morphology supports our advanced cognitive capabilities—abilities that I believe reflect the image of God. Because archaic H. sapiens, Neanderthals, and H. erectus did not possess this brain morphology, it is unlikely that these creatures had the sophisticated cognitive capacity displayed by modern humans.

    In light of RTB's model, it is gratifying to learn that the origin of anatomically modern humans coincides with the origin of modern human behavior.

    Believe it or not, our oddly shaped head is part of the scientific case that can be made for the image of God.

    Resources

    Endnotes
    1. Simon Neubauer, Jean-Jacques Hublin, and Philipp Gunz, “The Evolution of Modern Human Brain Shape,” Science Advances 4 (January 24, 2018): eaao5961, doi:10.1126/sciadv.aao5961.
  • Is Raising Children with Religion Child Abuse?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 07, 2018

    “Horrible as sexual abuse no doubt was, the damage was arguably less than the long-term psychological damage inflicted by bringing the child up Catholic in the first place.”

    —Richard Dawkins, atheist and evolutionary biologist

    blog__inline--is-raising-children-with-religion-child-abuse

    Image: Richard Dawkins. Image credit: Shutterstock

    With his typical flair for provocation, on more than one occasion Richard Dawkins has asserted that indoctrinating children in religion is a form of child abuse. In fact, he argues that the mental torment inflicted by religion on children is worse than sexual abuse carried out by priests—or by any other adult, for that matter. By way of support, he cites a conversation he had with someone who was molested by a Catholic priest. According to Dawkins, a woman told him “that while being abused by a priest was a ‘yucky’ experience, being told as a child that a Protestant friend who died would ‘roast in Hell’ was more distressing.”1

    Of course, every time Dawkins has made this proclamation, people from nearly every philosophical and religious perspective have condemned his outlandish statements. But are condemnations enough to keep him from making the assertion? What about evidence?

    A study recently published by researchers from Harvard T. H. Chan School of Public Health demonstrates that when Dawkins claims that indoctrinating children with religion is worse than child molestation, he is not only being outlandish, but wrong. The Harvard researchers discovered that children raised with religion are mentally and physically healthier than children raised without religion.2

    The study’s conclusions are based on analysis of data from the Nurses’ Health Study II and the Growing Up Today Study. Sampling between 5,500 and 7,500 individuals (depending on the question asked), the researchers discovered that, compared to those who never attended, children and adolescents (12–19 years of age) who attended weekly religious services displayed:

    • Greater life satisfaction
    • Greater sense of mission
    • Greater volunteerism
    • Greater forgiveness
    • Fewer depressive symptoms
    • Lower likelihood of PTSD
    • Lower drug use
    • Reduced cigarette smoking
    • Lower sexual initiation
    • Lower levels of STIs (sexually transmitted infections)
    • Reduced incidences of abnormal Pap smears

    The team noticed that when regular attendance of religious services was combined with prayer and meditation, the effects were slightly diminished. At this juncture, they don’t understand this counterintuitive finding.

    They also discovered mental and physical health benefits for children and adolescents who did not attend religious services but prayed and/or meditated.

    Apparently, attending religious services regularly and praying keeps young people from engaging in risky behaviors, makes them more disciplined, and helps develop their character. All of this translates into healthier, better adjusted, more resilient young men and women.

    The results of this study align with results of previous studies. Study after study consistently shows that people who practice religion enjoy numerous mental and physical health benefits compared to those who don’t. (See the Resources section for more on this topic.) Previous studies focused on adults, but as the study by the Harvard School of Public Health team reveals, the benefits are realized for children and adolescents, too.

    Ying Chen, one of the study’s authors, concludes, “These findings are important for both our understanding of health and our understanding of parenting practices. Many children are raised religiously, and our study shows that this can powerfully affect their health behaviors, mental health, and overall happiness and well-being.”3

    Far from being abusive, raising children with religion comprises one facet of responsible parenting. And if Richard Dawkins is truly a man of science, he should be willing to acknowledge the real benefits of teaching religion to our children.

    Resources:

    Endnotes
    1. Rob Cooper, “Forcing a Religion on Your Children Is as Bad as Child Abuse, Claims Atheist Professor Richard Dawkins,” The Daily Mail (April 23, 2013), https://www.dailymail.co.uk/news/article-2312813/Richard-Dawkins-Forcing-religion-children-child-abuse-claims-atheist-professor.html.
    2. Ying Chen and Tyler J. VanderWeele, “Associations of Religious Upbringing with Subsequent Health and Well-Being from Adolescence to Young Adulthood: An Outcome-Wide Analysis,” American Journal of Epidemiology (2018): kwy142, doi:10.1093/aje/kwy142.
    3. Alice G. Walton, “Raising Kids with Religion or Spirituality May Protect Their Mental Health,” Forbes (September 17, 2018), https://www.forbes.com/sites/alicegwalton/2018/09/17/raising-kids-with-religion-or-spirituality-may-protect-their-mental-health-study/#7555d89f3287.
  • Is Fossil-Associated Cholesterol a Biomarker for a Young Earth?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 24, 2018

    Like many Americans, I receive a yearly physical. Even though I find these exams to be a bit of a nuisance, I recognize their importance. These annual checkups allow my doctor to get a read on my overall health.

    An important part of any physical exam is blood work. Screening a patient’s blood for specific biomarkers gives physicians data that allows them to assess a patient’s risk for various diseases. For example, the blood levels of total cholesterol and the ratio of HDLs to LDLs serve as useful biomarkers for cardiovascular disease.

    blog__inline--is-fossil-asscociated-cholesterol-2

    Figure 1: Cholesterol. Image credit: BorisTM. Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholesterol.svg.

    As it turns out, physicians aren’t the only ones who use cholesterol as a diagnostic biomarker. So, too, do paleontologists. In fact, recently a team of paleontologists used cholesterol biomarkers to determine the identity of an enigmatic fossil recovered in Precambrian rock formations dated to 558 million years in age.1 This diagnosis was possible because they were able to extract low levels of cholesterol derivatives from the fossil. Based on the chemical profile of the extracts, researchers concluded that Dickinsonia specimens are the fossil remains of some of the oldest animals on Earth.

    Without question, this finding has important implications for how we understand the origin and history of animal life on Earth. But young-earth creationists (YECs) think that this finding has important implications for another reason. They believe that the recovery of cholesterol derivatives from Dickinsonia provides compelling evidence that the earth is only a few thousand years old and the fossil record results from a worldwide flood event. They argue that there is no way organic materials such as cholesterol could survive for hundreds of millions of years in the geological column. Consequently, they argue that the methods used to date fossils such as Dickinsonia must not be reliable, calling into question the age of the earth determined by radiometric techniques.

    Is this claim valid? Is the recovery of cholesterol derivatives from fossils that date to hundreds of millions of years evidence for a young earth? Or can the recovery of cholesterol derivatives from 558-million-year-old fossils be explained in an old-earth paradigm?

    How Can Cholesterol Derivatives Survive for Millions of Years?

    Despite the protests of YECs, for several converging reasons the isolation of cholesterol derivatives from the Dickinsonia specimen is easily explained—even if the specimen dates to 558 million years in age.

    • The research team did not recover high levels of cholesterol from the Dickinsonia specimen (which would be expected if the fossils were only 3,000 years old), but trace levels of cholestane (which would be expected if the fossils were hundreds of millions of years old). Cholestane is a chemical derivative of cholesterol that is produced when cholesterol undergoes diagenetic changes.

    blog__inline--is-fossil-asscociated-cholesterol-3

    Figure 2: Cholestane. Image credit: Calvero. (Self-made with ChemDraw.) Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholestane.svg.

    • Cholestane is a chemically inert hydrocarbon that is expected to be stable for vast periods of time. In fact, geochemists have recovered steranes (other biomarkers) from rock formations that date to 2.8 billion years in age.
    • The Dickinsonia specimens that yielded cholestanes were exceptionally well-preserved. Specifically, they were unearthed from the White Sea Cliffs in northwest Russia. This rock formation has escaped deep burial and geological heating, making it all the more reasonable that compounds such as cholestanes could survive for nearly 600 million years.

    In short, the recovery of cholesterol derivatives from Dickinsonia does not reflect poorly on the health of the old-earth paradigm. When the chemical properties of cholesterol and cholestane are considered, and given the preservation conditions of the Dickinsonia specimens, the interpretation that these materials were recovered from 558-million-year-old fossil specimens passes the physical exam.

    Resources

    Featured image: Dickinsonia costata. Image credit: https://commons.wikimedia.org/wiki/File:DickinsoniaCostata.jpg.

    Endnotes
    1. Ilya Bobrovskiy et al., “Ancient Steroids Establish the Ediacaran Fossil Dickinsonia as One of the Earliest Animals,” Science 361 (September 21, 2018): 1246–49, doi:10.1126/science.aat7228.
  • Further Review Overturns Neanderthal Art Claim

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 17, 2018

    As I write this blog post, the 2018–19 NFL season is just underway.

    During the course of any NFL season, several key games are decided by a controversial call made by the officials. Nobody wants the officials to determine the outcome of a game, so the NFL has instituted a way for coaches to challenge calls on the field. When a call is challenged, part of the officiating crew looks at a computer tablet on the sidelines—reviewing the game footage from a number of different angles in an attempt to get the call right. After two minutes of reviewing the replays, the senior official makes his way to the middle of the field and announces, “Upon further review, the call on the field . . .”

    Recently, a team of anthropologists from Spain and the UK created quite a bit of controversy based on a “call” they made from working in the field. Using a new U-Th dating method, these researchers age-dated the artwork in caves from Iberia. Based on the age of a few of their samples, they concluded that Neanderthals produced cave paintings.1 But new work by three independent research teams challenges the “call” from the field—overturning the conclusion that Neanderthals made art and displayed symbolism like modern humans.

    U-Th Dating Method

    The new dating method under review measures the age of calcite deposits beneath cave paintings and of those formed over the artwork after the paintings were created. As water flows down cave walls, it deposits calcite. When calcite forms, it incorporates trace amounts of U-238 but essentially no thorium. Over time, this uranium decays (via U-234) into Th-230, so the ratio of Th-230 to uranium in a calcite film grows with age. Normally, detection of such low quantities of these isotopes would require extremely large samples. Researchers discovered that by using accelerator mass spectrometry, they could get by with 10-milligram samples. And by dating the calcite samples with this technique, they produced minimum and maximum ages for the cave paintings (calcite that formed over the art yields a minimum age; calcite beneath it yields a maximum age).2
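
    For readers who want to see the arithmetic, here is a minimal sketch (in Python) of the idealized closed-system age calculation, assuming the calcite formed with no initial Th-230 and that U-234 stays in secular equilibrium with U-238. The activity ratios passed in are hypothetical values chosen only to show how ages of roughly 40,000 and 65,000 years arise; the published studies use the full U-series equations and measured isotope ratios.

```python
import math

TH230_HALF_LIFE = 75_584  # years (approximate)
LAMBDA_TH230 = math.log(2) / TH230_HALF_LIFE

def uth_age(th230_u_activity_ratio):
    """Idealized closed-system U-Th age (in years).

    Assumes the calcite formed with no initial Th-230 and that U-234 remains
    in secular equilibrium with U-238, so the (Th-230/U-238) activity ratio
    grows as 1 - exp(-lambda * t).
    """
    if not 0 <= th230_u_activity_ratio < 1:
        raise ValueError("ratio must lie in [0, 1) for this simplified model")
    return -math.log(1 - th230_u_activity_ratio) / LAMBDA_TH230

# Hypothetical activity ratios, for illustration only
print(round(uth_age(0.31)))  # roughly 40,000 years
print(round(uth_age(0.45)))  # roughly 65,000 years

# Open-system caveat: if water later leaches uranium out of the calcite film,
# the measured Th/U ratio rises, and this formula returns an age older than
# the true age. That is the core objection discussed below.
```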

    Call from the Field: Neanderthals Are Artists

    The team applied their dating method to the art found in three cave sites in Iberia (the Iberian Peninsula): (1) La Pasiega, which houses paintings of animals, linear signs, claviform signs, and dots; (2) Ardales, which contains about 1,000 paintings of animals, along with dots, discs, lines, geometric shapes, and hand stencils; and (3) Maltravieso, which displays a set of hand stencils and geometric designs. The research team took a total of 53 samples from 25 carbonate formations associated with the cave art in these three cave sites. While most of the samples dated to 40,000 years old or less (which indicates that modern humans were the artists), three measurements produced minimum ages of around 65,000 years: (1) a red scalariform sign from La Pasiega, (2) red-painted areas from Ardales, and (3) a hand stencil from Maltravieso. On the basis of these three measurements, the team concluded that the art must have been made by Neanderthals because modern humans had not made their way into Iberia at that time. In other words, Neanderthals made art, just like modern humans did.

    blog__inline--further-review-overturns-neanderthal-art-claim

    Figure: Maltravieso Cave Entrance, Spain. Image credit: Shutterstock

    Shortly after the findings were published, I wrote a piece expressing skepticism about this claim for two reasons.

    First, I questioned the reliability of the method. Once the calcite deposit forms, the U-Th method will only yield reliable results if none of the U or Th moves in or out of the deposit. Based on the work of researchers from France and the US, it does not appear as if the calcite films are closed systems. The calcite deposits on the cave wall formed because of hydrological activity in the cave. Once a calcite film forms, water will continue to flow over its surface, leaching out U (because U is much more water soluble than Th). By removing U, water flowing over the calcite will make it seem as if the deposit and, hence, the underlying artwork is much older than it actually is.3

    Secondly, I expressed concern that the 65,000-year-old dates measured for a few samples are outliers. Of the 53 samples measured, only three gave age-dates of 65,000 years. The remaining samples dated much younger, typically around 40,000 years in age. So why should we give so much credence to three measurements, particularly if we know that the calcite deposits are open systems?

    Upon Further Review: Neanderthals Are Not Artists

    Within a few months, three separate research groups published papers challenging the reliability of the U-Th method for dating cave art and, along with it, the claim that Neanderthals produced cave art.4 It is not feasible to detail all their concerns in this article, but I will highlight six of the most significant complaints. In several instances, the research teams independently raised the same concerns.

    1. The U-Th method is unreliable because the calcite deposits are an open system. Two of the research teams reiterated the concern I raised, for the same reason. The U-Th dating technique can only yield reliable results if no U or Th moves in or out of the system once the calcite film forms. The continued water flow over the calcite deposits will preferentially leach U from the deposit, making the deposit appear to be older than it is.
    2. The U-Th method is unreliable because it fails to account for nonradiogenic Th. This isotope would have been present in the source water producing the calcite deposits. As a result, Th would already be present in calcite at the time of formation. This nonradiogenic Th would make the samples appear to be older than they actually are.
    3. The 65,000-year-old dates for the three measurements from La Pasiega, Ardales, and Maltravieso are likely outliers. Just as I pointed out before, two of the research groups expressed concern that only 3 of the 53 measurements came in at 65,000 years in age. This discrepancy suggests that these dates are outliers, most likely reflecting the fact that the calcite deposits are an open system that formed with Th already present. Yet, the researchers from Spain and the UK who reported these results emphasized the few older dates while downplaying the younger dates.
    4. Multiple measurements on the same piece of art yielded discordant ages. For example, the researchers made five age-date measurements of the hand stencil at Maltravieso. These dates (66.7 kya [thousand years ago], 55.2 kya, 35.3 kya, 23.1 kya, and 14.7 kya) were all over the place. And yet, the researchers selected the oldest date for the age of the hand stencil, without justification.
    5. Some of the red “markings” on cave walls that were dated may not be art. Red markings are commonplace on cave walls and can be produced by microorganisms that secrete organic materials or iron oxide deposits. It is possible that some of the markings that were dated were not art at all.
    6. The method used by the researchers to sample the calcite deposits may have been flawed. One team expressed concern that the sampling technique may have unwittingly produced dates for the cave surface on which the paintings were made rather than the pigments used to make the art itself. If the researchers inadvertently dated the cave surface, it could easily be older than the art.

    In light of these many shortcomings, it is questionable whether the U-Th method, as applied to cave art, is reliable. Upon further review, the call from the field is overturned. There is no conclusive evidence that Neanderthals made art.

    Why Does This Matter?

    Artistic expression reflects a capacity for symbolism. And many people view symbolism as a quality unique to human beings that contributes to our advanced cognitive abilities and exemplifies our exceptional nature. In fact, as a Christian, I see symbolism as a manifestation of the image of God. If Neanderthals possessed symbolic capabilities, such a quality would undermine human exceptionalism (and with it the biblical view of human nature), rendering human beings nothing more than another hominin. At this juncture, every claim for Neanderthal symbolism has failed to withstand scientific scrutiny.

    Now, it is time for me to go back to the game.

    Who dey! Who dey! Who dey think gonna beat dem Bengals!

    Resources:

    Endnotes
    1. D. L. Hoffmann et al., “U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,” Science 359 (February 23, 2018): 912–15, doi:10.1126/science.aap7778.
    2. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art in 11 Caves in Spain,” Science 336 (June 15, 2012): 1409–13, doi:10.1126/science.1219957.
    3. Georges Sauvet et al., “Uranium-Thorium Dating Method and Palaeolithic Rock Art,” Quaternary International 432 (2017): 86–92, doi:10.1016/j.quaint.2015.03.053.
    4. Ludovic Slimak et al., “Comment on ‘U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,’” Science 361 (September 21, 2018): eaau1371, doi:10.1126/science.aau1371; Maxime Aubert, Adam Brumm, and Jillian Huntley, “Early Dates for ‘Neanderthal Cave Art’ May Be Wrong,” Journal of Human Evolution (2018), doi:10.1016/j.jhevol.2018.08.004; David G. Pearce and Adelphine Bonneau, “Trouble on the Dating Scene,” Nature Ecology and Evolution 2 (June 2018): 925–26, doi:10.1038/s41559-018-0540-4.
  • Can Evolution Explain the Origin of Language?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 10, 2018

    Oh honey hush, yes you talk too much
    Oh honey hush, yes you talk too much
    Listenin’ to your conversation is just about to separate us

    —Albert Collins

    He was called the “Master of the Telecaster.” He was also known as the “Iceman,” because his guitar playing was so hot, he was cold. Albert Collins (1932–93) was an electric blues guitarist and singer whose distinctive playing style influenced the likes of Stevie Ray Vaughan and Robert Cray.

    blog__inline--can-evolution-explain-the-origin-of-language

    Image: Albert Collins in 1990. Image Credit: Masahiro Sumori [GFDL (https://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (https://creativecommons.org/licenses/by-sa/3.0/) or CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], from Wikimedia Commons.

    Collins was known for his sense of humor and it often came through in his music. In one of Collins’s signature songs, Honey Hush, the bluesman complains about his girlfriend who never stops talking: “You start talkin’ in the morning; you’re talkin’ all day long.” Collins finds his girlfriend’s nonstop chatter so annoying that he contemplates ending their relationship.

    While Collins may have found his girlfriend’s unending conversation irritating, the capacity for conversation is a defining feature of human beings (modern humans). As human beings, we can’t help ourselves—we “talk too much.”

    What does our capacity for language tell us about human nature and our origins?

    Language and Human Exceptionalism

    Human language flows out of our capacity for symbolism. Humans have the innate ability to represent the world (and abstract ideas) using symbols. And we can embed symbols within symbols to construct alternative possibilities and then link our scenario-building minds together through language, music, art, etc.

    As a Christian, I view our symbolism as a facet of the image of God. While animals can communicate, as far as we know only human beings possess abstract language. And despite widespread claims about Neanderthal symbolism, the scientific case for symbolic expression among these hominids keeps coming up short. To put it another way, human beings appear to be uniquely exceptional in ways that align with the biblical concept of the image of God, with our capacity for language serving as a significant contributor to the case for human exceptionalism.

    Recent insights into the mode and tempo of language’s emergence strengthen the scientific case for the biblical view of human nature. As I have written in previous articles (see Resources) and in Who Was Adam?, language appears to have emerged suddenly—and it coincides with the appearance of anatomically modern humans. Additionally, when language first appeared, it was syntactically as complex as contemporary language. That is, there was no evolution of language—proceeding from a proto-language through simple language and then to complex language. Language emerges all at once as a complete package.

    From my vantage point, the sudden appearance of language that uniquely coincides with the first appearance of humans is a signature for a creation event. It is precisely what I would expect if human beings were created in God’s image, as Scripture describes.

    Darwin’s Problem

    This insight into the origin of language also poses significant problems for the evolutionary paradigm. As linguist Noam Chomsky and anthropologist Ian Tattersall admit, “The relatively sudden origin of language poses difficulties that may be called ‘Darwin’s problem.’”1

    Anthropologist Chris Knight’s insights compound “Darwin’s problem.” He concludes that “language exists, but for reasons which no currently accepted theoretical paradigm can explain.”2 Knight arrives at this conclusion by surveying the work of three scientists (Noam Chomsky, Amotz Zahavi, and Dan Sperber) who study language’s origin using three distinct approaches. All three converge on the same conclusion; namely, evolutionary processes should not produce language or any form of symbolic communication.

    Chris Knight writes:

    Language evolved in no other species than humans, suggesting a deep-going obstacle to its evolution. One possibility is that language simply cannot evolve in a Darwinian world—that is, in a world based ultimately on competition and conflict. The underlying problem may be that the communicative use of language presupposes anomalously high levels of mutual cooperation and trust—levels beyond anything which current Darwinian theory can explain . . . suggesting a deep-going obstacle to its evolution.3

    To support this view, Knight synthesizes the insights of linguist Noam Chomsky, ornithologist and theoretical biologist Amotz Zahavi, and anthropologist Dan Sperber. All three scientists determine that language cannot evolve from animal communication for three distinct reasons.

    Three Reasons Why Language Is Unique to Humans

    Chomsky views animal minds as only being capable of bounded ranges of expression. On the other hand, human language makes use of a finite set of symbols to communicate an infinite array of thoughts and ideas. For Chomsky, there are no intermediate steps between bounded and infinite expression of ideas. The capacity to express an unlimited array of thoughts and ideas stems from a capacity that must have appeared all at once. And this ability must be supported by brain and vocalization structures. Brain structures and the ability to vocalize would either have to already be in place at the time language appeared (because these structures were selected by the evolutionary process for entirely different purposes) or they simultaneously arose with the capacity to conceive of infinite thoughts and ideas. To put it another way, language could not have emerged from animal communication through a step-evolutionary process. It had to appear all at once and be fully intact at the time of its genesis. No one knows of any mechanism that can effect that type of transformation.

    Zahavi’s work centers on understanding the evolutionary origin of signaling in the animal world. Endemic to his approach, Zahavi divides natural selection into two components: utilitarian selection (which describes selection for traits that improve the efficiency of some biological process—enhancing the organism’s fitness) and signal selection (which involves the selection of traits that are wasteful). Though counterintuitive, signal selection contributes to the fitness of the organism because it communicates the organism’s fitness to other animals (either members of the same or different species). The example Zahavi uses to illustrate signal selection is the unusual behavior of gazelles. These creatures stot (jump up and down, stomp the ground, loudly snort) when they detect a predator, which calls attention to themselves. This behavior is counterintuitive. Shouldn’t these creatures use their energy to run away, getting the biggest jump they can on the pursuing predator? As it turns out, the “wasteful and costly” behavior communicates to the predator the fitness of the gazelle. In the face of danger, the gazelle is willing to take on risk, because it is so fit. The gazelle’s behavior dissuades the predator from attacking. Observations in the wild confirm Zahavi’s ideas. Predators most often will go after gazelles that don’t stot or that display limited stotting behavior.

    Animal signaling is effective and reliable only when actual costly handicaps are communicated. The signaling can only be effective when a limited and bounded range of signals is presented. This constraint is the only way to communicate the handicap. In contrast, language is open-ended and infinite. Given the constraints on animal signaling, it cannot evolve into language. Natural selection prevents animal communication from evolving into language because, in principle, when the infinite can be communicated, in practice, nothing is communicated at all.

    Based in part on fieldwork he conducted in Ethiopia with the Dorze people, Dan Sperber concluded that people use language to primarily communicate alternative possibilities and realities—falsehoods—rather than information that is true about the world. To be certain, people use language to convey brute facts about the world. But most often language is used to communicate institutional facts—agreed-upon truths—that don’t necessarily reflect the world as it actually is. According to Sperber, symbolic communication is characterized by extravagant imagery and metaphor. Human beings often build metaphor upon metaphor—and falsehood upon falsehood—when we communicate. For Sperber, this type of communication can’t evolve from animal signaling. What evolutionary advantage arises by transforming communication about reality (animal signaling) to communication about alternative realities (language)?

    Synthesizing the insights of Chomsky, Zahavi, and Sperber, Knight concludes that language is impossible in a Darwinian world. He states, “The Darwinian challenge remains real. Language is impossible not simply by definition, but—more interestingly—because it presupposes unrealistic levels of trust. . . . To guard against the very possibility of being deceived, the safest strategy is to insist on signals that just cannot be lies. This rules out not only language, but symbolic communication of any kind.”4

    Signal for Creation

    And yet, human beings possess language (along with other forms of symbolism, such as art and music). Our capacity for abstract language is one of the defining features of human beings.

    For Christians like me, our language abilities reflect the image of God. And what appears as a profound challenge and mystery for the evolutionary paradigm finds ready explanation in the biblical account of humanity’s origin.

    Is it time for our capacity for conversation to separate us from the evolutionary explanation for humanity’s origin?

    Resources:

    Endnotes
    1. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12 (August 2014): e1001934, doi:10.1371/journal.pbio.1001934.
    2. Chris Knight, “Puzzles and Mysteries in the Origins of Language,” Language and Communication 50 (September 2016): 12–21, doi:10.1016/j.langcom.2016.09.002.
    3. Knight, “Puzzles and Mysteries,” 12–21.
    4. Knight, “Puzzles and Mysteries,” 12–21.
  • The Optimal Design of the Genetic Code

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 03, 2018

    Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

    –William Paley, Natural Theology

    In his classic work, Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about via the work of human agents. They come about by the work of intelligent designers. And because biological systems are contrivances, they, too, must come about via the work of a Creator.

    For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. For Paley, if the only example of a biological contrivance available to us was the eye, its sophisticated design and elegant complexity alone justify the “necessity of an intelligent creator” to explain its origin.

    As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

    As with Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet, in particular, I am struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified to conclude that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

    To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)

    Proteins

    The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

    Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.

    blog__inline--the-optimal-design-of-the-genetic-code-1

    Figure 1: The Amino Acids. Image credit: Shutterstock

    Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.

    DNA

    The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like these biomolecules, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands are arranged parallel to one another with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.

    blog__inline--the-optimal-design-of-the-genetic-code-2

    Figure 2: The Structure of DNA. Image credit: Shutterstock

    As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

    The Genetic Code

    A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides. Pairs of nucleotides would not suffice either, since they yield only 4 × 4 = 16 combinations. The cell addresses this mismatch by using a code composed of groupings of three nucleotides (4 × 4 × 4 = 64 combinations) to specify the 20 different amino acids.

    The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

    Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.

    Interestingly, some codons, called stop codons or nonsense codons, do not specify any amino acid. (For example, the codon UGA is a stop codon.) These codons always occur at the end of the gene, informing the cell where the protein chain ends.

    Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids, but also “tell” the cell where a protein chain begins. For example, the codon GUG encodes the amino acid valine and can also specify the starting point of a protein chain.

    blog__inline--the-optimal-design-of-the-genetic-code-3

    Figure 3: The Genetic Code. Image credit: Shutterstock
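
    To make these coding rules concrete, here is a minimal sketch in Python that translates a short, made-up mRNA fragment using a partial codon table. Only a handful of the 64 assignments from the standard genetic code are included, just enough to illustrate redundancy, a start codon, and a stop codon.

```python
# Minimal sketch: translating a short coding sequence with a partial codon table.
# Only a few assignments from the standard genetic code are shown; the real
# table covers all 64 codons, including the degenerate (redundant) assignments.
CODON_TABLE = {
    "AUG": "Met",  # also the canonical start codon
    "GUG": "Val",  # can double as an alternative start in some organisms
    "GAA": "Glu", "GAG": "Glu",                      # two codons, one amino acid
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",
    "UGA": "Stop", "UAA": "Stop", "UAG": "Stop",
}

def translate(mrna):
    """Read the mRNA three nucleotides at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "Stop":
            break
        protein.append(residue)
    return protein

# Hypothetical mRNA fragment, for illustration only
print(translate("AUGGAGUCUGUGUAA"))  # ['Met', 'Glu', 'Ser', 'Val']
```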

    The Optimal Genetic Code

    Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—not merely a “frozen accident.” Instead, it looked to them as if a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time, scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from those of the universal genetic code.2

    Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor into their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

    It could be argued that the genetic code’s error-minimization properties are more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code’s capacity occurred outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
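
    The flavor of these error-minimization analyses can be captured in a short Python sketch: keep the code’s block structure (which codons are synonymous), shuffle which amino acid each block specifies, score every code by the average change in an amino acid property across all single-nucleotide substitutions, and count how often a shuffled code beats the natural assignments. The property values and the six-block slice of the code used here are illustrative stand-ins, not the data or weighting schemes of the published studies.

```python
import itertools, random

# Illustrative amino acid property values (stand-ins, not the published data)
PROPERTY = {"Phe": 5.0, "Leu": 4.9, "Ser": 7.5, "Tyr": 5.4, "Cys": 4.8, "Trp": 5.2}

# A small slice of the standard code, with synonymous codons grouped into blocks
NATURAL_BLOCKS = {
    ("UUU", "UUC"): "Phe",
    ("UUA", "UUG"): "Leu",
    ("UCU", "UCC", "UCA", "UCG"): "Ser",
    ("UAU", "UAC"): "Tyr",
    ("UGU", "UGC"): "Cys",
    ("UGG",): "Trp",
}

BASES = "UCAG"

def expand(blocks):
    """Flatten block assignments into a codon -> amino acid dictionary."""
    return {codon: aa for codons, aa in blocks.items() for codon in codons}

def error_cost(code):
    """Mean squared property change over all single-nucleotide substitutions
    that turn one codon in the table into another codon in the table."""
    costs = []
    for codon, aa in code.items():
        for pos, alt in itertools.product(range(3), BASES):
            if alt == codon[pos]:
                continue
            mutant = codon[:pos] + alt + codon[pos + 1:]
            if mutant in code:
                costs.append((PROPERTY[aa] - PROPERTY[code[mutant]]) ** 2)
    return sum(costs) / len(costs)

def random_code():
    """Keep the codon blocks (redundancy structure); shuffle the amino acid labels."""
    amino_acids = list(NATURAL_BLOCKS.values())
    random.shuffle(amino_acids)
    return expand(dict(zip(NATURAL_BLOCKS.keys(), amino_acids)))

natural = error_cost(expand(NATURAL_BLOCKS))
better = sum(error_cost(random_code()) < natural for _ in range(10_000))
print(f"natural cost: {natural:.3f}; random codes that do better: {better}/10000")
```

    The published analyses do the same kind of bookkeeping, but over the full 64-codon table, with empirically measured amino acid properties and weighting for how often each type of substitution occurs.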

    Frameshift Mutations

    Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.

    blog__inline--the-optimal-design-of-the-genetic-code-4

    Figure 4: Types of Mutations. Image credit: Shutterstock
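
    To see why frameshift mutations are so destructive, here is a tiny Python sketch, run on a made-up gene fragment, showing how a single inserted nucleotide regroups every codon downstream of the insertion point.

```python
def codons(seq):
    """Group a coding sequence into successive nucleotide triplets (codons)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

# Hypothetical gene fragment (RNA alphabet), for illustration only
gene = "AUGGAGUCUGUGCCAUAA"
shifted = gene[:4] + "C" + gene[4:]  # insert one extra nucleotide after the 4th base

print(codons(gene))     # ['AUG', 'GAG', 'UCU', 'GUG', 'CCA', 'UAA']
print(codons(shifted))  # ['AUG', 'GCA', 'GUC', 'UGU', 'GCC', 'AUA']
# Every codon downstream of the insertion is regrouped, so the encoded amino
# acid sequence (and any stop signal) changes from that point onward.
```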

    The Genetic Code Is Optimized to Withstand Frameshift Mutations

    Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

    The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

    The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

    The Genetic Code and the Case for a Creator

    In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

    The triple optimization of the genetic code arises from its redundancy and the specific codon assignments. Over 10^18 possible genetic codes exist and any one of them could have been “selected” for the code in nature. Yet, the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

    An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.

    Resources

    Endnotes
    1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
    2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
    3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
    4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online publication, doi:10.1101/gr.5987307.
    5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
  • Neuroscientists Transfer "Memories" from One Snail to Another: A Christian Perspective on Engrams

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 26, 2018

    Scientists from UCLA recently conducted some rather bizarre experiments. For me, it’s these types of things that make it so much fun to be a scientist.

    Biologists transferred memories from one sea slug to another by extracting RNA from the nervous system of a trained sea slug and then injecting the extract into an untrained sea slug.1 After the injection, the untrained sea snails responded to environmental stimuli just like the trained ones, based on false memories created by the transfer of biomolecules.

    Why would researchers do such a thing? Even though it might seem like their motives were nefarious, they weren’t inspired to carry out these studies by Dr. Frankenstein or Dr. Moreau. Instead, they had really good reasons for performing these experiments: they wanted to gain insight into the physical basis of memory.

    How are memories encoded? How are they stored in the brain? And how are memories retrieved? These are some of the fundamental scientific questions that interest researchers who work in cognitive neuroscience. It turns out that sea slugs belonging to the group Aplysia (commonly referred to as sea hares) make ideal organisms to study in order to address these questions. The fact that we can gain insight into how memories are stored with sea slugs is mind-blowing and indicates to me (as a Christian and a biochemist) that biological systems have been designed for discovery.

    Sea Hares

    Sea hares have become the workhorses of cognitive neuroscience. These creatures have nervous systems that are complex enough to allow neuroscientists to study reflexes and learned behaviors, but simple enough that they can draw meaningful conclusions from their experiments. (By way of comparison, members of Aplysia have about 20,000 neurons in their nervous systems, whereas the human brain alone contains roughly 85 billion neurons.)

    Toward this end, neuroscientists took advantage of a useful reflexive behavior displayed by sea hares, called gill and siphon withdrawal. When these creatures are disturbed, they rapidly withdraw their delicate gill and siphon.

    The nervous system of these creatures can also undergo sensitization, a form of learning in which repeated exposure to a noxious stimulus produces an enhanced and broadened response to related stimuli—say, stimuli that connote danger.

    What Causes Memories?

    Sensitization is a learned response that is possible because memories have been encoded and stored in the sea hares’ nervous system. But how is this memory stored?

    Many neuroscientists think that the physical instantiation of a memory (called an engram) resides in the synaptic connections between nerve cells (neurons). Other neuroscientists hold a differing view: rather than being mediated by cell-cell interactions, engrams form within the interior of neurons, through biochemical events that take place within the cell nucleus. In fact, some studies have implicated RNA molecules in memory formation and storage.2 The UCLA researchers sought to determine if RNA plays a role in memory formation.

    Memory Transfer from One Sea Hare to Another

    To test this hypothesis, the researchers sensitized sea hares to painful stimuli. They accomplished this feat by inserting an electrode into the tail region of several sea hares and delivering a shock. The shock caused the sea hares to withdraw their gill and siphon. After 20 minutes, they repeated the shock and continued to do so at 20-minute intervals five more times. Twenty-four hours later, they repeated the shock protocol. By this point, the sea hare test subjects were sensitized to threatening stimuli. When touched, the trained sea hares would withdraw their gill and siphon for nearly 1 minute. Untrained sea hares (which weren’t subjected to the shock protocol) withdrew their gill and siphon for only about 1 second when touched.

    Next, the researchers sacrificed the sensitized sea hares and isolated RNA from their nervous system. Then they injected the RNA extracts into the hemocoel of untrained sea hares. When touched, the sea hares withdrew their gill and siphon for about 45 seconds.

    To confirm that this response was not an artifact of the injection procedure itself, the researchers ran a control: they injected RNA extracted from the nervous systems of untrained sea hares into other untrained sea hares. In these control animals, the gill and siphon withdrawal reflex lasted only about 1 second when the animals were touched.
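
    To keep the experimental arms straight, here is a minimal summary in code of the approximate withdrawal times reported above. The group labels and the dictionary structure are my own; the values are simply the figures quoted in the text.

    # Approximate gill-and-siphon withdrawal durations for each group
    # described above (labels are informal shorthand, values in seconds).
    withdrawal_seconds = {
        "trained (shock protocol)": 60,              # "nearly 1 minute"
        "untrained, no injection": 1,                # "about 1 second"
        "untrained + RNA from trained donors": 45,   # "about 45 seconds"
        "untrained + RNA from untrained donors": 1,  # "about 1 second"
    }

    for group, seconds in withdrawal_seconds.items():
        print(f"{group}: ~{seconds} s")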

    blog__inline--neuroscientists-transfer-memories-from-one-snail-to-another

    Figure: Sea Hare Stimulus Protocol. Image credit: Alexis Bédécarrats, Shanping Chen, Kaycey Pearce, Diancai Cai, and David L. Glanzman, eNeuro 14 May 2018, 5 (3) ENEURO.0038-18.2018; doi:10.1523/ENEURO.0038-18.2018.

    The researchers then applied the RNA extracts from both trained and untrained sea hares to sensory neurons grown in the lab. The RNA extracts from the trained sea hares caused the sensory neurons to display heightened activity. Conversely, the RNA extracts from the untrained sea hares had no effect on the activity of the cultured sensory neurons.

    Finally, the researchers added compounds called methylase inhibitors to the RNA extracts before injecting them into untrained sea hares. These inhibitors blocked the memory transfer. This result indicates that epigenetic modifications of DNA mediated by RNA molecules play a role in forming engrams.

    Based on these results, it appears that RNA mediates the formation and storage of memories. And, though the research team does not know which class of RNAs plays this role, they suspect that microRNAs may be the biochemical actors.

    Biomedical Implications

    Now that the UCLA researchers have identified RNA and epigenetic modifications of DNA as central to the formation of engrams, they believe that it might one day be possible to develop biomedical procedures that could treat memory loss that occurs with old age or with diseases such as Alzheimer’s and dementia. Toward this end, it is particularly encouraging that the researchers could transfer memories from one sea hare to another. This insight might even lead to therapies that would erase horrific memories.

    Of course, this raises questions about human nature—specifically, the relationship between the brain and mind. For many people, the fact that there is a physical basis for memories suggests that our mind is indistinguishable from the activities taking place within our brains. To put it differently, many people would reject the idea that our mind is a nonphysical substance, based on the discovery of engrams.

    Engrams, Brain, and Mind

    However, I would contend that if we adopt the appropriate mind-body model, it is possible to preserve the concept of the mind as a nonphysical entity distinct from the brain even if engrams are a reality. A model I find helpful is based on a computer hardware/software analogy. Accordingly, the brain is the hardware that manifests the mind’s activity. Meanwhile, the mind is analogous to the software programming. According to this model, hardware structures—brain regions—support the expression of the mind, the software.

    A computer system needs both the hardware and software to function properly. Without the hardware, the software is just a set of instructions. For those instructions to take effect, the software must be loaded into the hardware. It is interesting that data accessed by software is stored in the computer’s hardware. So, why wouldn’t the same be true for the human brain?

    We need to be careful not to take this analogy too far. However, from my perspective, it illustrates how it is possible for memories to be engrams while preserving the mind as a nonphysical, distinct entity.

    Designed for Discovery

    The significance of this discovery extends beyond the mind-brain problem. It’s provocative that the biology of a creature such as the sea hare could provide such important insight into human biology.

    This is possible only because of the universal nature of biological systems. All life on Earth shares the same biochemistry. All life is made up of the same type of cells. Animals possess similar anatomical and physiological systems.

    Most biologists today view these shared features as evidence for an evolutionary history of life. Yet, as a creationist and an intelligent design proponent, I interpret the universal nature of the cell’s chemistry and shared features of biological systems as manifestations of archetypical designs that emanate from the Creator’s mind. To put it another way, I regard the shared features of biological systems as evidence for common design, not common descent.

    This view invites a follow-up question: Why would God create using the same template? Why not create each biochemical system from scratch to be ideally suited for its function? There may be several reasons why a Creator would design living systems around a common set of templates. In my estimation, one of the most significant reasons is discoverability. The shared features of biochemical and biological systems make it possible to apply what we learn by studying one organism to all others. Without life’s shared features, the discipline of biology wouldn’t exist.

    This discoverability makes it easier to appreciate God’s glory and grandeur, as evinced by the elegance, sophistication, and ingenuity in biochemical and biological systems. Discoverability of biochemical systems also reflects God’s providence and care for humanity. If not for the shared features, it would be nearly impossible for us to learn enough about the living realm for our benefit. Where would biomedical science be without the ability to learn fundamental aspects of our biology by studying model organisms such as yeast, fruit flies, mice—and sea hares?

    The shared features in the living realm are a manifestation of the Creator’s care and love for humanity. And there is nothing bizarre about that.

    Resources

    Endnotes
    1. Alexis Bédécarrats et al., “RNA from Trained Aplysia Can Induce an Epigenetic Engram for Long-Term Sensitization in Untrained Aplysia,” eNeuro 5 (May/June 2018): e0038-18.2018, 1–11, doi:10.1523/ENEURO.0038-18.2018.
    2. For example, see Germain U. Busto et al., “microRNAs That Promote Or Inhibit Memory Formation in Drosophila melanogaster,” Genetics 200 (June 1, 2015): 569–80, doi:10.1534/genetics.114.169623.
  • Differences in Human and Neanderthal Brains Explain Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 19, 2018

    When I was a little kid, my mom went through an Agatha Christie phase. She was a huge fan of the murder mystery writer and she read all of Christie’s books.

    Agatha Christie was caught up in a real-life mystery of her own when she disappeared for 11 days in December 1926 under highly suspicious circumstances. Her car was found near her home, close to the edge of a cliff. But, she was nowhere to be found. It looked as if she disappeared without a trace, without any explanation. Eleven days after her disappearance, she turned up in a hotel room registered under an alias.

    Christie never offered an explanation for her disappearance. To this day, it remains an enduring mystery. Some think it was a callous publicity stunt. Some say she suffered a nervous breakdown. Others think she suffered from amnesia. Some people suggest more sinister reasons. Perhaps, she was suicidal. Or maybe she was trying to frame her husband and his mistress for her murder.

    Perhaps we will never know.

    Like Christie’s fictional detectives Hercule Poirot and Miss Marple, paleoanthropologists are every bit as eager to solve a mysterious disappearance of their own. They want to know why Neanderthals vanished from the face of the earth. And what role did human beings (Homo sapiens) play in the Neanderthal disappearance, if any? Did we kill off these creatures? Did we outcompete them or did Neanderthals just die off on their own?

    Anthropologists have proposed various scenarios to account for the Neanderthals’ disappearance. Some paleoanthropologists think that differences in the cognitive capabilities of modern humans and Neanderthals help explain the creatures’ extinction. According to this model, superior reasoning abilities allowed humans to thrive while Neanderthals faced inevitable extinction. As a consequence, we replaced Neanderthals in the Middle East, Europe, and Asia when we first migrated to these parts of the world.

    Computational Neuroanatomy

    Innovative work by researchers from Japan offers support for this scenario.1 Using a technique called computational neuroanatomy, researchers reconstructed the brain shape of Neanderthals and modern humans from the fossil record. In their study, the researchers used four Neanderthal specimens:

    • Amud 1 (50,000 to 70,000 years in age)
    • La Chapelle-aux-Saints 1 (47,000 to 56,000 years in age)
    • La Ferrassie 1 (43,000 to 45,000 years in age)
    • Forbes’ Quarry 1 (no age dates)

    They also worked with four Homo sapiens specimens:

    • Qafzeh 9 (90,000 to 120,000 years in age)
    • Skhūl 5 (100,000 to 135,000 years in age)
    • Mladeč 1 (35,000 years in age)
    • Cro-Magnon 1 (32,000 years in age)

    Researchers used computed tomography scans to construct virtual endocasts (casts of the cranial cavity) from the fossil skulls. After generating the endocasts, the team determined the 3D brain structure of the fossil specimens by deforming the 3D structure of the average human brain so that it fit into the fossil crania and conformed to the endocasts.

    This technique appears to be valid, based on control studies carried out on chimpanzee and bonobo brains. Using computational neuroanatomy, researchers can deform a chimpanzee brain to accurately yield the bonobo brain, and vice versa.
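
    To give a feel for what “deforming” one shape onto another involves, here is a deliberately simplified sketch: a plain least-squares affine fit between two point clouds stands in for the far more sophisticated template deformation used in the actual study. All names and numbers below are hypothetical, for illustration only.

    # Simplified shape-registration idea: fit an affine map (matrix A plus
    # translation t) that carries template landmark points onto target points.
    import numpy as np

    def affine_fit(source_pts, target_pts):
        """Least-squares A, t such that target ≈ source @ A.T + t."""
        src = np.asarray(source_pts, dtype=float)   # (n, 3) template landmarks
        tgt = np.asarray(target_pts, dtype=float)   # (n, 3) matching endocast points
        src_h = np.hstack([src, np.ones((len(src), 1))])       # homogeneous coords
        params, *_ = np.linalg.lstsq(src_h, tgt, rcond=None)   # (4, 3) solution
        return params[:3].T, params[3]

    # Toy example: the "target" is a squashed, shifted copy of the template.
    rng = np.random.default_rng(0)
    template = rng.normal(size=(200, 3))
    target = template @ np.diag([1.0, 0.9, 1.1]) + np.array([5.0, 0.0, -2.0])

    A, t = affine_fit(template, target)
    warped = template @ A.T + t
    print("mean residual:", float(np.abs(warped - target).mean()))  # ~0 here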

    Brain Differences, Cognitive Differences

    The Japanese team learned that the chief difference between human and Neanderthal brains is the size and shape of the cerebellum. The cerebellar hemispheres project farther toward the interior of the cranial cavity in the human brain than in the Neanderthal brain, and the volume of the human cerebellum is larger. Researchers also noticed that the right side of the Neanderthal cerebellum is significantly smaller than the left side—a phenomenon called volumetric laterality. This discrepancy doesn’t exist in the human brain. Finally, the Japanese researchers observed that the parietal regions in the human brain were larger than those regions in Neanderthals’ brains.

    blog__inline--differences-in-human-and-neanderthal-brains
    Image credit: Shutterstock


    Because of these brain differences, the researchers argue that humans were socially and cognitively more sophisticated than Neanderthals. Neuroscientists have discovered that the cerebellum contributes to motor function and higher cognition, including language, working memory, thought, and social abilities. Hence, the researchers argue that the reduced size of the right cerebellar hemisphere in Neanderthals would have limited the connection to the prefrontal regions—a connection critical for language processing. Neuroscientists have also discovered that the parietal lobe plays a role in visuo-spatial imagery, episodic memory, self-related mental representations, coordination between self and external spaces, and sense of agency.

    On the basis of this study, it seems that humans either outcompeted Neanderthals for limited resources—driving them to extinction—or simply were better suited to survive than Neanderthals because of superior mental capabilities. Or perhaps their demise occurred for more sinister reasons. Maybe we used our sophisticated reasoning skills to kill off these creatures.

    Did Neanderthals Make Art, Music, Jewelry, etc.?

    Recently, a flurry of reports has appeared in the scientific literature claiming that Neanderthals possessed the capacity for language and the ability to make art, music, and jewelry. Other studies claim that Neanderthals ritualistically buried their dead, mastered fire, and used plants medicinally. All of these claims rest on highly speculative interpretations of the archaeological record. In fact, other studies present evidence that refutes every one of these claims (see Resources).

    Comparisons of human and Neanderthal brain morphology and size become increasingly important in the midst of this controversy. This recent study—along with previous work—indicates that Neanderthals did not have the brain architecture and, hence, cognitive capacity to communicate symbolically through language, art, music, and body ornamentation. Nor did they have the brain capacity to engage in complex social interactions. In short, Neanderthal brain anatomy does not support any interpretation of the archaeological record that attributes advanced cognitive abilities to these creatures.

    While this study provides important clues about the disappearance of Neanderthals, we still don’t know why they went extinct. Nor do we know any of the mysterious details surrounding their demise as a species.

    Perhaps we will never know.

    But we do know that in terms of our cognitive and social capacities, human beings stand apart from Neanderthals and all other creatures. Human brain biology and behavior render us exceptional, one-of-a-kind, in ways consistent with the image of God.

    Resources

    Endnotes
    1. Takanori Kochiyama et al., “Reconstructing the Neanderthal Brain Using Computational Anatomy,” Scientific Reports 8 (April 26, 2018): 6296, doi:10.1038/s41598-018-24331-0.
  • Yeast Gene Editing Study Raises Questions about the Evolutionary Origin of Human Chromosome 2

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 12, 2018

    Because I am a biochemist and a skeptic of the evolutionary paradigm, people often ask me two interrelated questions:

    1. What do you think are the greatest scientific challenges to the evolutionary paradigm?
    2. How do you respond to all the compelling evidence for biological evolution?

    When it comes to the second question, people almost always ask about the genetic similarity between humans and chimpanzees. Unexpectedly, new research on gene editing in brewer’s yeast helps answer these questions more definitively than ever.

    For many people, the genetic comparisons between the two species convince them that human evolution is true. Presumably, the shared genetic features in the human and chimpanzee genomes reflect the species’ shared evolutionary ancestry.

    One high-profile example of these similarities is the structural features human chromosome 2 shares with two chimpanzee chromosomes labeled chromosome 2A and chromosome 2B. When the two chimpanzee chromosomes are placed end to end, they look remarkably like human chromosome 2. Evolutionary biologists interpret this genetic similarity as evidence that human chromosome 2 arose when chromosome 2A and chromosome 2B underwent an end-to-end fusion. They claim that this fusion took place in the human evolutionary lineage at some point after it separated from the lineage that led to chimpanzees and bonobos. Therefore, the similarity in these chromosomes provides strong evidence that humans and chimpanzees share an evolutionary ancestry.

    blog__inline--yeast-gene-editing-study-1 

    Figure 1: Human and Chimpanzee Chromosomes Compared

    Image credit: Who Was Adam? (Covina, CA: RTB Press, 2015), p. 210.

    Yet, new work by two separate teams of synthetic biologists from the United States and China, respectively, raises questions about this evolutionary scenario. Working independently, both research teams devised similar gene editing techniques that, in turn, they used to fuse the chromosomes in the yeast species, Saccharomyces cerevisiae (brewer’s yeast).1 Their work demonstrates the central role intelligent agency must play in end-on-end chromosome fusion, thereby countering the evolutionary explanation while supporting a creation model interpretation of human chromosome 2.

    The Structure of Human Chromosome 2

    Chromosomes are large structures visible in the nucleus during the cell division process. These structures consist of DNA combined with proteins to form the chromosome’s highly condensed, hierarchical architecture.

    blog__inline--yeast-gene-editing-study-2

    Figure 2: Chromosome Structure

    Image credit: Shutterstock

    Each species has a characteristic number of chromosomes that differ in size and shape. For example, humans have 46 chromosomes (23 pairs); chimpanzees and other apes have 48 (24 pairs).

    When exposed to certain dyes, chromosomes stain. This staining process produces a pattern of bands along the length of the chromosome that is diagnostic. The bands vary in number, location, thickness, and intensity. And the unique banding profile of each chromosome helps geneticists identify them under a microscope.

    In the early 1980s, evolutionary biologists compared the chromosomes of humans, chimpanzees, gorillas, and orangutans for the first time.2 These studies revealed an exceptional degree of similarity between human and chimp chromosomes. When aligned, the human and corresponding chimpanzee chromosomes display near-identical banding patterns, band locations, band size, and band stain intensity. To evolutionary biologists, this resemblance reveals powerful evidence for human and chimpanzee shared ancestry.

    The most noticeable difference between human and chimp chromosomes is the quantity: 46 for humans and 48 for chimpanzees. As I pointed out, evolutionary biologists account for this difference by suggesting that two chimp chromosomes (2A and 2B) fused. This fusion event would have reduced the number of chromosome pairs from 24 to 23, and the chromosome number from 48 to 46.

    As noted, evidence for this fusion comes from the close similarity of the banding patterns for human chromosome 2 and chimp chromosomes 2A and 2B when the latter two are oriented end to end. The case for fusion also gains support from the presence of: (1) two centromeres in human chromosome 2, one functional, the other inactive; and (2) an internal telomere sequence within human chromosome 2.3 The locations of the two centromeres and the internal telomere sequence correspond to the expected locations if, indeed, human chromosome 2 arose by a fusion event.4

    Evidence for Evolution or Creation?

    Even though human chromosome 2 looks like it is a fusion product, it seems unlikely to me that its genesis resulted from undirected natural processes. Instead, I would argue that a Creator intervened to create human chromosome 2 because combining chromosomes 2A and 2B end to end to form it would have required a succession of highly improbable events.

    I describe the challenges to the evolutionary explanation in some detail in a previous article:

    • End-to-end fusion of two chromosomes at the telomeres faces nearly insurmountable hurdles.
    • And, if somehow the fusion did occur, it would alter the number of chromosomes and lead to one of three possible scenarios: (1) nonviable offspring, (2) viable offspring that suffers from a diseased state, or (3) viable but infertile offspring. Each of these scenarios would prevent the fused chromosome from entering and becoming entrenched in the human gene pool.
    • Finally, if chromosome fusion took place and if the fused chromosome could be passed on to offspring, the event would have had to create such a large evolutionary advantage that it would rapidly sweep through the population, becoming fixed.

    This succession of highly unlikely events makes more sense, from my vantage point, if we view the structure of human chromosome 2 as the handiwork of a Creator instead of the outworking of evolutionary processes. But why would these chromosomes appear to be so similar, if they were created? As I discuss elsewhere, I think the similarity between human and chimpanzee chromosomes reflects shared design, not shared evolutionary ancestry. (For more details, see my article “Chromosome 2: The Best Evidence for Evolution?”)

    Yeast Chromosome Studies Offer Insight

    Recent work by two independent teams of synthetic biologists from the US and China corroborates my critique of the evolutionary explanation for human chromosome 2. Working within the context of the evolutionary framework, both teams were interested in understanding the influence that chromosome number and organization have on an organism’s biology and how chromosome fusion shapes evolutionary history. To pursue this insight, both research groups carried out similar experiments using CRISPR/Cas9 gene editing to reduce the number of chromosomes in brewer’s yeast from 16 to 1 (for the Chinese team) and from 16 to 2 (for the team from the US) through a succession of fusion events.

    Both teams reduced the number of chromosomes in stages by fusing pairs of chromosomes. The first round of fusions reduced the number from 16 to 8. In the next round, they fused pairs of the newly created chromosomes to reduce the number from 8 to 4, and so on.

    To their surprise, the yeast seemed to tolerate this radical genome editing quite well—although their growth rate slowed and the yeast failed to thrive under certain laboratory conditions. Gene expression was altered in the modified yeast genomes, but only for a few genes. Most of the 5,800 genes in the brewer’s yeast genome were normally expressed, compared to the wild-type strain.

    For synthetic biology, this work is a milestone. It currently stands as one of the most radical genome reconfigurations ever achieved. This discovery creates an exciting new research tool to address fundamental questions about chromosome biology. It also may have important applications in biotechnology.

    The experiment also ranks as a milestone for the RTB human origins creation model because it helps address questions about the origin of human chromosome 2. Specifically, the work with brewer’s yeast provides empirical evidence that human chromosome 2 must have been shaped by an Intelligent Agent. This research also reinforces my concerns about the capacity of evolutionary mechanisms to generate human chromosome 2 via the fusion of chimpanzee chromosomes 2A and 2B.

    Chromosome fusion demonstrates the critical role intelligent agency plays.

    Both research teams had to carefully design the gene editing system they used so that it would precisely delete two distinct regions in the chromosomes. These deletions brought about end-to-end chromosome fusions in a way that allowed the yeast cells to survive. Specifically, they had to delete regions of the chromosomes near the telomeres, including the highly repetitive telomere-associated sequences. While they carried out this deletion, they carefully avoided deleting DNA sequences near the telomeres that harbored genes. They also simultaneously deleted one of the centromeres of the fused chromosomes to ensure that the fused chromosome would properly replicate and segregate during cell division. Finally, they had to make sure that when the two chromosomes fused, the remaining centromere was positioned near the center of the resulting chromosome.

    In addition to the high-precision gene editing, they had to carefully construct the sequence of donor DNA that accompanied the CRISPR/Cas9 gene editing package so that the chromosomes with the deleted telomeres could be directed to fuse end on end. Without the donor DNA, the fusion would have been haphazard.
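
    To summarize the design logic just described, here is a schematic sketch in code. It is my own simplification, not the research teams’ actual software: a chromosome is represented by its end telomeres, its genes, and its centromere, and the fusion step retains the gene content, keeps exactly one centromere, and preserves only the outermost telomeres.

    # Schematic model of end-to-end chromosome fusion, mirroring the steps
    # described above (junction telomeres removed, genes retained, one
    # centromere kept at the moment of fusion).
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Chromosome:
        name: str
        left_telomere: bool          # telomere present at the left end?
        genes: List[str]             # ordered gene names
        centromere: Optional[str]    # centromere identifier, or None
        right_telomere: bool         # telomere present at the right end?

    def fuse_end_to_end(a: Chromosome, b: Chromosome) -> Chromosome:
        """Join the right end of a to the left end of b."""
        # Keep the centromere of the chromosome with more genes, a crude
        # stand-in for positioning the surviving centromere near the middle
        # of the fusion product; a dicentric chromosome would missegregate.
        keep_a = len(a.genes) >= len(b.genes)
        return Chromosome(
            name=f"{a.name}-{b.name}",
            left_telomere=a.left_telomere,
            genes=a.genes + b.genes,
            centromere=a.centromere if keep_a else b.centromere,
            right_telomere=b.right_telomere,
        )

    # Toy usage with hypothetical gene names:
    chr_i = Chromosome("chrI", True, ["gene_A1", "gene_A2", "gene_A3"], "CEN1", True)
    chr_ii = Chromosome("chrII", True, ["gene_B1", "gene_B2"], "CEN2", True)
    print(fuse_end_to_end(chr_i, chr_ii))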

    In other words, to fuse the chromosomes so that the yeast survived, the research teams needed a detailed understanding of chromosome structure and biology and a strategy to use this knowledge to design precise gene editing protocols. Such planning would ensure that chromosome fusion occurred without the loss of key genetic information and without disrupting key processes such as DNA replication and chromosome segregation during cell division. The researchers’ painstaking effort is a far cry from the unguided, undirected, haphazard events that evolutionary biologists think caused the end-on-end chromosome fusion that created human chromosome 2. In fact, given the high-precision gene editing required to create fused chromosomes, it is hard to envision how evolutionary processes could ever produce a functional fused chromosome.

    A discovery by both research teams further complicates the evolutionary explanation for the origin of human chromosome 2. Namely, the yeast cells could not replicate unless the centromere of one of the chromosomes was deleted at the time the chromosomes fused. The researchers learned that if this step was omitted, the fused chromosomes weren’t stable. Because centromeres serve as the point of attachment for the mitotic spindle, if a chromosome possesses two centromeres, mistakes occur in the chromosome segregation step during cell division.

    It is interesting that human chromosome 2 has two centromeres, but one of them has been inactivated. (In the evolutionary scenario, this inactivation would have happened through a series of mutations in the centromeric DNA sequences that accrued over time.) However, if human chromosome 2 resulted from the fusion of two chimpanzee chromosomes, the initial fusion product would have possessed two functional centromeres. In the evolutionary scenario, it would have taken millennia for one of the centromeres to become inactivated. Yet, the yeast studies indicate that centromere loss must take place simultaneously with end-to-end fusion. However, based on the nature of evolutionary mechanisms, it cannot.

    Chromosome fusion in yeast leads to a loss of fitness.

    Perhaps one of the most remarkable outcomes of this work is the discovery that the yeast cells survived after undergoing so many successive chromosome fusions. In fact, experts in synthetic biology such as Gianni Liti, who commented on this work for Nature, expressed surprise that the yeast survived this radical genome restructuring.5

    Though both research teams claimed that the fusions had little effect on the fitness of the yeast, the data suggest otherwise. The yeast cells with the fused chromosomes grew more slowly than wild-type cells and struggled to grow under certain culture conditions. In fact, when the Chinese research team cultured the yeast strain carrying the single fused chromosome together with the wild-type strain, the wild-type yeast cells outcompeted the cells with the fused chromosome.

    Although researchers observed changes in gene expression only for a small number of genes, this result appears to be a bit misleading. The genes with changed expression patterns are normally located near telomeres. The activity of these genes is normally turned down low because they usually are needed only under specific growth conditions. But with the removal of telomeres in the fused chromosomes, these genes are no longer properly regulated; in fact, they may be over-expressed. And, as a consequence of chromosome fusion, some genes that normally reside at a distance from telomeres find themselves close to telomeres, leading to reduced activity.

    This altered gene expression pattern helps explain the slower growth rate of the yeast strain with fused chromosomes and the yeast cells’ difficulty growing under certain conditions. The finding also raises more questions about the evolutionary scenario for the origin of human chromosome 2. Based on the yeast studies, it seems reasonable to think that the end-to-end fusion of chromosomes 2A and 2B would have reduced the fitness of the offspring that first inherited the fused chromosome 2, making it less likely that the fusion would have taken hold in the human gene pool.

    Chromosome fusion in yeast leads to a loss of fertility.

    Normally, yeast cells reproduce asexually. But they can also reproduce sexually. When yeast cells mate, they fuse. As a result of this fusion event, the resulting cell has two sets of chromosomes. In this state, the yeast cells can divide or form spores. In many respects, the sexual reproduction of yeast cells resembles sexual reproduction in humans, in which egg and sperm cells, each with one set of chromosomes, fuse to form a zygote with two sets of chromosomes.

    blog__inline--yeast-gene-editing-study-3 

    Figure 3: Yeast Cell Reproduction

    Image credit: Shutterstock

    Both research groups discovered that genetically engineered yeast cells with fused chromosomes could mate and form spores, but spore viability was lower than for wild-type yeast.

    They also discovered that after the first round of chromosome fusions, when the genetically engineered yeast possessed 8 chromosomes, mating normal yeast cells with those harboring fused chromosomes resulted in low fertility. When wild-type yeast cells were mated with yeast strains that had been subjected to additional rounds of chromosome fusion, spore formation failed altogether.

    The synthetic biologists find this result encouraging because it means that if they use yeast with fused chromosomes for biotechnology applications, there is little chance that the genetically engineered yeast will mate with wild-type yeast. In other words, the loss of fertility serves as a safeguard.

    However, this loss of fertility does not bode well for evolutionary explanations of the origin of human chromosome 2. The yeast studies indicate that chromosome fusion leads to a loss of fertility because the mismatch in chromosome number makes it difficult for chromosomes to align and properly segregate during cell division. So, why wouldn’t this loss of fertility have occurred when chromosomes 2A and 2B fused?

    blog__inline--yeast-gene-editing-study-4 

    Figure 4: Cell Division

    Image credit: Shutterstock

    In short, the theoretical concerns I expressed about the evolutionary origin of human chromosome 2 find experimental support in the yeast studies. And the indisputable role intelligent agency plays in designing and executing the protocols to fuse yeast chromosomes provides empirical evidence that a Creator must have intervened in some capacity to design human chromosome 2.

    Of course, there are a number of outstanding questions that remain for a creation model interpretation of human chromosome 2, including:

    • Why would a Creator seemingly fuse together two chromosomes to create human chromosome 2?
    • Why does this chromosome possess internal telomere sequences?
    • Why does human chromosome 2 harbor seemingly nonfunctional centromere sequences?

    We predict that as we learn more about the biology of human chromosome 2, we will discover a compelling rationale for the structural features of this chromosome, in a way that befits a Creator.

    But, at this juncture the fusion of yeast chromosomes in the lab makes it hard to think that unguided evolutionary processes could ever successfully fuse two chromosomes, including human chromosome 2, end on end. Creation appears to make more sense.

    Resources

    Endnotes
    1. Jingchuan Luo et al., “Karyotype Engineering by Chromosome Fusion Leads to Reproductive Isolation in Yeast,” Nature 560 (2018): 392–96, doi:10.1038/s41586-018-0374-x; Yangyang Shao et al., “Creating a Functional Single-Chromosome Yeast,” Nature 560 (2018): 331–35, doi:10.1038/s41586-018-0382-x.
    2. Jorge J. Yunis, J. R. Sawyer, and K. Dunham, “The Striking Resemblance of High-Resolution G-Banded Chromosomes of Man and Chimpanzee,” Science 208 (1980): 1145–48, doi:10.1126/science.7375922; Jorge J. Yunis and Om Prakash, “The Origin of Man: A Chromosomal Pictorial Legacy,” Science 215 (1982): 1525–30, doi:10.1126/science.7063861.
    3. The centromere is a region of the DNA molecule near the center of the chromosome that serves as the point of attachment for the mitotic spindle during the cell division process. Telomeres are DNA sequences located at the tip ends of chromosomes designed to stabilize the chromosome and prevent it from undergoing degradation.
    4. J. W. Ijdo et al., “Origin of Human Chromosome 2: An Ancestral Telomere-Telomere Fusion,” Proceedings of the National Academy of Sciences USA 88 (1991): 9051–55, doi:10.1073/pnas.88.20.9051; Rosamaria Avarello et al., “Evidence for an Ancestral Alphoid Domain on the Long Arm of Human Chromosome 2,” Human Genetics 89 (1992): 247–49, doi:10.1007/BF00217134.
    5. Gianni Liti, “Yeast Chromosome Numbers Minimized Using Genome Editing,” Nature 560 (August 1, 2018): 317–18, doi:10.1038/d41586-018-05309-4.
