Where Science and Faith Converge
  • When Did Modern Human Brains—and the Image of God—Appear?

    Nov 14, 2018

    When I was a kid, I enjoyed reading Ripley’s Believe It or Not! I couldn’t get enough of the bizarre facts described in the pages of this comic.

    I was especially drawn to the panels depicting people who had oddly shaped heads. I found it fascinating to learn about people whose skulls were purposely forced into unnatural shapes by a practice known as intentional cranial deformation.

    For the most part, this practice is a thing of the past. It is rarely performed today (though there are still a few people groups who carry out this procedure). But for much of human history, cultures all over the world have artificially deformed people’s crania (often for reasons yet to be fully understood). They accomplished this feat by binding the heads of infants, which distorts the normal growth of the skull. Through this practice, the shape of the human head can be readily altered to be abnormally flat, elongated, rounded, or conical.


    Figure 1: Deformed ancient Peruvian skull. Image credit: Shutterstock.

    It is remarkable that the human skull is so malleable. Believe it or not!


    Figure 2: Parts of the human skull. Image credit: Shutterstock.

    For physical anthropologists, the normal shape of the modern human skull is just as bizarre as the conical skulls found among the remains of the Nazca culture of Peru. Compared to other hominins (such as Neanderthals and Homo erectus), modern humans have oddly shaped skulls. The hominin skull was elongated along the anterior-posterior axis, but the modern human skull is globular, with bulging, enlarged parietal and cerebellar areas. The modern human skull also has another distinctive feature: the face is retracted and relatively small.


    Figure 3: Comparison of modern human and Neanderthal skulls. Image credit: Wikipedia.

    Anthropologists believe that the difference in skull shape (and hence, brain shape) has profound significance and helps explain the advanced cognitive abilities of modern humans. The parietal lobe of the brain is responsible for:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for throwing spears and making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    Human beings seem to uniquely possess these capabilities. They make us exceptional compared to other hominins. Thus, for paleoanthropologists, two key questions are: when and how did the globular human skull appear?

    Recently, a team of researchers from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, addressed these questions. And their answers add evidence for human exceptionalism while unwittingly providing support for the RTB human origins model.1

    The Appearance of the Modern Human Brain

    To characterize the mode and tempo of the origin of the unusual morphology (shape) of the modern human skull, the German researchers generated and analyzed CT scans of 20 fossil specimens representing three windows of time: (1) 300,000 to 200,000 years ago; (2) 130,000 to 100,000 years ago; and (3) 35,000 to 10,000 years ago. They also included in their analysis 89 cranially diverse skulls from present-day modern humans, 8 Neanderthal skulls, and 8 Homo erectus skulls.

    The first group consisted of three specimens: (1) Jebel Irhoud 1 (dating to 315,000 years in age); (2) Jebel Irhoud 2 (also dating to 315,000 years in age); and (3) Omo Kibish (dating to 195,000 years in age). The specimens that comprise this group are variously referred to as near anatomically modern humans or archaic Homo sapiens.

    The second group consisted of four specimens: (1) LH 18 (dating to 120,000 years in age); (2) Skhul (dating to 115,000 years in age); (3) Qafzeh 6; and (4) Qafzeh 9 (both dating to about 115,000 years in age). This group consists of specimens typically considered to be anatomically modern humans. The third group consisted of thirteen specimens that are all considered to be anatomically and behaviorally modern humans.

    Researchers discovered that the group one specimens had facial features like those of modern humans. They also had brain sizes similar to those of Neanderthals and modern humans. But their endocranial shape was unlike that of modern humans and appeared to be intermediate between H. erectus and Neanderthals.

    On the other hand, the specimens from group two displayed endocranial shapes that clustered with the group three specimens and the present-day samples. In short, modern human skull morphology (and brain shape) appeared between 130,000 and 100,000 years ago.

    Confluence of Evidence Locates Humanity’s Origin

    This result aligns with several recent archaeological finds that place the origin of symbolism in the same window of time represented by the group two specimens. (See the Resources section for articles detailing some of these finds.) Symbolism—the capacity to represent the world and abstract ideas with symbols—appears to be an ability that is unique to modern humans and is most likely a manifestation of the modern human brain shape, specifically an enlarged parietal lobe.

    Likewise, this result coheres with the most recent dates for mitochondrial Eve and Y-chromosomal Adam, around 120,000 to 150,000 years ago. (Again, see the Resources section for articles detailing some of these finds.) In other words, the confluence of evidence (anatomical, behavioral, and genetic) pinpoints the origin of modern humans (us) between 150,000 and 100,000 years ago, with the appearance of modern human anatomy coinciding with the appearance of modern human behavior.

    What Does This Finding Mean for the RTB Human Origins Model?

    To be clear, the researchers carrying out this work interpret their results within the confines of the evolutionary framework. Therefore, they conclude that the globular skulls—characteristic of modern humans—evolved recently, only after the modern human facial structure had already appeared in archaic Homo sapiens around 300,000 years ago. They also conclude that the globular skull of modern humans had fully emerged by the time humans began to migrate around the world (around 40,000 to 50,000 years ago).

    Yet, the fossil evidence doesn’t show the gradual emergence of skull globularity. Instead, modern human specimens form a distinct cluster isolated from the distinct clusters formed by H. erectus, Neanderthals, and archaic H. sapiens. There are no intermediate globular specimens between archaic and modern humans, as would be expected if this trait evolved. Alternatively, the distinct clusters are exactly as expected if modern humans were created.

    It appears that the globularity of our skull distinguishes modern humans from H. erectus, Neanderthals, and archaic Homo sapiens (near anatomically modern humans). This globularity of the modern human skull has implications for when modern human behavior and advanced cognitive abilities emerged.

    For this reason, I see this work as offering support for the RTB human origins creation model (and, consequently, the biblical account of human origins and the biblical conception of human nature). RTB's model (1) views human beings as cognitively superior to and distinct from other hominins, and (2) posits that human beings uniquely possess a quality called the image of God that I believe manifests as human exceptionalism.

    This work supports both predictions by highlighting the uniqueness and exceptional qualities of modern humans compared to H. erectus, Neanderthals, and archaic H. sapiens, calling specific attention to our unusual skull and brain morphology. As noted, anthropologists believe that this unusual brain morphology supports our advanced cognitive capabilities—abilities that I believe reflect the image of God. Because archaic H. sapiens, Neanderthals, and H. erectus did not possess this brain morphology, it is unlikely that these creatures had the sophisticated cognitive capacity displayed by modern humans.

    In light of RTB's model, it is gratifying to learn that the origin of anatomically modern humans coincides with the origin of modern human behavior.

    Believe it or not, our oddly shaped head is part of the scientific case that can be made for the image of God.

    Resources

    Endnotes
    1. Simon Neubauer, Jean-Jacques Hublin, and Philipp Gunz, “The Evolution of Modern Human Brain Shape,” Science Advances 4 (January 24, 2018): eaao5961, doi:10.1126/sciadv.aao5961.
  • Is Raising Children with Religion Child Abuse?

    Nov 07, 2018

    “Horrible as sexual abuse no doubt was, the damage was arguably less than the long-term psychological damage inflicted by bringing the child up Catholic in the first place.”

    —Richard Dawkins, atheist and evolutionary biologist


    Image: Richard Dawkins. Image credit: Shutterstock

    With his typical flair for provocation, on more than one occasion Richard Dawkins has asserted that indoctrinating children in religion is a form of child abuse. In fact, he argues that the mental torment inflicted by religion on children is worse than sexual abuse carried out by priests—or by any other adult, for that matter. By way of support, he cites a conversation he had with someone who was molested by a Catholic priest. According to Dawkins, a woman told him “that while being abused by a priest was a ‘yucky’ experience, being told as a child that a Protestant friend who died would ‘roast in Hell’ was more distressing.”1

    Of course, every time Dawkins has made this proclamation, people from nearly every philosophical and religious perspective have condemned his outlandish statements. But are condemnations enough to keep him from making the assertion? What about evidence?

    A study recently published by researchers from Harvard T. H. Chan School of Public Health demonstrates that when Dawkins claims that indoctrinating children with religion is worse than child molestation, he is not only being outlandish, but wrong. The Harvard researchers discovered that children raised with religion are mentally and physically healthier than children raised without religion.2

    The study’s conclusions are based on analysis of data from the Nurses’ Health Study II and the Growing Up Today Study. Sampling between 5,500 and 7,500 individuals (depending on the question asked), the researchers discovered that, compared to those who never attended, children and adolescents (between 12 and 19 years of age) who attended weekly religious services displayed:

    • Greater life satisfaction
    • Greater sense of mission
    • Greater volunteerism
    • Greater forgiveness
    • Fewer depressive symptoms
    • Lower likelihood of PTSD
    • Lower drug use
    • Reduced cigarette smoking
    • Lower sexual initiation
    • Lower levels of STIs (sexually transmitted infections)
    • Reduced incidences of abnormal Pap smears

    The team noticed that when regular attendance of religious services was combined with prayer and meditation, the effects were slightly diminished. At this juncture, they don’t understand this counterintuitive finding.

    They also discovered mental and physical health benefits for children and adolescents who did not attend religious services but prayed and/or meditated.

    Apparently, attending religious services regularly and praying keeps young people from engaging in risky behaviors, makes them more disciplined, and helps develop their character. All of this translates into healthier, better adjusted, more resilient young men and women.

    The results of this study align with results of previous studies. Study after study consistently shows that people who practice religion enjoy numerous mental and physical health benefits compared to those who don’t. (See the Resources section for more on this topic.) Previous studies focused on adults, but as the study by the Harvard School of Public Health team reveals, the benefits are realized for children and adolescents, too.

    Ying Chen, one of the study’s authors, concludes, “These findings are important for both our understanding of health and our understanding of parenting practices. Many children are raised religiously, and our study shows that this can powerfully affect their health behaviors, mental health, and overall happiness and well-being.”3

    Far from being abusive, raising children with religion comprises one facet of responsible parenting. And if Richard Dawkins is truly a man of science, he should be willing to acknowledge the real benefits of teaching religion to our children.

    Resources

    Endnotes
    1. Rob Cooper, “Forcing a Religion on Your Children Is as Bad as Child Abuse, Claims Atheist Professor Richard Dawkins,” The Daily Mail (April 23, 2013), co.uk/news/article-2312813/Richard-Dawkins-Forcing-religion-children-child-abuse-claims-atheist-professor.html.
    2. Ying Chen and Tyler J. VanderWeele, “Associations of Religious Upbringing with Subsequent Health and Well-Being from Adolescence to Young Adulthood: An Outcome-Wide Analysis,” American Journal of Epidemiology (2018): kwy142, doi:10.1093/aje/kwy142.
    3. Alice G. Walton, “Raising Kids with Religion or Spirituality May Protect Their Mental Health,” Forbes (September 17, 2018), com/sites/alicegwalton/2018/09/17/raising-kids-with-religion-or-spirituality-may-protect-their-mental-health-study/#7555d89f3287.
  • Is Fossil-Associated Cholesterol a Biomarker for a Young Earth?

    Oct 24, 2018

    Like many Americans, I receive a yearly physical. Even though I find these exams to be a bit of a nuisance, I recognize their importance. These annual checkups allow my doctor to get a read on my overall health.

    An important part of any physical exam is blood work. Screening a patient’s blood for specific biomarkers gives physicians data that allows them to assess a patient’s risk for various diseases. For example, the blood levels of total cholesterol and the ratio of HDLs to LDLs serve as useful biomarkers for cardiovascular disease.


    Figure 1: Cholesterol. Image credit: BorisTM. Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholesterol.svg.

    As it turns out, physicians aren’t the only ones who use cholesterol as a diagnostic biomarker. So, too, do paleontologists. In fact, recently a team of paleontologists used cholesterol biomarkers to determine the identity of an enigmatic fossil recovered from Precambrian rock formations dating to 588 million years in age.1 This diagnosis was possible because they were able to extract low levels of cholesterol derivatives from the fossil. Based on the chemical profile of the extracts, researchers concluded that Dickinsonia specimens are the fossil remains of some of the oldest animals on Earth.

    Without question, this finding has important implications for how we understand the origin and history of animal life on Earth. But young-earth creationists (YECs) think that this finding has important implications for another reason. They believe that the recovery of cholesterol derivatives from Dickinsonia provides compelling evidence that the earth is only a few thousand years old and the fossil record results from a worldwide flood event. They argue that there is no way organic materials such as cholesterol could survive for hundreds of millions of years in the geological column. Consequently, they argue that the methods used to date fossils such as Dickinsonia must not be reliable, calling into question the age of the earth determined by radiometric techniques.

    Is this claim valid? Is the recovery of cholesterol derivatives from fossils that date to hundreds of millions of years evidence for a young earth? Or can the recovery of cholesterol derivatives from 588-million-year-old fossils be explained in an old-earth paradigm?

    How Can Cholesterol Derivatives Survive for Millions of Years?

    Despite the protests of YECs, for several converging reasons the isolation of cholesterol derivatives from the Dickinsonia specimen is easily explained—even if the specimen dates to 588 million years in age.

    • The research team did not recover high levels of cholesterol from the Dickinsonia specimen (which would be expected if the fossils were only 3,000 years old), but trace levels of cholestane (which would be expected if the fossils were hundreds of millions of years old). Cholestane is a chemical derivative of cholesterol that is produced when cholesterol undergoes diagenetic changes.


    Figure 2: Cholestane. Image credit: Calvero. (Self-made with ChemDraw.) Public domain via Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Cholestane.svg.

    • Cholestane is a chemically inert hydrocarbon that is expected to be stable for vast periods of time. In fact, geochemists have recovered steranes (other biomarkers) from rock formations that date to 2.8 billion years in age.
    • The Dickinsonia specimens that yielded cholestanes were exceptionally well-preserved. Specifically, they were unearthed from the White Sea Cliffs in northwest Russia. This rock formation has escaped deep burial and geological heating, making it all the more reasonable that compounds such as cholestanes could survive for nearly 600 million years.

    In short, the recovery of cholesterol derivatives from Dickinsonia does not reflect poorly on the health of the old-earth paradigm. When the chemical properties of cholesterol and cholestane are considered, and given the preservation conditions of the Dickinsonia specimens, the interpretation that these materials were recovered from 588-million-year-old fossil specimens passes the physical exam.

    Resources

    Featured image: Dickinsonia costata. Image credit: https://commons.wikimedia.org/wiki/File:DickinsoniaCostata.jpg.

    Endnotes
    1. Ilya Bobrovskiy et al., “Ancient Steroids Establish the Ediacaran Fossil Dickinsonia as One of the Earliest Animals,” Science 361 (September 21, 2018): 1246–49, doi:10.1126/science.aat7228.
  • Further Review Overturns Neanderthal Art Claim

    Oct 17, 2018

    As I write this blog post, the 2018–19 NFL season is just underway.

    During the course of any NFL season, several key games are decided by a controversial call made by the officials. Nobody wants the officials to determine the outcome of a game, so the NFL has instituted a way for coaches to challenge calls on the field. When a call is challenged, part of the officiating crew looks at a computer tablet on the sidelines—reviewing the game footage from a number of different angles in an attempt to get the call right. After two minutes of reviewing the replays, the senior official makes his way to the middle of the field and announces, “Upon further review, the call on the field . . .”

    Recently, a team of anthropologists from Spain and the UK created quite a bit of controversy based on a “call” they made from working in the field. Using a new U-Th dating method, these researchers age-dated the artwork in caves from Iberia. Based on the age of a few of their samples, they concluded that Neanderthals produced cave paintings.1 But new work by three independent research teams challenges the “call” from the field—overturning the conclusion that Neanderthals made art and displayed symbolism like modern humans.

    U-Th Dating Method

    The new dating method under review measures the age of calcite deposits beneath cave paintings and those formed over the artwork after the paintings were created. As water flows down cave walls, it deposits calcite. When calcite forms, it contains trace amounts of U-238, which decays (through a series of intermediate isotopes) into Th-230. Normally, detection of such low quantities of the isotopes would require extremely large samples. Researchers discovered that by using accelerator mass spectrometry, they could get by with 10-milligram samples. And by dating the calcite samples with this technique, they produced minimum and maximum ages for the cave paintings.2
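    The logic of the technique can be sketched with a simple calculation. The function below is illustrative only (the function name and the simplifying assumptions are mine): it assumes the calcite behaves as a closed system, with no Th-230 present at formation and with U-234 in secular equilibrium, so the age follows directly from the ingrowth of Th-230 toward equilibrium.

    ```python
    import math

    # Th-230 half-life is roughly 75,600 years.
    LAMBDA_230 = math.log(2) / 75_600.0  # decay constant, per year

    def uth_age(th230_u234_activity_ratio: float) -> float:
        """Illustrative U-Th age (years) from a measured Th-230/U-234
        activity ratio, assuming a closed system, zero initial Th-230,
        and U-234 in secular equilibrium with U-238 (all simplifying
        assumptions for this sketch)."""
        # Ingrowth toward equilibrium: ratio = 1 - exp(-lambda * t),
        # so t = -ln(1 - ratio) / lambda.
        return -math.log(1.0 - th230_u234_activity_ratio) / LAMBDA_230
    ```

    A deposit whose Th-230/U-234 activity ratio has reached 0.5 would, under these assumptions, be one Th-230 half-life old. Note that the closed-system and zero-initial-Th assumptions are exactly what the critiques discussed later in this article call into question.
    
    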

    Call from the Field: Neanderthals Are Artists

    The team applied their dating method to the art found in three cave sites in Spain: (1) La Pasiega, which houses paintings of animals, linear signs, claviform signs, and dots; (2) Ardales, which contains about 1,000 paintings of animals, along with dots, discs, lines, geometric shapes, and hand stencils; and (3) Maltravieso, which displays a set of hand stencils and geometric designs. The research team took a total of 53 samples from 25 carbonate formations associated with the cave art at these three sites. While most of the samples dated to 40,000 years old or less (which indicates that modern humans were the artists), three measurements produced minimum ages of around 65,000 years: (1) a red scalariform from La Pasiega, (2) red areas from Ardales, and (3) a hand stencil from Maltravieso. On the basis of these three measurements, the team concluded that the art must have been made by Neanderthals, because modern humans had not made their way into Iberia at that time. In other words, Neanderthals made art, just like modern humans did.


    Figure: Maltravieso Cave Entrance, Spain. Image credit: Shutterstock

    Shortly after the findings were published, I wrote a piece expressing skepticism about this claim for two reasons.

    First, I questioned the reliability of the method. Once the calcite deposit forms, the U-Th method will only yield reliable results if no U or Th moves in or out of the deposit. Based on the work of researchers from France and the US, it does not appear that the calcite films are closed systems. The calcite deposits on the cave wall formed because of hydrological activity in the cave. Once a calcite film forms, water will continue to flow over its surface, leaching out U (because U is much more water-soluble than Th). By removing U, water flowing over the calcite makes the deposit, and hence the underlying artwork, seem much older than it actually is.3

    Second, I expressed concern that the 65,000-year-old dates measured for a few samples are outliers. Of the 53 samples measured, only three gave age-dates of 65,000 years. The remaining samples dated much younger, typically around 40,000 years in age. So why should we give so much credence to three measurements, particularly if we know that the calcite deposits are open systems?

    Upon Further Review: Neanderthals Are Not Artists

    Within a few months, three separate research groups published papers challenging the reliability of the U-Th method for dating cave art and, along with it, the claim that Neanderthals produced cave art.4 It is not feasible to detail all their concerns in this article, but I will highlight six of the most significant complaints. In several instances, the research teams independently raised the same concerns.

    1. The U-Th method is unreliable because the calcite deposits are an open system. Two of the research teams reiterated the concern I raised, for the same reason. The U-Th dating technique can only yield reliable results if no U or Th moves in or out of the system once the calcite film forms. Continued water flow over the calcite deposits will preferentially leach U from the deposit, making the deposit appear older than it is.
    2. The U-Th method is unreliable because it fails to account for nonradiogenic Th. This isotope would have been present in the source water producing the calcite deposits. As a result, Th would already be present in calcite at the time of formation. This nonradiogenic Th would make the samples appear to be older than they actually are.
    3. The 65,000-year-old dates for the three measurements from La Pasiega, Ardales, and Maltravieso are likely outliers. Just as I pointed out before, two of the research groups expressed concern that only 3 of the 53 measurements came in at 65,000 years in age. This discrepancy suggests that these dates are outliers, most likely reflecting the fact that the calcite deposits are an open system that formed with Th already present. Yet, the researchers from Spain and the UK who reported these results emphasized the few older dates while downplaying the younger dates.
    4. Multiple measurements on the same piece of art yielded discordant ages. For example, the researchers made five age-date measurements of the hand stencil at Maltravieso. These dates (66.7 kya [thousand years ago], 55.2 kya, 35.3 kya, 23.1 kya, and 14.7 kya) were all over the place. And yet, the researchers selected the oldest date for the age of the hand stencil, without justification.
    5. Some of the red “markings” on cave walls that were dated may not be art. Red markings are commonplace on cave walls and can be produced by microorganisms that secrete organic materials or iron oxide deposits. It is possible that some of the markings that were dated were not art at all.
    6. The method used by the researchers to sample the calcite deposits may have been flawed. One team expressed concern that the sampling technique may have unwittingly produced dates for the cave surface on which the paintings were made rather than the pigments used to make the art itself. If the researchers inadvertently dated the cave surface, it could easily be older than the art.

    In light of these many shortcomings, it is questionable whether the U-Th method is reliable for dating cave art. Upon further review, the call from the field is overturned. There is no conclusive evidence that Neanderthals made art.

    Why Does This Matter?

    Artistic expression reflects a capacity for symbolism. And many people view symbolism as a quality unique to human beings that contributes to our advanced cognitive abilities and exemplifies our exceptional nature. In fact, as a Christian, I see symbolism as a manifestation of the image of God. If Neanderthals possessed symbolic capabilities, such a quality would undermine human exceptionalism (and with it the biblical view of human nature), rendering human beings nothing more than another hominin. At this juncture, every claim for Neanderthal symbolism has failed to withstand scientific scrutiny.

    Now, it is time for me to go back to the game.

    Who dey! Who dey! Who dey think gonna beat dem Bengals!

    Resources

    Endnotes
    1. D. L. Hoffmann et al., “U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,” Science 359 (February 23, 2018): 912–15, doi:10.1126/science.aap7778.
    2. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art in 11 Caves in Spain,” Science 336 (June 15, 2012): 1409–13, doi:10.1126/science.1219957.
    3. Georges Sauvet et al., “Uranium-Thorium Dating Method and Palaeolithic Rock Art,” Quaternary International 432 (2017): 86–92, doi:10.1016/j.quaint.2015.03.053.
    4. Ludovic Slimak et al., “Comment on ‘U-Th Dating of Carbonate Crusts Reveals Neandertal Origin of Iberian Cave Art,’” Science 361 (September 21, 2018): eaau1371, doi:10.1126/science.aau1371; Maxime Aubert, Adam Brumm, and Jillian Huntley, “Early Dates for ‘Neanderthal Cave Art’ May Be Wrong,” Journal of Human Evolution (2018), doi:10.1016/j.jhevol.2018.08.004; David G. Pearce and Adelphine Bonneau, “Trouble on the Dating Scene,” Nature Ecology and Evolution 2 (June 2018): 925–26, doi:10.1038/s41559-018-0540-4.
  • Can Evolution Explain the Origin of Language?

    Oct 10, 2018

    Oh honey hush, yes you talk too much
    Oh honey hush, yes you talk too much
    Listenin’ to your conversation is just about to separate us

    —Albert Collins

    He was called the “Master of the Telecaster.” He was also known as the “Iceman,” because his guitar playing was so hot, he was cold. Albert Collins (1932–93) was an electric blues guitarist and singer whose distinct style of play influenced the likes of Stevie Ray Vaughan and Robert Cray.


    Image: Albert Collins in 1990. Image Credit: Masahiro Sumori [GFDL (https://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (https://creativecommons.org/licenses/by-sa/3.0/) or CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], from Wikimedia Commons.

    Collins was known for his sense of humor, and it often came through in his music. In one of Collins’s signature songs, “Honey Hush,” the bluesman complains about his girlfriend who never stops talking: “You start talkin’ in the morning; you’re talkin’ all day long.” Collins finds his girlfriend’s nonstop chatter so annoying that he contemplates ending their relationship.

    While Collins may have found his girlfriend’s unending conversation irritating, the capacity for conversation is a defining feature of human beings (modern humans). As human beings, we can’t help ourselves—we “talk too much.”

    What does our capacity for language tell us about human nature and our origins?

    Language and Human Exceptionalism

    Human language flows out of our capacity for symbolism. Humans have the innate ability to represent the world (and abstract ideas) using symbols. And we can embed symbols within symbols to construct alternative possibilities and then link our scenario-building minds together through language, music, art, etc.

    As a Christian, I view our symbolism as a facet of the image of God. While animals can communicate, as far as we know only human beings possess abstract language. And despite widespread claims about Neanderthal symbolism, the scientific case for symbolic expression among these hominins keeps coming up short. To put it another way, human beings appear to be uniquely exceptional in ways that align with the biblical concept of the image of God, with our capacity for language serving as a significant contributor to the case for human exceptionalism.

    Recent insights into the mode and tempo of language’s emergence strengthen the scientific case for the biblical view of human nature. As I have written in previous articles (see Resources) and in Who Was Adam?, language appears to have emerged suddenly—and it coincides with the appearance of anatomically modern humans. Additionally, when language first appeared, it was syntactically as complex as contemporary language. That is, there was no evolution of language—proceeding from a proto-language through simple language and then to complex language. Language emerges all at once as a complete package.

    From my vantage point, the sudden appearance of language that uniquely coincides with the first appearance of humans is a signature for a creation event. It is precisely what I would expect if human beings were created in God’s image, as Scripture describes.

    Darwin’s Problem

    This insight into the origin of language also poses significant problems for the evolutionary paradigm. As linguist Noam Chomsky and anthropologist Ian Tattersall admit, “The relatively sudden origin of language poses difficulties that may be called ‘Darwin’s problem.’”1

    Anthropologist Chris Knight’s insights compound “Darwin’s problem.” He concludes that “language exists, but for reasons which no currently accepted theoretical paradigm can explain.”2 Knight arrives at this conclusion by surveying the work of three scientists (Noam Chomsky, Amotz Zahavi, and Dan Sperber) who study language’s origin using three distinct approaches. All three converge on the same conclusion; namely, evolutionary processes should not produce language or any form of symbolic communication.

    Chris Knight writes:

    Language evolved in no other species than humans, suggesting a deep-going obstacle to its evolution. One possibility is that language simply cannot evolve in a Darwinian world—that is, in a world based ultimately on competition and conflict. The underlying problem may be that the communicative use of language presupposes anomalously high levels of mutual cooperation and trust—levels beyond anything which current Darwinian theory can explain.3

    To support this view, Knight synthesizes the insights of linguist Noam Chomsky, ornithologist and theoretical biologist Amotz Zahavi, and anthropologist Dan Sperber. All three scientists determine that language cannot evolve from animal communication for three distinct reasons.

    Three Reasons Why Language Is Unique to Humans

    Chomsky views animal minds as capable of only bounded ranges of expression. Human language, by contrast, uses a finite set of symbols to communicate an infinite array of thoughts and ideas. For Chomsky, there are no intermediate steps between bounded and unbounded expression; the capacity to express an unlimited array of thoughts and ideas must have appeared all at once. And this ability must be supported by brain and vocalization structures, which either were already in place when language appeared (having been selected by the evolutionary process for entirely different purposes) or arose simultaneously with the capacity to conceive of infinite thoughts and ideas. To put it another way, language could not have emerged from animal communication through a stepwise evolutionary process. It had to appear all at once, fully intact at the time of its genesis. No one knows of any mechanism that can effect that type of transformation.

    Zahavi’s work centers on understanding the evolutionary origin of signaling in the animal world. Central to his approach, Zahavi divides natural selection into two components: utilitarian selection (selection for traits that improve the efficiency of some biological process, enhancing the organism’s fitness) and signal selection (selection for traits that are wasteful). Though counterintuitive, signal selection contributes to the fitness of the organism because it communicates the organism’s fitness to other animals (members of the same or of a different species). The example Zahavi uses to illustrate signal selection is the unusual behavior of gazelles. These creatures stot (jump up and down, stomp the ground, snort loudly) when they detect a predator, which calls attention to themselves. This behavior is counterintuitive. Shouldn’t these creatures use their energy to run away, getting the biggest head start they can on the pursuing predator? As it turns out, the “wasteful and costly” behavior communicates the gazelle’s fitness to the predator: in the face of danger, the gazelle is willing to take on risk because it is so fit. The gazelle’s behavior dissuades the predator from attacking. Observations in the wild confirm Zahavi’s ideas. Predators most often go after gazelles that don’t stot or that display limited stotting behavior.

    Animal signaling is effective and reliable only when actual costly handicaps are communicated, and it can communicate those handicaps only through a limited, bounded range of signals. In contrast, language is open-ended and infinite. Given this constraint, animal signaling cannot evolve into language: once an unbounded range of messages becomes possible, no individual signal remains a reliable, costly guarantee, and in practice nothing is communicated at all.

    Based in part on fieldwork he conducted in Ethiopia with the Dorze people, Dan Sperber concluded that people use language primarily to communicate alternative possibilities and realities—falsehoods—rather than information that is true about the world. To be sure, people use language to convey brute facts about the world. But most often language is used to communicate institutional facts—agreed-upon truths—that don’t necessarily reflect the world as it actually is. According to Sperber, symbolic communication is characterized by extravagant imagery and metaphor. Human beings often build metaphor upon metaphor—and falsehood upon falsehood—when we communicate. For Sperber, this type of communication can’t evolve from animal signaling. What evolutionary advantage arises from transforming communication about reality (animal signaling) into communication about alternative realities (language)?

    Synthesizing the insights of Chomsky, Zahavi, and Sperber, Knight concludes that language is impossible in a Darwinian world. He states, “The Darwinian challenge remains real. Language is impossible not simply by definition, but—more interestingly—because it presupposes unrealistic levels of trust. . . . To guard against the very possibility of being deceived, the safest strategy is to insist on signals that just cannot be lies. This rules out not only language, but symbolic communication of any kind.”4

    Signal for Creation

    And yet, human beings possess language (along with other forms of symbolism, such as art and music). Our capacity for abstract language is one of the defining features of human beings.

    For Christians like me, our language abilities reflect the image of God. And what appears as a profound challenge and mystery for the evolutionary paradigm finds ready explanation in the biblical account of humanity’s origin.

    Is it time to let our capacity for conversation separate us from the evolutionary explanation for humanity’s origin?

    Resources:

    Endnotes
    1. Johan J. Bolhuis et al., “How Could Language Have Evolved?” PLoS Biology 12 (August 2014): e1001934, doi:10.1371/journal.pbio.1001934.
    2. Chris Knight, “Puzzles and Mysteries in the Origins of Language,” Language and Communication 50 (September 2016): 12–21, doi:10.1016/j.langcom.2016.09.002.
    3. Knight, “Puzzles and Mysteries,” 12–21.
    4. Knight, “Puzzles and Mysteries,” 12–21.
  • The Optimal Design of the Genetic Code

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 03, 2018

    Were there no example in the world of contrivance except that of the eye, it would be alone sufficient to support the conclusion which we draw from it, as to the necessity of an intelligent Creator.

    –William Paley, Natural Theology

    In his classic work, Natural Theology, William Paley surveyed a range of biological systems, highlighting their similarities to human-made designs. Paley noticed that human designs typically consist of various components that interact in a precise way to accomplish a purpose. According to Paley, human designs are contrivances—things produced with skill and cleverness—and they come about through the work of human agents: intelligent designers. And because biological systems are contrivances, they, too, must come about via the work of a Creator.

    For Paley, the pervasiveness of biological contrivances made the case for a Creator compelling. But he was especially struck by the vertebrate eye. For Paley, even if the eye were the only example of a biological contrivance available to us, its sophisticated design and elegant complexity would alone justify the “necessity of an intelligent Creator” to explain its origin.

    As a biochemist, I am impressed with the elegant designs of biochemical systems. The sophistication and ingenuity of these designs convinced me as a graduate student that life must stem from the work of a Mind. In my book The Cell’s Design, I follow in Paley’s footsteps by highlighting the eerie similarity between human designs and biochemical systems—a similarity I describe as an intelligent design pattern. Because biochemical systems conform to the intelligent design pattern, they must be the work of a Creator.

    As with Paley, I view the pervasiveness of the intelligent design pattern in biochemical systems as critical to making the case for a Creator. Yet, in particular, I am struck by the design of a single biochemical system: namely, the genetic code. On the basis of the structure of the genetic code alone, I think one is justified to conclude that life stems from the work of a Divine Mind. The latest work by a team of German biochemists on the genetic code’s design convinces me all the more that the genetic code is the product of a Creator’s handiwork.1

    To understand the significance of this study and the code’s elegant design, a short primer on molecular biology is in order. (For those who have a background in biology, just skip ahead to The Optimal Genetic Code.)

    Proteins

    The “workhorse” molecules of life, proteins take part in essentially every cellular and extracellular structure and activity. Proteins are chain-like molecules folded into precise three-dimensional structures. Often, the protein’s three-dimensional architecture determines the way it interacts with other proteins to form a functional complex.

    Proteins form when the cellular machinery links together (in a head-to-tail fashion) smaller subunit molecules called amino acids. To a first approximation, the cell employs 20 different amino acids to make proteins. The amino acids that make up proteins possess a variety of chemical and physical properties.

    blog__inline--the-optimal-design-of-the-genetic-code-1

    Figure 1: The Amino Acids. Image credit: Shutterstock

    Each specific amino acid sequence imparts the protein with a unique chemical and physical profile along the length of its chain. The chemical and physical profile determines how the protein folds and, therefore, its function. Because structure determines the function of a protein, the amino acid sequence is key to dictating the type of work a protein performs for the cell.

    DNA

    The cell’s machinery uses the information harbored in the DNA molecule to make proteins. Like these biomolecules, DNA consists of chain-like structures known as polynucleotides. Two polynucleotide chains align in an antiparallel fashion to form a DNA molecule. (The two strands are arranged parallel to one another with the starting point of one strand located next to the ending point of the other strand, and vice versa.) The paired polynucleotide chains twist around each other to form the well-known DNA double helix. The cell’s machinery forms polynucleotide chains by linking together four different subunit molecules called nucleotides. The four nucleotides used to build DNA chains are adenosine, guanosine, cytidine, and thymidine, familiarly known as A, G, C, and T, respectively.

    blog__inline--the-optimal-design-of-the-genetic-code-2

    Figure 2: The Structure of DNA. Image credit: Shutterstock

    As noted, DNA stores the information necessary to make all the proteins used by the cell. The sequence of nucleotides in the DNA strands specifies the sequence of amino acids in protein chains. Scientists refer to the amino-acid-coding nucleotide sequence that is used to construct proteins along the DNA strand as a gene.

    The Genetic Code

    A one-to-one relationship cannot exist between the 4 different nucleotides of DNA and the 20 different amino acids used to assemble polypeptides. Even pairs of nucleotides fall short, yielding only 4 × 4 = 16 combinations, too few to cover 20 amino acids. The cell addresses this mismatch by using a code comprised of groupings of three nucleotides (4 × 4 × 4 = 64 combinations) to specify the 20 different amino acids.

    The cell uses a set of rules to relate these nucleotide triplet sequences to the 20 amino acids making up proteins. Molecular biologists refer to this set of rules as the genetic code. The nucleotide triplets, or “codons” as they are called, represent the fundamental communication units of the genetic code, which is essentially universal among all living organisms.

    Sixty-four codons make up the genetic code. Because the code only needs to encode 20 amino acids, some of the codons are redundant. That is, different codons code for the same amino acid. In fact, up to six different codons specify some amino acids. Others are specified by only one codon.

    Interestingly, some codons, called stop codons or nonsense codons, don’t code for any amino acid. (For example, the codon UGA is a stop codon.) These codons always occur at the end of a gene’s coding sequence, informing the cell where the protein chain ends.

    Some coding triplets, called start codons, play a dual role in the genetic code. These codons not only encode amino acids but also “tell” the cell where a protein chain begins. The most common start codon is AUG, which encodes the amino acid methionine; in bacteria, the codon GUG, which encodes the amino acid valine, can also specify the starting point of a protein.

    blog__inline--the-optimal-design-of-the-genetic-code-3

    Figure 3: The Genetic Code. Image credit: Shutterstock
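
    The redundancy in these coding assignments can be tallied directly. The short Python sketch below is my own illustration (not part of the studies discussed here): it encodes the standard genetic code in the compact NCBI layout, written with DNA bases (T in place of U) and with '*' marking stop codons, then counts how many codons specify each amino acid.

```python
from collections import Counter
from itertools import product

BASES = "TCAG"
# Standard genetic code in NCBI order (first base varies slowest); '*' = stop
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(p) for p in product(BASES, repeat=3)]
CODE = dict(zip(CODONS, AAS))

# How many codons specify each amino acid?
counts = Counter(aa for aa in CODE.values() if aa != "*")
stops = [codon for codon, aa in CODE.items() if aa == "*"]

print(sorted(counts.items(), key=lambda kv: -kv[1]))
print(stops)  # TAA, TAG, TGA
```

    Running the tally shows three amino acids (leucine, serine, and arginine) claimed by six codons each, while methionine and tryptophan get only one apiece, with three stop codons rounding out the 64.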

    The Optimal Genetic Code

    Based on visual inspection of the genetic code, biochemists had long suspected that the coding assignments weren’t haphazard—a frozen accident. Instead it looked to them like a rationale undergirds the genetic code’s architecture. This intuition was confirmed in the early 1990s. As I describe in The Cell’s Design, at that time, scientists from the University of Bath (UK) and from Princeton University quantified the error-minimization capacity of the genetic code. Their initial work indicated that the naturally occurring genetic code withstands the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the universal genetic code.2

    Subsequent analysis performed later that decade incorporated additional factors. For example, some types of substitution mutations (called transitions) occur more frequently in nature than others (called transversions). As a case in point, an A-to-G substitution occurs more frequently than does either an A-to-C or an A-to-T mutation. When researchers included this factor in their analysis, they discovered that the naturally occurring genetic code performed better than all but one of a million randomly generated genetic codes. In a separate study, they also found that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.3

    It could be argued that the genetic code’s error-minimization properties are more dramatic than these results indicate. When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution, with the naturally occurring genetic code’s capacity falling outside the distribution. Researchers estimate the existence of 10^18 (a quintillion) possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. Nearly all of these codes fall within the error-minimization distribution. This finding means that of 10^18 possible genetic codes, only a few have an error-minimization capacity that approaches the code found universally in nature.
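
    This kind of error-minimization comparison can be sketched in a few lines of code. The version below is only a rough stand-in for the published analyses, under assumptions I am adding myself: it scores a code by the mean squared change in Kyte-Doolittle hydropathy (a convenient proxy for the polar-requirement measure the original studies used) across all single-nucleotide substitutions between sense codons, and it generates random codes by permuting which amino acid owns each synonymous codon block, preserving the natural code’s type and degree of redundancy.

```python
import random
from itertools import product

BASES = "TCAG"
# Standard genetic code in NCBI order; '*' = stop codon
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
NATURAL = dict(zip(("".join(p) for p in product(BASES, repeat=3)), AAS))

# Kyte-Doolittle hydropathy values (stand-in for the polar-requirement
# measure used in the published error-minimization studies)
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
         "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
         "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
         "R": -4.5}

def cost(code):
    """Mean squared hydropathy change over all single-nucleotide
    substitutions that turn one sense codon into another."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for i in range(3):
            for b in BASES:
                if b == codon[i]:
                    continue
                aa2 = code[codon[:i] + b + codon[i + 1:]]
                if aa2 == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[aa2]) ** 2
                n += 1
    return total / n

def shuffled(rng):
    """Random code with the same redundancy pattern: permute which amino
    acid owns each synonymous codon block; stop codons stay put."""
    aas = sorted(HYDRO)
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: aa if aa == "*" else perm[aa] for c, aa in NATURAL.items()}

rng = random.Random(42)
natural_cost = cost(NATURAL)
random_costs = [cost(shuffled(rng)) for _ in range(1000)]
better = sum(c <= natural_cost for c in random_costs) / len(random_costs)
print(f"natural code cost: {natural_cost:.2f}")
print(f"fraction of random codes at least as good: {better:.3f}")
```

    Even with this crude scoring function, the natural code lands far into the favorable tail of the random-code distribution, which is the qualitative pattern the published studies quantify much more carefully.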

    Frameshift Mutations

    Recently, researchers from Germany wondered if this same type of optimization applies to frameshift mutations. Biochemists have discovered that these mutations are much more devastating than substitution mutations. Frameshift mutations result when nucleotides are inserted into or deleted from the DNA sequence of the gene. If the number of inserted/deleted nucleotides is not divisible by three, the added or deleted nucleotides cause a shift in the gene’s reading frame—altering the codon groupings. Frameshift mutations change all the original codons to new codons at the site of the insertion/deletion and onward to the end of the gene.

    blog__inline--the-optimal-design-of-the-genetic-code-4

    Figure 4: Types of Mutations. Image credit: Shutterstock
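
    A toy example (my own, not the German team’s analysis) makes it easy to see why frameshift mutations are so destructive. The made-up 18-nucleotide gene below is purely illustrative: inserting a single nucleotide shifts the codon groupings, so every codon downstream of the insertion, and hence every amino acid, changes, and premature stop codons often appear.

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code in NCBI order; '*' = stop codon
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = dict(zip(("".join(p) for p in product(BASES, repeat=3)), AAS))

def translate(dna):
    """Translate a DNA string codon by codon ('*' marks a stop codon)."""
    return "".join(CODE[dna[i:i + 3]]
                   for i in range(0, len(dna) - len(dna) % 3, 3))

gene = "ATGGCTGAAGTTCTGAAA"          # Met-Ala-Glu-Val-Leu-Lys
mutant = gene[:4] + "T" + gene[4:]   # insert one nucleotide after position 4

print(translate(gene))    # MAEVLK
print(translate(mutant))  # MV*SSE -- every codon after the insertion changes
```

    A single inserted base turns the sensible protein MAEVLK into MV*SSE, complete with a premature stop codon, which is why the genetic code’s measured resistance to frameshift errors is so striking.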

    The Genetic Code Is Optimized to Withstand Frameshift Mutations

    Like the researchers from the University of Bath, the German team generated 1 million random genetic codes with the same type and degree of redundancy as the genetic code found in nature. They discovered that the code found in nature is better optimized to withstand errors that result from frameshift mutations (involving either the insertion or deletion of 1 or 2 nucleotides) than most of the random genetic codes they tested.

    The Genetic Code Is Optimized to Harbor Multiple Overlapping Codes

    The optimization doesn’t end there. In addition to the genetic code, genes harbor other overlapping codes that independently direct the binding of histone proteins and transcription factors to DNA and dictate processes like messenger RNA folding and splicing. In 2007, researchers from Israel discovered that the genetic code is also optimized to harbor overlapping codes.4

    The Genetic Code and the Case for a Creator

    In The Cell’s Design, I point out that common experience teaches us that codes come from minds. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind. This conclusion gains considerable support based on the exquisite optimization of the genetic code to withstand errors that arise from both substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes.

    The triple optimization of the genetic code arises from its redundancy and the specific codon assignments. Over 10^18 possible genetic codes exist, and any one of them could have been “selected” as the code in nature. Yet, the “chosen” code displays extreme optimization—a hallmark feature of designed systems. As the evidence continues to mount, it becomes more and more evident that the genetic code displays an eerie perfection.5

    An elegant contrivance such as the genetic code—which resides at the heart of biochemical systems and defines the information content in the cell—is truly one in a million when it comes to reasons to believe.

    Resources

    Endnotes
    1. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (2018): e4825, doi:10.7717/peerj.4825.
    2. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33 (1991): 412–17, doi:10.1007/BF02103132.
    3. Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281 (1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47 (1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17 (2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
    4. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research (2007): advance online publication, doi:10.1101/gr.5987307.
    5. In The Cell’s Design, I explain why the genetic code cannot emerge through evolutionary processes, reinforcing the conclusion that the cell’s information systems—and hence, life—must stem from the handiwork of a Creator.
  • Neuroscientists Transfer "Memories" from One Snail to Another: A Christian Perspective on Engrams

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 26, 2018

    Scientists from UCLA recently conducted some rather bizarre experiments. For me, it’s these types of things that make it so much fun to be a scientist.

    Biologists transferred memories from one sea slug to another by extracting RNA from the nervous system of a trained sea slug and then injecting the extract into an untrained sea slug.1 After the injection, the untrained sea slugs responded to environmental stimuli just like the trained ones, based on false memories created by the transfer of biomolecules.

    Why would researchers do such a thing? Even though it might seem like their motives were nefarious, they weren’t inspired to carry out these studies by Dr. Frankenstein or Dr. Moreau. Instead, they had really good reasons for performing these experiments: they wanted to gain insight into the physical basis of memory.

    How are memories encoded? How are they stored in the brain? And how are memories retrieved? These are some of the fundamental scientific questions that interest researchers who work in cognitive neuroscience. It turns out that sea slugs belonging to the group Aplysia (commonly referred to as sea hares) make ideal organisms to study in order to address these questions. The fact that we can gain insight into how memories are stored with sea slugs is mind-blowing and indicates to me (as a Christian and a biochemist) that biological systems have been designed for discovery.

    Sea Hares

    Sea hares have become the workhorses of cognitive neuroscience. This creature has a nervous system that’s complex enough to allow neuroscientists to study reflexes and learned behaviors, but simple enough that they can draw meaningful conclusions from their experiments. (By way of comparison, members of Aplysia have about 20,000 neurons in their nervous systems compared to humans who have 85 billion neurons in our brains alone.)

    Toward this end, neuroscientists took advantage of a useful reflexive behavior displayed by sea hares, called gill and siphon withdrawal. When these creatures are disturbed, they rapidly withdraw their delicate gill and siphon.

    The nervous system of these creatures can also undergo sensitization, a learned response in which repeated exposure to a stimulus produces an enhanced, broadened reaction to related stimuli—say, stimuli that connote danger.

    What Causes Memories?

    Sensitization is a learned response that is possible because memories have been encoded and stored in the sea hares’ nervous system. But how is this memory stored?

    Many neuroscientists think that the physical instantiation of memories (called engrams) resides in the synaptic connections between nerve cells (neurons). Other neuroscientists hold a differing view: instead of being mediated by cell-cell interactions, engrams form within the interior of neurons, through biochemical events that take place within the cell nucleus. In fact, some studies have implicated RNA molecules in memory formation and storage.2 The UCLA researchers sought to determine if RNA plays a role in memory formation.

    Memory Transfer from One Sea Hare to Another

    To test this hypothesis, the researchers sensitized sea hares to painful stimuli. They accomplished this feat by inserting an electrode in the tail regions of several sea hares and delivering a shock. The shock caused the sea hares to withdraw their gill and siphon. After 20 minutes, they repeated the shock protocol and continued to do so in 20-minute intervals five more times. Twenty-four hours later, they repeated the shock protocol. By this point, the sea hare test subjects were sensitized to threatening stimuli. When touched, the trained sea hares would withdraw their gill and siphon for nearly 1 minute. Untrained sea hares (who weren’t subjected to the shock protocol) would withdraw their gill and siphon when touched for only about 1 second.

    Next, the researchers sacrificed the sensitized sea hares and isolated RNA from their nervous systems. Then they injected the RNA extracts into the hemocoel of untrained sea hares. When touched, these previously untrained sea hares withdrew their gill and siphon for about 45 seconds.

    To confirm that this response was not due to the injection procedure, they repeated it by injecting RNA extracted from the nervous system of an untrained sea hare into untrained sea hares. When touched, the gill and siphon withdrawal reflex lasted only about 1 second.

    blog__inline--neuroscientists-transfer-memories-from-one-snail-to-another

    Figure: Sea Hare Stimulus Protocol. Image credit: Alexis Bédécarrats, Shanping Chen, Kaycey Pearce, Diancai Cai, and David L. Glanzman, eNeuro 14 May 2018, 5 (3) ENEURO.0038-18.2018; doi:10.1523/ENEURO.0038-18.2018.

    The researchers then applied the RNA extracts from both trained and untrained sea hares to sensory neurons grown in the lab. The RNA extracts from the trained sea hares caused the sensory neurons to display heightened activity. Conversely, the RNA extracts from the untrained sea hares had no effect on the activity of the cultured sensory neurons.

    Finally, the researchers added compounds called methylase inhibitors to the RNA extracts before injecting them into untrained sea hares. These inhibitors blocked the memory transfer. This result indicates that epigenetic modifications of DNA mediated by RNA molecules play a role in forming engrams.

    Based on these results, it appears that RNA mediates the formation and storage of memories. And, though the research team does not know which class of RNAs plays a role in the formation of engrams, they suspect that microRNAs may be the biochemical actors.

    Biomedical Implications

    Now that the UCLA researchers have identified RNA and epigenetic modifications of DNA as central to the formation of engrams, they believe that it might one day be possible to develop biomedical procedures that could treat memory loss that occurs with old age or with diseases such as Alzheimer’s and dementia. Toward this end, it is particularly encouraging that the researchers could transfer memories from one sea hare to another. This insight might even lead to therapies that would erase horrific memories.

    Of course, this raises questions about human nature—specifically, the relationship between the brain and mind. For many people, the fact that there is a physical basis for memories suggests that our mind is indistinguishable from the activities taking place within our brains. To put it differently, many people would reject the idea that our mind is a nonphysical substance, based on the discovery of engrams.

    Engrams, Brain, and Mind

    However, I would contend that if we adopt the appropriate mind-body model, it is possible to preserve the concept of the mind as a nonphysical entity distinct from the brain even if engrams are a reality. A model I find helpful is based on a computer hardware/software analogy. Accordingly, the brain is the hardware that manifests the mind’s activity. Meanwhile, the mind is analogous to the software programming. According to this model, hardware structures—brain regions—support the expression of the mind, the software.

    A computer system needs both the hardware and software to function properly. Without the hardware, the software is just a set of instructions. For those instructions to take effect, the software must be loaded into the hardware. It is interesting that data accessed by software is stored in the computer’s hardware. So, why wouldn’t the same be true for the human brain?

    We need to be careful not to take this analogy too far. However, from my perspective, it illustrates how it is possible for memories to be engrams while preserving the mind as a nonphysical, distinct entity.

    Designed for Discovery

    The significance of this discovery extends beyond the mind-brain problem. It’s provocative that the biology of a creature such as the sea hare could provide such important insight into human biology.

    This is possible only because of the universal nature of biological systems. All life on Earth shares the same biochemistry. All life is made up of the same type of cells. Animals possess similar anatomical and physiological systems.

    Most biologists today view these shared features as evidence for an evolutionary history of life. Yet, as a creationist and an intelligent design proponent, I interpret the universal nature of the cell’s chemistry and shared features of biological systems as manifestations of archetypical designs that emanate from the Creator’s mind. To put it another way, I regard the shared features of biological systems as evidence for common design, not common descent.

    This view invites a follow-up question: Why would God create using the same template? Why not create each biochemical system from scratch to be ideally suited for its function? There may be several reasons why a Creator would design living systems around a common set of templates. In my estimation, one of the most significant reasons is discoverability. The shared features of biochemical and biological systems make it possible to apply what we learn by studying one organism to all others. Without life’s shared features, the discipline of biology wouldn’t exist.

    This discoverability makes it easier to appreciate God’s glory and grandeur, as evinced by the elegance, sophistication, and ingenuity in biochemical and biological systems. Discoverability of biochemical systems also reflects God’s providence and care for humanity. If not for the shared features, it would be nearly impossible for us to learn enough about the living realm for our benefit. Where would biomedical science be without the ability to learn fundamental aspects of our biology by studying model organisms such as yeast, fruit flies, mice—and sea hares?

    The shared features in the living realm are a manifestation of the Creator’s care and love for humanity. And there is nothing bizarre about that.

    Resources

    Endnotes
    1. Alexis Bédécarrats et al., “RNA from Trained Aplysia Can Induce an Epigenetic Engram for Long-Term Sensitization in Untrained Aplysia,” eNeuro 5 (May/June 2018): e0038-18.2018, 1–11, doi:10.1523/ENEURO.0038-18.2018.
    2. For example, see Germain U. Busto et al., “microRNAs That Promote Or Inhibit Memory Formation in Drosophila melanogaster,” Genetics 200 (June 1, 2015): 569–80, doi:10.1534/genetics.114.169623.
  • Differences in Human and Neanderthal Brains Explain Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 19, 2018

    When I was a little kid, my mom went through an Agatha Christie phase. She was a huge fan of the murder mystery writer and she read all of Christie’s books.

    Agatha Christie was caught up in a real-life mystery of her own when she disappeared for 11 days in December 1926 under highly suspicious circumstances. Her car was found near her home, close to the edge of a cliff. But she was nowhere to be found. It looked as if she had disappeared without a trace, without any explanation. Eleven days after her disappearance, she turned up in a hotel room registered under an alias.

    Christie never offered an explanation for her disappearance. To this day, it remains an enduring mystery. Some think it was a callous publicity stunt. Some say she suffered a nervous breakdown. Others think she suffered from amnesia. Some people suggest more sinister reasons. Perhaps, she was suicidal. Or maybe she was trying to frame her husband and his mistress for her murder.

    Perhaps we will never know.

    Like Christie’s fictional detectives Hercule Poirot and Miss Marple, paleoanthropologists are every bit as eager to solve a mysterious disappearance of their own. They want to know why Neanderthals vanished from the face of the earth. And what role did human beings (Homo sapiens) play in the Neanderthal disappearance, if any? Did we kill off these creatures? Did we outcompete them or did Neanderthals just die off on their own?

    Anthropologists have proposed various scenarios to account for the Neanderthals’ disappearance. Some paleoanthropologists think that differences in the cognitive capabilities of modern humans and Neanderthals help explain the creatures’ extinction. According to this model, superior reasoning abilities allowed humans to thrive while Neanderthals faced inevitable extinction. As a consequence, we replaced Neanderthals in the Middle East, Europe, and Asia when we first migrated to these parts of the world.

    Computational Neuroanatomy

    Innovative work by researchers from Japan offers support for this scenario.1 Using a technique called computational neuroanatomy, researchers reconstructed the brain shape of Neanderthals and modern humans from the fossil record. In their study, the researchers used four Neanderthal specimens:

    • Amud 1 (50,000 to 70,000 years in age)
    • La Chapelle-aux-Saints 1 (47,000 to 56,000 years in age)
    • La Ferrassie 1 (43,000 to 45,000 years in age)
    • Forbes’ Quarry 1 (undated)

    They also worked with four Homo sapiens specimens:

    • Qafzeh 9 (90,000 to 120,000 years in age)
    • Skhūl 5 (100,000 to 135,000 years in age)
    • Mladeč 1 (35,000 years in age)
    • Cro-Magnon 1 (32,000 years in age)

    Researchers used computed tomography scans to construct virtual endocasts (casts of the cranial cavity) for each fossil. After generating the endocasts, the team determined the 3D brain structure of the fossil specimens by deforming the 3D structure of the average modern human brain so that it fit into the fossil crania and conformed to the endocasts.

    This technique appears to be valid, based on control studies carried out on chimpanzee and bonobo brains. Using computational neuroanatomy, researchers can deform a chimpanzee brain to accurately yield the bonobo brain, and vice versa.
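    The deformation step can be illustrated, very crudely, with a toy sketch in which the "average brain" is a set of 3D points that gets affinely rescaled so its bounding box matches that of a target endocast. All coordinates here are invented for illustration; the actual method uses far richer, nonlinear deformations of a full brain model.

```python
def bounding_box(points):
    """Axis-aligned bounding box of a 3D point cloud."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    return lo, hi

def fit_to_endocast(template, endocast):
    """Affinely rescale the template points so their bounding box matches
    the endocast's -- a crude stand-in for the nonlinear deformation used
    in computational neuroanatomy."""
    (tlo, thi), (elo, ehi) = bounding_box(template), bounding_box(endocast)
    scale = [(ehi[i] - elo[i]) / (thi[i] - tlo[i]) for i in range(3)]
    return [tuple(elo[i] + (p[i] - tlo[i]) * scale[i] for i in range(3))
            for p in template]

# A globular 'template brain' warped toward an elongated 'endocast'.
template = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
endocast = [(0, 0, 0), (14, 0, 0), (0, 9, 0), (0, 0, 8)]
warped = fit_to_endocast(template, endocast)
print(bounding_box(warped))  # matches the endocast's bounding box
```

    The same idea, run with a real brain model and thousands of surface landmarks, lets researchers estimate the shape of a brain that no longer exists from the cavity it once filled.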

    Brain Differences, Cognitive Differences

    The Japanese team learned that the chief difference between human and Neanderthal brains is the size and shape of the cerebellum. The cerebellar hemisphere projects more toward the interior in the human brain than in the Neanderthal brain, and the volume of the human cerebellum is larger. Researchers also noticed that the right side of the Neanderthal cerebellum is significantly smaller than the left side—a phenomenon called volumetric laterality. This discrepancy doesn’t exist in the human brain. Finally, the Japanese researchers observed that the parietal regions in the human brain were larger than the corresponding regions in Neanderthal brains.

    blog__inline--differences-in-human-and-neanderthal-brains
    Image credit: Shutterstock

    Because of these brain differences, the researchers argue that humans were socially and cognitively more sophisticated than Neanderthals. Neuroscientists have discovered that the cerebellum contributes to motor function and higher cognition, including language, working memory, thought, and social abilities. Hence, the researchers argue that the reduced size of the right cerebellar hemisphere in Neanderthals would have limited the connections to the prefrontal regions—connections critical for language processing. Neuroscientists have also discovered that the parietal lobe plays a role in visuo-spatial imagery, episodic memory, self-related mental representations, coordination between self and external spaces, and the sense of agency.

    On the basis of this study, it seems that humans either outcompeted Neanderthals for limited resources—driving them to extinction—or simply were better suited to survive than Neanderthals because of superior mental capabilities. Or perhaps their demise occurred for more sinister reasons. Maybe we used our sophisticated reasoning skills to kill off these creatures.

    Did Neanderthals Make Art, Music, Jewelry, etc.?

    Recently, a flurry of reports has appeared in the scientific literature claiming that Neanderthals possessed the capacity for language and the ability to make art, music, and jewelry. Other studies claim that Neanderthals ritualistically buried their dead, mastered fire, and used plants medicinally. All of these claims rest on highly speculative interpretations of the archaeological record. In fact, other studies present evidence that refutes every one of these claims (see Resources).

    Comparisons of human and Neanderthal brain morphology and size become increasingly important in the midst of this controversy. This recent study—along with previous work—indicates that Neanderthals did not have the brain architecture and, hence, the cognitive capacity to communicate symbolically through language, art, music, and body ornamentation. Nor did they have the brain capacity to engage in complex social interactions. In short, Neanderthal brain anatomy does not support any interpretation of the archaeological record that attributes advanced cognitive abilities to these creatures.

    While this study provides important clues about the disappearance of Neanderthals, we still don’t know why they went extinct. Nor do we know any of the mysterious details surrounding their demise as a species.

    Perhaps we will never know.

    But we do know that in terms of our cognitive and social capacities, human beings stand apart from Neanderthals and all other creatures. Human brain biology and behavior render us exceptional, one-of-a-kind, in ways consistent with the image of God.

    Resources

    Endnotes
    1. Takanori Kochiyama et al., “Reconstructing the Neanderthal Brain Using Computational Anatomy,” Scientific Reports 8 (April 26, 2018): 6296, doi:10.1038/s41598-018-24331-0.
  • Yeast Gene Editing Study Raises Questions about the Evolutionary Origin of Human Chromosome 2

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 12, 2018

    Because I am a biochemist and a skeptic of the evolutionary paradigm, people often ask me two interrelated questions:

    1. What do you think are the greatest scientific challenges to the evolutionary paradigm?
    2. How do you respond to all the compelling evidence for biological evolution?

    When it comes to the second question, people almost always ask about the genetic similarity between humans and chimpanzees. Unexpectedly, new research on gene editing in brewer’s yeast helps answer these questions more definitively than ever.

    For many people, the genetic comparisons between the two species convince them that human evolution is true. Presumably, the shared genetic features in the human and chimpanzee genomes reflect the species’ shared evolutionary ancestry.

    One high-profile example of these similarities is the structural features human chromosome 2 shares with two chimpanzee chromosomes labeled chromosome 2A and chromosome 2B. When the two chimpanzee chromosomes are placed end to end, they look remarkably like human chromosome 2. Evolutionary biologists interpret this genetic similarity as evidence that human chromosome 2 arose when chromosome 2A and chromosome 2B underwent an end-to-end fusion. They claim that this fusion took place in the human evolutionary lineage at some point after it separated from the lineage that led to chimpanzees and bonobos. Therefore, the similarity in these chromosomes provides strong evidence that humans and chimpanzees share an evolutionary ancestry.

    blog__inline--yeast-gene-editing-study-1 

    Figure 1: Human and Chimpanzee Chromosomes Compared

    Image credit: Who Was Adam? (Covina, CA: RTB Press, 2015), p. 210.

    Yet, new work by two separate teams of synthetic biologists from the United States and China raises questions about this evolutionary scenario. Working independently, both research teams devised similar gene editing techniques that they used to fuse chromosomes in the yeast species Saccharomyces cerevisiae (brewer’s yeast).1 Their work demonstrates the central role intelligent agency must play in end-to-end chromosome fusion, thereby countering the evolutionary explanation while supporting a creation model interpretation of human chromosome 2.

    The Structure of Human Chromosome 2

    Chromosomes are large structures visible in the nucleus during the cell division process. These structures consist of DNA combined with proteins to form the chromosome’s highly condensed, hierarchical architecture.

    blog__inline--yeast-gene-editing-study-2

    Figure 2: Chromosome Structure

    Image credit: Shutterstock

    Each species has a characteristic number of chromosomes that differ in size and shape. For example, humans have 46 chromosomes (23 pairs); chimpanzees and other apes have 48 (24 pairs).

    When exposed to certain dyes, chromosomes stain in a diagnostic pattern of bands along their length. The bands vary in number, location, thickness, and intensity, and the unique banding profile of each chromosome helps geneticists identify it under a microscope.

    In the early 1980s, evolutionary biologists compared the chromosomes of humans, chimpanzees, gorillas, and orangutans for the first time.2 These studies revealed an exceptional degree of similarity between human and chimp chromosomes. When aligned, the human and corresponding chimpanzee chromosomes display near-identical banding patterns, band locations, band sizes, and band stain intensities. To evolutionary biologists, this resemblance provides powerful evidence for human and chimpanzee shared ancestry.

    The most noticeable difference between human and chimp chromosomes is the quantity: 46 for humans and 48 for chimpanzees. As I pointed out, evolutionary biologists account for this difference by suggesting that two chimp chromosomes (2A and 2B) fused. This fusion event would have reduced the number of chromosome pairs from 24 to 23, and the chromosome number from 48 to 46.

    As noted, evidence for this fusion comes from the close similarity of the banding patterns of human chromosome 2 and chimp chromosomes 2A and 2B when the latter two are oriented end to end. The case for fusion also gains support from the presence of: (1) two centromeres in human chromosome 2, one functional, the other inactive; and (2) an internal telomere sequence within human chromosome 2.3 The locations of the two centromeres and the internal telomere sequence correspond to the expected locations if, indeed, human chromosome 2 arose through a fusion event.4

    Evidence for Evolution or Creation?

    Even though human chromosome 2 looks like it is a fusion product, it seems unlikely to me that its genesis resulted from undirected natural processes. Instead, I would argue that a Creator intervened to create human chromosome 2 because combining chromosomes 2A and 2B end to end to form it would have required a succession of highly improbable events.

    I describe the challenges to the evolutionary explanation in some detail in a previous article:

    • End-to-end fusion of two chromosomes at the telomeres faces nearly insurmountable hurdles.
    • And, if somehow the fusion did occur, it would alter the number of chromosomes and lead to one of three possible scenarios: (1) nonviable offspring, (2) viable offspring that suffers from a diseased state, or (3) viable but infertile offspring. Each of these scenarios would prevent the fused chromosome from entering and becoming entrenched in the human gene pool.
    • Finally, if chromosome fusion took place and if the fused chromosome could be passed on to offspring, the event would have had to create such a large evolutionary advantage that it would rapidly sweep through the population, becoming fixed.

    This succession of highly unlikely events makes more sense, from my vantage point, if we view the structure of human chromosome 2 as the handiwork of a Creator instead of the outworking of evolutionary processes. But why would these chromosomes appear to be so similar, if they were created? As I discuss elsewhere, I think the similarity between human and chimpanzee chromosomes reflects shared design, not shared evolutionary ancestry. (For more details, see my article “Chromosome 2: The Best Evidence for Evolution?”)

    Yeast Chromosome Studies Offer Insight

    Recent work by two independent teams of synthetic biologists from the US and China corroborates my critique of the evolutionary explanation for human chromosome 2. Working within the context of the evolutionary framework, both teams were interested in understanding the influence that chromosome number and organization have on an organism’s biology and how chromosome fusion shapes evolutionary history. To pursue this insight, both research groups carried out similar experiments using CRISPR/Cas9 gene editing to reduce the number of chromosomes in brewer’s yeast from 16 to 1 (for the Chinese team) and from 16 to 2 (for the team from the US) through a succession of fusion events.

    Both teams reduced the number of chromosomes in stages by fusing pairs of chromosomes. The first round of fusions reduced the number from 16 to 8. In the next round, they fused pairs of the newly created chromosomes to reduce the number from 8 to 4, and so on.
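    The staged, halving scheme the teams followed can be sketched as a toy simulation (the chromosome names are placeholders, not the actual yeast loci):

```python
def fuse_pairs(chromosomes):
    """Fuse adjacent pairs of chromosomes end to end (toy model)."""
    fused = [chromosomes[i] + "+" + chromosomes[i + 1]
             for i in range(0, len(chromosomes) - 1, 2)]
    if len(chromosomes) % 2:  # an odd chromosome carries over unfused
        fused.append(chromosomes[-1])
    return fused

# Brewer's yeast starts with 16 chromosomes.
genome = ["chr%d" % i for i in range(1, 17)]
counts = [len(genome)]
while len(genome) > 1:
    genome = fuse_pairs(genome)
    counts.append(len(genome))

print(counts)  # [16, 8, 4, 2, 1]
```

    Four successive rounds of pairwise fusion take the Chinese team's yeast from 16 chromosomes down to 1; the US team stopped one round earlier, at 2.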

    To their surprise, the yeast seemed to tolerate this radical genome editing quite well—although their growth rate slowed and the yeast failed to thrive under certain laboratory conditions. Gene expression was altered in the modified yeast genomes, but only for a few genes. Most of the 5,800 genes in the brewer’s yeast genome were normally expressed, compared to the wild-type strain.

    For synthetic biology, this work is a milestone. It currently stands as one of the most radical genome reconfigurations ever achieved. This discovery creates an exciting new research tool to address fundamental questions about chromosome biology. It also may have important applications in biotechnology.

    The experiment also ranks as a milestone for the RTB human origins creation model because it helps address questions about the origin of human chromosome 2. Specifically, the work with brewer’s yeast provides empirical evidence that human chromosome 2 must have been shaped by an Intelligent Agent. This research also reinforces my concerns about the capacity of evolutionary mechanisms to generate human chromosome 2 via the fusion of chimpanzee chromosomes 2A and 2B.

    Chromosome fusion demonstrates the critical role intelligent agency plays.

    Both research teams had to carefully design the gene editing system they used so that it would precisely delete two distinct regions in the chromosomes. This process effected end-to-end chromosome fusions in a way that would allow the yeast cells to survive. Specifically, they had to delete regions of the chromosomes near the telomeres, including the highly repetitive telomere-associated sequences. While they carried out this deletion, they carefully avoided deleting DNA sequences near the telomeres that harbored genes. They also simultaneously deleted one of the centromeres of the fused chromosomes to ensure that the fused chromosome would properly replicate and segregate during cell division. Finally, they had to make sure that when the two chromosomes fused, the remaining centromere was positioned near the center of the resulting chromosome.
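    The deletion-and-fusion strategy described above can be illustrated with a toy model in which a chromosome is simply a list of labeled segments (the segment names are purely illustrative):

```python
# Toy chromosomes: "TEL" marks a telomere, "CEN_*" a centromere,
# and the remaining segments stand in for gene-bearing DNA.
chrA = ["TEL", "genesA1", "CEN_A", "genesA2", "TEL"]
chrB = ["TEL", "genesB1", "CEN_B", "genesB2", "TEL"]

def fuse(left, right, drop_centromere):
    """Fuse two chromosomes end to end, mimicking the editing strategy:
    trim the telomeres at the junction, join the arms, and delete one
    centromere so the product segregates properly at cell division."""
    junctionless = left[:-1] + right[1:]  # remove the two inner telomeres
    return [seg for seg in junctionless if seg != drop_centromere]

fused = fuse(chrA, chrB, drop_centromere="CEN_B")
print(fused)
# ['TEL', 'genesA1', 'CEN_A', 'genesA2', 'genesB1', 'genesB2', 'TEL']
```

    The fused product keeps one telomere at each end, a single centromere, and every gene-bearing segment, which is precisely the outcome the researchers had to engineer deliberately.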

    In addition to the high-precision gene editing, they had to carefully construct the sequence of donor DNA that accompanied the CRISPR/Cas9 gene editing package so that the chromosomes with the deleted telomeres could be directed to fuse end on end. Without the donor DNA, the fusion would have been haphazard.

    In other words, to fuse the chromosomes so that the yeast survived, the research teams needed a detailed understanding of chromosome structure and biology and a strategy for using this knowledge to design precise gene editing protocols. Such planning ensured that chromosome fusion occurred without the loss of key genetic information and without disrupting key processes such as DNA replication and chromosome segregation during cell division. The researchers’ painstaking effort is a far cry from the unguided, undirected, haphazard events that evolutionary biologists think caused the end-to-end chromosome fusion that created human chromosome 2. In fact, given the high-precision gene editing required to create fused chromosomes, it is hard to envision how evolutionary processes could ever produce a functional fused chromosome.

    A discovery by both research teams further complicates the evolutionary explanation for the origin of human chromosome 2. Namely, the yeast cells could not replicate unless the centromere of one of the chromosomes was deleted at the time the chromosomes fused. The researchers learned that if this step was omitted, the fused chromosomes weren’t stable. Because centromeres serve as the point of attachment for the mitotic spindle, if a chromosome possesses two centromeres, mistakes occur in the chromosome segregation step during cell division.

    It is interesting that human chromosome 2 has two centromeres, but one of them has been inactivated. (In the evolutionary scenario, this inactivation would have happened through a series of mutations in the centromeric DNA sequences that accrued over time.) However, if human chromosome 2 resulted from the fusion of two chimpanzee chromosomes, the initial fusion product would have possessed two functional centromeres. In the evolutionary scenario, it would have taken millennia for one of the centromeres to become inactivated. Yet, the yeast studies indicate that centromere loss must take place at the same time as end-to-end fusion. Given the nature of evolutionary mechanisms, it cannot.

    Chromosome fusion in yeast leads to a loss of fitness.

    Perhaps one of the most remarkable outcomes of this work is the discovery that the yeast cells survived after undergoing so many successive chromosome fusions. In fact, experts in synthetic biology such as Gianni Liti (who commented on this work for Nature) expressed surprise that the yeast survived this radical genome restructuring.5

    Though both research teams claimed that the fusions had little effect on the fitness of the yeast, the data suggest otherwise. The yeast cells with the fused chromosomes grew more slowly than wild-type cells and struggled to grow under certain culture conditions. In fact, when the Chinese research team cocultured the yeast carrying the single fused chromosome with the wild-type strain, the wild-type cells outcompeted the cells with the fused chromosome.

    Although researchers observed changes in gene expression only for a small number of genes, this result appears to be a bit misleading. The genes with changed expression patterns are normally located near telomeres. The activity of these genes is normally turned down low because they usually are needed only under specific growth conditions. But with the removal of telomeres in the fused chromosomes, these genes are no longer properly regulated; in fact, they may be over-expressed. And, as a consequence of chromosome fusion, some genes that normally reside at a distance from telomeres find themselves close to telomeres, leading to reduced activity.

    This altered gene expression pattern helps explain the slower growth rate of the yeast strain with fused chromosomes and the cells’ difficulty growing under certain conditions. The finding also raises more questions about the evolutionary scenario for the origin of human chromosome 2. Based on the yeast studies, it seems reasonable to think that the end-to-end fusion of chromosomes 2A and 2B would have reduced the fitness of the offspring that first inherited the fused chromosome 2, making it less likely that the fusion would have taken hold in the human gene pool.

    Chromosome fusion in yeast leads to a loss of fertility.

    Normally, yeast cells reproduce asexually, but they can also reproduce sexually. When yeast cells mate, they fuse, and the resulting cell has two sets of chromosomes. In this state, the yeast cells can divide or form spores. In many respects, sexual reproduction in yeast resembles sexual reproduction in humans, in which egg and sperm cells, each with one set of chromosomes, fuse to form a zygote with two sets of chromosomes.

    blog__inline--yeast-gene-editing-study-3 

    Figure 3: Yeast Cell Reproduction

    Image credit: Shutterstock

    Both research groups discovered that genetically engineered yeast cells with fused chromosomes could mate and form spores, but spore viability was lower than for wild-type yeast.

    They also discovered that after the first round of chromosome fusion when the genetically engineered yeast possessed 8 chromosomes, mating normal yeast cells with those harboring fused chromosomes resulted in low fertility. When wild-type yeast cells were mated with yeast strains that had been subjected to additional rounds of chromosome fusion, spore formation failed altogether.

    The synthetic biologists find this result encouraging because it means that if they use yeast with fused chromosomes for biotechnology applications, there is little chance that the genetically engineered yeast will mate with wild-type yeast. In other words, the loss of fertility serves as a safeguard.

    However, this loss of fertility does not bode well for evolutionary explanations of the origin of human chromosome 2. The yeast studies indicate that chromosome fusion leads to a loss of fertility because the mismatch in chromosome number makes it difficult for chromosomes to align and segregate properly during cell division. So, why wouldn’t this loss of fertility have occurred when chromosomes 2A and 2B fused?

    blog__inline--yeast-gene-editing-study-4 

    Figure 4: Cell Division

    Image credit: Shutterstock

    In short, the theoretical concerns I expressed about the evolutionary origin of human chromosome 2 find experimental support in the yeast studies. And the indisputable role intelligent agency plays in designing and executing the protocols to fuse yeast chromosomes provides empirical evidence that a Creator must have intervened in some capacity to design human chromosome 2.

    Of course, there are a number of outstanding questions that remain for a creation model interpretation of human chromosome 2, including:

    • Why would a Creator seemingly fuse together two chromosomes to create human chromosome 2?
    • Why does this chromosome possess internal telomere sequences?
    • Why does human chromosome 2 harbor seemingly nonfunctional centromere sequences?

    We predict that as we learn more about the biology of human chromosome 2, we will discover a compelling rationale for the structural features of this chromosome, in a way that befits a Creator.

    But, at this juncture, the fusion of yeast chromosomes in the lab makes it hard to think that unguided evolutionary processes could ever successfully fuse two chromosomes end to end, as would have been required to produce human chromosome 2. Creation appears to make more sense.

    Resources

    Endnotes
    1. Jingchuan Luo et al., “Karyotype Engineering by Chromosome Fusion Leads to Reproductive Isolation in Yeast,” Nature 560 (2018): 392–96, doi:10.1038/s41586-018-0374-x; Yangyang Shao et al., “Creating a Functional Single-Chromosome Yeast,” Nature 560 (2018): 331–35, doi:10.1038/s41586-018-0382-x.
    2. Jorge J. Yunis, J. R. Sawyer, and K. Dunham, “The Striking Resemblance of High-Resolution G-Banded Chromosomes of Man and Chimpanzee,” Science 208 (1980): 1145–48, doi:10.1126/science.7375922; Jorge J. Yunis and Om Prakash, “The Origin of Man: A Chromosomal Pictorial Legacy,” Science 215 (1982): 1525–30, doi:10.1126/science.7063861.
    3. The centromere is a region of the DNA molecule near the center of the chromosome that serves as the point of attachment for the mitotic spindle during the cell division process. Telomeres are DNA sequences located at the tip ends of chromosomes designed to stabilize the chromosome and prevent it from undergoing degradation.
    4. J. W. Ijdo et al., “Origin of Human Chromosome 2: An Ancestral Telomere-Telomere Fusion,” Proceedings of the National Academy of Sciences USA 88 (1991): 9051–55, doi:10.1073/pnas.88.20.9051; Rosamaria Avarello et al., “Evidence for an Ancestral Alphoid Domain on the Long Arm of Human Chromosome 2,” Human Genetics 89 (1992): 247–49, doi:10.1007/BF00217134.
    5. Gianni Liti, “Yeast Chromosome Numbers Minimized Using Genome Editing,” Nature 560 (August 1, 2018): 317–18, doi:10.1038/d41586-018-05309-4.
  • The Endosymbiont Hypothesis: Things Aren’t What They Seem to Be

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 29, 2018

    Sometimes, things just aren’t what they seem to be. For example, when it comes to the world of biology:

    • Fireflies are not flies; they are beetles
    • Prairie dogs are not dogs; they are rodents
    • Horned toads are not toads; they are lizards
    • Douglas firs are not firs; they are pines
    • Silkworms are not worms; they are caterpillars
    • Peanuts are not nuts; they are legumes
    • Koala bears are not bears; they are marsupials
    • Guinea pigs are not from Guinea and they are not pigs; they are rodents from South America
    • Banana trees are not trees; they are herbs
    • Cucumbers are not vegetables; they are fruit
    • Mexican jumping beans are not beans; they are seeds with a larva inside

    And . . . mitochondria are not alphaproteobacteria. In fact, evolutionary biologists don’t know what they are—at least, if recent work by researchers from Uppsala University in Sweden is to be taken seriously.1

    As silly as this list may be, evolutionary biologists are not amused by this latest insight about the identity of mitochondria. Uncertainty about the evolutionary origin of mitochondria removes from the table one of the most compelling pieces of evidence for the endosymbiont hypothesis.

    The endosymbiont hypothesis is a cornerstone idea within the modern evolutionary framework, and biology textbooks often present it as a well-evidenced, well-established evolutionary explanation for the origin of complex cells (eukaryotic cells). Yet, confusion and uncertainty surround this idea, as this latest discovery attests. To put it another way: when it comes to the evolutionary explanation for the origin of complex cells in biology textbooks, things aren’t what they seem.

    The Endosymbiont Hypothesis

    Most evolutionary biologists believe that the endosymbiont hypothesis is the best explanation for one of the key transitions in life’s history—namely, the origin of complex cells from bacteria and archaea. Building on the ideas of Russian botanist Konstantin Mereschkowski, Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis to explain the origin of eukaryotic cells in the 1960s.

    Since that time, Margulis’s ideas on the origin of complex cells have become an integral part of the evolutionary paradigm. Many life scientists find the evidence for this hypothesis compelling; consequently, they view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

    blog__inline--the-endosymbiont-hypothesis 

    The Evolution of Eukaryotic Cells According to the Endosymbiont Hypothesis

    Image source: Wikipedia

    Presumably, organelles such as mitochondria were once endosymbionts. Evolutionary biologists believe that once taken inside the host cell, the endosymbionts took up permanent residence, with the endosymbiont growing and dividing inside the host. Over time, endosymbionts and hosts became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved machinery to produce proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

    Evidence for the Endosymbiont Hypothesis

    The morphological similarity between organelles and bacteria serves as one line of evidence for the endosymbiont hypothesis. For example, mitochondria are about the same size and shape as a typical bacterium, and they have a double membrane structure like gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also seems to support the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. Additionally, biologists also take the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

    The presence of the unique lipid cardiolipin in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. Cardiolipin is an important lipid component of bacterial inner membranes. Yet, it is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.

    But, as compelling as these observations may be, for many evolutionary biologists phylogenetic analysis provides the most convincing evidence for the endosymbiont hypothesis. Evolutionary trees built from the DNA sequences of mitochondria, bacteria, and archaea place these organelles among a group of microbes called alphaproteobacteria. And in many (but not all) evolutionary trees, mitochondria cluster with a bacterial group called Rickettsiales. For evolutionary biologists, these results mean that the endosymbionts that eventually became the first mitochondria were alphaproteobacteria. If mitochondria were not evolutionarily derived from alphaproteobacteria, why would the DNA sequences of these organelles group with these bacteria in evolutionary trees?

    But . . . Mitochondria Are Not Alphaproteobacteria

    Even though evolutionary biologists seem certain about the phylogenetic positioning of mitochondria among the alphaproteobacteria, there has been an ongoing dispute about the precise positioning of mitochondria in evolutionary trees, specifically whether or not mitochondria group with Rickettsiales. Looking to bring an end to this dispute, the Uppsala University research team developed a more comprehensive data set to build their evolutionary trees, with the hope that they could more precisely locate mitochondria among the alphaproteobacteria. The researchers point out that the alphaproteobacterial genomes used to construct evolutionary trees stem from microbes found in clinical and agricultural settings, a small sampling of the alphaproteobacteria found in nature. Researchers knew this was a limitation, but, up to this point, this was the only DNA sequence data available to them.

    To avoid the bias that arises from this limited data set, the researchers screened databases of DNA sequences collected from the Pacific and Atlantic Oceans for undiscovered alphaproteobacteria. They uncovered twelve new groups of alphaproteobacteria. In turn, they included these new genome sequences along with DNA sequences from previously known alphaproteobacterial genomes to build a new set of evolutionary trees. To their surprise, their analysis indicates that mitochondria are not alphaproteobacteria.

    Instead, it looks like mitochondria belong to a side branch that separated from the evolutionary tree before alphaproteobacteria emerged. Adding to their surprise, the research team was unable to identify any bacterial species alive today that would group with mitochondria.

    To put it another way: the latest study indicates that evolutionary biologists have no candidate for the evolutionary ancestor of mitochondria.

    Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

    Evolutionary biologists suggest that there’s compelling evidence for the endosymbiont hypothesis. But when researchers attempt to delineate the details of this presumed evolutionary transition, such as the identity of the original endosymbiont, it becomes readily apparent that biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells.

    As I have written previously, the problems with the endosymbiont hypothesis are not limited to the identity of the evolutionary ancestor of mitochondria. They are far more pervasive, confounding each evolutionary step that life scientists envision to be part of the emergence of complex cells. (For more examples, see the Resources section.)

    When it comes to the endosymbiont hypothesis, things are not what they seem to be. If mitochondria are not alphaproteobacteria, and if evolutionary biologists have no candidate for their evolutionary ancestor, could it be possible that they are the handiwork of the Creator?

    Resources

    Endnotes
    1. Joran Martijn et al., “Deep Mitochondrial Origin Outside the Sampled Alphaproteobacteria,” Nature 557 (May 3, 2018): 101–5, doi:10.1038/s41586-018-0059-5.
  • The Multiplexed Design of Neurons

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 22, 2018

    In 1910, Major General George Owen Squier developed a technique to increase the efficiency of data transmission along telephone lines that is still used today in telecommunications and computer networks. This technique, called multiplexing, allows multiple signals to be combined and transmitted along a single cable, making it possible to share a scarce resource (available phone lines, in Squier’s day).

    Today, there are a number of ways to carry out multiplexing. One of them is called time-division multiplexing. While other forms of multiplexing can be used for analog data, this technique applies only to digital data. Bits from each signal are transmitted along a single channel in brief, recurring time slots, and the timing of each slot allows the receiving end to direct each group of bits to the appropriate receiver.
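    A minimal sketch of time-division multiplexing makes the idea concrete. The bit streams below are invented; the essential move is interleaving one item per source into recurring time slots on a single channel, then de-interleaving at the other end.

    ```python
    # Time-division multiplexing (TDM): several digital streams share one
    # channel by taking turns in fixed, recurring time slots.

    def multiplex(sources):
        """Interleave one item per source per time slot onto one channel."""
        frame = []
        for slot in zip(*sources):   # one round of the rotation per slot
            frame.extend(slot)
        return frame

    def demultiplex(frame, n_sources):
        """Recover each source's stream from its recurring time slot."""
        return [frame[i::n_sources] for i in range(n_sources)]

    a = [1, 0, 1, 1]
    b = [0, 0, 1, 0]
    channel = multiplex([a, b])          # [1, 0, 0, 0, 1, 1, 1, 0]
    recovered = demultiplex(channel, 2)
    print(recovered)                     # [[1, 0, 1, 1], [0, 0, 1, 0]]
    ```

    Because each source owns a predictable slot in the rotation, the receiver needs only the slot timing, not a separate wire, to separate the signals. This is the scarce-resource sharing Squier was after.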

    Researchers from Duke University have discovered that neurons employ time-division multiplexing to transmit multiple electrical signals along a single axon.1 The remarkable similarity between data transmission techniques used by neurons and telecommunication systems and computer networks is provocative. It can also be marshaled to add support to the revitalized Watchmaker argument for God’s existence and role in the origin and design of life.

    A brief primer on neurons will help us better appreciate the work of the Duke research team.

    Neurons

    The primary component of the nervous system (the brain, spinal cord, and the peripheral system of nerves), neurons are electrically excitable cells that rely on electrochemical processes to receive and send electrical signals. By connecting to each other through specialized structures called synapses, neurons form pathways that transmit information throughout the nervous system.

    Neurons consist of the soma or cell body, along with several outward extending projections called dendrites and axons.

    blog__inline--the-multiplexed-design-of-neurons 

    Image credit: Wikipedia

    Dendrites are “tree-like” projections that extend from the soma into the synaptic space. Receptors on the surface of dendrites bind neurotransmitters deposited by adjacent neurons in the synapse. These binding events trigger an electrical signal that travels along the length of the dendrites to the soma. Axons, in contrast, conduct electrical impulses away from the soma toward the synapse, where this signal triggers the release of neurotransmitters into the extracellular medium, initiating electrical activity in the dendrites of adjacent neurons.

    Sensory Neurons

    In the world around us, many things happen at the same time. And we need to be aware of all of these events. Sensory neurons react to stimuli, communicating information about the environment to our brains. Many different types of sensory neurons exist, making possible our sense of sight, smell, taste, hearing, touch, and temperature. These sensory neurons have to be broadly tuned and may have to respond to more than one environmental stimulus at the same time. An example of this scenario would be carrying on a conversation with a friend at an outdoor café while the sounds of the city surround us.

    The Duke University researchers wanted to understand the mechanism neurons employ when they transmit information about two or more environmental stimuli at the same time. To accomplish this, the scientists trained two macaques (monkeys) to look in the direction of two distinct sounds produced at two different locations in the room. After achieving this step, the researchers implanted electrodes into the inferior colliculus of the monkeys’ brains and used these electrodes to record the activity of single neurons as the monkeys responded to auditory stimuli. The researchers discovered that each sound produced a unique firing rate along single neurons and that when the two sounds were presented at the same time, the neuron transmitting the electrical signals alternated back and forth between the two firing rates. In other words, the neurons employed time-division multiplexing to transmit the two signals.
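    The switching behavior the researchers observed can be caricatured in a few lines of code. The firing rates and window counts below are invented, and real spike trains are noisy; the sketch only shows how alternating between two rates lets a single neuron carry two signals that a downstream reader can separate.

    ```python
    # Simplified sketch of rate-switching: in alternating time windows the
    # neuron fires at the rate tied to sound A, then the rate tied to
    # sound B. RATE_A and RATE_B are hypothetical values.

    RATE_A = 20   # spikes per window for stimulus A (invented)
    RATE_B = 8    # spikes per window for stimulus B (invented)

    # With both stimuli present, firing alternates between the two rates
    windows = [RATE_A if i % 2 == 0 else RATE_B for i in range(10)]

    # A downstream "decoder" recovers both signals by splitting windows,
    # just as in time-division multiplexing
    decoded_a = windows[0::2]
    decoded_b = windows[1::2]
    print(set(decoded_a), set(decoded_b))   # {20} {8}
    ```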

    Neuron Multiplexing and the Case for Creation

    The capacity of neurons to multiplex signals generated by environmental stimuli exemplifies the elegance and sophistication of biological designs. And it is discoveries such as these that compel me to believe that life must stem from the work of a Creator.

    But the case for a Creator extends beyond the intuition of design. Discoveries like this one breathe new life into the Watchmaker argument.

    British natural theologian William Paley (1743–1805) advanced this argument by pointing out that the characteristics of a watch—with the complex interaction of its precision parts for the purpose of telling time—implied the work of an intelligent designer. Paley asserted by analogy that just as a watch requires a watchmaker, so too, does life require a Creator, since organisms display a wide range of features characterized by the precise interplay of complex parts for specific purposes.

    Over the centuries, skeptics have maligned this argument by claiming that biological systems only bear a superficial similarity to human designs. That is, the analogy between human designs and biological systems is weak and, therefore, undermines the conclusion that a Divine Watchmaker exists. But, as I discuss in The Cell’s Design, the discovery of molecular motors, biochemical watches, and DNA computers—biochemical complexes with machine-like characteristics—energizes the argument. These systems are identical to the highly sophisticated machines and devices we build as human designers. In fact, these biochemical systems have been directly incorporated into nanotechnologies. And, we recognize that motors and computers, not to mention watches, come from minds. So, why wouldn’t we conclude that these biochemical systems come from a mind, as well?

    Analogies between human machines and biological systems are not confined to biochemical systems. We see them at the biological level as well, as the latest work by the research team from Duke University illustrates.

    It is fascinating to me that as we learn more about living systems, whether at the molecular scale, the cellular level, or the systems stage, we discover more and more instances in which biological systems bear eerie similarities to human designs. This learning strengthens the Watchmaker argument and the case for a Creator.

    Resources

    Endnotes
    1. Valeria C. Caruso et al., “Single Neurons May Encode Simultaneous Stimuli by Switching between Activity Patterns,” Nature Communications 9 (2018): 2715, doi:10.1038/s41467-018-05121-8.
  • Design Principles Explain Neuron Anatomy

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 15, 2018

    It’s one of the classic episodes of I Love Lucy. Originally aired on September 15, 1952, the episode entitled Job Switching finds Lucy and Ethel working at a candy factory. They have been assigned to an assembly line, where they are supposed to pick up pieces of candy from a moving conveyor belt, wrap them, and place the candy back on the assembly line. But the conveyor belt moves too fast for Lucy and Ethel to keep up. Eventually, they both start stuffing pieces of candy into their mouths, under their hats, and in their blouses, as fast as they can as pieces of candy on the assembly line quickly move beyond their reach—a scene of comedic brilliance.

    This chaotic (albeit hilarious) scene is a good analogy for how neurons would transmit electrical signals throughout the nervous system if not for the clever design of the axons that project from the nerve cell’s soma, or cell body.

    The principles that undergird the design of axons were recently discovered by a team of bioengineers at the University of California, San Diego (UCSD).1 Insights such as this highlight the elegant designs that characterize biological systems—designs worthy to be called the Creator’s handiwork—no joke.

    Neurons

    The primary component of the nervous system (the brain, spinal cord, and the peripheral system of nerves), neurons are electrically excitable cells, thanks to electrochemical processes that take place across their cell membranes. These electrochemical activities allow the cells to receive and send electrical signals. By connecting to each other through specialized structures called synapses, neurons form pathways that transmit information throughout the nervous system. Neurologists refer to these pathways as neural circuits.

    The heart of a neuron is the soma or cell body. This portion of the cell harbors the nucleus. Two sets of projections emanate from the soma: dendrites and axons. Dendrites are “tree-like” projections that extend from the soma into the synaptic space. Receptors on the surface of dendrites bind neurotransmitters deposited by adjacent neurons in the synapse. These binding events trigger an electrical signal that travels along the length of the dendrites to the soma. On the other hand, axons conduct electrical impulses away from the soma toward the synapse where this signal triggers the release of neurotransmitters into the extracellular medium, initiating electrical activity in the dendrites of adjacent neurons. Many dendrites feed the soma, but the soma gives rise to only a single axon, though the axon can branch extensively for some types of nerve cells. Axons vary significantly in terms of their diameter and length. Their diameter ranges from 1 to 20 microns. Axons can be quite long, up to a meter in length.

    blog__inline--design-principles-explain-neuron-anatomy

    Image: A Neuron. Image source: Wikipedia

    The electrical excitability of neurons stems from the charge separation across its cell or plasma membrane that arises due to concentration differences in positively charged sodium, potassium, and calcium ions between the cell’s interior and exterior surroundings. This charge difference sets up a voltage across the membrane that is maintained by the activity of proteins embedded within the membranes called ion pumps. This voltage is called the resting potential. When the neuron binds neurotransmitters, this event triggers membrane-bound proteins called ion channels to open up, allowing ions to flow across the membrane. This causes a localized change in the membrane voltage that propagates along the length of the dendrite or axon. This propagating voltage change is called an action potential. When the action potential reaches the end of the axon, it triggers the release of neurotransmitters into the synaptic space.

    Why Are Neurons the Way They Are?

    The UCSD researchers wanted to understand the principles that undergird the neuron design, specifically why the length and diameter of axons vary so much. Previous studies indicate that axons aren’t structured to minimize the use of cellular material—otherwise they wouldn’t be so long and convoluted. Nor are they structured for speed, because axons don’t propagate electrical signals as fast as they could, theoretically speaking.

    Even though the UCSD bioengineers adhere to the evolutionary paradigm, they were convinced that design principles must exist that explain the anatomy and physiology of neurons. From my perspective, their conviction is uncharacteristic of many life scientists because of the nature of evolutionary mechanisms (unguided, historically contingent processes that co-opt and cobble together existing designs to create new biological systems). Based on these mechanisms, there need not be any rationale for why things are the way they are. In fact, many evolutionary biologists view most biological systems as flawed, imperfect systems that are little more than kludge jobs.

    But their conviction paid off. They discovered an elegant rationale that explains the variation in axon lengths.

    Refraction Ratio

    The UCSD investigators reasoned that the cellular architecture of axons may reflect a trade-off between (1) the speed of signal transduction along the axon, and (2) the time it takes the axon to reset the resting potential after the action potential propagates along the length of the axon and to ready the cell for the next round of neurotransmitter release.

    To test this idea, the research team defined a quantity they dubbed the refraction ratio: the ratio of the refractory period of a neuron to the time it takes the electrical signal to travel the length of the axon. The researchers calculated the refraction ratio for 12,000 axon branches of rat basket cells (a special type of neuron with heavily branched axons), using data from the NeuroMorpho database. They determined that the average value of the refraction ratio was 0.92, close to the ideal value of 1.0, which reflects optimal efficiency. In other words, the refraction ratio appears to be nearly optimal.
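    A toy calculation shows how axon geometry tunes this ratio. All numbers below are invented for illustration; the actual study drew its refractory periods and axon geometries from the 12,000 basket cell branches in NeuroMorpho.

    ```python
    # Refraction ratio = refractory period / signal travel time along the
    # axon. A ratio near 1.0 means the signal arrives just as the neuron
    # finishes resetting, so no information piles up or is lost.

    def refraction_ratio(refractory_ms, axon_length_mm, conduction_mm_per_ms):
        travel_time_ms = axon_length_mm / conduction_mm_per_ms
        return refractory_ms / travel_time_ms

    # A short axon delivers the signal well before the neuron has reset...
    too_short = refraction_ratio(refractory_ms=2.0, axon_length_mm=1.0,
                                 conduction_mm_per_ms=1.0)   # 2.0

    # ...but lengthening the axon matches travel time to the reset time
    tuned = refraction_ratio(refractory_ms=2.0, axon_length_mm=2.0,
                             conduction_mm_per_ms=1.0)       # 1.0
    print(too_short, tuned)
    ```

    Since the refractory period is largely fixed, adjusting length (and diameter, which affects conduction speed) is the knob available for pushing the ratio toward 1.0.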

    If not for this optimization, then signal transmission along axons would suffer the same fate as the pieces of candy on the assembly line manned by Lucy and Ethel. Things would become a jumbled mess along the length of the axons and at the synaptic terminus. And, if this happened, the information transmitted by the neurons would be lost.

    The researchers concluded that the axon diameter—and, more importantly, its length—are varied to ensure that the refraction ratio remains as close to 1.0 as possible. This design principle explains why the shape, length, and width of axons vary so much. The reset time (refractory period) cannot be substantially altered. But the axon geometry can be altered, and this variation controls the transmission time of the electrical signal along the axon. To put it another way, axon geometry is analogous to slowing down or speeding up the conveyor belt to ensure that the candy factory workers can wrap as many pieces of candy as possible, without having to eat any or tuck them under their hats.

    The Importance of Axon Geometry

    The researchers from UCSD think that the design principles they have uncovered may be helpful in understanding some neurological disorders. They reason that if a disease leads to changes in neuronal anatomy, the axon geometry may no longer be optimized (causing the refraction ratio to deviate from its ideal value). This deviation will lead to loss of information when nerve cells transmit electrical signals through neural circuits, potentially contributing to the etiology of neurological diseases.

    This research team also thinks that their insights might have use in computer technology. Understanding the importance of refraction ratio should benefit the design of machine-learning systems based on brain-like neural networks. At this time, the design of machine-learning systems doesn’t account for the time it takes for signals to reach neural network nodes. By incorporating this temporal parameter into the design, the researchers believe that they can dramatically improve the power of neural networks. In fact, this research team is now building new types of machine-learning architectures based on these new insights.2

    Axon Geometry and the Case for Creation

    The elegant, optimized, sophisticated, and ingenious design displayed by axon geometry is the type of evidence that convinced me, as an agnostic graduate student studying biochemistry, that life must stem from the work of a Creator. The designs we observe in biology (and biochemistry) are precisely the types of designs that we would expect to see if a Creator was responsible for life’s origin, history, and design.

    On the other hand, evolutionary mechanisms (based on unguided, directionless processes that rely on co-opting and modifying existing designs to create biological innovation) are expected to yield biological designs that are inherently limited and flawed. For many life scientists, the varying length and meandering, convoluted paths taken by axons serve as a reminder that evolution produces imperfect designs, just good enough for survival, but nothing more.

    And, in spite of this impoverished view of biology, the UCSD bioengineers were convinced that there must be a design principle that explained the variable length of axons. And herein lies the dilemma faced by many life scientists. The paradigm they embrace demands that they view biological systems as flawed and imperfect. Yet, biological systems appear to be designed for a purpose. And, hence, biologists can’t help using design language when they describe the structure and function of these systems. Nor can they keep themselves from seeking design principles when they study the architecture and operation of these systems. In other words, many life scientists operate as if life was the product of a Creator’s handiwork, though they might vehemently deny God’s influence in shaping biology—and even go as far as denying God’s existence. In this particular case, the commitment these researchers had to a de facto design paradigm paid off handsomely, both for them and for scientific advance.

    The Converse Watchmaker Argument

    Along these lines, it is provocative that the insights the researchers gleaned regarding axon geometry and the refraction ratio may well translate into improved designs for neural networks and machine-learning systems. The idea that biological designs can inspire engineering and technology advances makes possible a new argument for God’s existence—one I have named the converse Watchmaker argument.

    The argument goes something like this: if biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    At some level, I find the converse Watchmaker argument more compelling than the classical Watchmaker analogy. Again, it is remarkable to me that biological designs can inspire engineering efforts.

    It is even more astounding to think that engineers would turn to biological designs to inspire their work if biological systems were truly generated by an unguided, historically contingent process, as evolutionary biologists claim.

    Using biological designs to guide engineering efforts seems to be fundamentally incompatible with an evolutionary explanation for life’s origin and history. To think otherwise is only possible after taking a few swigs of Vitameatavegamin mix.

    Resources

    Endnotes
    1. Francesca Puppo, Vivek George, and Gabriel A. Silva, “An Optimized Structure-Function Design Principle Underlies Efficient Signaling Dynamics in Neurons,” Scientific Reports 8 (2018): 10460, doi:10.1038/s41598-018-28527-2.
    2. Katherine Connor, “Why Are Neuron Axons Long and Spindly? Study Shows They’re Optimizing Signaling Efficiency,” UC San Diego News Center, July 11, 2018, https://ucsdnews.ucsd.edu/pressrelease/why_are_neuron_axons_long_and_spindly.
  • Evolution’s Flawed Approach to Science

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 08, 2018

    One of the things I find most troubling about the evolutionary paradigm is the view it fosters about the nature of biological systems—including human beings.

    Evolution’s mechanisms, it is said, generate biological innovations by co-opting existing designs and cobbling them together to create new ones. As a result, many people in the scientific community regard biological systems as fundamentally flawed.

    As biologist Ken Miller explains in an article for Technology Review:

    “Evolution . . . does not produce perfection. The fact that every intermediate stage in the development of an organ must confer a selective advantage means that the simplest and most elegant design for an organ cannot always be produced by evolution. In fact, the hallmark of evolution is the modification of pre-existing structures. An evolved organism, in short, should show the tell-tale signs of this modification.”1

    So, instead of regarding humans as “fearfully and wonderfully made” (as Scripture teaches), the evolutionary paradigm denigrates human beings, as a logical entailment of its mechanisms. It renders human beings as nothing more than creatures that have been cobbled together by evolutionary mechanisms.

    Adding to this concern is the impact that the evolutionary paradigm has on scientific advance. Because many in the scientific community view biological systems as fundamentally flawed, they are predisposed to conclude—oftentimes, prematurely—that biological systems lack function or purpose when initial investigations into these systems fail to uncover any obvious rationale for why these systems are the way they are. And, once these investigators conclude that a biological system is flawed, the motivation to continue studying the system dissipates. Why try to understand a flawed design? Why focus attention on biological systems that lack function? Why invest research dollars studying systems that serve no purpose?

    I would contend that viewing biological systems as the Creator’s handiwork provides a superior framework for promoting scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. If biological systems have been created, then there must be good reasons why these systems are structured and function the way they do. And this expectation drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

    Recent history validates the creation model approach. During the course of the last couple of decades, the scientific community has made discovery after discovery demonstrating (1) function for biological systems long thought to be useless evolutionary vestiges, or (2) an ingenious rationale for the architecture and operation of systems long regarded as flawed designs. (For examples, see the articles listed in the Resources section.)

    These discoveries were made not because of the evolutionary paradigm but in spite of it.

    So often, creationists and intelligent design proponents are accused of standing in the way of scientific advance. Skeptics of creation claim that if we conclude that God created biological systems, then science grinds to a halt. If God made it, then why continue to investigate the system in question?

    But, I would assert that the opposite is true. The evolutionary paradigm stultifies science by viewing biological systems as flawed and vestigial. Yet, for the biological systems discussed in the articles listed in the Resources section, the view spawned by the evolutionary paradigm delayed important advances that could have been leveraged for biomedical purposes sooner, alleviating a lot of pain and suffering.

    Because a creation model perspective regards designs in nature as part of God’s handiwork, it provides the motivation to keep pressing forward, seeking a rationale for systems that seemingly lack purpose. In the handful of instances in which the scientific community has adopted this mindset, it has been rewarded, paving the way for new scientific insight that leads to biomedical breakthroughs.

    Resources

    Endnotes
    1. Kenneth R. Miller, “Life’s Grand Design,” Technology Review 97 (February/March 1994): 24–32.
  • “Silenced” B Cells Loudly Proclaim the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 01, 2018

    When I was an undergraduate student studying chemistry and biology, I hated the course work I did in immunology. The immune system is fascinating, to be certain. And, as a student, I marveled at how our body defends itself from invading microorganisms. But, I hated trying to keep track of the bewildering number of cells that make up the immune system.

    But my efforts to learn about these cells have finally paid off. They allow me to appreciate the recently discovered insights into the role “silenced” B cells play in the immune system. Not only do these insights have important biomedical implications, but, in my view, they also add to the mounting evidence for creation and further validate a creation model approach to biology.

    First discovered thirty years ago, these cells were initially deemed nonfunctional junk produced by a flawed immune system. And this view has persisted for three decades. Immunologists viewed silenced B cells as harmful. Presumably, these cells impair immune system function by cluttering up immune tissues. Or worse, they considered these cells to be potentially deadly, contributing to autoimmune disorders. Yet, immunologists are changing their view of silenced B cells, thanks to the efforts of researchers from Australia.1

    A Brief (and Incomplete) Primer on Immunology

    To understand the newly discovered role silenced B cells play in the immune system, a brief primer on immunology is in order.

    It goes without saying that the immune system’s job is to protect the body from pathogens. To do this, it must recognize pathogens as foreign materials. To put it another way, it must distinguish self from nonself. (Autoimmune disorders result when the immune system mistakes the body’s own tissues as foreign materials, and then attacks itself.)

    An incredibly complex biological system, the immune system contains one component called the humoral immune system. This part of the immune system relies on proteins, such as antibodies, circulating in extracellular fluids to mediate the body’s immune response.

    Plasma cells secrete antibodies into the circulatory system. Antibodies then bind to the invading pathogen, decorating its surface. The antibodies serve as a beacon that attracts certain immune cells, such as macrophages and killer cells, that will engulf the pathogen, clearing it from the body.

    Plasma cells originate in bone marrow as B cells (also known as B lymphocytes). B cells develop from hematopoietic stem cells. As they develop, genes in the developing B cell genome that encode antibodies (and receptor proteins) undergo rearrangements (just like shuffling a deck of cards). These rearrangements generate genes that encode an ensemble of receptor proteins that reside on the B cell surface, with each receptor protein (and corresponding antibody) recognizing and binding a specific pathogen. Collectively, these cell surface receptors (and antibodies) can detect a large and varied number of foreign agents.
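    The diversity-generating power of this gene shuffling can be sketched with a toy combinatorial example. The segment names and counts below are invented (real antibody loci contain many more segments than this), but the multiplication principle is the same: a handful of segments yields far more distinct receptor genes than the number of segments themselves.

    ```python
    # Toy sketch of combinatorial gene shuffling: picking one segment from
    # each pool, in the manner of V(D)J-style recombination, produces a
    # distinct receptor gene per combination. Segment names are invented.
    import itertools

    V = ["V1", "V2", "V3"]
    D = ["D1", "D2"]
    J = ["J1", "J2"]

    receptors = ["-".join(combo) for combo in itertools.product(V, D, J)]
    print(len(receptors))   # 12 distinct receptors from only 7 segments
    ```

    Scaled up to the hundreds of segments in real genomes (plus junctional variation), this shuffling explains how a finite genome encodes receptors against an enormous range of pathogens.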

    blog__inline--silenced-b-cells-loudly-proclaim-case-for-creator 

    Image credit: Shutterstock

    After developing in the bone marrow, B cells migrate to either the spleen or lymph nodes. Here, the B cells are exposed to the flow of lymph, the fluid that moves through the lymphatic circulatory system. If pathogens have invaded the body, they will encounter B cells in lymph tissue. If a B cell has a receptor that recognizes that particular pathogen, it will bind it. This binding event will trigger the transformation of the B cell. Once activated by the binding event, the B cell migrates into a region of the lymph tissue called the germinal center. Here the B cells undergo clonal expansion, rapidly proliferating into plasma cells and memory B cells. The plasma cells produce antibodies that help identify the pathogen as a foreign invader. The memory B cells hang around in the immune tissue so the immune system can rapidly respond to that pathogen if it invades the body in the future.

    A Flaw in the Immune System?

    During the B cell maturation process in the bone marrow, about 50 percent of the nascent B cells produce cell surface receptors that bind to materials in the body, instead of pathogens. That is, these B cells can’t discriminate self from nonself. This outcome is a by-product of the random-shuffling mechanism that generates protein receptor diversity. The random shuffling of the genes is just as likely to produce receptors that bind to materials in the body as it is to produce receptors that bind pathogens. But when this misidentification happens, an elaborate quality control system kicks in, either eliminating the faulty B cells or reworking them so that they can be a functioning part of the immune system. This reworking process involves additional gene shuffling with the hope of generating cell receptors that recognize foreign materials.

    However, a few of the faulty B cells escape destruction and avoid having their genes reshuffled. In this case, the immune system silences these cells (called anergic cells), but they still hang around in immune tissue, clogging things up. It seemingly gets worse: if these cells become activated they can cause an autoimmune reaction—just the type of sloppy design evolutionary mechanisms would produce. Or is it?

    A Critical Role for Silenced B Cells

    Recent work by the research team from Australia provides a rationale for the persistence of silenced anergic B cells in the immune system. These cells play a role in combating pathogens such as HIV and campylobacter, which cloak themselves from the immune system by masquerading as part of our body. While these pathogens escape detection by most of the components of our immune system, they can be detected by silenced B cells with receptors that recognize self as nonself.

    The silenced B cells are redeemed by the immune system in the germinal center through a process called receptor revision. Here the genes that encode the receptors experience hypermutation, altering their receptors to the extent that they now can recognize foreign materials. But the capacity of the receptors to recognize self serves the immune system well when infectious agents such as HIV or campylobacter invade.

    The researchers who made the discovery think that this insight might one day help pathologists do a better job treating autoimmune disorders. They also hope it might lead to a vaccine for HIV.

    A Remarkable Turnaround

    In a piece for Science Alert, journalist Peter Dockrill summarizes the significance of the discovery: “It’s a remarkable turnaround for a class of immune cells long mistaken for dangerous junk—and one which shows there’s still so much we have to learn about what the immune system can do for us, and how its less than perfectly obvious mechanisms might be leveraged to do us good.”2

    The surprise expressed by Dockrill reflects the influence of the evolutionary paradigm and the view that biological systems must be imperfect because of the nature of evolutionary mechanisms. And yet this discovery (along with others discussed in the articles listed in the Resources section) raises questions for me about the validity of the evolutionary paradigm. And it raises questions about the usefulness of this paradigm, as well. Viewing silenced B cells as the flawed outcome of evolutionary processes has stood in the way of discovering their functional importance, delaying work that “might be leveraged to do us good.”

    The more we learn about biological systems, the more evident it becomes: Instead of being flawed, biological designs display an ingenuity and a deep rationale for the way they are—as would be expected if they were the handiwork of a Creator.

    Resources

    Endnotes
    1. Deborah L. Burnett et al., “Germinal Center Antibody Maturation Trajectories Are Determined by Rapid Self/Foreign Discrimination,” Science 360 (April 13, 2018): 223–26, doi:10.1126/science.aao3859; Ervin E. Kara and Michel C. Nussenzweig, “Redemption for Self-Reactive Antibodies,” Science 360 (April 13, 2018): 152–53, doi:10.1126/science.aat5758.
    2. Peter Dockrill, “Immune Cells We Thought Were ‘Useless’ Are Actually a Weapon Against Infections Like HIV,” Science Alert (April 16, 2018), https://www.sciencealert.com/new-discovery-bad-immune-cells-actually-secret-weapon-against-infection-b-silenced-redemption-lymphocytes.
  • Do Plastic-Eating Bacteria Dump the Case for Creation?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 18, 2018

    At the risk of stating the obvious: Plastics are an indispensable part of our modern world. Yet, plastic materials cause untold problems for the environment. One of the properties that makes plastics so useful also makes them harmful. Plastics don’t readily degrade.

    Researchers recently discovered a new strain of bacteria that has evolved the ability to degrade plastics. These microbes may help solve some of the environmental problems caused by plastics, but their evolution seemingly causes new problems for people who hold the view that a Creator is responsible for life’s origin and design. But is this really the case? To find out, we need to break down this discovery.

    One plastic in widespread use today is polyethylene terephthalate (PET). This polymer was patented in the 1940s and became widely used in the 1970s. Most people are familiar with PET because it is used to make drinking bottles.

    This material is produced by reacting ethylene glycol with terephthalic acid (both produced from petroleum). Crystalline in nature, this plastic is a durable material that is difficult to break down, because of the inaccessibility of the ester linkages that form between the terephthalic acid and ethylene glycol subunits that make up the polymer backbone.

    PET can be recycled, thereby mitigating its harmful effects on the environment. A significant portion of PET is mechanically recycled by converting it into fibers used to manufacture carpets.

    In principle, PET could be recycled by chemically breaking the ester linkages holding the polymer together. When the ester linkages are cleaved, ethylene glycol and terephthalic acid are the breakdown products. These recovered starting materials could be reused to make more PET. Unfortunately, chemical recycling of PET is expensive and difficult to carry out because of the inaccessibility of the ester linkages. In fact, it is cheaper to produce PET from petroleum products than from the recycled monomers.

    Can Bacteria Recycle PET?

    An interesting advance took place in 2016 that has important implications for PET recycling. A team of Japanese researchers discovered a strain of the bacterium Ideonella sakaiensis that could break down PET into terephthalic acid and ethylene glycol.1 This strain was discovered by screening wastewater, soil, sediments, and sludge from a PET recycling facility. The microbe produces two enzymes, dubbed PETase and MHETase, that work in tandem to convert PET into its constituent monomers.

    Evolution in Action

    Researchers think that this microbe acquired DNA from the environment or another microbe via horizontal gene transfer. Presumably, this DNA fragment harbored the genes for cutinase, an enzyme that breaks down ester linkages. Once the I. sakaiensis strain picked up the DNA and incorporated it into its genome, the cutinase gene must have evolved so that it now encodes the information to produce two enzymes with the capacity to break down PET. Plus, this new capability must have evolved rather quickly, over the span of a few decades.

    PETase Structure and Evolution

    In an attempt to understand how PETase and MHETase evolved and how these two enzymes might be engineered for recycling and bioremediation purposes, a team of investigators from the University of Plymouth determined the structure of PETase with atomic-level detail.2 They learned that this enzyme has the structural components characteristic of a family of enzymes called alpha/beta hydrolases. Based on the amino acid sequence of PETase, the researchers concluded that its closest match among known enzymes is a cutinase produced by the bacterium Thermobifida fusca. One of the most significant differences between these two enzymes is found at their active sites. (The active site is the location on the enzyme surface that binds the compounds the enzyme chemically alters.) The active site of PETase is broader than that of the T. fusca cutinase, allowing it to accommodate PET polymers.

    As researchers sought to understand how PETase evolved from cutinase, they engineered amino acid changes in PETase, hoping to revert it to a cutinase. To their surprise, the resulting enzyme was even more effective at degrading PET than the PETase found in nature.

    This insight does not help explain the evolutionary origin of PETase, but the serendipitous discovery does point the way to using engineered PETases for recycling and bioremediation. One could envision spraying this enzyme (or the bacterium I. sakaiensis) onto a landfill or onto patches of plastics floating in the Earth’s oceans. Alternatively, the enzyme could be used at recycling facilities to regenerate the PET monomers.

    As a Christian, I find this discovery exciting. Advances such as these will help us do a better job as planetary caretakers and as stewards of God’s creation, in accord with the mandate given to us in Genesis 1.

    But, this discovery does raise a question: Does the evolution of a PET-eating bacterium prove that evolution is true? Does this discovery undermine the case for creation? After all, it is evolution happening right before our eyes.

    Is Evolution in Action Evidence for Evolution?

    To answer this question, we need to recognize that the term “evolution” can take on a variety of meanings. Each one reflects a different type of biological transformation (or presumed transformation).

    It is true that organisms can change as their environment changes. This occurs through mutations to the genetic material. In rare circumstances, these mutations can create new biochemical and biological traits, such as the ones that produced the strain of I. sakaiensis that can degrade PET. If these new traits help the organism survive, it will reproduce more effectively than organisms lacking the trait. Over time, this new trait will take hold in the population, causing a transformation of the species.
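    The dynamic described above (a rare beneficial mutation either dying out or spreading until it takes hold in a population) can be illustrated with a toy simulation. This is only a minimal sketch, not a model of I. sakaiensis; the population size, fitness advantage, and random seed below are arbitrary assumptions.

```python
import random

def simulate(pop_size=1000, advantage=0.1, seed=42):
    """Toy Wright-Fisher-style simulation: one mutant with a fitness
    advantage either fixes in the population or is lost to drift."""
    random.seed(seed)
    freq = 1 / pop_size              # a single mutant individual to start
    generations = 0
    while 0 < freq < 1:
        # selection: weight mutants by their relative fitness
        weighted = freq * (1 + advantage)
        p = weighted / (weighted + (1 - freq))
        # drift: sample the next generation (binomial sampling)
        mutants = sum(random.random() < p for _ in range(pop_size))
        freq = mutants / pop_size
        generations += 1
    return freq, generations
```

    Most runs starting from a single mutant lose the allele to drift; runs that escape drift carry it to fixation, which hints at why the enormous population sizes of microbes make such adaptations observable within decades.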

    And this is precisely what happened with I. sakaiensis. However, microbial evolution is not controversial. Most creationists and intelligent design proponents acknowledge evolution at this scale. In a sense, it is not surprising that single-celled microbes can evolve, given their extremely large population sizes and capacity to take up large pieces of DNA from their surroundings and incorporate them into their genomes.

    Yet, I. sakaiensis is still I. sakaiensis. In fact, the similarity between PETase and cutinases indicates that only a few amino acid changes can explain the evolutionary origin of new enzymes. Along these lines, it is important to note that both cutinase and PETase cleave ester linkages. The difference between these two enzymes involves subtle structural differences triggered by altering a few amino acids. In other words, the evolution of a PET-degrading bacterium is easy to accomplish through a form of biochemical microevolution.

    But just because microbes can undergo limited evolution at a biochemical level does not mean that evolutionary mechanisms can account for the origin of biochemical systems and the origin of life. That is an unwarranted leap. This study is evidence for microbial evolution, nothing more.

    Though this advance can help us in our planetary stewardship role, this study does not provide the type of evidence needed to explain the origin of biochemistry and, hence, the origin of life through evolutionary means. Nor does it provide the type of evidence needed to explain the evolutionary origin of life’s major groups. Evolutionary biologists must develop appropriate evidence for these putative transformations, and so far, they haven’t.

    Evidence of microbial evolution in action is not evidence for the evolutionary paradigm.

    Resources:

    Endnotes
    1. Shosuke Yoshida et al., “A Bacterium that Degrades and Assimilates Poly(ethylene terephthalate),” Science 351 (March 11, 2016): 1196–99, doi:10.1126/science.aad6359.
    2. Harry P. Austin et al., “Characterization and Engineering of a Plastic-Degrading Aromatic Polyesterase,” Proceedings of the National Academy of Sciences, USA (April 17, 2018): preprint, doi:10.1073/pnas.1718804115.
  • Sophisticated Cave Art Evinces the Image of God

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 23, 2018

    It’s a new trend in art. Museums and galleries all over the world are exploring the use of sounds, smells, and lighting to enhance the viewer’s experience as they interact with pieces of art. The Tate Museum in London is one institution pioneering this innovative approach to experiencing artwork. For example, on display recently at Tate’s Sensorium was Irish artist Francis Bacon’s Figure in a Landscape, a piece that depicts a gray human figure on a bench. Visitors to the Sensorium put on headphones while they view this painting, and they hear sounds of a busy city. Added to the visual and auditory experiences are the bitter burnt smell of chocolate and the sweet aroma of oranges that engulf the viewer. This multisensory experience is meant to depict a lonely, brooding figure lost in the never-ending activities of a city, with the contrasting aromas simultaneously communicating the harshness and warmth of life in an urban setting.

    It goes without saying that designing multisensory experiences like the ones on display at the Sensorium requires expertise in sound, taste, and lighting. This expertise makes recent discoveries on ancient cave and rock art found throughout the world all the more remarkable. As it turns out, the cave and rock art found in Europe, Asia, and Africa are multisensory displays.1 The sophistication of this early art highlights the ingenuity of the first artists—modern humans, who were people just like us.

    Cave Art

    Though many people have the perception that cave and rock art is crude and simplistic, it is, in fact, remarkably sophisticated. For example, the Chauvet-Pont-d’Arc Cave in southern France houses cave art that dates (using carbon-14 measurements) to two periods: 28,000 to 31,000 years ago and 33,500 to 37,000 years ago. The cave contains realistic depictions of hundreds of animals, including herbivores such as horses, cattle, and mammoths, as well as rhinos and carnivores such as cave lions, panthers, bears, and hyenas. The site also contains hand stencils and geometric shapes, such as lines and dots.

    The human artists of Chauvet Cave painted the animal figures on areas of the cave walls that they had polished to make them smooth and lighter in color. They also made incisions and etchings around the outlines of the painted figures to give the art a three-dimensional quality and a sense of movement.

    Multisensory Cave Art

    One of the most intriguing aspects of cave art is its location in caves. Oftentimes, the animal figures are depicted deep within the cave’s interior, at unusual locations for the placement of cave paintings.

    Recently, archaeologists have offered an explanation for the location of the cave art. It appears that the artists made use of the caves’ acoustical properties to create a multisensory experience. In other words, the art is placed where the local acoustics reinforce the imagery. For example, hoofed animals are often painted in areas of the caves where the echoes and reverberations make percussive sounds like those of thundering hooves when these animals run. Carnivores, by contrast, are often depicted in areas of the caves that are unusually quiet.

    San Rock Art

    Recently, researchers have discovered that the rock art produced by the San (indigenous hunter-gatherer people from Southern Africa), the oldest of which dates to about 70,000 years ago, also provides viewers with a multisensory experience.2 Archaeologists believe that the art depicted on the rocks reflects the existence of a spirit world beneath the surface. These rock paintings are often created in areas where echoes can be heard, presumably reflecting the activities of the spirit world.

    Who Made the Cave and Rock Art?

    Clearly, the first human artists were sophisticated. But, when did this sophisticated behavior emerge? The discovery of art in Europe and Asia indicates that the first humans who made their way out of Africa as they migrated around the world carried with them the capacity for art. To put it another way, the capacity for art did not emerge in humans after they reached Europe, but instead was an intrinsic part of human nature before we began to make our way around the world.

    The discovery of symbolic artifacts as old as 80,000 years in caves in South Africa (artistic expression is a manifestation of the capacity to represent the world with symbols) and the dating of the oldest San rock art to 70,000 years in age add support to this view.

    Linguist Shigeru Miyagawa points out that genetic evidence indicates that the San separated from the rest of humanity around 125,000 years ago. While the San remained in Africa, the group of humans who separated from the San and made their way into Asia and Europe came from a separate branch of humanity. And yet, the art produced by the San displays the same multisensory character as the art found in Europe and Asia. To say it another way, the rock art of the San and the cave art in Europe and Asia display unifying characteristics. These unifying features indicate that the art shares the same point of origin. Given that the data seems to indicate that humanity’s origin is about 150,000 years ago, it appears that the origin of art coincides closely with the time that modern humans appear in the fossil record.3

    Cave and Rock Art Evince the Biblical View of Human Nature

    The sophistication of the earliest art highlights the exceptional nature of the first artists—modern humans, people just like you and me. The capacity to produce art reflects the capacity for symbolism—a quality that appears to be unique to human beings, a quality contributing to our advanced cognitive abilities, and a quality that contributes to our exceptional nature. As a Christian, I view symbolism (and artistic expression) as one of the facets of God’s image. And, as such, I would assert that the latest insights on cave art provide scientific credibility for the biblical view of human nature.

    Resources

    Endnotes
    1. Shigeru Miyagawa, Cora Lesure, and Vitor A. Nóbrega, “Cross-Modality Information Transfer: A Hypothesis about the Relationship among Prehistoric Cave Paintings, Symbolic Thinking, and the Emergence of Language,” Frontiers in Psychology 9 (February 20, 2018): 115, doi:10.3389/fpsyg.2018.00115.
    2. Francis Thackeray, “Eland, Hunters and Concepts of ‘Sympathetic Control’ Expressed in Southern African Rock Art,” Cambridge Archaeological Journal 15 (2005): 27–35, doi:10.1017/S0959774305000028.
    3. Miyagawa et al., “Cross-Modality Information Transfer,” 115.
  • A Genetically Engineered Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 09, 2018

    Since the 1960s, the drug noscapine has been used in many parts of the world as a non-narcotic cough suppressant. Recently, biomedical researchers have learned that noscapine (and chemically modified derivatives of this drug) has potential as a cancer drug. And that is nothing to sneeze at.

    The use of the drug for nearly a half century as a cough suppressant means the safety of noscapine has already been established. In fact, pre-clinical studies indicate that noscapine has fewer side effects than many anti-cancer drugs.

    Unfortunately, the source of noscapine is opium poppies. Even though tens of tons of noscapine are isolated each year from thousands of tons of raw plant material, biochemical engineers question whether the agricultural supply line can meet the extra demand if noscapine finds use as an anti-cancer agent. Estimates indicate that the amounts of noscapine needed for cancer treatments would be about ten times the amount currently produced for its use as a cough suppressant. Complicating matters are the extensive regulations and bureaucratic red tape associated with growing poppy plants and extracting chemical materials from them.

    It takes about 1 year to grow mature poppy plants. And once grown, the process of isolating pure noscapine is time intensive and expensive. This drug has to be separated from narcotics and other chemicals found in the opium extract, and then purified. Because poppy plants are an agricultural product, considerable batch-to-batch variation occurs for noscapine supplies.

    Chemists have developed synthetic routes to make noscapine. But, these chemical routes are too complex and costly to scale up for large scale production of this drug.

    But, researchers from Stanford University believe that they have come up with a solution to the noscapine supply problem. They have genetically engineered brewer’s yeast to produce large quantities of noscapine.1 This work demonstrates the power of synthetic biology to solve some of the world’s most pressing problems. But, the importance of this work extends beyond science and technology. This work has significant theological implications, as well. This work provides empirical proof that intelligent agency is necessary for the large-scale transformation of life forms.

    Genetically Engineered Yeast

    To modify brewer’s yeast to produce noscapine, the Stanford University research team had to: (1) construct a biosynthetic pathway that would convert simple carbon- and nitrogen-containing compounds into noscapine, and then (2) add genes to the yeast’s genome that would produce the enzymes needed to carry out this transformation. Specifically, they added 25 genes from plants, bacteria, and mammals to this microbe’s genome. On top of the gene additions, they also had to modify 6 genes in the yeast’s genome.

    Biosynthetic pathways that yield complex molecules such as noscapine can be rather elaborate. These pathways are built from enzymes, protein machines that bind molecules and convert them into new materials by facilitating chemical reactions. In a biosynthetic pathway, the starting molecule is modified by the first enzyme and then shuttled to the second enzyme in the pathway. This process continues until the original molecule is converted, step-by-step, into the final product.
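    The step-by-step handoff described above can be pictured as function composition, where each enzyme transforms the product of the previous one. The step names below are illustrative placeholders drawn from the article, not real enzyme chemistry.

```python
from functools import reduce

def run_pathway(enzymes, substrate):
    """Model a linear biosynthetic pathway: each 'enzyme' (here, a plain
    function) acts on the product handed off by the previous step."""
    return reduce(lambda product, enzyme: enzyme(product), enzymes, substrate)

# Illustrative stand-ins for the enzymatic steps discussed in the text:
steps = [
    lambda m: m + " -> reticuline",   # glucose-to-reticuline segment
    lambda m: m + " -> scoulerine",   # berberine bridge enzyme step
    lambda m: m + " -> noscapine",    # the 10-gene poppy cluster, collapsed
]

print(run_pathway(steps, "glucose"))
# glucose -> reticuline -> scoulerine -> noscapine
```

    The composition mirrors why pathway engineering is hard: every intermediate must be produced at sufficient levels, because each step depends entirely on the output of the one before it.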

    Designing a biosynthetic route from scratch would be nearly impossible. Fortunately, the team from Stanford took advantage of previous work done by other life scientists who have characterized the metabolic reactions that produce noscapine in opium poppies. These pioneering researchers have identified a cluster of 10 genes that encode enzymes that work collaboratively to convert the compound scoulerine to noscapine.

    The Stanford University researchers used these 10 poppy genes as the basis for the noscapine biosynthetic route they designed. They expanded this pathway with genes encoding the enzymes that convert glucose into reticuline, a compound that the berberine bridge enzyme then converts into scoulerine. They discovered that the conversion of glucose to reticuline is tricky, because one of the intermediary compounds in the pathway is dopamine. Life scientists don’t have a good understanding of how this compound is made in poppies, so the team used rat genes that encode the dopamine-producing enzymes.

    They discovered that when they added all of these genes into the yeast, these modified microbes produced noscapine, but at very low levels. At this point, the research team carried out a series of steps to optimize noscapine production, which included:

    • Genetically altering some of the enzymes in the noscapine biosynthetic pathway to improve their efficiency
    • Manipulating other metabolic pathways (by altering the expression of the genes that encode enzymes in these metabolic routes) to divert the maximum amounts of metabolic intermediates into the newly constructed noscapine pathway
    • Varying the media used to grow the yeast

    These steps led to an 18,000-fold improvement in noscapine production.

    With this accomplishment, the scientific community is one step closer to having a commercially viable source of noscapine.

    Synthetic Biology and the Case for a Creator

    Without question, the engineering of brewer’s yeast to produce noscapine is science at its very best. The level of ingenuity displayed by the research team from Stanford University is something to behold. And it is for this reason that I maintain this accomplishment (along with other work in synthetic biology) provides empirical evidence that a Creator must play a role in the origin, history, and design of life.

    In short, these researchers demonstrated that intelligent agency is required to originate new metabolic capabilities in an organism. This work also illustrates the level of ingenuity required to optimize a metabolic pathway once it is in place.

    Relying on hundreds of years of scientific knowledge, these researchers rationally designed the novel noscapine metabolic pathway. Then, they developed an elaborate experimental strategy to introduce this pathway in yeast. And then, it took highly educated and skilled molecular biologists to go in the lab to carry out the experimental strategy, under highly controlled conditions, using equipment that itself was designed. And, afterwards, the researchers employed rational design strategies to optimize the noscapine production.

    Given the amount of insight, ingenuity, and skill it took to engineer and optimize the metabolic pathway for noscapine in yeast, is it reasonable to think that unguided, undirected, historically contingent evolutionary processes produced life’s metabolic processes?

    Resources:

    Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for a Creator by Fazale Rana (book)

    New Discovery Fuels the Case for Intelligent Design by Fazale Rana (article)

    Fattening Up the Case for Intelligent Design by Fazale Rana (article)

    A Case for Intelligent Design, Part 1 by Fazale Rana (article)

    A Case for Intelligent Design, Part 2 by Fazale Rana (article)

    A Case for Intelligent Design, Part 3 by Fazale Rana (article)

    A Case for Intelligent Design, Part 4 by Fazale Rana (article)

    The Blueprint for an Artificial Cell by Fazale Rana (article)

    Do Self-Replicating Protocells Undermine the Evolutionary Theory? by Fazale Rana (article)

    A Theology for Synthetic Biology, Part 1 by Fazale Rana (article)

    A Theology for Synthetic Biology, Part 2 by Fazale Rana (article)

    Endnotes
    1. Yanran Li et al., “Complete Biosynthesis of Noscapine and Halogenated Alkaloids in Yeast,” Proceedings of the National Academy of Sciences, USA (2018), doi:10.1073/pnas.1721469115.
  • Did Neanderthals Produce Cave Paintings?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 25, 2018

    One time when our kids were little, my wife and I discovered that someone had drawn a picture on one of the walls in our house. Though all of our children professed innocence, it was easy to figure out who the culprit was, because the little artist also wrote the first letter of her name on the wall next to her masterpiece.

    If only archaeologists had it as easy as my wife and me when it comes to determining who made the ancient artwork on the cave walls in Europe. Most anthropologists think that modern humans produced the art. But, a growing minority of scientists think that Neanderthals were the artists, not modern humans. If anthropologists only had some initials to go by.

    In the absence of a smoking gun, archaeologists believe they now have an approach that will help them determine the artists’ identity. Instead of searching for initials, researchers are trying to determine who the artists were indirectly, by dating the cave art. They hope this approach will work because modern humans did not make their way into Europe until around 40,000 years ago, and Neanderthals disappeared around that same time. So, knowing the age of the art would help narrow down the artists’ identity.

    Recently, a team from the UK and Spain applied this new dating method to art found in the caves of Iberia (the southwest corner of Europe). Based on the age of the art, they think that the paintings on the cave walls were produced by Neanderthals, not modern humans.1

    Artistic expression reflects a capacity for symbolism. And many people view symbolism as a quality unique to human beings, contributing to our advanced cognitive abilities and reflecting our exceptional nature. In fact, as a Christian, I see symbolism as a manifestation of the image of God. Yet, if Neanderthals possessed symbolic capabilities, such a quality would undermine human exceptionalism (and with it the biblical view of human nature), rendering human beings nothing more than another hominin.

    Limitations of Dating Cave Art

    Dating cave art is challenging, to say the least. Typically, archaeologists will either: (1) date the remains associated with the cave art and try to establish a correlation, or (2) attempt to directly date the cave paintings using carbon-14 measurements of the pigments and charcoal used to make the art. Both approaches have limitations.

    In 2012, researchers from the UK and Spain employed a new technique to date the art found on the walls in 11 caves located in northwest Spain.2 This dating method measures the age of the calcite deposits beneath the cave paintings and those that formed over the artwork once the paintings had been created. As water flows down cave walls, it deposits calcite. When calcite forms, it contains trace amounts of U-238. This isotope decays into Th-230. Normally, detection of such low quantities of these isotopes would require extremely large samples. The researchers discovered that by using accelerator mass spectrometry they could get by with 10-milligram samples.
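    The decay relationship behind the method can be sketched with a simplified age equation. This sketch assumes a closed system with no initial Th-230 and with U-234 in equilibrium with U-238 (real U-Th dating corrects for U-234 excess), and the half-life used is an approximate literature value.

```python
import math

TH230_HALF_LIFE_YR = 75_690                      # approximate Th-230 half-life
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR    # decay constant (per year)

def uth_age(activity_ratio):
    """Age in years from the (Th-230 / U-238) activity ratio.

    Simplified closed-system form: ratio = 1 - exp(-lambda * t),
    so t = -ln(1 - ratio) / lambda.
    """
    if not 0 <= activity_ratio < 1:
        raise ValueError("activity ratio must lie in [0, 1) for a finite age")
    return -math.log(1 - activity_ratio) / LAMBDA_230
```

    Note the direction of the open-system bias discussed below: leaching U out of the calcite raises the measured Th/U ratio, and a higher ratio maps to an older apparent age.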

    By dating the calcite samples, they produced minimum and maximum ages for the cave paintings. While most of the 50 samples they took dated to around 25,000 years in age (or more recent than that), three were significantly older. They found a claviform-like symbol that dated to 31,000 years in age. They also found hand stencils that were 37,000 years old and, finally, a red disk that dated to 41,000 years in age.

    Most anthropologists believe modern humans made their way into Europe around 40,000 years ago, prompting the researchers to suggest that Neanderthals may have created some of the cave art: “because the 40.8 ky date for the disk is a minimum age, it cannot be ruled out that the earliest paintings were symbolic expressions of Neanderthals, which were present in Cantabrian Spain until at least 42 ka.”3

    Dating the Art from Three Cave Sites in Iberia

    Recently, this research team applied the same U-Th dating method to the art found in three cave sites in Iberia: (1) La Pasiega, which houses paintings of animals, linear signs, claviform signs, and dots; (2) Ardales, which contains about 1,000 paintings of animals, along with dots, discs, lines, geometric shapes, and hand stencils; and (3) Maltravieso, which displays a set of hand stencils and geometric designs.

    The research team took a total of 53 samples from 25 carbonate formations associated with the cave art in these three cave sites. While most of the samples dated to 40,000 years old or less, three measurements produced minimum ages of around 65,000 years in age, including: (1) red scalariform from La Pasiega, (2) red areas from Ardales, and (3) a hand stencil from Maltravieso. On the basis of the three measurements, the team concluded that the art must have been made by Neanderthals because modern humans had not made their way into Iberia at that time. In other words, Neanderthals made art, just like modern humans did.

    Are These Results Valid?

    At first glance, it seems like the research team has a compelling case for Neanderthal art. Yet, careful examination of the U-Th method and the results raise some concerns.

    First, it is not clear if the U-Th method yields reliable results. Recently, a team from France and the US questioned the application of the U-Th method to date cave art.4 Like all radiometric dating methods, the U-Th method only works if the system to be age-dated is closed. In other words, once the calcite deposit forms, the U-Th method will only yield reliable results if none of the U or Th moves in or out of the deposit. Unfortunately, it does not appear as if the calcite films are closed systems. The calcite films form as a result of hydrological activity in the cave. Once a calcite film forms, water will continue to flow over its surface, leaching out U (because U is much more water soluble than Th). This process will make it seem as if the calcite film and, hence, the underlying artwork is much older than it actually is.

    In the face of this criticism, the team from the UK and Spain assert the reliability of their method because, for a few of the calcite deposits, they sampled the outermost surface, the middle of the deposit, and the innermost region. Measurements of these three samples gave ages that matched the expected chronology, with the innermost layer measuring older than the outermost surface. But, as the researchers from France and the US (who challenge the validity of the U-Th method to date cave art) point out, this sampling protocol doesn’t ensure that the calcite is a closed system.

    Additionally, critics from France and the US identified several examples of cave art dated by both carbon-14 methods and U-Th methods, noting that the carbon-14 method consistently gives much younger ages than the U-Th method. This difference is readily explained if the calcite is an open system.

    Secondly, it seems more plausible that the 65,000-year-old dates are outliers. It is important to note that of the 53 samples measured, only three gave age-dates of 65,000 years. The remaining samples gave dates much younger, typically around 40,000 years in age. Given the concerns about the calcite being an open system, should the 65,000-year-old samples be viewed as mere outliers?

    Compounding this concern is the fact that samples taken from the same piece of art give discordant dates, with one of the samples dating to 65,000 years in age and the other two samples dating to be much younger. The team from the UK and Spain argue that the artwork was produced in a patchwork manner. But this explanation does not account for the observation that the artwork appears to be a unified piece.

    What Does Neanderthal Biology Say?

    The archaeological record is not the only evidence available for assessing Neanderthals’ capacity for symbolism (and advanced cognitive abilities). Scientists can also glean insight from Neanderthal biology.

    As I discuss in Who Was Adam?, comparisons of the genomes of Neanderthals and modern humans reveal important differences in a number of genes related to neural development, suggesting that there are cognitive differences between the two species. Additionally, the fossil remains of Neanderthals indicate that their brain development took a different trajectory than ours after birth. As a result, it doesn’t appear as if Neanderthals experienced much of an adolescence (which is the time that significant brain development takes place in modern humans). Finally, the brain structure of Neanderthals indicates that these creatures lacked advanced cognitive capacity and the hand-eye coordination needed to make art.

    Given the concerns about applying the U-Th method to calcite films, and given Neanderthal brain biology, I remain unconvinced that Neanderthals made cave art, let alone had the capacity to do so. So, to me, it appears as if modern humans are, indeed, the guilty party. The entire body of evidence still indicates that they are the ones who painted the walls of caves throughout the world. Though, I doubt either my wife or I will have these early artists scrub down the cave walls as punishment. The cave art is much too precious.

    Resources

    Endnotes
    1. D. L. Hoffmann et al., “U-Th Dating of Carbonate Crusts Reveals Neanderthal Origin of Iberian Cave Art,” Science 359 (February 23, 2018): 912–15, doi:10.1126/science.aap7778.
    2. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art in 11 Caves in Spain,” Science 336 (June 15, 2012): 1409–13, doi:10.1126/science.1219957.
    3. A. W. G. Pike et al., “U-Series Dating of Paleolithic Art.”
    4. Georges Sauvet et al., “Uranium-Thorium Dating Method and Paleolithic Rock Art,” Quaternary International 432 (2017): 86–92, doi:10.1016/j.quaint.2015.03.053.
  • Why Are Whales So Big? In Wisdom God Made Them That Way

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 18, 2018

    When I was in high school, I had the well-deserved reputation of being a wise guy—though the people who knew me then might have preferred to call me a wise—, instead. Either way, for being a wise guy, I sure didn’t display much wisdom during my teenage years.

    I would like to think that I am wiser today. But, the little wisdom I do possess didn’t come easy. To quote singer-songwriter Helen Reddy, “It’s wisdom born of pain.”

    Life’s hardships sure have a way of teaching you lessons. But, I also learned that there is a shortcut to gaining wisdom—if you are wise enough to recognize it. (See what I did there?) It is better to solicit the advice of wise people than to gain wisdom through life’s bitter experiences. And, perhaps there was no wiser person ever than Solomon. Thankfully, Solomon’s wisdom was captured in the book of Proverbs. Many of life’s difficulties can be sidestepped if we are willing to heed Solomon’s advice.

    Solomon gained his wisdom through observation and careful reflection. But, his wisdom also came through divine inspiration, and according to Solomon, it was through wisdom that God created the world in which we live (Proverbs 8:22–31). And, it is out of this wisdom that the Holy Spirit inspired Solomon to offer the insights found in the Proverbs.

    In Psalm 104, the Psalmist (presumably David) echoes the same idea as Solomon: God created our world through wisdom. The Psalmist writes:

    How many are your works, Lord!

    In wisdom you made them all;

    Based on Proverbs 8 and Psalm 104, I would expect God’s wisdom to be manifested in the created order. The Creator’s fingerprints—so evident in nature—should not only reflect the work of intelligent agency but also display undeniable wisdom. In my view, that wisdom should be reflected in the elegance, cleverness, and ingenuity of the designs seen throughout nature. Designs that reflect an underlying purpose. And these features are exactly what we observe when we study the biological realm—as demonstrated by recent work on aquatic mammal body size conducted by investigators from Stanford University.1

    Body Sizes of Aquatic Mammals

    Though the majority of the world’s mammals live in terrestrial habitats, the most massive members of this group reside in Earth’s oceans. For evolutionary biologists, common wisdom has it that the larger size of aquatic mammals reflects fewer evolutionary constraints on their body size because they live in the ocean. After all, the ocean habitat is more expansive than those found on land, and aquatic animals don’t need to support their weight because they are buoyed by the ocean.

    As it turns out, common wisdom is wrong in this case. Through the use of mathematical modeling (employing body mass data from about 3,800 living species of aquatic mammals and around 3,000 fossil species), the research team from Stanford learned that living in an aquatic setting imposes tight constraints on body size, much more so than living on land. In fact, they discovered (all things being equal) that the optimal body mass for aquatic mammals is around 1,000 pounds. Interestingly, the body mass distributions for members of the order Sirenia (dugongs and manatees) and the clades Cetacea (whales and dolphins) and Pinnipedia (sea lions and seals) all cluster near 1,000 pounds.

    Scientists have learned that the optimal body mass of aquatic mammals displays an underlying biological rationale and logic. It reflects a trade-off between two opposing demands: heat retention and caloric intake. Because the water temperatures of the oceans are below mammals’ body temperatures, heat retention becomes a real problem. Mammals with smaller bodies can’t consume enough food to compensate for heat loss to the oceans, and they don’t have the mass to retain body heat. The way around this problem is to increase their body mass. Larger bodies do a much better job at retaining heat than do smaller bodies. But, the increase in body mass can’t go unchecked. Maintaining a large body requires calories. At some point, metabolic demands outpace the capacity for aquatic mammals to feed, so body mass has to be capped (near 1,000 pounds).
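    This trade-off can be captured in a toy energy budget. The Python sketch below is not the Stanford team’s actual model: the power-law form is standard in allometry, but the constants are invented so that the surplus peaks near 460 kg, on the order of 1,000 pounds:

```python
# Toy energy budget for an aquatic mammal of mass M (kg).
# Both terms follow allometric power laws, k * M^b. The constants
# are illustrative, not values fitted by the Stanford researchers.

def energy_surplus(mass_kg):
    intake = 135 * mass_kg ** 0.71  # calories gathered by feeding
    cost = 100 * mass_kg ** 0.75    # metabolism plus heat lost to water
    return intake - cost

# Because cost grows faster with mass than intake, the surplus rises,
# peaks, and then falls; the peak is the optimal body mass.
optimal_mass = max(range(10, 5001), key=energy_surplus)
print(optimal_mass)  # ~460 kg, roughly 1,000 lb
```

    Changing either term shifts the optimum, which is the sense in which baleen whales escape the cap: filter feeding effectively raises the intake term.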

    The researchers noted a few exceptions to this newly discovered rule. Baleen whales have a body mass much greater than 1,000 pounds. But, as the researchers noted, these creatures employ a unique feeding mechanism that allows them to consume the calories needed to support their massive body sizes. Filter feeding is a more efficient way to consume calories than hunting prey. The other exception is creatures such as otters. The researchers believe that their small size reflects a lifestyle that exploits both aquatic and terrestrial habitats.

    Argument for God’s Existence from Wisdom

    The discovery that the body mass of aquatic mammals has been optimized is one more example of the many elegant designs found in biological systems—designs worthy to be called the Creator’s handiwork. However, from my perspective, this optimization also reflects the Creator’s sagacity as he designed mammals for the purpose of living in the earth’s oceans.

    But, instead of relying on intuition alone to make a case for a Creator, I want to present a formal argument for God’s existence based on the wisdom of biology’s designs. To make this argument, I follow philosopher Richard Swinburne’s case for God’s existence based on beauty. Swinburne argues, “If God creates a universe, as a good workman he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”2 In other words, the beauty in the world around us signifies the Divine.

    In like manner, if God created the universe, including the biological realm, we should expect to see wisdom permeating the designs in nature. On the other hand, if the universe came into being without God’s involvement, then there is no reason to think that the designs in nature would display a cleverness and ingenuity that affords a purpose—a sagacity, if you will. In fact, evolutionary biologists are quick to assert that most biological designs are flawed in some way. They argue that there is no purpose that undergirds biological systems. Why? Because evolutionary processes do not produce biological systems from scratch, but from preexisting systems that are co-opted through a process the late evolutionary biologist Stephen Jay Gould dubbed exaptation, and then modified by natural selection to produce new designs.3 According to biologist Ken Miller:

    Evolution . . . does not produce perfection. The fact that every intermediate stage in the development of an organ must confer a selective advantage means that the simplest and most elegant design for an organ cannot always be produced by evolution. In fact, the hallmark of evolution is the modification of pre-existing structures. An evolved organism, in short, should show the tell-tale signs of this modification.4

    And yet we see designs in biology that are not just optimized, but characterized by elegance, cleverness, and ingenuity—wisdom.

    Truly, God is a wise guy.

    Resources

    Endnotes
    1. William Gearty, Craig R. McClain, and Jonathan L. Payne, “Energetic Tradeoffs Control the Size Distribution of Aquatic Mammals,” Proceedings of the National Academy of Sciences USA (March 2018): doi:10.1073/pnas.1712629115.
    2. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    3. Stephen Jay Gould and Elizabeth S. Vrba, “Exaptation: A Missing Term in the Science of Form,” Paleobiology 8 (January 1, 1982): 4–15, doi:10.1017/S0094837300004310.
    4. Kenneth R. Miller, “Life’s Grand Design,” Technology Review 97 (February/March 1994): 24–32.
  • Mitochondria’s Deviant Genetic Code: Evolution or Creation?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 11, 2018

    Before joining Reasons to Believe, I worked for nearly a decade in research and development (R&D) for a Fortune 500 company. During my tenure, on several occasions I was assigned to work on a resurrected project—one that was mothballed years earlier for one reason or another but was then deemed worthy of another go-around by upper management.

    Of course, the first thing we did when we began work on the old-project-made-new was to review the work done by the previous R&D team. Invariably, we would come across things they had done that didn’t make sense to us whatsoever. I quickly learned that instead of deriding the previous team members for their questionable decision-making skills and flawed strategy, it was better to track down past team members and find out why they did things the way they did. Almost always, there were good reasons justifying their decisions. In fact, understanding their rationale often revealed an ingenuity to their approach.

    The same can be said for mitochondria—bean-shaped organelles found in eukaryotic cells. Mitochondria play a crucial role in producing the energy that powers the cell’s operations. Based on a number of features possessed by these organelles—features that seemingly don’t make sense if mitochondria were created by a Divine Mind—many biologists believe that mitochondria have an evolutionary origin. Yet, as we learn more about mitochondria, scientists are discovering that the features that we thought made little sense from a creation model vantage point have a rationale for why they are the way they are. In fact, these features reflect an underlying ingenuity, as work by biochemists from Germany attests.1

    We will take a look at the work of these biochemists later in this article. But first, it would be helpful to understand why evolutionary biologists think that the design of mitochondria makes no sense if these subcellular structures are to be understood as a Creator’s handiwork.

    The Endosymbiont Hypothesis

    Most evolutionary biologists believe the best explanation for the origin of mitochondria is the endosymbiont hypothesis. Lynn Margulis (1938–2011) advanced this idea to explain the origin of eukaryotic cells in the 1960s, building on the ideas of Russian botanist Konstantin Mereschkowsky.

    Taught in introductory high school and college biology courses, Margulis’s work has become a cornerstone idea of the evolutionary paradigm. This classroom exposure explains why students often ask me about the endosymbiont hypothesis when I speak on university campuses. Many first-year biology students and professional life scientists alike find the evidence for the idea compelling and, consequently, view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to this hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

    Presumably, organelles such as mitochondria were once endosymbionts. Once taken into the host cell, the endosymbionts then took up permanent residency within the host, with the endosymbiont growing and dividing inside the host. Over time, the endosymbionts and the host became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved the machinery to produce the proteins needed by the former endosymbiont and processes to transport those proteins into the organelle’s interior.

    Evidence for the Endosymbiont Hypothesis

    The similarities between organelles and bacteria serve as the main line of evidence for the endosymbiont hypothesis. For example, mitochondria—which are believed to be descended from a group of alphaproteobacteria—are about the same size and shape as a typical bacterium and have a double-membrane structure like these gram-negative cells. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also exists for the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. Additionally, biologists view the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

    The presence of the unique lipid cardiolipin in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. This is an important lipid component of bacterial inner membranes. Yet it is not found in the membranes of eukaryotic cells—except for in the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.

    Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

    Despite the seemingly compelling evidence for the endosymbiont hypothesis, when researchers attempt to delineate the details of a presumed evolutionary transition, it becomes readily apparent that biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells. In three previous articles, I detail some of the scientific challenges facing the endosymbiont hypothesis:

    A Creation Model Approach for the Origin of Mitochondria

    Given the scientific shortcomings of the endosymbiont hypothesis, is it reasonable to view mitochondria (and eukaryotic cells) as the work of a Creator?

    I would maintain that it is. I argue that the shared similarities between mitochondria and alphaproteobacteria—which stand as the chief evidence for the endosymbiont hypothesis—reflect shared designs rather than a shared evolutionary history. It is common for human designers and engineers to reuse designs. So, why wouldn’t a Creator? See this article for more on this idea:

    Why Do Mitochondria Have Their Own Genome and Cardiolipin in Their Inner Membranes?

    However, to legitimately interpret the genesis of mitochondria from a creation model perspective, there must be a rationale for why mitochondria have their own diminutive genomes. And there has to be an explanation for why these organelles possess cardiolipin in their inner membranes, because on the surface, it appears as though mitochondrial genomes and cardiolipin are vestiges of the evolutionary history of these organelles.

    As I have described previously (see the articles listed below), biochemists have recently learned that there are good reasons why mitochondria have their own genome—independent of the nuclear genome—and a sound rationale for the presence of cardiolipin in the inner membrane of these organelles. In other words, these features of mitochondria make sense from the vantage point of a creation model.

    Why Do Mitochondria Have Their Own Genetic Code?

    But, there is at least one other troubling feature of mitochondrial genomes that requires an explanation if we are to legitimately view these organelles as the handiwork of a Creator. For if they are a Creator’s handiwork, then why do mitochondria make use of deviant, nonuniversal genetic codes? Again, at first blush it would seem that the nonuniversal genetic code in mitochondria reflects their evolutionary origin. To understand why mitochondria have their own genetic code, a little background information is in order.

    A Universal Genetic Code

    The genetic code is a set of rules that define the information stored in DNA. These rules specify the sequence of amino acids that the cell’s machinery uses to build proteins. The genetic code consists of coding units, called codons, where each codon corresponds to one of the 20 amino acids found in proteins.

    To a first approximation, all life on Earth possesses the same genetic code. To put it another way, the genetic code is universal. However, there are examples of organisms that possess a genetic code that deviates from the universal code in one or two of the coding assignments. Presumably, these deviant codes originate when the universal genetic code evolves, altering coding assignments.

    The Deviant Genetic Codes of Mitochondria

    Quite frequently, mitochondrial genomes employ deviant codes. One of the most common differences between the universal genetic code and the one found in mitochondrial genomes is the reassignment of one of the codons that specifies isoleucine (in the universal code) so that it specifies methionine. In fact, evolutionary biologists believe that this evolutionary transition happened five times in independent mitochondrial lineages.2
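    For concreteness: in vertebrate mitochondria the reassigned codon is AUA, which encodes isoleucine in the universal code but methionine in the mitochondrial one. The Python sketch below shows the swap on a hypothetical transcript fragment (only the four AUN codons are included, not the full 64-codon table):

```python
# Standard ("universal") assignments for the four AUN codons.
STANDARD = {"AUU": "Ile", "AUC": "Ile", "AUA": "Ile", "AUG": "Met"}

# The vertebrate mitochondrial code differs by one reassignment:
# AUA now specifies methionine instead of isoleucine.
VERTEBRATE_MITO = {**STANDARD, "AUA": "Met"}

def translate(rna, code):
    """Translate an RNA string codon by codon using the given table."""
    return [code[rna[i:i + 3]] for i in range(0, len(rna) - 2, 3)]

fragment = "AUAAUGAUU"  # hypothetical fragment, not a real gene
print(translate(fragment, STANDARD))         # ['Ile', 'Met', 'Ile']
print(translate(fragment, VERTEBRATE_MITO))  # ['Met', 'Met', 'Ile']
```

    Under the mitochondrial table, positions that once read isoleucine read methionine instead, which is the substitution the German team links to protection from oxidative damage.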

    So, while many biologists believe that the nonuniversal genetic codes in mitochondria can be explained through evolutionary mechanisms, creationists (and ID proponents) must come up with a compelling reason for a Creator to alter the universal genetic code in the genome of these organelles. This issue becomes particularly pressing because biochemists have come to learn that the rules that define the genetic code are exquisitely optimized for error minimization (among other things), as I discuss in these articles:

    The Genius of Deviant Codes in Mitochondria

    So, is there a rationale for the reassignment of the isoleucine codon?

    Work by a team of German biochemists provides an answer to this question—one that underscores an elegant molecular logic to the deviant genetic codes in mitochondria. These researchers provide evidence that the reassignment of the isoleucine codon for methionine protects proteins in the inner membrane of mitochondria from oxidative damage.

    Metabolic reactions that take place in mitochondria during the energy harvesting process generate high levels of reactive oxygen species (ROS). These highly corrosive compounds will damage the lipids and the proteins of the mitochondrial inner membranes. The amino acid methionine is also readily oxidized by ROS to form methionine sulfoxide. Once this happens, the enzyme methionine sulfoxide reductase (MSR) reverses the oxidation reaction by reconverting the oxidized amino acid to methionine.

    As a consequence of reassigning the isoleucine codon, methionine replaces isoleucine in the proteins encoded by the mitochondrial genome. Many of these proteins reside in the mitochondrial inner membrane. Interestingly, many of the isoleucine residues of the inner mitochondrial membrane proteins are located on the surfaces of the biomolecules. The replacement of isoleucine by methionine has minimal effect on the structure and function of these proteins because these two amino acids possess a similar size, shape, and hydrophobicity. But because methionine can react with ROS to form methionine sulfoxide and then be converted back to methionine by MSR, the mitochondrial inner membrane proteins and lipids are protected from oxidative damage. To put it another way, the codon reassignment results in a highly efficient antioxidant system for mitochondrial inner membranes.

    The discovery of this antioxidant mechanism leads to another question: Why is the codon reassignment not universally found in the mitochondria of all organisms? As it turns out, the German biochemists discovered that this codon reassignment occurs in animals that are active, placing a high metabolic demand on mitochondria (and with it, concomitantly elevated production of ROS). On the other hand, this codon reassignment does not occur in Platyhelminthes (flatworms, which live without requiring oxygen) and inactive animals, such as sponges and echinoderms.

    From a creation model vantage point, there are good reasons why things are the way they are regarding mitochondrial biochemistry. In fact, understanding the rationale for the design of mitochondria reveals an ingenuity to life’s designs.

    Endnotes
    1. Aline Bender, Parvana Hajieva, and Bernd Moosmann, “Adaptive Antioxidant Methionine Accumulation in Respiratory Chain Complexes Explains the Use of a Deviant Genetic Code in Mitochondria,” Proceedings of the National Academy of Sciences, USA 105 (October 2008): 16496–16501, doi:10.1073/pnas.0802779105.
    2. As I have argued elsewhere, the seemingly independent evolutionary origin of identical (or nearly identical) biological features stands as a significant challenge to the evolutionary paradigm, while at the same time evincing a role for a Creator in the origin and history of life. For example, see my article “Like a Fish Out of Water: Why I’m Skeptical of the Evolutionary Paradigm.”
