Where Science and Faith Converge
  • A Christian Perspective on Living Electrodes

Jan 13, 2021

    We live in interesting times, but I don’t know if our interesting circumstances are a blessing or a curse.

    As a case in point, advances in biotechnology are nothing short of mind-boggling. They seem to come right out of the pages of a science fiction novel. Take, for example, the recent creation of living electrodes by scientists from the University of Pennsylvania. These novel electrodes are made from genetically altered neurons and may one day be used to improve the performance of electronic devices used to interface the brain’s electrical activity with machine hardware and computer software.

    Advances such as these hold the potential to transform our lives in ways we could only imagine just a few years ago.

    In the near future, biomedical applications of these types of emerging technologies may allow us to treat—maybe even cure—diseases and disabilities for which there are no good therapies today. Yet these same biotechnologies come with complex ethical issues that should—and do—give us pause about pursuing and implementing these advances.

    How do we resolve this ethical conundrum? How do we decide which biotechnologies to pursue? And for those we do decide to advance, how should we deploy them? In short, how do we ensure that these emerging biotechnologies are, indeed, a blessing and not a curse?

These questions take on real urgency when it comes to clinical uses of brain-computer interface (BCI) devices, which are already in preclinical trials. As part of this effort, bioengineers and biomedical researchers have been working continuously to improve their designs. These improvements are exciting, but they also add to the ethical dilemma, as the recent development of living electrodes attests.

    Brain-Computer Interfaces
BCIs are electronic devices that provide an interface between the electrical activity of a user’s brain and computer and machine hardware and software. Users learn to control computer software and hardware with their “thoughts.” Sophisticated algorithms extract the user’s intent from the electrical activity in the brain. In this sense, a collaboration exists between the user and the BCI.
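
To make the idea of extracting intent from brain signals concrete, here is a toy sketch of how such a decoder might work in principle. Everything in it (the simulated signals, the frequency bands, the nearest-centroid classifier) is an illustrative assumption, not the algorithm of any actual BCI.

```python
import numpy as np

# Toy decoder: classify a user's intent ("left" vs. "right") from a
# short window of simulated brain-signal data.

FS = 250            # sampling rate in Hz (typical for EEG hardware)
WINDOW = FS * 2     # two-second analysis window

def band_power(signal, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def features(signal):
    # Power in the mu (8-12 Hz) and beta (13-30 Hz) bands, features
    # commonly used by motor-imagery BCIs.
    return np.array([band_power(signal, 8, 12), band_power(signal, 13, 30)])

# "Training": average feature vectors for windows recorded while the
# user imagined each movement (here, fabricated random data).
rng = np.random.default_rng(0)
train = {
    "left":  [features(rng.normal(size=WINDOW)) for _ in range(20)],
    "right": [features(rng.normal(size=WINDOW) * 1.5) for _ in range(20)],
}
centroids = {label: np.mean(vecs, axis=0) for label, vecs in train.items()}

def decode(signal):
    """Return the intent whose training centroid is nearest to this window."""
    f = features(signal)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

print(decode(rng.normal(size=WINDOW)))  # -> "left" or "right"
```

Real systems use far richer features and adaptive, user-specific models, but the division of labor is the same: measure the signal, extract features, classify intent, act.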

    These devices hold the potential to transform medicine in exciting ways. Researchers have already demonstrated the potential of BCIs to assist patients with locked-in syndrome (due to brain injury or stroke) to communicate by controlling software. Biomedical scientists have also demonstrated the utility of BCIs to aid quadriplegic and paraplegic patients, affording them the means to control exoskeletons with their thoughts. A number of high-profile studies have also shown that amputees, likewise, can learn to activate robotic prosthetic limbs by communicating their intent to these devices with their thoughts.

    BCIs are already in use in cochlear implants—returning hearing to deaf patients—and may one day be used to restore vision to patients who have lost their sight. Researchers have also explored the use of BCIs to treat dementia, Alzheimer’s, and Parkinson’s, with some interesting and hopeful results.

    Types of Brain-Computer Interfaces
    To date, bioengineers have developed three different types of BCIs, each with advantages and limitations.

    Noninvasive BCIs
This type of BCI usually involves a modified EEG (electroencephalogram) cap fitted on the patient’s head, with electrodes that make contact with the scalp. These contacts pick up electrical activity in the brain. An obvious advantage of this type of BCI is that it doesn’t require surgery to install. Unfortunately, noninvasive BCIs suffer from poor resolution and signal strength. They cannot pinpoint electrical activity to specific brain regions, which hinders their usefulness; instead, they record the average, diffuse electrical activity of millions of neurons. In addition, the electrical signals from the brain must pass through the skull and scalp before they can be recorded, which degrades signal quality.

    Semi-Invasive BCIs
To overcome the limitations of noninvasive BCIs, biomedical practitioners have explored placing the BCI directly on the brain surface. This approach eliminates the signal degradation that occurs when the brain’s electrical activity passes through the skull and scalp. On the other hand, it requires surgery to position the BCI on the brain surface, and wires must still pass through the skull and scalp. Like noninvasive BCIs, semi-invasive ones can only record the average electrical activity of millions of neurons; hence, they also suffer from poor resolution.

    Invasive BCIs
    In an attempt to improve the resolution of BCIs, some biomedical researchers have directly implanted BCIs into the brain. This approach allows biomedical researchers to stimulate and record the average electrical activity of thousands (as opposed to millions) of neurons in specific regions of the brain. Unfortunately, this improvement comes with a cost. The process of inserting electrodes into the brain can damage tissue, leading to scar formation. Electrodes implanted in the brain can also trigger an immune response. And, over time, glial cells in the brain migrate to the electrodes and coat them. When this happens, it leads to loss of function.

    Researchers continue to pursue ways to overcome these biocompatibility issues. One highly original approach recently explored by researchers from the University of Pennsylvania involves the creation of living electrodes.1

    Living Electrodes
Working with a rat model system, the UPenn researchers carried out a proof-of-principle study to demonstrate that they could manufacture living electrodes capable of transmitting electrical signals into and out of the brain. “Living electrodes” are built from genetically altered neurons designed to respond to light. Using the techniques of optogenetics, the researchers genetically modified neurons in the laboratory, incorporating into the neurons’ genomes a gene that encodes a protein that assembles into a light-sensitive channel.

Scientists working in optogenetics have discovered that once these inserted genes are transcribed and translated by the cell’s machinery, the channel proteins become embedded in the neuron’s plasma membrane. Researchers have also learned that when these genetically modified neurons are pulsed with light, the channels open, allowing specific ions to flow through them. These ion currents shift the voltage across the neuron’s membrane, either depolarizing it (which activates the neuron) or hyperpolarizing it (which inhibits it), depending on the specific identity of the channel-forming proteins.
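
As an illustration of that cause-and-effect chain (light pulse, open channels, membrane voltage shift, firing), here is a toy “leaky integrate-and-fire” simulation. The membrane parameters and the size of the light-gated current are invented for the example; they do not describe any real opsin.

```python
import numpy as np

# Toy "leaky integrate-and-fire" neuron driven by a light-gated current.
# All parameters are invented for illustration; none describe a real channel.
DT = 0.1            # time step (ms)
TAU = 10.0          # membrane time constant (ms)
V_REST = -70.0      # resting membrane potential (mV)
V_THRESH = -55.0    # firing threshold (mV)
I_LIGHT = 2.0       # depolarizing drive while the light is on (mV per ms)

t = np.arange(0.0, 100.0, DT)       # simulate 100 ms
light_on = (t > 20.0) & (t < 70.0)  # a 50 ms light pulse

v, spikes = V_REST, []
for i, now in enumerate(t):
    drive = I_LIGHT if light_on[i] else 0.0
    # The leak pulls v back toward rest; the light-gated current pushes it up.
    v += (-(v - V_REST) / TAU + drive) * DT
    if v >= V_THRESH:               # threshold crossed: the neuron fires
        spikes.append(round(now, 1))
        v = V_REST                  # reset after the spike

print(f"{len(spikes)} spikes, all during the light pulse: {spikes}")
```

An inhibitory channel would be modeled the same way but with a negative drive, holding the membrane below threshold.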


    Figure 1: A Neuron

    Credit: Shutterstock

The UPenn researchers produced the living electrodes by seeding one end of a microcolumn made of agarose gel with neurons modified to respond to light. The microcolumns were 2 to 6 millimeters in length, with an outer diameter of roughly 300 to 400 microns and an inner diameter of 180 microns. The interior of the column was filled with cell culture medium. The researchers coaxed the axons of the neurons to grow along the length of the column’s interior, yielding a living electrode with the cell bodies at one end and the axon termini at the other. Once manufactured, the living electrodes remained viable for up to 40 days in a laboratory setting.
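
For a sense of scale, here is a quick back-of-the-envelope calculation on the dimensions quoted above, treating the inner channel as a simple cylinder:

```python
import math

# Inner channel of the microcolumn, modeled as a cylinder using the
# dimensions quoted above (180-micron inner diameter, 2-6 mm length).
radius_mm = 0.180 / 2      # 180 microns = 0.180 mm
for length_mm in (2, 6):
    volume_mm3 = math.pi * radius_mm**2 * length_mm
    # 1 mm^3 equals 1 microliter
    print(f"{length_mm} mm column: {volume_mm3:.3f} mm^3 "
          f"({volume_mm3:.3f} microliters of culture medium)")
```

The entire construct holds on the order of a tenth of a microliter, smaller than a single drop of water.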

    The bioengineers then successfully implanted the living electrodes in the brains of rats. They demonstrated that the implanted electrodes could be stimulated with an LED situated on the top of the brain. These living electrodes could also send electrical signals from the brain to a microelectrode array, also placed on top of the brain. They further discovered that they could even stimulate the living electrodes with neurotransmitters.

Based on these results, the UPenn researchers hope that this technology can progress to clinical studies involving human subjects.

Bolstering this hope is the fact that these living electrodes have dramatically improved biocompatibility compared to electrodes currently in use. In principle, the neurons used to construct living electrodes could be cultivated from induced pluripotent stem cells derived from the patient’s own cells. This step would dramatically reduce the likelihood of the patient developing an immune response to the implanted BCI electrodes. Also, because the electrodes are made of neurons, the connection between the brain and the BCI makes use of natural synaptic interactions rather than the unnatural junctions formed between neurons and electrodes made from other materials.

    The initial successes associated with BCIs coupled with the UPenn team’s advances continue to propel us closer to routinely using BCIs to treat people suffering from disabling diseases and injuries.

    Ethical Concerns
    As exciting as these prospects may be, the widespread use of BCIs raises serious ethical issues that we are only beginning to appreciate.2

    Equitable Access
As is the case with most new technologies, there are justifiable concerns about equitable access to BCIs. This challenge already confronts medicine today, as the cost of health care continues to increase. BCIs will be expensive when they first appear in the clinical setting and, for that reason, may not be available to people in socioeconomically disadvantaged groups.

    Human Identity
Some neuroethicists have also raised concerns that using BCIs may lead to the loss of the patient’s individual identity. For example, using BCIs to deliver electrical stimulation to specific regions of the brain does help arrest symptoms in Parkinson’s patients. But it can also change their personality: electrical stimulation of the brain can cause Parkinson’s patients to lose impulse control and engage in risky behaviors. This change in behavior, in and of itself, is deeply troubling. But the concern doesn’t stop there. When patients lose impulse control, it becomes unclear whether they can truly give consent for ongoing BCI treatments. This unintended consequence observed in Parkinson’s patients raises the uneasy sense that BCI-induced personality changes may be a much more widespread occurrence.

    Some patients who have received BCI implants report that they feel as if the BCI has become a part of them. In other words, these patients view themselves as human-machine hybrids. On one hand, a patient’s sense of ownership of the BCI may be a good thing, indicating a seamless integration of the technology. On the other hand, it does raise concerns about the patient’s sense of self and loss of identity—and even humanity.

    Human Agency
Loss of agency is another issue. Many BCIs rely on sophisticated algorithms to decode the electrical activity in the brain and, from it, extract the patient’s intent. In other words, when a patient uses a BCI to control computer software and hardware, the resulting action stems from a collaboration between the patient and the BCI. This joint effort prompts several questions: Who (or what) is actually dictating the action? Is it the patient? Are the patient and the BCI coagents? Or is the BCI the responsible agent? Would the patient have chosen to carry out that action without the BCI? Where does the patient’s intent end and the algorithm’s influence begin?

    Stem Cell Source
Apart from the ethical qualms that apply to BCIs in general, there are specific concerns uniquely associated with the use of living electrodes. To create living electrodes from neurons, a source of stem cells is required. As stated above, these neurons could be derived from induced pluripotent stem cells developed from the patient’s own cells. Unfortunately, right now the cost of this procedure is prohibitive. To make the production of living electrodes financially feasible, biomedical practitioners will likely make use of stem cell banks created from embryonic stem cells. Of course, the source of these stem cells is human embryos, created in the lab by in vitro fertilization. In other words, the production of living electrodes to support wide-scale clinical applications will most likely require the generation and destruction of human embryos.

    We have only sampled some of the ethical concerns associated with the biomedical uses of BCIs. This sampling should be enough to convince anyone that the ethical issues surrounding the use of BCIs—for biomedical purposes alone—are complex. And the way forward isn’t evident.

When we consider the prospect that BCIs could be used as enhancements for otherwise healthy people, the ethical concerns become amplified, forcing society into ethical deliberations we may not be fully equipped to engage in. We need an ethical framework that does three things:

    • Encourages the development of BCIs as the means to mitigate pain and suffering.
    • Ensures just and equitable access to the transformative power of BCIs.
    • Protects the dignity and sanctity of human life.

    BCIs and the Christian Worldview
    The ethical framework that flows out of the Christian worldview meets all three requirements and can function as just such a system.

The centerpiece of Christian ethics is the idea that human beings are made in God’s image. Because of this quality, every person has infinite worth and value. The implications of this idea are profound and far-reaching. This conviction serves as powerful motivation to develop biotechnologies such as BCIs with the hope of mitigating human pain and suffering and promoting human flourishing. It also recognizes the very real potential of this type of technology to undermine human identity and dignity, bringing a posture of caution to the table. The Christian system of ethics also posits that everyone should have equitable access to this powerful technology. This ethical framework gives voice to those who are often on the fringes of society and who never seem to benefit from the latest medical advances.

    It is remarkable that an ethical system that originated two millennia ago can help us today to navigate the ethical issues that arise out of emerging biotechnologies—technologies that at the birth of the Christian faith no one could have imagined.

The potential of the Christian worldview to guide the development of biotechnologies means that Christianity must be given a place at the table as we, as a society, deliberate about the uses of emerging biotechnologies. Of course, this means that Christians must understand these emerging technologies. But, even more so, it means that as Christians we must develop a thoughtful attitude toward them. Sadly, it is often the case that Christians either ignore these advances, hoping they will never materialize, or summarily condemn them and those who seek to develop them. My hope is that, as Christians, we will take up this challenge and choose to thoughtfully and respectfully engage the scientific and biomedical communities and our culture at large.

    As Christians, we indeed live in interesting times that present us the opportunity to shape the development and use of such promising emerging biotechnologies. By doing so, we can ensure that these impressive advances are a true blessing to the world—and not an unintended curse.

    For a more detailed discussion of the science and technology behind BCIs and their ethical implications, along with the power of the Christian worldview to guide decisions regarding the development and use of emerging biotechnologies, check out the book I coauthored with philosopher and theologian Kenneth Samples, Humans 2.0.

    Resources

    Neuroscience and the Case for a Creator

    Endnotes
    1. Dayo O. Adewole et al., “Development of Optically-Controlled ‘Living Electrodes’ with Long-Projecting Axon Tracts for a Synaptic Brain-Machine Interface,” bioRxiv, doi:10.1101/333526.
2. Liam Drew, “The Ethics of Brain-Computer Interfaces,” Nature 571 (July 24, 2019): S19–S21, doi:10.1038/d41586-019-02214-2.
  • The COVID-19 Vaccines and God’s Providence

Dec 23, 2020

    At last. There is light at the end of the tunnel.

    The ride has been long and dark. And there is still a ways to go before we exit to the other side, but we will arrive there soon.

    The emergency approval and first distributions of the Pfizer-BioNTech and Moderna COVID-19 vaccines give us all hope that we will soon see an end to the COVID-19 pandemic and return to some semblance of normalcy by the end of 2021.

As a biochemist, I find it a remarkable achievement. Within the span of months, we have gone from experiencing the first cases of COVID-19 in the US (most likely in early 2020) to having two vaccines that appear to be highly effective against the SARS-2 coronavirus less than a year later. Prior to this accomplishment, the fastest a new vaccine had ever been developed was four years.

This success reflects the resolve of governments around the world that have worked collaboratively with public and private research teams. It also reflects the hard work of life scientists and biomedical investigators who have labored tirelessly around the clock to understand the biology of the SARS-2 coronavirus, translating this knowledge into public health policies, treatments for COVID-19, and, ultimately, vaccines to prevent infections and halt the transmission of the virus.

    As a Christian, I see a divine hand in the rapid development of the COVID-19 vaccines, reflecting God’s providential care for humanity.

    To fully unpack this theological idea, I need to begin by describing the science that undergirds the Pfizer-BioNTech and Moderna vaccines and offer a brief history of messenger RNA (mRNA) vaccines.

    Messenger RNA Vaccines

    Both the Pfizer-BioNTech and Moderna vaccines belong to a category called mRNA vaccines. The chief component of these vaccines is a laboratory-made mRNA designed to encode a viral protein, usually one that resides on the virus surface. (Both the Pfizer-BioNTech and Moderna vaccines contain mRNA that encodes the SARS-2 coronavirus spike protein. This protein coats the virus surface and plays the central role in the binding and entry of the virus into the host cell.)

    Vaccines made from mRNA were first proposed by life scientists in the early 1990s. The principle behind mRNA vaccines is straightforward. Once injected into the patient, the mRNA finds its way into immune cells, where the cell’s machinery translates the synthetic viral mRNA into copies of the viral protein. Some of these newly made proteins are broken down inside the cell, with the fragments becoming incorporated into major histocompatibility complex class I (MHC-I). The MHC-I is transported to the cell surface, becoming embedded in the plasma membrane. Here it presents the viral protein fragment to the immune system, triggering a response that leads to the production of antibodies against the viral protein—and, hence, the virus. Initially this process provides sterilizing immunity. More importantly, it triggers the production of memory T cells and memory B cells, providing long-term immunity against the viral pathogen.

    Once the viral protein is translated, the synthetic mRNA undergoes degradation. Once this breakdown occurs, the mRNA component of the vaccine becomes cleared from the patient’s cells.


“Schema of the RNA Vaccine Mechanism” by Jmarchn is licensed under CC BY-SA 3.0

    Challenges Developing mRNA Vaccines

    While the principles behind mRNA vaccines are straightforward, life scientists have faced significant hurdles developing workable vaccines.1 These technical challenges include:

• Lack of mRNA stability. RNA molecules are inherently unstable, readily hydrolyzing into their constituent components. Once injected into the patient, naked mRNA rarely survives long enough to make its way to the target cells. Even if it does find its way into the cell’s interior, it may be broken down before it can be translated into levels of the viral protein high enough to trigger an immune response.
• Low rates of translation. Not all mRNA molecules are equal when it comes to their rate of translation. RNA molecules that encode viral proteins often have sequence characteristics that make them appear unusual to our cells’ machinery, preventing them from being efficiently translated into proteins.
• Difficulty in delivering mRNA into cells. Once injected into the patient, the vaccine’s mRNA faces a real challenge in reaching the interior of target cells, because it has to penetrate the cell’s plasma membrane. This penetration process is influenced by the nucleotide sequence of the mRNA (which, in turn, determines the mRNA’s physicochemical properties). Also, some cell types are more amenable than others to mRNA penetration through their plasma membranes. It is rare for sufficient levels of “naked” mRNA to cross the cell membrane to activate the immune system.
    • Reactogenicity of the mRNA. The mRNA component of the vaccine can trigger an adverse reaction in some patients, causing an unintended immune response that can lead to anaphylactic shock.

    Despite these serious challenges, life scientists and biomedical researchers have continued to pursue mRNA vaccines because of the significant advantages they offer compared to both conventional and putative next generation vaccines.

    Advantages of mRNA Vaccines

    Some of the advantages of mRNA vaccines include:

• Safety. Vaccines using mRNA are inherently safer than vaccines made up of inactivated or attenuated viruses. These latter types of vaccines can cause infections in patients if the viral particles are not adequately inactivated or completely attenuated. Also, because the production of these vaccines involves handling live viruses, the risk to workers is real, potentially leading to an outbreak of the disease at research and production facilities.

Compared to DNA vaccines (which are being pursued as a potential next-generation vaccine type), mRNA vaccines have virtually no risk of modifying the patient’s genome, in part because mRNA degrades once it has been translated, never making its way to the cell nucleus.

• Ease of development and manufacturing. Researchers have long held the view that once the technical challenges were overcome, new mRNA vaccines would be much easier to develop than conventional vaccines. (The rapid development of the Pfizer-BioNTech and Moderna vaccines attests to this view.) Vaccines made from mRNA are also much easier to produce than conventional vaccines, which require viruses to be cultured. Culturing viruses takes time and adds complexity to the manufacturing process. In other words, mRNA vaccines are much more amenable to mass production than conventional vaccines.

    Clearing the Technical Hurdles

    Over the course of the last decade or so, life scientists and biomedical researchers have learned ways to overcome many of the technical issues that are endemic to mRNA vaccines. In fact, by the end of 2018, researchers had successfully developed mRNA vaccine technology to the point that they were on the verge of translating it to widespread therapeutic use.

    Through these efforts, researchers have learned that:

• The stability of the mRNA can be improved by making modifications to the nucleotide sequences, particularly in the 3′ and 5′ untranslated regions of the molecule. RNA stability can also be enhanced by manipulating the coding region of the molecule, increasing its guanine and cytosine content. These changes can be effected without altering the mRNA’s coding information (see the short sketch after this list). mRNA stability can also be improved by complexing it with positively charged materials. (These types of complexes readily form because RNA molecules are negatively charged.)
• The translatability of the vaccine’s mRNA can be enhanced by making changes to the mRNA sequences in the 3′ and 5′ untranslated regions and through the preferential use of specific codons. These changes lead to the production of high levels of the viral protein once the mRNA makes its way into the cells.
    • The reactogenicity of the mRNA can be minimized in a number of different ways. For example, adverse reactions to mRNA can be reduced by incorporating nonnatural nucleotides into the mRNA. Complexing the mRNA with other materials can also minimize adverse reactions to the mRNA. (The Pfizer-BioNTech vaccine uses a positively charged, nonnatural lipid to complex with the vaccine’s mRNA, reducing its immunogenicity and stabilizing the mRNA.)
    • The delivery of mRNA to cells can be dramatically improved through a variety of means. The vaccines produced by Pfizer-BioNTech and Moderna both make use of lipid nanoparticles to encapsulate the mRNA. The development of lipid nanoparticles to facilitate the delivery of mRNA to cells has been perhaps the biggest breakthrough for mRNA vaccines. Not only do these nanoparticles facilitate the entry of mRNA into cells, but they protect the mRNA from degradation before reaching the cells.
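
As a concrete illustration of the first point above, the sketch below swaps codons for GC-richer synonymous ones, raising the GC content without changing the encoded protein. The four-codon table and the example sequence are contrived for the demonstration (and written as DNA codons for familiarity); real sequence optimization weighs many more factors.

```python
# Because the genetic code is degenerate, a codon can often be swapped for
# a synonymous, more GC-rich one without changing the encoded protein.
# This tiny table covers only the amino acids in the example below;
# it is not a full optimizer.
SYNONYMS = {          # codon -> GC-richest synonymous codon (same amino acid)
    "TTA": "CTG",     # leucine
    "AAA": "AAG",     # lysine
    "GAT": "GAC",     # aspartate
    "TTT": "TTC",     # phenylalanine
}

def gc_content(seq):
    return sum(base in "GC" for base in seq) / len(seq)

def gc_optimize(seq):
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(SYNONYMS.get(c, c) for c in codons)

original = "TTAAAAGATTTT"          # Leu-Lys-Asp-Phe
optimized = gc_optimize(original)  # same protein, higher GC content
print(original, f"GC = {gc_content(original):.0%}")   # GC = 8%
print(optimized, f"GC = {gc_content(optimized):.0%}") # GC = 50%
```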

Even though the Pfizer-BioNTech and Moderna vaccines represent the first-ever mRNA vaccines used on humans, they rest on nearly three decades of tireless work by life scientists and biomedical researchers. This developmental history includes numerous studies in which safety has been assessed, leading to significant improvements in vaccine design and helping to ensure that adverse reactions to mRNA vaccines are negligible.

    The COVID-19 Vaccines and God’s Providence

    This concerted effort has paid off. And, in large measure, these previous studies have made it possible for the Pfizer-BioNTech and Moderna scientists to rapidly develop their COVID-19 vaccines. At the point when the COVID-19 outbreak was declared a pandemic, researchers had already developed mRNA vaccines for a number of viral pathogens and tested them in animal models. They had even progressed some of these vaccines into small-scale human clinical studies that included safety assessments. Bioengineers had already started work on pilot scale production of mRNA vaccines, along the way developing GMPs (Good Manufacturing Practices) for the manufacture of mRNA vaccines.2

In effect, when the pandemic broke, all the researchers at Pfizer-BioNTech and Moderna had to do to develop their COVID-19 vaccines was identify the right sequence to use for the vaccine’s mRNA. In other words, the scientific and biomedical communities just happened to be poised and ready to go with mRNA vaccines when the first outbreaks of COVID-19 appeared around the world.

    Harvard medical doctor Anthony Komaroff puts it this way:

    So, 30 years of painstaking research allowed several groups of scientists—including a group at Pfizer working with a German company called BioNTech, and a young company in Massachusetts called Moderna—to bring mRNA vaccine technology to the threshold of actually working. The companies had built platforms that, theoretically, could be used to create a vaccine for any infectious disease simply by inserting the right mRNA sequence for that disease.

    Then along came COVID-19. Within weeks of identifying the responsible virus, scientists in China had determined the structure of all of its genes, including the genes that make the spike protein, and published this information on the Internet.

    Within minutes, scientists 10,000 miles away began working on the design of an mRNA vaccine. Within weeks, they had made enough vaccine to test it in animals, and then in people. Just 11 months after the discovery of the SARS-CoV-2 virus, regulators in the United Kingdom and the US confirmed that an mRNA vaccine for COVID-19 is effective and safely tolerated, paving the path to widespread immunization. Previously, no new vaccine had been developed in less than four years.3

We were at the point of advancing mRNA vaccines into large-scale human clinical trials at the precise moment the COVID-19 outbreak began. Had this outbreak occurred even a few years earlier, I question whether we would have been able to develop effective mRNA vaccines against COVID-19 with the same speed, or had the capacity to rapidly produce and distribute large quantities of vaccine once the mRNA vaccine was ready to go. The rapid response to the COVID-19 pandemic has been made possible by the advances in mRNA vaccines over the course of the last few years, which yielded the technical knowledge to rapidly develop and manufacture them. In fact, some biomedical scientists consider mRNA vaccines to be ideal vaccines for this reason.

    As a Christian and a biochemist, I can’t help but see God’s providential hand at work in the timing of the COVID-19 outbreak. It happened precisely at the time that advances in mRNA vaccines would allow for a rapid response. The remarkable confluence of the COVID-19 pandemic with the advances in mRNA vaccines has one of two possible explanations: It’s either a fortuitous accident or a reflection of God’s providential timing and faithful provision to humanity.

    As a Christian, I choose the latter explanation.

    You might say that mRNA vaccines were prepared in advance for such a time as this.


    Endnotes
    1. Norbert Pardi et al., “mRNA Vaccines—A New Era in Vaccinology,” Nature Reviews Drug Discovery 17 (April 2018): 261–79, doi:10.1038/nrd.2017.243.
    2. Pardi et al., “mRNA Vaccines.”
3. Anthony Komaroff, “Why Are mRNA Vaccines So Exciting?,” Harvard Health Blog (December 18, 2020), health.harvard.edu/blog/why-are-mrna-vaccines-so-exciting-2020121021599, accessed December 18, 2020.
  • Scientists Turn Back the Hands of Time: But Should They?

Dec 16, 2020

    “The Curious Case of Benjamin Button,” written by F. Scott Fitzgerald—one of the great American writers of the twentieth century—is a short story about a boy born with the physical appearance of a 70-year-old. As he lives his young life, Benjamin’s parents soon discover that he ages in reverse, becoming younger as he gets older.

    This fictional tale about the extraordinary life of Benjamin Button is odd to us for one simple reason: aging is a natural part of life. It’s inevitable. We struggle to conceive what life would be like if we didn’t age. Yet, if we are honest, many of us find ourselves a bit envious of Benjamin. We wish we would become younger as we grow older.

As fanciful as it might sound, we just might get our wish, thanks to work carried out by a team of biomedical researchers from Israel.1 These researchers treated 35 human test subjects with hyperbaric oxygen over the span of three months and discovered that two markers used to assess biological age showed a reversal, indicating that the test subjects became biologically younger as they chronologically aged.

    This work is exciting—and concerning.

    Scientists may be close to clinically arresting and even reversing the aging process. This potential breakthrough would carry staggering biomedical implications, perhaps allowing us to stave off diseases such as cancer, type II diabetes, and a host of cardiovascular disorders. But this work also has broad-ranging ethical and societal implications, paving the way for dramatic extensions in human life expectancy, while at the same time fueling the transhumanism movement.

    Some Biological Consequences of Aging
    It goes without saying that as humans age, we experience a loss of physiological integrity (to use scientific jargon), which, in turn, makes us susceptible to diseases and leads to death. In fact, aging is the major risk factor in cancer, cardiovascular diseases, diabetes, and Alzheimer’s.

Biogerontologists have identified a number of physiological changes associated with aging, such as: (1) telomere shortening, (2) an accrual of gene mutations, (3) reduced cell-cell communication in tissues and organs, (4) impaired cell function, which includes arrested growth and division (called senescence), and (5) impaired function of mitochondria.

    One of the challenges facing investigators who study the biology of aging is determining which changes are a consequence of aging and which cause aging. Most scientists working in the biology of aging think that both telomere shortening and cellular senescence have a direct impact on the aging process. For this reason, many life scientists view telomere length and cellular senescence as reliable markers for biological age, with telomeres becoming predictably shorter and cellular senescence predictably increasing as each of us ages chronologically.
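
To see why telomere length can work as a kind of molecular odometer, consider a toy model. The starting length, loss per division, and senescence threshold below are rough, illustrative figures, and the model deliberately ignores telomerase and the many other factors that shape real telomere dynamics.

```python
# Toy model of telomere shortening as a cellular "odometer."
# Rough illustrative figures: human telomeres are on the order of
# 10,000 base pairs and lose tens of base pairs per cell division.
START_BP = 10_000
LOSS_PER_DIVISION = 70      # base pairs lost each division (illustrative)
SENESCENCE_BP = 4_000       # length at which the cell stops dividing

length = START_BP
divisions = 0
while length > SENESCENCE_BP:
    length -= LOSS_PER_DIVISION
    divisions += 1

print(f"Cell senesces after {divisions} divisions "   # ~86 divisions
      f"(telomere down to {length} bp)")
```

Because the loss per division is roughly steady, a shorter telomere implies more divisions behind the cell, which is what makes telomere length a usable proxy for biological age.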

    Reversing the Aging Process
    A growing number of biomedical researchers believe if we interrupt telomere shortening and cellular senescence, we can delay the onset of aging—maybe even reverse it. For example, in Humans 2.0, Ken Samples and I discuss the antiaging strategy advanced by Dr. Michael Fossel, which targets telomeres. Specifically, Fossel believes that through the use of the enzyme telomerase, it might be possible to lengthen telomeres, bringing an end to aging as we know it.

    Even though Fossel’s idea seems reasonable, many life scientists and biomedical researchers have looked askance at the idea of antiaging therapies. Yet recent work published in 2019 by investigators from the US and Canada indicates that we are on the cusp of having genuine antiaging therapies.2

    These investigators developed a drug cocktail that caused the thymus (an organ located between the heart and sternum) to increase in size. They carried out a small-scale clinical trial, administering their drug mixture to a small group of men between 50 and 60 years of age three or four times a week over the course of a year.

    The thymus serves as the site for the maturation of white blood cells, a critical component of our immune system. As we age, our thymus becomes smaller, leading to loss of immune function. These researchers believe that by increasing thymus size, the loss of immune function can be arrested—perhaps even reversed.

As an afterthought, the researchers decided to take samples of blood from the test subjects, using an epigenetic clock to measure the biological age of the study participants. To their surprise, the drug cocktail not only increased thymus size but also turned back the epigenetic clock by two years, with the effect lasting six months after the drug trial ended. In other words, though the test subjects aged by a year chronologically over the course of the study, they became two years younger—at least based on an epigenetic marker for biological age.

    New Study Holds Added Promise
In contrast to the earlier study by US and Canadian investigators, the team from Israel deliberately tried to reverse the aging process by administering hyperbaric oxygen. Earlier studies indicated that hyperbaric oxygen treatments can improve cognition in test subjects by increasing cerebral blood flow. Administering hyperbaric oxygen also triggers stem cell proliferation and increases the biogenesis of mitochondria. These researchers reasoned that hyperbaric oxygen may well delay aging.

To test their idea, the investigators enlisted the help of 35 volunteers over the age of 65. Over the course of three months, the investigators delivered 100% molecular oxygen to the test subjects in sessions lasting 90 minutes. During the study, they measured the telomere length and cell senescence of several different types of white blood cells, observing nearly a 40% decrease in cell senescence and a 20% increase in telomere length. In other words, hyperbaric oxygen treatments appear to have turned back the hands of time.

    Is Aging a Disease?
These types of studies herald a change in the way the biomedical community—and the public at large—views aging. A growing number of biogerontologists argue that aging should be viewed not as an inevitable part of life, but as a disease.3 And if aging is a disease, it can be treated—maybe even cured. These scientists argue that if we successfully treat aging, diseases such as cancer, type II diabetes, cardiovascular disorders, dementia, Alzheimer’s, and others will wane because they are byproducts of aging.

    Though this view of aging is controversial and not widely accepted, it will likely increase in prominence in the years to come both within the biomedical research community and in a culture already obsessed with antiaging products and regimens. The difference is that studies such as the one conducted by the scientists from Israel are not based on questionable “junk” science. Rather, they herald the arrival of bona fide antiaging technologies undergirded by real scientific evidence.

    Antiaging Therapies and Transhumanism
The idea that aging is a disease instead of an inevitability has broad-ranging implications, as you might imagine. Now that biomedical researchers have demonstrated that it is possible to reverse biological markers for aging, extending human life expectancy well beyond our natural biological limits becomes a real possibility. This hope gives credibility to an intellectual movement called transhumanism.

    Advocates of the transhumanist vision maintain that humanity has an obligation to use advances in biotechnology and bioengineering to correct our biological flaws—augmenting our physical, intellectual, and psychological capabilities beyond our natural limits. Perhaps there are no greater biological limitations that human beings experience than those caused by aging bodies and associated diseases.

Transhumanists see science and technology as the means to alleviate pain and suffering and to promote human flourishing. They note that pain, suffering, and loss characterize senescence in human beings.

    Antiaging as a Source of Hope and of Salvation?
Using science and technology to mitigate pain and suffering and to drive human progress is nothing new. But transhumanists desire more. They advocate the use of advances in biotechnology and bioengineering to take control of our own evolution, with the grand vision of creating new and improved versions of human beings. They hope to usher in a posthuman future. Transhumanists desire to create a utopia of our own design. In fact, many transhumanists go one step further, arguing that advances in gene editing, brain-computer interfaces, and antiaging technologies could extend our life expectancy perhaps indefinitely, allowing us to attain a practical immortality.

    In essence, transhumanism has a religious element to it, with science and technology serving as the means for salvation. But can the transhumanist agenda deliver on its promises?

    I think the answer is no for the simple reason that ethical concerns abound when it comes to the prospects of wide-scale application of antiaging and life extension technologies.

    Ethical Concerns
What could possibly be wrong with wanting to live a longer, healthier, and more productive life? For the most part, don’t people do everything they can to delay the onset and effects of aging? Don’t we do what we can to avoid an early death? We try to eat right, take health supplements, exercise, and submit ourselves to all sorts of medical screening procedures to ensure that we live as long as we possibly can—even sacrificing quality of life in some cases. It is hard to imagine anything inherently wrong with wanting to live longer. In fact, disrupting—even reversing—the aging process would offer benefits to society by potentially reducing medical costs associated with age-related diseases such as dementia, cancer, heart disease, and stroke.

    Yet, these biomedical advances in antiaging therapies hold the potential to change who we are as human beings. After all, aging is part of our nature and it shapes our life experiences. Antiaging technology most likely will fundamentally alter the nature of society, too, by ushering in wide-scale social and economic changes. Unfettered access to antiaging technologies will lead to overpopulation as people live longer and death rates fall, putting demands on limited planetary resources. In the end, antiaging technologies may well be unsustainable, undesirable, and unwise.

One way to make long human life spans sustainable would be to curtail the birthrate. This may mean that married couples would be restricted in how many children they could have, or perhaps some couples would be denied the right to reproduce at all. One could easily envision a future world in which only couples who meet certain criteria would be allowed to have children.

    Further, in an effort to avoid overpopulation perhaps only certain individuals will be granted access to antiaging technology because of their demonstrated or potential contributions to society. Or maybe only the wealthy will have access to the technology, because they can pay for it and they possess the monetary resources to live for hundreds of years.

    There seems to be an inherent unfairness to denying some people the opportunity to have children or restricting access to antiaging technology to only a select few—particularly if that technology can ameliorate the suffering that accompanies aging.

Having access to life-extending technologies will likely change intergenerational attitudes and relationships. For example, people living in the current generation may well become more concerned with selfishly devoting resources to extending their lives and holding onto their place in society than with investing in the success of the next generation. This selfishness poses the real risk of changing the way people view members of future generations. It is conceivable that people living in the current generation will begin to view members of the next one as a threat as the next generation consumes already limited resources and seeks to replace the current generation in the workforce and society.

    It is reasonable to think that this animosity will extend in both directions. People in the next generation may view members of the current generation as standing in their way, preventing them from assuming their place in society. It is conceivable that the next generation will believe that people of the current generation have unjustly imposed their will on all future generations. (This discussion merely scratches the surface. For a more detailed analysis of the ethical issues surrounding antiaging technology, check out the book listed below that I cowrote with Kenneth Samples, Humans 2.0.)

    Transhumanism: A False Gospel?
    Can transhumanism truly deliver on its promises of a utopian future and a practical immortality? Cataloging the many ethical concerns surrounding antiaging technologies highlights the real risks of pursuing a transhumanist future. If we don’t carefully consider these concerns, we might create a dystopian, not a utopian, world.

The mere risk of this type of unintended future should give us pause about turning to science and technology for our salvation. Transhumanism exposes the real need in all of us for hope, purpose, and destiny. I submit that the only way that need can ever be fulfilled is through the gospel of Jesus Christ.


    Endnotes
    1. Yafit Hachmo et al., “Hyperbaric Oxygen Therapy Increases Telomere Length and Decreases Immunosenescence in Isolated Blood Cells: A Prospective Trial,” Aging 12, no. 22 (November 18, 2020): 22445–56, doi:10.18632/aging.202188.
    2. Gregory M. Fahy et al., “Reversal of Epigenetic Aging and Immunosenescent Trends in Humans,” Aging Cell (September 8, 2019): e13028, doi:10.1111/acel.13028.
3. Joelle Renstrom, “Is Aging a Disease?,” Slate (March 2, 2020), https://slate.com/technology/2020/03/aging-disease-classification.html.
  • If God Hates Abortion Why Do So Many Occur Spontaneously in Humans?

Nov 25, 2020


    A common fundamentalist argument against abortion is that each human being is granted a soul at the moment of conception, and that destroying that “soul” is equivalent to murder. . . However, there’s some serious problems with the logic of ensoulation at the point of conception. The CDC as well as the March of Dimes and several fertility experts have conducted studies to see exactly how hard it is to carry a pregnancy to term. In general, less than 70% of all fertilized eggs will even implant into the mother’s womb causing pregnancy to continue. From there, there is a 25-50% chance of aborting before you even know you are pregnant. So if you look at it from the fundamentalist point of view, all those little souls are being given a home, only to be miscarried before they even know they are alive. Scientific research has compiled the following information about the rates of naturally aborted pregnancies in human beings (or, if you believe everything happens for a reason, pregnancies aborted by God himself).

    RationalWiki, “Spontaneous Abortion in Humans”

    Miscarriage and Troubling Questions
    Perhaps nothing is more painful and confusing for a woman than when she experiences a miscarriage. My wife and I know this firsthand. Amy’s first pregnancy ended with a miscarriage early in the first trimester. Our joy and excitement were replaced by sadness and an indescribable disappointment. I don’t know if I could ever truly understand how my wife felt then or how she feels now about our loss. We wonder, all these years later, if our first child was a boy or girl. Still, we are so grateful for the wonderful children God did give to us.

    Questions surrounding spontaneous abortions and miscarriages are painful, indeed. But they also expose profound philosophical and theological problems with far-reaching implications for the Christian faith. The high rate of spontaneous abortions during human pregnancies raises questions about God’s goodness and also impacts the creation/evolution controversy and the abortion debate.

• If human beings are made in God’s image—as the crown of creation—wouldn’t a Creator have designed a less flawed, less error-prone process for human reproduction?
• If a Creator made human beings with a soul at the point of conception, why would some of these embryos live ever so briefly before the pregnancy—and their life—comes to an end?
    • If a Creator hates abortion, why is the rate of spontaneous abortions so great?
    • In light of the high rate of spontaneous abortions, why is it so wrong for human beings to voluntarily end a pregnancy?

    Without a doubt, these questions represent a serious challenge to the Christian faith. Fortunately, new scientific insights into embryo mortality and the cause of early miscarriages help address some of these challenging and heart-wrenching concerns, even if other questions remain a troubling mystery.

Before we take a look at these new insights, it is necessary to address a broader concern about the consequences of the constancy of nature’s laws and how this feature impacts the incidence of spontaneous abortions.

    Consequences of the Second Law of Thermodynamics

    Given the complexity of biological systems such as human reproduction, it is unreasonable to think that these processes, no matter how well designed, will perform flawlessly every time. All the more so given the influence the second law of thermodynamics wields.

    As a consequence of this law, errors will inevitably occur—at least, on occasion—during all biological processes. Because of the invariance of the laws of nature, the second law is always in operation. Hence, errors will occur during: (1) fertilization, (2) implantation of the embryo in the uterine wall, and (3) placenta formation and embryo growth and development. These errors lead to spontaneous abortions, miscarriages, and stillbirths.
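
A simple calculation shows why errors are unavoidable in any sufficiently complex process: even if each individual step is extremely reliable, the failures compound. The reliability figures below are purely illustrative.

```python
# If a process requires n sequential steps and each step succeeds
# independently with probability p, the whole process succeeds with
# probability p**n. The numbers are purely illustrative.
for p in (0.9999, 0.99999):
    for n in (1_000, 10_000):
        success = p ** n
        print(f"p = {p}, n = {n:>6}: overall success = {success:.1%}")
# Even 99.99%-reliable steps, repeated 10,000 times, fail ~63% of the time.
```

A process as intricate as human reproduction involves vastly more molecular steps than this, so some rate of failure is a mathematical certainty rather than a sign of shoddy workmanship.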

    While it is tempting to view entropy (the second law of thermodynamics) in a negative light, it is important to recognize that, if not for entropy, life’s existence would be impossible. Entropy enables metabolism and plays the central role in the formation and stability of cell membranes, protein higher-order structures, and the DNA double helix.

    Of course, the unrelenting operation of the laws of nature leads to profound theological and philosophical issues that I have addressed elsewhere. Even if errors are inevitable in biological processes, couldn’t God have somehow designed human reproduction to be less error-prone?

    Fortunately, recent scientific insights help address this issue, beginning with a detailed assessment of early embryo mortality.

    What Is the Actual Rate of Spontaneous Abortions?

    A survey of the scientific literature finds that the reported rates for spontaneous abortions are highly varied. Still, these rates seem to indicate human reproduction is a highly inefficient process with embryo mortality rates:

• before and during implantation—as high as 75%
• within the first six weeks of pregnancy—as high as 80%
• during the first trimester—as high as 70%
• within the first 20 weeks—as high as 50%
• from fertilization to birth—as high as 90%

    But as physiologist Gavin Jarvis from Cambridge University points out, these rates of spontaneous abortions are most certainly exaggerated and find little evidential support. These statistics are based on speculation and imprecise estimates of embryo mortality.1 In an attempt to remedy this problem, Jarvis carried out a careful reassessment of the published data on embryo mortality.

    As part of this assessment, Jarvis concludes that it is impossible to know how many embryos die—or survive—during the first week of pregnancy, from the point of fertilization to the beginning stages of implantation. The earliest point that embryo survival can be realistically studied in a clinical setting is after the first week of pregnancy when the hormone human chorionic gonadotrophin (CG) can be detected. Prior to that point, the rate of embryo loss is merely a guess.

Some biomedical researchers have attempted to estimate embryo mortality during the first week of pregnancy from in vitro fertilization studies. Jarvis argues that these estimates are meaningless. He says it is hard to believe that embryo survival under laboratory conditions would reflect embryo survival rates under natural conditions. In fact, because in vitro fertilization and subsequent embryo growth occur under nonoptimal, nonnatural conditions, embryo mortality is likely much higher in the laboratory than when fertilization and early-stage embryo development take place in vivo. Jarvis notes, “It’s impossible to give a precise figure for how many embryos survive in the first week but in normal healthy women, it probably lies somewhere between 60–90%.”2

Embryo mortality becomes quantifiable after the first week. As it turns out, about 1 in 5 embryos die during implantation. In many of these instances the woman would not be aware she was pregnant, because she would not even miss her period. Once a woman misses her period, only about 10 to 15% of the embryos die before birth. In total, about 70% of embryos make it to live birth once implantation commences and the pregnancy is clinically confirmed from CG levels.
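
These stage-wise figures chain together as simple conditional probabilities. The short calculation below, using only the two rates just quoted, recovers the roughly 70% overall survival figure:

```python
# Chaining the stage-wise figures as conditional probabilities:
# survival from the start of implantation to live birth.
p_implantation = 1 - 0.20               # ~1 in 5 embryos lost during implantation
for post_period_loss in (0.10, 0.15):   # losses after a missed period
    p_total = p_implantation * (1 - post_period_loss)
    print(f"post-period loss {post_period_loss:.0%}: "
          f"{p_total:.0%} of implanting embryos reach live birth")
# -> 72% and 68%, bracketing the "about 70%" figure in the text.
```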

    As Jarvis notes, “Although we can’t be precise, we can avoid exaggeration, and from reviewing the studies that do exist, it is clear that many more [embryos] survive than is often claimed.”3

Even though the rate of spontaneous abortion isn’t as high as often reported, skeptics still have grounds to question the design of human reproduction, viewing it as an error-prone, flawed process. Yet new insight into the causes of spontaneous abortions and miscarriages suggests that a rationale undergirds pregnancy loss, especially during the early stages of pregnancy. In light of this insight, it appears that spontaneous abortions may be rightly understood as a necessary part of the design of human reproduction.

    Why Do Spontaneous Abortions Occur?

Most miscarriages appear to be the result of chromosomal abnormalities. Embryos with damaged chromosomes or an abnormal number of chromosomes often die. Biomedical researchers have discovered that somewhere between 50 and 80% of human embryos produced by in vitro fertilization have at least one cell that displays chromosomal abnormalities. (As mentioned, statistics for in vitro fertilization are not likely to be reliable measures of naturally occurring fertilization, so we need to be cautious about how we interpret this finding.)4 Researchers have also learned that the leading cause of embryo mortality during in vitro fertilization appears to be associated with chromosomal abnormalities.

As I noted, these abnormalities inevitably arise as a consequence of the complexity of human reproduction and the second law of thermodynamics. In light of this inevitability, biomedical investigators now think that spontaneous abortions serve as the means to prevent embryos with chromosomal abnormalities from developing once they begin the process of implantation.

By studying the interactions between embryos created via in vitro fertilization and cells cultured from the endometrium (the cell layers that line the uterine wall), investigators have discovered that when healthy embryos are introduced to endometrial cells in a Petri dish, the endometrial cells cluster around the embryo, releasing chemicals that promote implantation. On the other hand, endometrial cells eschew embryos with chromosomal abnormalities, halting the release of chemicals that prompt implantation.5 These investigators also discovered that endometrial cells exposed to embryos with chromosomal abnormalities underwent a stress response, whereas healthy embryos activated gene networks in the endometrial cells that led to the production of metabolic enzymes and the secretion of implantation factors. Researchers confirmed this result by exposing the uteri of mice to cell culture media that had been used to grow abnormal human embryos; the mouse cells responded in vivo the same way the human cells did in vitro.

    In other words, it appears as if the endometrium serves as a gatekeeper rejecting embryos with chromosomal abnormalities and embracing developmentally viable embryos. Because the rejection of abnormal embryos happens so early in the pregnancy, most women are unaware that they were pregnant.

Ironically, some researchers believe the widespread occurrence of miscarriages actually contributed to the success of our species. Compared to other mammals, humans have an unusually high rate of spontaneous abortions (even when we consider Jarvis’s revised estimates). For the most part, humans give birth to a single child that requires nine months of gestation. Other mammals have shorter pregnancies, some birthing litters. For these mammals, a process that allows a few abnormal embryos to grow and develop has relatively little consequence, because a significant number of the litter will be healthy. But for humans, allowing a single ill-fated pregnancy to go to full term is a flawed strategy. As biologist Shawn Chavez notes, “In the case of animals that have litters, maybe they make 10 embryos a month and only eight make it to live birth, but that’s still eight. Whereas we typically can only make one embryo per month, so if it isn’t a good one, maybe it’s better to try again next month.”6 Biologist Tim Bruckner makes a similar point. He states, “According to the theory of natural selection, we want to have children that survive infancy and grow up and have children of their own so they can pass on our genes. There’s this idea that human reproduction is inefficient because so many pregnancies are lost, but overall it may have led to the preservation of our species.”7

    These insights into the cause of miscarriage also contribute to our understanding of infertility. Women with a hypervigilant endometrium may struggle to get pregnant because the endometrium rejects both abnormal and healthy embryos. By the same token, these insights explain why some women are prone to miscarriages. In this case, their endometrium isn’t selective enough, allowing embryos to develop that, biologically speaking, shouldn’t.

    Spontaneous Abortions: A Necessary Design Feature of Human Reproduction

    On the surface, the high rate of spontaneous abortions appears to be a flawed design. In reality, this feature reflects an exquisite biological rationale. Though emotionally brutal, miscarriages are a necessary part of the human reproduction process, one that arises from its complexity and the second law of thermodynamics. If not for the high rate of spontaneous abortions, we would be less likely to have healthy children.

    Though this scientific insight doesn’t answer all the difficult questions associated with spontaneous abortions, it can offer some comfort to know that a rationale exists for pregnancy loss. As science journalist Alice Klein writes:

    As traumatic as my own miscarriage was, it is comforting to learn that it probably wasn’t because of anything I did or anything that was wrong with me. On the contrary, it was most likely due to a random genetic error that I had no control over. Instead of my body failing me, it may have protected me from investing further in a pregnancy that probably wasn’t going to produce a healthy baby.8

    All these years later, I find comfort, too, in knowing that there is a reason why my wife suffered a miscarriage. Still, Amy and I are left with many questions—questions for which we may never receive answers. Though it may sound odd to nonreligious people, in the midst of this uncertainty, we choose to rely on the fact that God is just and merciful and sovereign over all things.

    Resources

    The Fixed Laws of Nature

    The Elegant Design of Human Reproduction

    Disabilities and the Image of God

    Pro-Life Argument

    Endnotes
    1. Gavin E. Jarvis, “Early Embryo Mortality in Natural Human Reproduction: What the Data Say,” F1000Research 5 (June 12, 2017): 2765, doi:10.12688/f1000research.8937.2.
    2. University of Cambridge, “Human Reproduction Likely to Be More Efficient Than Previously Thought,” ScienceDaily (June 13, 2017), sciencedaily.com/releases/2017/06/170613101932.htm.
    3. University of Cambridge, “Human Reproduction.”
    4. Lucia Carbone and Shawn L. Chavez, “Mammalian Pre-Implantation Chromosomal Instability: Species Comparison, Evolutionary Considerations, and Pathological Correlations,” Systems Biology in Reproductive Medicine 61, no. 6 (2015): 321–35, doi:10.3109/19396368.2015.1073406.
    5. Gijs Teklenburg et al., “Natural Selection of Human Embryos: Decidualizing Endometrial Stromal Cells Serve as Sensors of Embryo Quality upon Implantation,” PLoS One 5 (April 21, 2010): e10258, doi:10.1371/journal.pone.0010258; Jan J. Brosens et al., “Uterine Selection of Human Embryos at Implantation,” Scientific Reports 4 (February 6, 2014): 3894, doi:10.1038/srep03894.
    6. Alice Klein, “The Real Reasons Miscarriage Exists—And Why It’s So Misunderstood,” New Scientist (August 5, 2020), https://www.newscientist.com/article/mg24732940-900-the-real-reasons-miscarriage-exists-and-why-its-so-misunderstood/.
    7. Klein, “The Real Reasons Miscarriage Exists.”
    8. Klein, “The Real Reasons Miscarriage Exists.”
  • Have Researchers Developed a Computer Algorithm that Explains the Origin of Life?

    Nov 04, 2020

    As a chemistry major at West Virginia State College during the early 1980s, I was required to take a library course on the chemical literature before I could graduate. During the class, we learned how to use the many library reference materials devoted to cataloging and retrieving the vast amount of chemistry research published in the scientific literature. Included in this list was the multivolume Beilstein’s Handbook of Organic Chemistry.

    Beilstein’s Handbook of Organic Chemistry

    Beilstein’s Handbook consists of hundreds of volumes with entries for well over 10 million compounds. The books that originally made up Beilstein’s Handbook took up rows of shelves in the library, with new volumes added to the collection every few years. Today, the Beilstein volumes are no longer published as printed editions. Instead, the entries are housed online in the Beilstein database, with the old print volumes serving as little more than artifacts of a bygone era in the annals of chemistry.

    Learning to master Beilstein’s Handbook is no easy task. In fact, there are textbooks devoted to teaching chemists how to use this massive database effectively. It is well worth the effort. If you know what you are doing, Beilstein’s Handbook holds the key to quickly finding anything you need to know about any organic compound, provided it has been published somewhere.

    Beilstein Synthesis and the Origin-of-Life Problem

    The utility of Beilstein’s Handbook is endless and its applications far-reaching. In fact, Beilstein’s has even served as the inspiration for origin-of-life chemists seeking to make sense of prebiotic chemistry and chemical evolution. These investigators think that if they can master an approach to prebiotic chemistry called a Beilstein synthesis, then they may well gain key insight into how chemical evolution generated the first life on Earth. In short, a Beilstein synthesis involves a chemical reaction taking place in a single flask with a large number of chemical compounds serving as the reactants. This process is so named as a nod to the 10 million entries in the Beilstein’s database.

    Origin-of-life scientists are interested in Beilstein synthesis because they think that these types of reactions more closely reflect the chemical and physical complexity of early Earth’s environment. Yet, very few origin-of-life researchers have even attempted this type of reaction. Understanding what transpired during a Beilstein synthesis has long been an intractable problem. Until very recently, the analytical capabilities didn’t exist to efficiently and effectively characterize the myriad products that would form during a Beilstein reaction, let alone identify and characterize the different chemical routes in play. For this reason, origin-of-life researchers have focused on singular prebiotic processes involving a limited number of compounds, reacting under highly controlled laboratory conditions. In these types of reactions, it is far easier to make sense of experimental outcomes—but the ease of interpretation comes with a cost.

    Over the last 70 years, the focus on singular sets of reactions and highly controlled conditions has produced some successes for origin-of-life researchers—albeit qualified ones. Focusing on isolated reactions and specific sets of conditions has made it possible for researchers to identify a number of physicochemical processes that could have contributed to the early stages of chemical evolution—at least, in principle. Unfortunately, serious concerns remain about the geochemical relevance of these types of experiments. These reactions perform well in the laboratory, under the auspices of chemists, but significant questions abound about the productivity of the same laboratory processes in the milieu of early Earth. (For a detailed discussion of this problem, I recommend my blog article “Prebiotic Chemistry and the Hand of God.”)

    Additionally, these highly controlled reactions—carried out under pristine conditions—fail to take into account the chemical and physical complexity of early Earth. Undoubtedly, this complexity will impact the physicochemical processes on early Earth, shaping the outcome of plausible prebiotic reaction routes. No one really knows if this complexity will facilitate chemical evolution or frustrate it, but now we have some idea, thanks to the work of a research team from the Polish Academy of Sciences. These investigators moved the origin-of-life research community closer to achieving a prebiotic Beilstein synthesis by developing and deploying a computer algorithm (called Allchemy) to perform computer-assisted organic chemistry designed to mimic the earliest stages of chemical evolution. In effect, they performed an in silico Beilstein reaction with some rather intriguing results.1

    Allchemy and Prebiotic Chemistry

    The researchers used Allchemy to identify the reaction pathways and products that could have formed under plausible early Earth conditions. They initiated the computer-assisted reactions by starting with hydrogen sulfide, water, ammonia, nitrogen, methane, and hydrogen cyanide as the original set of reactants, under the assumption that these small molecules would have been present on early Earth. After the reactions reached completion, the researchers removed any products that possessed an “invalid” chemical structure, then incorporated the remaining reaction products into the original set of starting compounds, and ran the computer-assisted reactions again. They repeated this process 7 times.

    For each generation of reactions, they “computed” reaction pathways and products using a set of 614 rules. These rules were developed by encoding into the algorithm all of the known prebiotic reactions published in the scientific literature. They also encoded plausible conditions of early Earth. As they developed the list of rules, the researchers also paid close attention to chemical functional groups that would be incompatible with one another. As it turns out, it was possible to group these 614 rules into 72 chemical reaction classes. The algorithm began each generation of reactions by identifying suitable reactants for each class of reactions and then “reacting” them to discover the types of products that would form.
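
    To make the iterative scheme concrete, here is a minimal sketch in Python of the generational expansion loop described above. To be clear, this is not Allchemy’s actual code: molecules are represented as plain strings, and the two toy “rules” and the validity filter are hypothetical stand-ins for the 614 encoded reaction rules and the structural checks the paper describes.

    from typing import Callable, List, Set

    Molecule = str
    Rule = Callable[[Set[Molecule]], Set[Molecule]]

    def toy_rule_hydration(pool: Set[Molecule]) -> Set[Molecule]:
        # Placeholder rule: add water across any non-water molecule in the pool.
        if "H2O" not in pool:
            return set()
        return {m + "+H2O" for m in pool if m != "H2O"}

    def toy_rule_hcn_coupling(pool: Set[Molecule]) -> Set[Molecule]:
        # Placeholder rule: couple hydrogen cyanide to nitrogen-free molecules.
        if "HCN" not in pool:
            return set()
        return {m + "+HCN" for m in pool if "N" not in m}

    def is_valid(molecule: Molecule) -> bool:
        # Stand-in for the structural-validity filter; here we simply cap size.
        return len(molecule) < 40

    def expand(seed: Set[Molecule], rules: List[Rule], generations: int = 7) -> Set[Molecule]:
        # Each generation: apply every rule to the current pool, discard
        # invalid products, and fold the survivors back into the pool.
        pool = set(seed)
        for _ in range(generations):
            products: Set[Molecule] = set()
            for rule in rules:
                products |= rule(pool)
            pool |= {p for p in products if is_valid(p)}
        return pool

    seed = {"H2S", "H2O", "NH3", "N2", "CH4", "HCN"}
    print(len(expand(seed, [toy_rule_hydration, toy_rule_hcn_coupling])),
          "molecules after 7 generations")

    The essential design choice is that each generation’s surviving products are folded back into the reactant pool, so the accessible chemical space compounds on itself from one generation to the next.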

    Allchemy Results

    Over the course of 7 generations of reactions, Allchemy produced almost 37,000 chemical compounds from the initial set of 6 gaseous molecules. Of these compounds, only 82 were biotic. And, of this collection, 41 were peptides (formed when amino acids link together).

    As it turns out, the biotic compounds had some unusual properties that distinguished them from the vast collection of abiotic molecules. These compounds:

    • Are more thermodynamically stable
    • Display less hydrophobicity (water-insolubility)
    • Harbor fewer distinct functional groups
    • Possess fewer reactive functional groups
    • Have a balanced number of functional groups that were hydrogen-bond donors and acceptors

    The researchers also discovered that there were a number of distinct pathways that could produce biotic compounds. That is to say, they observed synthetic redundancy for the biotic compounds. They discovered that they could eliminate nearly half of the 72 reaction classes from the algorithm and still generate all 82 biotic compounds. In contrast, the abiotic compounds failed to display synthetic redundancy. Only 8 of the reaction classes could be eliminated and still generate the same suite of abiotic molecules.
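
    One simple way to picture this redundancy test in code is to drop one reaction class at a time and ask whether every target compound is still reachable. The sketch below reuses the toy expand function and rules from the earlier sketch; the class groupings and target set here are illustrative only, and the published analysis is more elaborate (the researchers eliminated classes cumulatively rather than one at a time).

    from typing import Dict, List, Set

    def removable_classes(
        seed: Set[str],
        rule_classes: Dict[str, List],   # class name -> list of rule functions
        targets: Set[str],
        generations: int = 7,
    ) -> Set[str]:
        # A reaction class is "removable" if, with every rule in that class
        # deleted, all target compounds are still produced after a full run.
        removable = set()
        for name in rule_classes:
            remaining = [rule
                         for cls, rules in rule_classes.items() if cls != name
                         for rule in rules]
            if targets <= expand(seed, remaining, generations):
                removable.add(name)
        return removable

    # Illustrative only: two "classes", one target compound.
    classes = {"hydration": [toy_rule_hydration], "HCN coupling": [toy_rule_hcn_coupling]}
    print(removable_classes({"H2S", "H2O", "NH3", "N2", "CH4", "HCN"},
                            classes, targets={"CH4+H2O"}))
    # -> {'HCN coupling'}: dropping it still leaves CH4+H2O reachable.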

    Additionally, the team discovered that some of the compounds generated by the in silico reactions—such as formic acid, cyanoacetylene, and isocyanic acid—served as synthetic hubs, giving rise to a large number of additional products. It is quite possible that the existence of these reaction hubs contributes to the synthetic redundancy of the biotic compounds.

    Over the course of 7 generations of chemical synthesis, the researchers found that the Allchemy algorithm reproduced all of the prebiotic reactions reported to date in the scientific literature. This finding isn’t surprising because the research team used these reactions to help design the rules that guide Allchemy.

    The algorithm also yielded prebiotic reactions that had not previously been discovered by origin-of-life researchers. The research team demonstrated the validity of these pathways, discovered in silico, by successfully executing the same reactions in the laboratory.

    Emergent Properties of Prebiotic Reactions

    One of the most exciting discoveries made by the team from the Polish Academy of Sciences was the set of emergent properties that arose after 7 generations of in silico prebiotic reactions:

    • Unexpectedly, some of the reaction products catalyzed additional chemical reactions, which expanded the range of available prebiotic reactions.
    • Reaction cycles and reaction cascades emerged, with the reaction cycles displaying the property of self-regeneration. In fact, after 7 generations, the chemical space of the prebiotic reactions became densely populated with reaction cycles.
    • Surfactants, such as fatty acids, emerged. They also discovered peptides with surfactant properties. These types of compounds can, in principle, form vesicles that can encapsulate materials yielding proto-cellular structures.

    In many respects, this work reflects science at its best. It ushers in a new era in prebiotic chemistry, demonstrating the power of computer-assisted organic chemistry to shed light on chemical evolution. Coupled with the increased capacity to analyze complex chemical mixtures (thanks to advances in analytical chemistry), Allchemy and other similar software may make it possible to provide meaningful interpretations of real-life Beilstein reactions.

    This work also shows that, in principle, complex chemical mixtures can give rise to some interesting emergent features that have bearing on chemical evolution and the rise of the chemical complexity and organization required for the origin of life. Nevertheless, we are still far from any real understanding of how life could have emerged through evolutionary processes.

    Are the Allchemy Results Geochemically Relevant?

    It is critical to keep in mind that this work involves computer modeling of chemical processes that could have taken place under the putative conditions of early Earth. And, though the algorithm developed by the investigators from the Polish Academy of Sciences is quite sophisticated, it still represents a simplified set of scenarios that, at times, fails to fully and realistically account for our planet’s early conditions.

    For example, some of the starting materials selected for the in silico reactions, such as ammonia and methane, likely weren’t present on the early Earth at appreciable levels. In fact, most planetary scientists believe that Earth’s early atmosphere was composed of water, nitrogen, and carbon dioxide. When this type of gas mixture is used in spark-discharge experiments—such as the ones carried out by legendary origin-of-life researcher Stanley Miller—no organic compounds form. In other words, this gas mixture is unreactive.

    The researchers also ignored the concentration of the reactants. Laboratory studies indicate that many prebiotic reactions require relatively high concentrations of the reactants. Given the expansiveness of early Earth’s environment (particularly, its oceans), it is hard to imagine that the concentrations needed for many prebiotic reactions could ever have been achieved. In other words, it is quite likely that the concentration of prebiotic reactants on Earth was too dilute to be meaningful for chemical evolution.

    The research group also ignored kinetic effects. Not all chemical reactions proceed at the same rate. So, while a chemical reaction may be possible, in principle, in reality it may transpire too slowly to be meaningful. By not taking into account rates of chemical reactions, the researchers undermined the geochemical relevance of their computer-assisted reactions.

    The availability and types of energy sources on early Earth were ignored as well. Many prebiotic reactions require energy sources to trigger them. In many instances these energy sources have to be highly specific to initiate chemical reactions. Energy sources need to be powerful enough to kick-start the reactions, but not so powerful as to cause the breakdown of the reactants and ensuing products.

    The researchers also failed to take into account the stereochemistry of the reactants and products. For this reason, they have failed to shed any light on the homochirality problem, which beleaguers origin-of-life research.

    So, the results of Allchemy have questionable geochemical relevance and, thus, questionable bearing on the origin-of-life issue. Still, the work demonstrates the value of Beilstein reactions—even if performed in silico—and does indicate that emergent properties can, in principle, originate out of chemical complexity.

    It is also worth noting that this work sheds potential light only on the earliest stages of chemical evolution. Even if building block materials are in place, there still needs to be an explanation for the emergence of information-rich biopolymers and stable membrane-bound vesicles that would form protocells. The work of the Polish Academy of Sciences investigators provides clues as to how this might happen, but significant hurdles remain.

    The Homopolymer Problem

    One of the interesting findings of the in silico experiments was that the prebiotic reactions generated around 40 peptides, which became larger and more numerous with each generation. These compounds are formed from amino acids, which combine into “chain-like” molecules and can be viewed as stepping stones to proteins. Some of the peptides produced in the prebiotic pathways display “nonbiological” bonding. This type of bond formation arises when the hydroxyl side group of serine or the carboxylic acid side group of aspartic acid (both amino acids produced in the prebiotic reactions) reacts with the carboxylic acid moiety or amino group bound to the alpha carbon. These nonstandard linkages would render these peptides irrelevant for the production of larger proteins because of the homopolymer problem.

    The late Robert Shapiro first identified this problem a number of years ago. For biopolymers to be able to adopt higher-order three-dimensional structures or to carry out critical functions, such as self-replication, the backbone must consist of identical repeating units. For intermolecular interactions to stabilize the higher-order structure of biopolymers or for these biopolymers to serve as templates for self-replication, the backbone’s structure must repeat without any interruption. This means that the subunit molecules that form the self-replicator must consist of the same chemical class.

    Chemists call chain-like molecules with structurally repetitive backbones homopolymers. (Homo = “same”; poly = “many”; mer = “units”). DNA, RNA, proteins, and the proposed pre-RNA world self-replicators, such as peptide-nucleic acids, are all homopolymers and satisfy the chemical requirements necessary to function as self-replicators.

    Undirected chemical processes can produce homopolymers under carefully controlled, pristine laboratory conditions. However, as Shapiro pointed out, these processes cannot generate these types of molecules under early Earth’s conditions. The chemical compounds found in the complex chemical mixture that origin-of-life researchers think existed on early Earth would interfere with homopolymer formation. Instead, polymers with highly heterogeneous backbone structures would be produced. The likely chemical components of any prebiotic soup would not only interrupt the structural regularity of the biopolymer’s backbone, but they would also prematurely terminate its formation or introduce branch sites.

    The homopolymer problem appears intractable for chemical evolution—at least for replicator-first scenarios. Even though the in silico experiments demonstrated that amino acids can form and even combine into useful peptides, they also demonstrated that undesirable switching, branching, and termination reactions take place. Ironically, the in silico experiments have provided added validation for the homopolymer problem.

    The Membrane Problem

    Another interesting feature of this work is the generation of surfactant molecules, such as fatty acids and amphiphilic peptides. Presumably, these materials could form vesicles with the capacity to encapsulate materials, leading to the first protocells. Yet, this process seems unlikely under the conditions of early Earth. Laboratory studies demonstrate that vesicles assembled from fatty acids are metastable and highly sensitive to fluctuations in environmental conditions. In fact, fatty acid vesicles assemble only under exacting solution conditions and require precise lipid compositions.2

    Again, these insights raise questions about the geochemical relevance of this result. So, even though surfactants can form under prebiotic conditions, their assembly into bilayer-forming vesicles is not a given, by any means.

    Prebiotic Chemistry and the Anthropic Principle

    Even though the sophisticated work from the Polish Academy of Sciences was designed to validate the notion of chemical evolution, the study’s results carry some interesting theistic implications. There are good reasons to think that origin-of-life researchers will never determine how evolutionary pathways generated the first life-forms because of the seemingly intractable problems facing chemical evolution. In the face of these dismal prospects, it becomes hard to argue that mechanism alone can explain the origin of life and the design of core biochemical systems. The conviction that a Creator isn’t necessary stands on shaky ground.

    Still, even if one grants the possibility that life had an evolutionary origin, it is impossible to escape the necessary role a Mind must have played in the appearance of the first life on Earth—at least based on some intriguing results that emerge from the computer-assisted Beilstein reaction. As a case in point, it is provocative that the 82 biotic compounds that formed—a small fraction of the nearly 37,000 compounds generated by the in silico reactions—all share a suite of physicochemical properties that make them unusually stable and relatively unreactive. These qualities allow these materials to persist in the prebiotic setting. It is also intriguing that these 82 compounds display synthetic redundancy, with the capability of being generated by several distinct chemical routes. It is also fortuitous that these compounds possess a just-right set of properties—many of them the same properties that distinguish them from the vast number of abiotic compounds—that makes them ideally suited to survive on early Earth and to serve as building block materials for life.

    In other words, there appear to be constraints on prebiotic chemistry that inevitably lead to the production of key biotic molecules with the just-right properties that make them unusually stable and ideally suited for life. This remarkable coincidence is a bit “suspicious” and highly fortuitous, suggesting a fitness for purpose to the nature of prebiotic chemistry. To put it another way: There is an apparent teleology to prebiotic chemistry. It appears that the laws of physics and chemistry may well have been rigged at the outset to ensure that life’s building blocks naturally emerged under the conditions of early Earth. Could it be that this coincidence reflects the fact that a Mind is behind it all?

    It is remarkable to me, as a biochemist and a Christian, that the more insight we gain into the origin of life, the more the evidence points to the necessary role of a Creator—whether the Creator chose to directly intervene to create the first life-forms or whether he rigged the universe in such a way that life would inevitably emerge because of the design and constraints imposed by the laws of nature.

    It really is a new era in origin-of-life research.

    Endnotes

    1. Agnieszka Wołos et al., “Synthetic Connectivity, Emergence, and Self-Regeneration in the Network of Prebiotic Chemistry,” Science 369 (September 25, 2020): eaaw1955, doi:10.1126/science.aaw1955.
    2. Jacquelyn A. Thomas and F. R. Rana, “Influence of Environmental Conditions, Lipid Composition, and Phase Behavior on the Origin of Cell Membranes,” Origins of Life and Evolution of Biospheres 37 (2007): 267–85, doi:10.1007/s11084-007-9065-6.

  • Do Goosebumps Send a Chill Down the Spine of the Creation Model?

    Sep 02, 2020

    I think few would be surprised to learn that J. K. Rowling’s Harry Potter titles are the best-selling children’s books of all time, but do you know which works take second place in that category? It’s the Goosebumps series by R. L. Stine.

    From 1992 to 1997, Stine wrote and published 62 Goosebumps books. To date, these books have been printed in over 30 languages, with over 400 million copies sold worldwide (not including Stine’s numerous spin-off books), and have been adapted for television and film. Each book in the Goosebumps lineup features different child characters who find themselves in scary situations that often involve encounters with the bizarre and supernatural.

    The title of the series is apropos. Humans get goosebumps whenever we are afraid. We also get goosebumps when we are moved by something beautiful and awe-inspiring. And, of course, we get goosebumps when we are cold.

    Goosebumps are caused by a process dubbed piloerection. When we feel cold, tiny smooth muscles (called the arrector pili) deep within our skin contract. Because these muscles are attached to hair follicles, this contraction causes our hairs to stand on end. Getting goosebumps is one of our quirks as human beings. Most biologists don’t think this phenomenon serves any useful purpose, making it that much more of an oddity. So, if goosebumps have no obvious utility, then why do we experience them at all?

    Evolutionary Explanation for Goosebumps

    Many life scientists view goosebumps as a vestige of our evolutionary history. So, while goosebumps serve no apparent function in modern humans, evolutionary biologists believe they did have utility for our evolutionary ancestors, who were covered with a lot of hair. Presumably, when our ancestors were cold, the contraction of the arrector pili muscles created pockets of air near the surface of the skin when the hairs stood on end, serving as insulation from the chill. And when our ancestors were frightened, contraction of the arrector pili muscles caused their hair to puff up, making them seem larger and more menacing.

    Figure: A cross section of skin. Credit: Wikipedia

    These two behaviors are observed in other mammals and even in some bird species. For evolutionary biologists, this shared behavior attests to our evolutionary connection to animal life.

    In other words, many life scientists see goosebumps as compelling evidence that human beings have an evolutionary origin because: (1) goosebumps serve no useful purpose in humans today and (2) the same physiological process that causes goosebumps in humans causes hair and fur of other animals to stand erect when they feel cold or threatened.

    So, one theological question creationists need to address is this: Why would God create human beings to have a useless response to the cold or to being frightened? For those of us who hold to a creation model/design perspective, goosebumps in humans cause us a bit of a fright. But is there any reason to be scared?

    What if goosebumps in humans serve a useful function? If they do, that function undermines the idea that goosebumps are a vestige of our evolutionary history and, at the same time, makes it reasonable to think that human beings are the handiwork of a Creator. Accordingly, all facets of our anatomy and physiology are intentionally designed for a purpose, including goosebumps. And this is precisely what a research team from Harvard University has discovered. These investigators identified an unexpected function performed by arrector pili muscles, beyond causing hairs to stand erect.1 This new insight suggests a reason why humans get goosebumps, making it reasonable to interpret this physiological feature of human beings within a creation model/design framework.

    Multiple Roles of the Arrector Pili Muscle

    To carry out its function, the arrector pili muscle forms an intimate association with nerves in the sympathetic nervous system. This component of the nervous system contributes to homeostasis, allowing the bodies of animals (including humans) to maintain constant and optimal conditions. As part of this activity, animals receive sensory input from their surroundings and respond to environmental changes. So, in this case, when a mammal experiences cold, the sympathetic nervous system transmits the sensation to the arrector pili muscles, causing them to contract and helping the animal stay warm.

    Recently, the Harvard research team, working with mice, discovered that the arrector pili muscle also plays a structural role, with the individual nerve fibers of the sympathetic nervous system wrapping around the muscle. This architecture positions the nerves next to a bed of stem cells near hair follicles, providing the sympathetic nervous system with a direct connection to the hair follicle stem cells.

    Normally, the hair follicle stem cells are in a quiescent (inactive) state. Under conditions of prolonged cold, however, the sympathetic nerves release the neurotransmitter norepinephrine. This release stimulates the stem cells to replicate and develop into new hair. In other words, the interplay between the arrector pili and the sympathetic nerves provides both a short-term (contraction of the arrector pili) and a long-term (hair growth) response to cold.

    The researchers discovered that when they removed the arrector pili muscles the sympathetic nerves retracted, losing their connection to the hair follicle stem cells. In the retracted state, the sympathetic nerves could not stimulate the activity of the hair follicle stem cells. In short, by functioning as scaffolding, the arrector pili plays an integral role in coupling stem cell regeneration—and, hence, hair growth—to changes in the environment.

    Goosebumps and the Case for Creation

    In mammals (which have a coat of fur or bodies heavily covered with hair), the dual role played by the arrector pili muscles in mounting both rapid and long-term responses to the cold highlights the elegance, sophistication, and ingenuity of biological systems—features befitting the work of a Creator. But does this insight have any bearing on why humans experience goosebumps if they are created by God?

    If the arrector pili muscles served no true function, evolutionary theory predicts that they should atrophy, maybe even disappear. Yet, the work of the Harvard scientists makes it plain that if the arrector pili muscles became more diminutive or were lost, it could very well compromise the overall function of the sympathetic nervous system in human skin, because the scaffolding for nerves of the sympathetic system would be lost.

    The recognition that the arrector pili muscles prevent the sympathetic nerves from retracting away from hair follicles in mice suggests that this muscle functions in the same way in human skin. In mice and other mammals, the positioning of the sympathetic nerve is critical to stimulate the growth of new hair in response to ongoing exposure to cold. The same should be true in humans. Still, it is not clear at this juncture if hair growth in humans under these conditions would have any real benefit. On the other hand, there is no evidence to the contrary. We don’t know.

    What we do know is that without the arrector pili muscles the sympathetic nerves would lose their positioning anchor in human skin. It seems perfectly reasonable to think that the proper positioning of the sympathetic nerve in the skin, in general, plays an overarching role in communicating changes in the environment to our bodies, helping us to maintain a homeostatic state.

    In other words, because the muscle serves multiple purposes, it helps explain why these intact, fully functional muscles are found in human skin, with goosebumps produced as a by-product of the arrector pili’s association with hair follicles and sympathetic nerves. And who knows, maybe these muscles have added functions yet to be discovered.

    There may be other reasons why we get goosebumps. They help us to pay close attention to the happenings in our environment. And, of course, this heightened awareness provides a survival advantage. On top of that use, goosebumps also provide social cues to others, signaling to them that we are cold or frightened, with the hope that these cues would encourage them to step in and help us—again, a survival advantage.

    The cold truth is this: gaining a better understanding about the anatomy and physiology of the skin makes goosebumps less frightening for those of us who embrace a creation model approach to humanity’s origin.

    Endnotes
    1. Yulia Shwartz et al., “Cell Types Promoting Goosebumps Form a Niche to Regulate Hair Follicle Stem Cells,” Cell 182, no. 3 (August 6, 2020): 578–93, doi:10.1016/j.cell.2020.06.031.
  • Answering Theological Questions on Neanderthal-Human Interbreeding

    Aug 12, 2020

    Why do we never get an answer

    When we’re knocking at the door

    With a thousand million questions

    About hate and death and war?

    “Question”

    —Justin Hayward

    When I was a teenager in the 1970s, songs by the Moody Blues were a staple on the playlists of most FM rock stations. This group helped pioneer the progressive and art rock genres in the 1960s. Truth be told, they don’t make my list of all-time favorite rock bands, but some of their single releases are among my favorite rock tunes. One of my favorite songs is “Question.” This track appeared on their 1970 album A Question of Balance. The song expresses the frustration that young people felt in the 1960s and 1970s about the conflict in Vietnam, asking profound and difficult questions, but never getting answers.

    As a Christian apologist, I receive difficult and profound questions all the time. My job is to answer them and hope that my responses—though many times incomplete—may bring some resolution. Lately, one topic that generates quite a few questions has to do with interbreeding between modern humans and Neanderthals. And the questions that people raise are difficult and profound.

    • Is it true that modern humans and Neanderthals interbred?
    • If interbreeding took place, what does that mean for the credibility of the biblical account of human origins?
    • Did the children resulting from these interbreeding events have a soul? Did they bear the image of God?

    In a previous article, I tackled a few of the most commonly asked questions about interbreeding and explored the scientific implications for the RTB human origins model. In this piece, I will respond to some of the most commonly asked theological questions connected to interbreeding between modern humans and hominins.

    Is There Biblical Warrant for Interbreeding between Modern Humans and Neanderthals?

    An initial reading of the human origin creation accounts leaves one with the impression that interbreeding between modern humans and Neanderthals lacks biblical support. Yet, a more careful consideration of the biblical text leaves room for this possibility. Along these lines, it is interesting that Genesis 6:1–4 describes the Nephilim as the hybrid offspring of interbreeding between the sons of God and the daughters of men. While this is an extremely difficult passage to interpret, it is clear that modern humans—the descendants of Adam and Eve—interbred outside their line and that it displeased God.

    In effect, the act that led to the Nephilim—and by extension the interbreeding between modern humans and Neanderthals—shouldn’t be surprising. As a result of the fall, modern humans have a depraved nature. The consequence of this depravity came to a head just prior to the flood. Genesis 6:5 relates, “The LORD saw how great the wickedness of the human race had become on the earth, and that every inclination of the thoughts of the human heart was only evil all the time.” One could easily envision that out of this wickedness, humans could have pursued Neanderthals for evil ends, even though these hominins weren’t like us.

    Leviticus 18:23 also makes room for the possibility that modern humans interbred with Neanderthals (and other hominins). This passage condemns and forbids bestiality. Given humanity’s propensity toward such behavior, it is not shocking that humans would interbreed with creatures like Neanderthals (who much more closely resemble humans).

    Did Modern Human-Neanderthal Hybrids Have a Soul?

    Tricky theological issues abound if indeed modern humans interbred with Neanderthals—the chief one being whether the hybrids had a soul. Did they possess the image of God?

    To properly engage this concern, we need to consider the two most prominent theological models for ensoulment from the perspective of the historic Christian faith. It is clear from the genealogies in Genesis 5 that the image of God (along with the consequences of Adam’s sin) has been transmitted to Adam and Eve’s descendants. How does this transmission occur functionally?

    One view, called traducianism, maintains that each individual inherits the immaterial aspect of their being—their spirit or soul, if you will—from both parents, with the souls of their parents blending together to produce a new soul. This spiritual inheritance bears an analogy to biological inheritance in which each individual uniquely derives their physical features from both parents through a “blending” of their genetic material. In traducianism, it is only Adam and Eve who possess souls that were directly created by God.

    Another view, creationism, explains the origin of each person’s soul or spirit as the product of God’s direct creative activity. Just as God created the souls of Adam and Eve, so he creates the immaterial aspect of each person individually, presumably at the time of their conception or shortly thereafter.

    By employing either model, it is possible to conceive of scenarios by which a human-hominin hybrid receives a soul. For example, within the creationist framework, one could envision God creating a soul in the human-Neanderthal hybrid at the time of conception, honoring the fact that one of the parents is an image-bearer and knowing that the hybrid would likely interbreed with other humans.

    In the case of traducianism, as long as one of the parents is an image bearer, the offspring, too, should possess God’s image. In effect, the image of God is indivisible. Either a being has the image of God or it doesn’t. It makes no sense to think of a creature having only half of the image of God.

    So, even if modern humans and Neanderthals did interbreed, it doesn’t invalidate the biblical account of human origins for biblical or theological reasons. While the discovery of interbreeding between modern humans and Neanderthals does stand as a failed prediction of the RTB human origins model, it doesn’t falsify it. Rather, it forces us to revise the model. The good news is that this revision, though it leads to difficult questions, does not violate the teachings of Scripture.

    Resources

    Biological Differences between Humans and Neanderthals

    Archetype Biology

  • Answering Scientific Questions on Neanderthal-Human Interbreeding

    Aug 05, 2020

    So don’t ask me no questions
    And I won’t tell you no lies
    And don’t ask me about my business
    And I won’t tell you good-bye

    “Don’t Ask Me No Questions”

    —Ronnie Van Zant and Gary Robert Rossington

    One of my favorite rock bands of all time is Lynyrd Skynyrd. (That’s right…Skynyrd, baby!) I know their musical catalog forward and backward. I don’t know if it is a good thing or not, but I am conversant with the history of most of the songs recorded by the band’s original lineup.

    “Don’t Ask Me No Questions” was the first single released from their second studio album, Second Helping. The album also included “Sweet Home Alabama.” When juxtaposed with the success of “Sweet Home Alabama,” it’s ironic that “Don’t Ask Me No Questions” never even broke the charts.

    An admonition to family and friends not to pry into their personal affairs, this song describes the exhaustion the band members felt after spending months on tour. All they want is peace and respite when they return home. Instead, they find themselves continuously confronted by unrelenting and inappropriate questions about the rock ‘n’ roll lifestyle.

    As a Christian apologist, people ask me questions all the time. Yet, rarely do I find the questions annoying and inappropriate. I am happy to do my best to answer most of the questions asked of me—even the snarky ones posed by internet trolls. As of late, one topic that comes up often is interbreeding between modern humans and Neanderthals:

    • Is it true that modern humans and Neanderthals interbred?
    • If interbreeding took place, what does that mean for the credibility of the biblical account of human origins?
    • Did the children resulting from these interbreeding events have a soul? Did they bear the image of God?

    Recently, an international team of investigators looking to catalog Neanderthal genetic contributions surveyed a large sampling of Icelander genomes. This work generated new and unexpected insights about interbreeding between hominins and modern humans.1

    No lie.

    It came as little surprise to me when the headlines announcing this discovery triggered another round of questions about interbreeding between modern humans and Neanderthals. I will address the first two questions above in this article and the third one in a future post.

    RTB’s Human Origins Model in 2005

    To tell the truth, for a number of years I resisted the idea that modern humans interbred with Neanderthals and Denisovans. When Hugh Ross and I published the first edition of our book, Who Was Adam? (2005), there was no real evidence that modern humans and Neanderthals interbred. We took this absence of evidence as support for the RTB human origins model.

    According to our model, Neanderthals have no evolutionary connection to modern humans. The RTB model posits that the hominins, such as Neanderthals and Denisovans, were creatures made by God that existed for a time and went extinct. These creatures had intelligence and emotional capacity (like most mammals), which enabled them to establish a culture. However, unlike modern humans, these creatures lacked the image of God. Accordingly, they were cognitively inferior to modern humans. In this sense, the RTB human origins model regards the hominins in the same vein as the great apes: intelligent, fascinating creatures in their own right that share some biological and behavioral attributes with modern humans (reflecting common design). Yet, no one would confuse a great ape and a modern human because of key biological distinctions and, more importantly, because of profound cognitive and behavioral differences.

    When we initially proposed our model, we predicted that the biological differences between modern humans and Neanderthals would have made interbreeding unlikely. And if they did interbreed, then these differences would have prohibited the production of viable, fertile offspring.

    Did Humans and Neanderthals Interbreed?

    In 2010, researchers produced a rough draft sequence of the Neanderthal genome and compared it to modern human genomes. They discovered a closer statistical association of the Neanderthal genome with those from European and Asian people groups than with genomes from African people groups.2 The researchers maintained that this effect could be readily explained if a limited number of interbreeding events took place between humans and Neanderthals in the eastern portion of the Middle East, roughly 45,000 to 80,000 years ago, just as humans began to migrate around the world. This would explain why non-African populations display what appears to be a 1 to 4 percent genetic contribution from Neanderthals while African people groups have no contribution whatsoever.

    At that time, I wasn’t entirely convinced that modern humans and Neanderthals interbred because there were other ways to explain the statistical association. Additionally, studies of Neanderthal genomes indicate that these hominins lived in small insular groups. At that time, I argued that the low population densities of Neanderthals would have greatly reduced the likelihood of encounters with modern humans migrating in small populations. It seemed to me that it was unlikely that interbreeding occurred.

    Other studies demonstrated that Neanderthals most likely were extinct before modern humans made their way into Europe. Once again, I argued that the earlier extinction of Neanderthals makes it impossible for them to have interbred with humans in Europe. Extinction also raises questions about whether the two species interbred at all.

    The Case for Interbreeding

    Despite these concerns, in the last few years I have become largely convinced that modern humans and Neanderthals interbred. Studies such as the one cataloging the Neanderthal contribution to the genomes of Icelanders leave me little choice in the matter.

    Thanks to the deCODE project, the genome sequences for nearly half the Icelandic population have been determined. An international team of collaborators made use of this data set, analyzing over 27,500 Icelander genomes for Neanderthal contribution using a newly developed algorithm. They detected over 14.4 million fragments of Neanderthal DNA in their data set. Of these, 112,709 were unique sequences that collectively constituted 48 percent of the Neanderthal genome.

    This finding has important implications. Even though individual Icelanders have about a 1 to 4 percent Neanderthal contribution to their genomes, the precise contribution differs from person to person. And when these individual contributions are combined, they yield Neanderthal DNA sequences that cover nearly 50 percent of the Neanderthal genome. This finding aligns with previous studies which demonstrate that, collectively, across the human population Neanderthal sequences are distributed throughout 20 percent of the human genome. And 40 percent of the Neanderthal genome can be reconstructed from Neanderthal sequences found in a sampling of Eurasian genomes.3
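
    For readers curious how fragments scattered across thousands of genomes get combined into a single coverage figure, the calculation amounts to taking the union of the fragments’ coordinates along the Neanderthal genome. Here is a minimal sketch in Python, using made-up fragment coordinates and a toy genome length rather than the study’s data:

    from typing import List, Tuple

    def covered_fraction(fragments: List[Tuple[int, int]], genome_length: int) -> float:
        # Sort fragments by start coordinate, merge overlapping intervals,
        # then divide the total covered bases by the genome length.
        merged: List[List[int]] = []
        for start, end in sorted(fragments):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)  # overlap: extend interval
            else:
                merged.append([start, end])
        covered = sum(end - start for start, end in merged)
        return covered / genome_length

    # Toy data: fragments from several individuals jointly cover more of the
    # genome than any single individual's contribution.
    fragments = [(0, 40), (30, 90), (200, 260), (400, 420)]
    print(f"{covered_fraction(fragments, 1000):.1%} of the toy genome covered")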

    Adding to this evidence for interbreeding are studies that characterized ancient DNA recovered from several modern human fossil remains unearthed in Europe, dating between about 35,000 and 45,000 years in age. The genomes of these ancient modern humans contain much longer stretches of Neanderthal DNA than what’s found in contemporary modern humans, which is exactly what would be expected if modern humans interbred with these hominins.4

    As I see it, interbreeding is the only way to make sense of these results.

    Are Humans and Neanderthals the Same Species?

    Because the biological species concept (BSC) defines a species as an interbreeding population, some people argue that modern humans and Neanderthals must belong to the same species. This perspective is common among young-earth creationists who see Neanderthals as a subset of humanity.

    This argument fails to take into account the limitations of the BSC, one being the phenomenon of hybridization. Mammals that belong to separate species have been known to interbreed and produce viable—even fertile—offspring called hybrids. For example, lions and tigers in captivity have interbred successfully, yet the two parent animals are still considered separate species. I would argue that the concept of hybridization applies to the interbreeding that took place between modern humans and Neanderthals.

    Even though it appears that modern humans and Neanderthals interbred, other lines of evidence indicate that these two hominins were distinct species. Significant anatomical differences exist between the two. The most profound difference is skull anatomy and, consequently, brain structure.

    Figure: Anatomical Differences between Human and Neanderthal Skulls. Image credit: Wikipedia

    Additionally, Neanderthals possessed a hyper-polar body design, consisting of a stout, barrel-shaped body with shortened limbs to help with heat retention. Neanderthals and modern humans display significant developmental differences as well. Neanderthals, for example, spent minimal time in adolescence compared to modern humans. The two hominins also exhibit significant genetic differences (including differences in gene expression patterns), most notably for genes that play a role in cognition and cognitive development. Most critically, modern humans and Neanderthals display significant behavioral differences that stem from substantial differences in cognitive capacity.

    Along these lines, it is important to note that researchers believe that the resulting human-Neanderthal hybrids lacked fecundity.5 As geneticist David Reich notes, “Modern humans and Neanderthals were at the edge of biological compatibility.”6

    In other words, even though modern humans and Neanderthals interbred, they displayed biological differences extensive enough to justify classifying the two as distinct species, just as the RTB model predicts. The extensive behavioral differences also validate the view that modern humans are exceptional and unique in ways that align with the image of God—again, in accord with RTB model predictions.

    Is the RTB Human Origins Model Invalid?

    It is safe to say that most paleoanthropologists view modern humans and Neanderthals as distinct species (or at least distinct populations that were isolated from one another for 500,000 to 600,000 years). From an evolutionary perspective, modern humans and Neanderthals share a common evolutionary ancestor, perhaps Homo heidelbergensis, and arose as separate species as the two lineages diverged from this ancestral population. In the evolutionary framework, the capacity of Neanderthals and modern humans to interbreed reflects their shared evolutionary heritage. For this reason, some critics have pointed to the interbreeding between modern humans and other hominins as a devastating blow to the RTB model and as clear-cut evidence for human evolution.

    In light of this concern, it is important to recognize that the RTB human origins model readily accommodates the evidence for interbreeding between modern humans and Neanderthals. Instead of reflecting a shared evolutionary ancestry, within a creation model framework, the capacity for interbreeding is a consequence of the biological designs shared by modern humans and Neanderthals.

    The RTB model’s stance that shared biological features represent common design taps into a rich tradition within the history of biology. Prior to Charles Darwin, life scientists, such as the preeminent biologist Sir Richard Owen, routinely viewed homologous systems as manifestations of archetypal designs that resided in the Mind of the First Cause. The RTB human origins model co-opts Owen’s ideas and applies them to the biological features modern humans share with other creatures, including the hominins.

    Without question, the discovery that modern humans interbred with other hominins stands as a failed prediction of the initial version of the RTB human origins model. However, this discovery can be accommodated by revising the model—as is often done in science. Of course, this leads to the next set of questions.

    • Is there biblical warrant to think that modern humans interbred with other creatures?
    • Did the modern human-Neanderthal hybrid have a soul? Did it bear God’s image?

    I will take on these questions in the next article. And I am telling you no lie.

    Resources

    Biological Differences between Humans and Neanderthals

    Archetype Biology

    Endnotes
    1. Laurits Skov et al., “The Nature of Neanderthal Introgression Revealed by 27,566 Icelandic Genomes,” Nature (published online April 22, 2020), doi:10.1038/s41586-020-2225-9.
    2. Fazale Rana with Hugh Ross, Who Was Adam? A Creation Model Approach to the Origin of Humanity, 10-Year Update (Covina, CA: RTB Press, 2015), 301–12.
    3. Sriram Sankararaman et al., “The Genomic Landscape of Neanderthal Ancestry in Present-Day Humans,” Nature 507 (2014): 354–57, doi:10.1038/nature12961; Benjamin Vernot and Joshua M. Akey, “Resurrecting Surviving Neandertal Lineages from Modern Human Genomes,” Science 343 (2014): 1017–21, doi:10.1126/science.1245938.
    4. Rana with Ross, Who Was Adam?, 304–5.
    5. Sankararaman et al., “Genomic Landscape,” 354–57; Vernot and Akey, “Resurrecting Surviving Neandertal Lineages,” 1017–21.
    6. Ewen Callaway, “Modern Human Genomes Reveal Our Inner Neanderthal,” Nature News (January 29, 2014), https://www.nature.com/news/modern-human-genomes-reveal-our-inner-neanderthal-1.14615.
  • Is Cruelty in Nature Really Evil?

    Jul 08, 2020

    How many are your works, Lord!
    In wisdom you made them all;
    the earth is full of your creatures.

    Psalm 104:24

    I don’t remember who pointed out thedarksideofnature Instagram account to me, but their description was intriguing enough that I had to check it out. After perusing a few posts, I ended up adding myself to the list of followers.

    I can’t say I enjoy the photos and videos posted by thedarksideofnature—which depict nature “red in tooth and claw”—but I do find them mesmerizing. Their website states that it is “all about showing the world a different side of nature. A side that may not be the prettiest, but it is the realist.”

    The posts from thedarksideofnature are a stark reminder of the dichotomy in the animal kingdom, simultaneously beautiful and brutal, highlighting the majesty and the power—and danger—of the world of nature. For many people the beauty, majesty, and power of nature evince a Creator’s handiwork. For others, nature’s brutality serves as justifiable cause for rejecting God’s existence. Why would an all-powerful, all-knowing, and all-good God create a world in which animal pain and suffering appears to be gratuitous?

    Perhaps nothing exemplifies the seemingly senseless cruelty of nature more than the widespread occurrence of filial (relating to offspring) cannibalism and filial abandonment among animals. Many animals eat their young or consume eggs after laying them. Others abandon their young, condemning them to certain death.

    What an unimaginably cruel feature of nature. Why would God create animals that eat their offspring or abandon their young?

    Is Cruelty in Nature Really Evil?

    What if there are good reasons for God to permit pain and suffering to exist in the animal kingdom? Scientific research seems to offer several reasons.

    For example, some studies reveal purpose for animal pain and suffering. Others demonstrate that animal death promotes biodiversity and ecosystem stability. There are even studies that provide reasons for filial cannibalism and offspring abandonment (see the Resources section below). Most recently, a team of investigators from Europe and Australia provided additional reasons why animals would consume their own offspring.1

    These researchers didn’t set out to study filial cannibalism. Instead, they sought to understand why the comb jelly, native to the Atlantic coast of North America, has been so successful at colonizing new habitats. For example, this invasive species has made its way into the Baltic Sea, which has longer periods of low food availability compared to the comb jelly’s native habitat. The comb jelly has adapted to the food shortage by engaging in behavior that, at first blush, is counterintuitive and seems to be counterproductive. As it enters into the late season, when the prey field begins to empty, the comb jelly makes a massive investment in reproduction, even though the larval offspring have virtually no chance of survival. In fact, after three weeks the comb jelly progeny stop growing, then shrink in size, and die.


    Figure: Comb Jelly. Credit: Shutterstock

    As it turns out, the late-season wave of reproduction explains the comb jelly’s success as an invasive species. The researchers learned that the bloom of offspring serves as a food source for the comb jelly adults, replacing the disappearing prey. In other words, as the comb jelly’s available prey begins to decline in number, the jellies reproduce on a large scale, with the juveniles serving as a nutrient store that lasts for an additional three weeks beyond the collapse of the prey fields. While this short duration may not seem like much, it affords the comb jelly an opportunity to outcompete other marine life during this window of time, making the difference, ecologically, between the flourishing and the decline of the species.

    Instead of viewing the comb jelly’s filial cannibalism in sinister terms, the investigators found it to be an ingenious design. They argue that the comb jelly population appears to be working together as a single organism. According to research team member Thomas Larsen:

    “In some ways, the whole jelly population is acting like a single organism, with the younger groups supporting the adults through times of nutrient stress. Overall, it enables jellies to persist through extreme events and low food periods, colonizing further than climate conditions and other conditions would usually allow.”2

    In effect, the filial cannibalism observed in the comb jelly is no different from the autophagy and apoptosis observed in multicellular organisms, in which individual cells are consumed for the overall benefit of the organism.

    Filial Cannibalism and the Logical Problem of Evil

    These insights into the adaptive value of filial cannibalism for the comb jelly help address the logical problem of natural evil. As part of the problem of natural evil, questions arise about God’s existence and goodness because of brutality in the animal kingdom. Many skeptics view the problem of evil as an insurmountable challenge for Christian theism:

    1. God is all-powerful, all-knowing, and all-good.
    2. Therefore, we would expect good designs in nature.
    3. Yet, nature is brutal, with animals experiencing an undue amount of pain and suffering.
    4. Therefore, God either does not exist or is not good.

    Skeptics argue that this final observation about nature is logically incompatible with God’s existence or, minimally, with God’s goodness. In other words, because of natural evil, either God doesn’t exist or He isn’t good. Either way, Christian theism is undermined. But what if there is a good reason—as research shows—for pain and suffering to exist in nature? We could modify the syllogism this way:

    1. God is all-powerful, all-knowing, and all-good.
    2. Therefore, we would expect good designs in nature.
    3. There are good reasons for God to allow pain and suffering in the animal realm.
    4. Animal death, pain, and suffering are part of nature.

    In other words, if there are good reasons for animal pain and suffering, then God’s existence and goodness are logically coherent with animal pain and suffering. Also, who is to say that pain and suffering in the animal kingdom is excessive? How could anyone possibly know?
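    To see why the logical tension dissolves, it helps to state the argument in bare propositional form. This is my own sketch, offered for illustration: let G stand for “an all-powerful, all-knowing, all-good God exists” and S for “animals suffer gratuitously (without good reason).” The skeptic’s argument is then a modus tollens:

    \[
    (G \rightarrow \neg S), \quad S \quad \therefore \; \neg G
    \]

    The scientific findings surveyed above target the second premise. If filial cannibalism and similar behaviors serve good purposes, then the suffering is not gratuitous, S is false, and the conclusion \(\neg G\) no longer follows.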

    The God of Skepticism or the God of the Bible?

    When considering the problem of natural evil, it is important to distinguish between the God of naturalistic philosophy and the God of the Bible. Though some philosophers may see pain and suffering in the animal realm as a reason to question God’s existence and goodness, the authors of Scripture had a different perspective. They saw animal death as part of the good creation and a reason to praise and celebrate God as Creator and Provider.3 The insights from science about the importance of animal death to ecosystems and the adaptive value of pain and suffering provide the rationale for calling these features of nature “good.”

    All creatures look to you
    to give them their food at the proper time.
    When you give it to them,
    they gather it up;
    when you open your hand,
    they are satisfied with good things.

    When you hide your face,
    they are terrified;
    when you take away their breath,
    they die and return to the dust.
    When you send your Spirit,
    they are created,
    and you renew the face of the ground.

    Psalm 104:27–30

    Resources

    Animal Death and the Problem of Evil

    The Argument from Beauty

    Endnotes
    1. Jamileh Javidpour et al., “Cannibalism Makes Invasive Comb Jelly, Mnemiopsis leidyi, Resilient to Unfavorable Conditions,” Communications Biology 3 (2020): 212, doi:10.1038/s42003-020-0940-2.
  • Meteorite Protein Discovery: Does It Validate Chemical Evolution?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jun 10, 2020

    “I’ll toss my coins in the fountain,

    Look for clovers in grassy lawns

    Search for shooting stars in the night

    Cross my fingers and dream on.”

    —Tracy Chapman

    Like most little kids, I was regaled with tales about genies and wizards who used their magical powers to grant people the desires of their heart. And for a time, I was obsessed with finding some way to make my own wishes become a reality, too. I blew on dandelions, hunted for four-leafed clover, tried to catch fairy insects and looked for shooting stars in the night sky. Unfortunately, nothing worked.

    But, that didn’t mean that I gave up on my hopes and dreams. In time, I realized that sometimes my imagination outpaced reality.

    I still have hopes and dreams today. Hopefully, they are more realistic than the ones I held to in my youth. I even have hopes and dreams about what I might accomplish as a scientist. All scientists do. It’s part of what drives us. Scientists like to solve problems and extend the frontiers of knowledge. And they hope to make discoveries that do that very thing, even if their hopes sometimes outpace reality.

    Recently, a team of biochemists turned to a meteorite—a small piece of a shooting star—with the hope that their dream of finding meaningful insights into the evolutionary origin-of-life question would be realized. Using state-of-the-art analytical methods, the Harvard University researchers uncovered the first-ever evidence for proteins in meteorites.1 Their work is exemplary—science at its best. These biochemists view this discovery as offering an important clue to the chemical evolutionary origin of life. Yet a careful analysis of their claims leads to the nagging suspicion that origin-of-life researchers really aren’t any closer to understanding the origin of life or to realizing their dream.

    Meteorites and the Origin of Life

    Origin-of-life researchers have long turned to meteorites for insight into the chemical evolutionary processes they believe spawned life on Earth. It makes sense. Meteorites represent a sampling of the materials that formed during the time our solar system came together and, therefore, provide a window into the physical and chemical processes that shaped the earliest stages of our solar system’s history and would have played a potential role in the origin of life.

    One group of meteorites that origin-of-life researchers find most valuable toward this end is the carbonaceous chondrites. Some classes of carbonaceous chondrites contain relatively high levels of organic compounds that formed from materials that existed in our early solar system. Many of these meteorites have undergone chemical and physical alterations since the time of their formation. Because of this metamorphosis, these meteorites offer clues about the types of prebiotic chemical processes that could reasonably have transpired on early Earth. However, they don’t give a clear picture of what the chemical and physical environment of the early solar system was like.

    Fortunately, researchers have discovered a unique type of carbonaceous chondrite: the CV3 class. These meteorites have escaped metamorphosis, undergoing virtually no physical or chemical alterations since they formed. For this reason, these meteorites prove to be exceptionally valuable because they provide a pristine, unadulterated view of the nascent solar system.

    The Discovery of Proteins in Meteorites

    Origin-of-life investigators have catalogued a large inventory of organic compounds from carbonaceous chondrites, including some of the building blocks of life, such as amino acids, the constituents of proteins. Even though amino acids have been recovered from meteorites, there have been no reports of amino acid polymers (protein-like materials) in meteorites—at least until the Harvard team began their work.

    Figure: Reaction of Amino Acids to Form Proteins. Credit: Shutterstock

    The team’s pursuit of proteins in meteorites started in 2014 when they carried out a theoretical study that indicated to them that amino acids could polymerize to form protein-like materials in the gas nebulae that condense to form solar systems.2 In an attempt to provide experimental support for this claim, the research team analyzed two CV3 class carbonaceous chondrites: the Allende and Acfer 086 meteorites.

    Instead of extracting these meteorites for 24 hours with water at 100°C (which is the usual approach taken by origin-of-life investigators), the research team adopted a different strategy. They reasoned that the protein-like materials that would form from amino acids in gaseous nebulae would be hydrophobic. (Hydrophobic materials are water-repellent materials that are insoluble in aqueous systems.) These types of materials wouldn’t be extracted by hot water. Alternatively, these hydrophobic protein-like substances would be susceptible to breaking down into their constituent amino acids (through a process called hydrolysis) under the standard extraction method. Either way, the protein-like materials would escape detection.

    So, the researchers employed a Folch extraction at room temperature. This technique is designed to extract materials with a range of solubility properties while avoiding hydrolytic reactions. Using this approach, the Harvard researchers were able to detect evidence for amino acid polymers consisting of glycine and hydroxyglycine in extracts taken from the two meteorites.3

    In their latest work, the research team performed a detailed structural characterization of the amino acid polymers from the Acfer 086 meteorite, thanks to access to a state-of-the-art mass spectrometer with the capability to analyze low levels of materials in the meteorite extracts.

    The Harvard scientists determined that a distribution of amino acid polymer species existed in the meteorite sample. The most prominent one was a duplex formed from two protein-like chains, each 16 amino acids in length and comprised of glycine and hydroxyglycine residues. They also detected lithium ions associated with some of the hydroxyglycine subunits. Bound to both ends of the duplex was an unusual iron oxide moiety formed from two atoms of iron and three oxygen atoms. Lithium atoms were also associated with the iron oxide moiety.

    Researchers are confident that this protein-like material—which they dub hemolithin—is not due to terrestrial contamination for two reasons. First, hydroxyglycine is a non-protein amino acid. Second, the protein duplex is enriched in deuterium—a signature indicating that it stems from an extraterrestrial source. In fact, the deuterium enrichment is so extreme that the researchers think the material may have formed in the gas nebula before it condensed to form our solar system.

    Origin-of-Life Implications

    If these results stand, they represent an important scientific milestone—the first-ever protein-like material recovered from an extraterrestrial source. A dream come true for the Harvard scientists. Beyond this acclaim, origin-of-life researchers view this work as having important implications for the origin-of-life question.

    For starters, this work affirms that chemical complexification can take place in prebiotic settings, providing support for chemical evolution. The Harvard scientists also speculate that the iron oxide complex at the ends of the amino acid polymer chains could serve as an energy source for prebiotic chemistry. This complex can absorb photons of light and, in turn, use that absorbed energy to drive chemical processes, such as cleaving water molecules.

    More importantly, this work indicates that amino acids can form and polymerize in gaseous nebulae prior to the time that these structures collapse and condense into solar systems. In other words, this work suggests that prebiotic chemistry may have been well under way before Earth formed. If so, it means that prebiotic materials could have been endogenous to (produced within) the solar system, forming an inventory of building block materials that could have jump-started the chemical evolutionary process. Alternatively, the formation of prebiotic materials prior to solar system formation opens up the possibility that these critical compounds for the origin of life didn’t have to form on early Earth. Instead, prebiotic compounds could have been delivered to the early Earth by asteroids and comets—again, contributing to the early Earth’s cache of prebiotic substances.

    Does the Protein-in-Meteorite Discovery Evince Chemical Evolution?

    In many respects, the discovery of protein species in carbonaceous chondrites is not surprising. If amino acids are present in meteorites (or gaseous nebulae), it stands to reason that, under certain conditions, these materials will react to form amino acid polymers. But, even so, a protein-like material made up of glycine and hydroxyglycine residues has questionable biochemical utility, and this singular compound is a far cry from the minimal biochemical complexity needed for life. Chemical evolutionary processes must traverse a long road to move from the simplest amino acid building blocks (and the polymers formed from these compounds) to a minimal cell.

    More importantly, it is questionable whether the amino acid polymers in carbonaceous chondrites (or in gaseous nebulae) made much of a contribution to the inventory of prebiotic materials on early Earth. Detection and characterization of the amino acid polymer in the Acfer 086 meteorite was only possible thanks to cutting-edge analytical instrumentation (the mass spectrometer) with the capability to detect and characterize low levels of materials. This requirement means that the proteins found in the Acfer 086 meteorite samples must exist at relatively low levels. Once delivered to the early Earth, these materials would have been further diluted as they were introduced into the environment. In other words, these compounds most likely would have melded into the chemical background of early Earth, making little, if any, contribution to chemical evolution. And once the amino acid polymers dissolved into the early Earth’s oceans, a significant proportion may well have undergone hydrolysis (decomposition) into their constituent amino acids.

    Earth’s geological record affirms my assessment of the research team’s claims. Geochemical evidence from the oldest rock formations on Earth, dating to around 3.8 billion years ago, makes it clear that neither endogenous organic materials nor prebiotic materials delivered to early Earth via comets and asteroids (including amino acids and protein-like materials) made any contribution to the prebiotic inventory of early Earth. If these materials did add to the prebiotic store, the carbonaceous deposits in the oldest rocks on Earth would display a carbon-13 and deuterium enrichment. But they don’t. Instead, these deposits display a carbon-13 and deuterium depletion, indicating that these carbonaceous materials result from biological activity, not extraterrestrial mechanisms.
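    For readers unfamiliar with the notation behind enrichment and depletion claims, isotope abundances are conventionally reported in delta notation. The formula below is the standard geochemical definition; the representative ranges that follow are common textbook values, not figures taken from the studies cited here:

    \[
    \delta^{13}\mathrm{C} = \left( \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
    \]

    Because enzymes preferentially fix the lighter carbon-12, biogenic carbon typically carries negative δ13C values (roughly −20 to −35‰ for most photosynthetic pathways), whereas extraterrestrial organic matter tends to be enriched in carbon-13 and deuterium (positive values). The depleted signatures in Earth’s oldest carbonaceous deposits are what point to a biological, rather than extraterrestrial, origin.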

    So, even though the Harvard investigators accomplished an important milestone in origin-of-life research, the scientific community’s dream of finding a chemical evolutionary pathway to the origin of life remains unfulfilled.


    Endnotes
    1. Malcolm W. McGeoch, Sergei Dikler, and Julie E. M. McGeoch, “Hemolithin: A Meteoritic Protein Containing Iron and Lithium,” (February 22, 2020), preprint, https://arxiv.org/abs/2002.11688.
    2. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide as an Early Topology,” PLoS ONE 9, no. 7 (July 21, 2014): e103036, doi:10.1371/journal.pone.0103036.
    3. Julie E. M. McGeoch and Malcolm W. McGeoch, “Polymer Amide in the Allende and Murchison Meteorites,” Meteoritics and Planetary Science 50 (November 5, 2015): 1971–83, doi:10.1111/maps.12558; Julie E. M. McGeoch and Malcolm W. McGeoch, “A 4641 Da Polymer of Amino Acids in Acfer 086 and Allende Meteorites,” (July 28, 2017), preprint, https://arxiv.org/pdf/1707.09080.pdf.
  • The Argument from Beauty: Can Evolution Explain Our Aesthetic Sense?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 13, 2020

    Lately, I find myself spending more time in front of the TV than I normally would, thanks to the COVID-19 pandemic. I’m not sure investing more time watching TV is a good thing, but it has allowed me to catch up on some of my favorite TV shows.

    One program that is near the top of my favorites list these days is the Canadian sitcom Kim’s Convenience. Based on the 2011 play of the same name written by Ins Choi, this sitcom is about a family of Korean immigrants who live in Toronto, where they run a convenience store.

    In the episode Best Before, Appa, the traditional, opinionated, and blunt family patriarch, argues with his 20-year-old daughter about selling cans of ravioli that have expired. Janet, an art student frustrated by her parents’ commitment to Korean traditions and their tendency to parent her excessively, implores her father not to sell the expired product because it could make people sick. But Mr. Kim asserts that the ravioli isn’t bad, reasoning that the label states, “best before this date. After this date, not the best, but still pretty good.”

    The assessment “not the best, but still pretty good” applies to more than just expired cans of foods. It also applies to explanations.

    Often, competing explanations exist for a set of facts, an event in life’s history, or some phenomenon in nature. And, each explanation has merits and weaknesses. In these circumstances, it’s not uncommon to seek the best explanation among the contenders. Yet, as I have learned through experience, identifying the best explanation isn’t as easy as it might seem. For example, whether or not one considers an explanation to be the “best” or “not the best, but pretty good” depends on a number of factors, including one’s worldview.

    I have found this difference in perspective to be true as I have interacted with skeptics about the argument for God from beauty.

    Nature’s Beauty, God’s Existence, and the Biblical View of Humanity

    Every place we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so compelling that we are often moved to our very core. For theists, nature’s beauty points to the reality of God’s existence.

    As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”1 In other words, the best explanation for the beauty in the world around us is divine agency.

    Image: Richard Swinburne. Credit: Wikipedia

    Moreover, our response to the beauty in the world around us supports the biblical view of human nature. As human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws . . . would bring about aesthetic sensibilities in human beings.”2 But if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty. In other words, Swinburne and others who share his worldview find God to be the best explanation for the beauty that surrounds us.

    Humanity’s Aesthetic Sense

    Our appreciation of beauty stands as one of humanity’s defining features. And it extends beyond our fascination with nature’s beauty. Because of our aesthetic sense, we strive to create beautiful things ourselves, such as paintings and figurative art. We adorn ourselves with body ornaments. We write and perform music. We sing songs. We dance. We create fiction and tell stories. Much of the art we produce involves depictions of imaginary worlds. And, after we create these imaginary worlds, we contemplate them. We become absorbed in them.

    What is the best explanation for our aesthetic sense? Following Swinburne, I maintain that the biblical view of human nature accounts for our aesthetic sense. For, if we are made in God’s image, then we are creators ourselves. And the art, music, and stories we create arise as a manifestation of God’s image within us.

    As a Christian theist, I am skeptical that the evolutionary paradigm can offer a compelling explanation for our aesthetic sense.

    Though sympathetic to an evolutionary approach as a way to explain our sense of beauty, philosopher Mohan Matthen helps frame the problem confronting the evolutionary paradigm: “But why is this good, from an evolutionary point of view? Why is it valuable to be absorbed in contemplation, with all the attendant dangers of reduced vigilance? Wasting time and energy puts organisms at an evolutionary disadvantage. For large animals such as us, unnecessary activity is particularly expensive.”3

    Our response to beauty includes the pleasure we experience when we immerse ourselves in nature’s beauty, a piece of art or music, or a riveting fictional account. But, the pleasure we derive from contemplating beauty isn’t associated with a drive that supports our survival, such as thirst, hunger, or sexual urges. When these desires are satisfied we experience pleasure, but that pleasure displays a time-dependent profile. For example, it is unpleasant when we are hungry, yet those unpleasant feelings turn into pleasure when we eat. In turn, the pleasure associated with assuaging our hunger is short-lived, soon replaced with the discomfort of our returning hunger.

    In contrast, the pleasure associated with our aesthetic sense varies little over time. The sensory and intellectual pleasure we experience from contemplating things we deem beautiful continues without end.

    On the surface it appears our aesthetic sense defies explanation within an evolutionary framework. Yet, many evolutionary biologists and evolutionary psychologists have offered possible evolutionary accounts for its origin.

    Evolutionary Accounts for Humanity’s Aesthetic Sense

    Evolutionary scenarios for the origin of human aesthetics adopt one of three approaches, viewing it as either (1) an adaptation, (2) an evolutionary by-product, or (3) the result of genetic drift.4

    1. Theories that involve adaptive mechanisms claim our aesthetic sense emerged as an adaptation that assumed a central place in our survival and reproductive success as a species.

    2. Theories that view our aesthetic sense as an evolutionary by-product maintain that it is the accidental, unintended consequence of other adaptations that evolved to serve other critical functions—functions with no bearing on our capacity to appreciate beauty.

    3. Theories that appeal to genetic drift consider our aesthetic sense to be the accidental, chance outcome of evolutionary history that just happened upon a gene network that makes our appreciation of beauty possible.

    For many people, these evolutionary accounts function as better explanations for our aesthetic sense than one relying on a Creator’s existence and role in creating a beautiful universe, including creatures who bear his image and are designed to enjoy his handiwork. Yet, for me, none of the evolutionary approaches seem compelling. The mere fact that a plethora of differing scenarios exist to explain the origin of our aesthetic sense indicates that none of these approaches has much going for it. If there truly was a compelling way to explain the evolutionary origin of our aesthetic sense, then I would expect that a singular theory would have emerged as the clear front-runner.

    Genetic Drift and Evolutionary By-Product Models

    In effect, evolutionary models that regard our aesthetic sense as an unintended by-product or the consequence of genetic drift are largely untestable. And, of course, this concern prompts the question: Are any of these approaches genuinely scientific explanations?

    On top of that, both types of scenarios suffer from the same overarching problem; namely, human activities that involve our aesthetic sense are central to almost all that we do. According to evolutionary psychologists John Tooby and Leda Cosmides:

    Aesthetically-driven activities are not marginal phenomena or elite behavior without significance in ordinary life. Humans in all cultures spend a significant amount of time engaged in activities such as listening to or telling fictional stories, participating in various forms of imaginative pretense, thinking about imaginary worlds, experiencing the imaginary creations of others, and creating public representations designed to communicate fictional experiences to others. Involvement in fictional, imagined worlds appears to be a cross-culturally universal, species-typical phenomenon . . . Involvement in the imaginative arts appears to be an intrinsically rewarding activity, without apparent utilitarian payoff.5

    As human beings we prefer to occupy imaginary worlds. We prefer absorbing ourselves in the beauty of the world or in the creations we make. Yet, as Tooby and Cosmides point out, obsession with the imaginary world detracts from our survivability.6 The ultimate rewards we receive should be those leading to our survival and reproductive success and these rewards should come from the time we spend acquiring and acting on true information about the world. In fact, we should have an appetite for accurate information about the world and a willingness to cast aside false, imaginary information.

    In effect, our obsession with aesthetics could properly be seen as maladaptive. It would be one thing if our obsession with creating and admiring beauty were an incidental part of our nature. But, because it is at the forefront of everything we think and do, its maladaptive character should have resulted in its elimination by natural selection. Instead, we see the opposite. Our aesthetic sense is one of our most dominant traits as human beings.

    Evolutionary Adaptation Models

    This significant shortcoming pushes to the forefront evolutionary scenarios that explain our aesthetic sense as adaptations. Yet, generally speaking, these evolutionary scenarios leave much to be desired. For example, one widely touted model explains our attraction to natural beauty as a capacity that helped humans identify the best habitats when we were hunter-gatherers. This aesthetic sense causes us to admire idyllic settings with water and trees. And, because we admire these settings, we want to live in them, promoting our survivability and reproductive success. Yet this model doesn’t account for our attraction to settings that would make it nearly impossible to live, let alone thrive. Such settings include snow-covered mountains with sparse vegetation; the crashing waves of an angry ocean; or the molten lava flowing from a volcanic eruption. These settings are hostile, yet we are enamored with their majestic beauty. This adaptive model also doesn’t explain our attraction to animals that would be deadly to us: lions and tigers or brightly colored snakes, for example.

    Another more sophisticated model explains our aesthetic sense as a manifestation of our ability to discern patterns. The capacity to discern patterns plays a key role in our ability to predict future events, promoting our survival and reproductive success. Our perception of patterns is innate, yet it needs to be developed and trained. So, our contemplation of beauty and our creation of art, music, literature, and the like are perceptual play—fun and enjoyable activities that develop our perceptual skills.7 If this model is valid, then I would expect perceptual play (and consequently fascination with beauty) to be most evident in children and teenagers. Yet we see that our aesthetic sense continues into adulthood. In fact, it becomes more elaborate and sophisticated as we grow older. Adults are much more likely to spend an exorbitant amount of time admiring and contemplating beauty and creating art and music.

    This model also fails to explain why we feel compelled to develop our perceptual abilities and aesthetic capacities far beyond the basic skills needed to survive and reproduce. As human beings, we are obsessed with becoming aesthetic experts. The drive to develop expert skill in the aesthetic arts detracts from our survivability. This drive for perfection is maladaptive. To become an expert requires time and effort. It involves difficulty—even pain—and sacrifice. It’s effort better spent trying to survive and reproduce.

    At the end of the day, evolutionary models that appeal to the adaptive value of our aesthetic sense, though elaborate and sophisticated, seem little more than evolutionary just-so stories.

    So, what is the best explanation for our aesthetic sense? It likely depends on your worldview.

    Which explanatory model is best? And which is not the best, but still pretty good? If you are a Christian theist, you most likely find the argument from beauty compelling. But, if you are a skeptic, you most likely prefer evolutionary accounts of the origin of our aesthetic sense.

    So, like beauty, the best explanation may lie in the eye of the beholder.


    Endnotes
    1. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    2. Swinburne, The Existence of God, 190–91.
    3. Mohan Matthen, “Eye Candy,” Aeon (March 24, 2014), https://aeon.co/amp/essays/how-did-evolution-shape-the-human-appreciation-of-beauty.
    4. John Tooby and Leda Cosmides, “Does Beauty Build Adaptive Minds? Towards an Evolutionary Theory of Aesthetics, Fiction and the Arts,” SubStance 30, no. 1&2 (2001): 6–27, doi:10.1353/sub.2001.0017.
    5. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
    6. Tooby and Cosmides, “Does Beauty Build Adaptive Minds?”
    7. Matthen, “Eye Candy.”
  • Another Disappointment for the Evolutionary Model for the Origin of Eukaryotic Cells?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 29, 2020

    We all want to be happy.

    And there is no shortage of advice on what we need to do to lead happy, fulfilled lives. There are even “experts” who offer advice on what we shouldn’t do, if we want to be happy.

    As a scientist, there is one thing that makes me (and most other scientists) giddy with delight: It is learning how things in nature work.

    Most scientists have a burning curiosity to understand the world around them, me included. Like most scientists, I derive an enormous amount of joy and satisfaction when I gain insight into the inner workings of some feature of nature. And, like most in the scientific community, I feel frustrated and disappointed when I don’t know why things are the way they are. Side by side, this combination of joy and frustration serves as one of the driving forces for my work as a scientist.

    And, because many of the most interesting questions in science can appear at times to be nearly impenetrable mysteries, new discoveries typically bring me (and most other scientists) a mixture of hope and consternation.

    Trying to Solve a Mystery

    These mixed emotions are clearly evident in the life scientists who strive to understand the evolutionary origin of complex, eukaryotic cells. As science journalist Carl Zimmer rightly points out, the evolutionary process that produced eukaryotic cells from simpler microbes stands as “one of the deepest mysteries in biology.”1 And while researchers continue to accumulate clues about the origin of eukaryotic cells, they remain stymied when it comes to offering a robust, reliable evolutionary account of one of life’s key transitions.

    The leading explanation for the evolutionary origin of eukaryotic cells is the endosymbiont hypothesis. On the surface, this idea appears to be well evidenced. But digging a little deeper into the details of this model exposes gaping holes. And each time researchers present new understanding about this presumed evolutionary transition, it exposes even more flaws with the model, turning the joy of discovery into frustration, as the latest work by a team of Japanese microbiologists attests.2

    Before we unpack the work by the Japanese investigators and its implications for the endosymbiont hypothesis, a quick review of this cornerstone idea in evolutionary theory is in order. (If you are familiar with the endosymbiont hypothesis and the evidence in support of the model, please feel free to skip ahead to The Discovery of Lokiarchaeota.)

    The Endosymbiont Hypothesis

    According to this idea, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe.

    Much of the endosymbiont hypothesis centers around the origin of the mitochondrion. Presumably, this organelle started as an endosymbiont. Evolutionary biologists believe that once engulfed by the host cell, this microbe took up permanent residency, growing and dividing inside the host. Over time, the endosymbiont and the host became mutually interdependent, with the endosymbiont providing a metabolic benefit for the host cell, such as supplying a source of ATP. In turn, the host cell provided nutrients to the endosymbiont. Presumably, the endosymbiont gradually evolved into an organelle through a process referred to as genome reduction. This reduction resulted when genes from the endosymbiont’s genome were transferred into the genome of the host organism.

    Figure 1: A Depiction of the Endosymbiont Hypothesis. Image credit: Shutterstock

    Evidence for the Endosymbiont Hypothesis

    At least three lines of evidence bolster the hypothesis:

    • The similarity of mitochondria to bacteria. Most of the evidence for the endosymbiont hypothesis centers around the fact that mitochondria are about the same size and shape as a typical bacterium and have a double-membrane structure like that of gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.
    • Mitochondrial DNA. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.
    • The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider cardiolipin a signature lipid for mitochondria and another relic from its evolutionary past.

    The Discovery of Lokiarchaeota

    Evolutionary biologists have also developed other lines of evidence in support of the endosymbiont hypothesis. For example, biochemists have discovered that the genetic core (DNA replication and the transcription and translation of genetic information) of eukaryotic cells resembles that of the Archaea. This similarity suggests to many biologists that a microbe belonging to the archaeal domain served as the host cell that gave rise to eukaryotic cells.

    Life scientists think they may have made strides toward identifying the archaeal host. In 2015, a large international team of collaborators reported the discovery of Lokiarchaeota, a new phylum belonging to the Archaea. This phylum groups with eukaryotes on the evolutionary tree. Analysis of Lokiarchaeota genomes reveals the presence of genes that encode the so-called eukaryotic signature proteins (ESPs), proteins previously thought to be unique to eukaryotic organisms.3

    As exciting as the discovery has been for evolutionary biologists, it has also been a source of frustration. Researchers didn’t discover this group of microbes by isolating them and culturing them in the lab. Instead, they discovered them by recovering DNA fragments from the environment (a hydrothermal vent system in the Atlantic Ocean called Loki’s Castle, after Loki, the ancient Norse god of trickery) and assembling the fragments into genome sequences. Through this process, they learned that Lokiarchaeota correspond to a new group of Archaea, called the Asgard archaea. The reconstructed Lokiarchaeota “genome” is low quality (1.4-fold coverage) and incomplete (8 percent of the genome is missing).

    Mystery Solved?

    So, without actual microbes to study, the best that life scientists could do was infer the cell biology of Lokiarchaeota from its genome. But this frustrating limitation recently turned into excitement when a team of Japanese microbiologists isolated and cultured the first microbe belonging to this group of archaeons, dubbed Prometheoarchaeum syntrophicum. It took the researchers nearly 12 years of laboratory work to isolate this slow-growing microbe from sediments in the Pacific Ocean and culture it in the laboratory. (It takes 14 to 25 days for the microbe to double.) But the effort is now paying off, because the research team is able to get a glimpse into what many life scientists believe to be a representative of the host microbe that spawned the first eukaryotic cells.
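    Some quick arithmetic, offered purely as an illustration (the numbers are mine, not the paper’s), shows why cultivation took so long. Growing an enrichment culture from a few cells to workable numbers (on the order of \(10^9\) cells, or roughly 30 doublings) at a 20-day doubling time requires

    \[
    t \approx 30 \times 20\ \text{days} = 600\ \text{days} \approx 1.6\ \text{years}
    \]

    for a single round of enrichment. Repeated rounds of dilution, transfer, and isolation at that pace can easily consume a decade.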

    P. syntrophicum is spherically shaped and about 550 nm in size. In culture, this microbe forms aggregates around an extracellular polymeric material it secretes. It also has unusual membrane-based tentacle-like protrusions (of about 80 to 100 nm in length) that extend from the cell surface.

    Researchers were unable to produce a pure culture of P. syntrophicum because it forms a close association with other microbes. The team learned that P. syntrophicum lives a syntrophic lifestyle, meaning that it forms interdependent relationships with other microbes in the environment. Specifically, P. syntrophicum produces hydrogen and formate as metabolic by-products that, in turn, are scavenged for nutrients by partner microbes. Researchers also discovered that P. syntrophicum consumes amino acids externally supplied in the growth medium. Presumably, this observation means that in the ocean floor sediments, P. syntrophicum feeds on organic materials released by its microbial counterpart.

    P. syntrophicum and Failed Predictions of the Endosymbiont Hypothesis

    Availability of P. syntrophicum cells now affords researchers an unprecedented chance to study a microbe that they believe stands in as a representative for the archaeal host in the endosymbiont hypothesis. Has the mystery been solved? Instead of affirming the scientific predictions of leading versions of the endosymbiont hypothesis, the biology of this organism adds to the frustration and confusion surrounding the evolutionary account. Scientific analysis raises three questions for the evolutionary view:

    • First, this microbe has no internal cellular structures. This observation stands as a failed prediction. Because Lokiarchaeota (and other members of the Asgard archaeons) have a large number of ESPs present in their genomes, some biologists speculated that the Asgardian microbes would have complex subcellular structures. Yet, this expectation has not been realized for P. syntrophicum, even though this microbe has around 80 or so ESPs in its genome.
    • Second, this microbe can’t engulf other microbes. This inability also serves as a failed prediction. Prior to the cultivation of P. syntrophicum, analysis of the genomes of Lokiarchaeota identified a number of genes involved in membrane-related activities, suggesting that this microbe may well have possessed the ability to engulf other microbes. Again, this expectation wasn’t realized for P. syntrophicum. This observation is a significant blow to the endosymbiont hypothesis, which requires the host cell to have cellular processes in place to engulf other microbes.
    • Third, the membranes of this microbe are comprised of typical archaeal lipids and lack the enzymatic machinery to make typical bacterial lipids. This also serves as a failed prediction. Evolutionary biologists had hoped that P. syntrophicum would provide a solution to the lipid divide (see the next section). It doesn’t.

    What Is the Lipid Divide?

    The lipid divide refers to the difference in the chemical composition of the cell membranes found in bacteria and archaea. Phospholipids comprise the cell membranes of both sorts of microbes. But that’s where the similarity ends. The chemical makeup of the phospholipids is distinct in bacteria and archaea.

    Bacterial phospholipids are built around a D-glycerol backbone, which has a phosphate moiety bound to the glycerol in the sn-3 position. Two fatty acids are bound to the D-glycerol backbone at the sn-1 and sn-2 positions. In water, these phospholipids assemble into bilayer structures.

    Archaeal phospholipids are constructed around an L-glycerol backbone (which produces membrane lipids with different stereochemistry than bacterial phospholipids). The phosphate moiety is attached to the sn-1 position of glycerol. Two isoprene chains are bound to the sn-2 and sn-3 positions of L-glycerol via ether linkages. Some archaeal membranes are formed from phospholipid bilayers, while others are formed from phospholipid monolayers.
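    To summarize the contrast just described (the ester-linkage detail for bacterial lipids is a standard biochemical fact, added here for completeness):

    • Bacterial phospholipids: D-glycerol backbone; phosphate at the sn-3 position; two fatty acid chains bound (via ester linkages) at the sn-1 and sn-2 positions; assemble into bilayers.
    • Archaeal phospholipids: L-glycerol backbone; phosphate at the sn-1 position; two isoprene chains bound via ether linkages at the sn-2 and sn-3 positions; assemble into bilayers or monolayers.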

    Presumably, the structural features of the archaeal phospholipids serve as an adaptation that renders them ideally suited to form stable membranes in the physically and chemically harsh environments in which many archaea find themselves.

    The Lipid Divide Frustrates the Endosymbiont Hypothesis

    If the host cell in the endosymbiont evolutionary mechanism is an archaeal cell, it logically follows that the membrane composition of eukaryotic cells should be archaeal-like. As it turns out, this expectation is not met. The cell membranes of eukaryotic cells closely resemble bacterial, not archaeal, membranes.

    Can Lokiarchaeota Traverse the Lipid Divide?

    Researchers had hoped that the discovery of Lokiarchaeota would shed light on the evolutionary origin of eukaryotic cell membranes. In the absence of having actual organisms to study, researchers screened the Lokiarchaeota genome for enzymes that would take part in phospholipid synthesis, with the hopes of finding clues about how this transition may have occurred.

    Based on their analysis, they argued that Lokiarchaeota could produce some type of hybrid phospholipid with features of both archaeal and bacterial phospholipids.4 Still, their conclusion remained speculative at best. The only way to establish Lokiarchaeota membranes as transitional between those found in archaea and bacteria is to perform chemical analysis of the membranes themselves. With the isolation and cultivation of P. syntrophicum, this analysis is possible. Yet its results only serve to disappoint evolutionary biologists, because this microbe has typical archaeal lipids in its membranes and displays no evidence of being capable of making archaeal/bacterial hybrid lipids.

    A New Model for the Endosymbiont Hypothesis?

    Not to be dissuaded by these disappointing results, the Japanese researchers propose a new version of the endosymbiont hypothesis, consistent with P. syntrophicum biology. For this model, they envision the archaeal host entangling an oxygen-metabolizing, ATP-producing bacterium in the tentacle-like structures that emanate from its cellular surface. Over time, the entangled organism forms a mutualistic relationship with the archaeal host. Eventually, the host encapsulates the entangled microbe in an extracellular structure that forms the body of the eukaryotic cell, with the host cell forming a proto-nucleus.

    Though this model is consistent with P. syntrophicum biology, it is highly speculative and lacks supporting evidence. To be fair, the Japanese researchers make this very point when they state, “further evidence is required to support this conjecture.”5

    This work shows how scientific advances help validate or invalidate models. Even though many biologists view the endosymbiont hypothesis as a compelling, well-established theory, significant gaps in our understanding of the origin of eukaryotic cells persist. (For a more extensive discussion of these gaps, see the Resources section.) In my view as a biochemist, some of these gaps are unbridgeable chasms that motivate my skepticism about the endosymbiont hypothesis, specifically, and the evolutionary approach to explaining the origin of eukaryotic cells, generally.

    Of course, my skepticism leads to another question: Is it possible that the origin of eukaryotic cells reflects a Creator’s handiwork? I am happy to say that the answer is “yes.”

    Resources

    Challenges to the Endosymbiont Hypothesis

    In Support of A Creation Model for the Origin of Eukaryotic Cells

    Endnotes
    1. Carl Zimmer, “This Strange Microbe May Mark One of Life’s Great Leaps,” The New York Times (January 16, 2020), https://www.nytimes.com/2020/01/15/science/cells-eukaryotes-archaea.html.
    2. Hiroyuki Imachi et al., “Isolation of an Archaeon at the Prokaryote-Eukaryote Interface,” Nature 577 (January 15, 2020): 519–25, doi:10.1038/s41586-019-1916-6.
    3. Anja Spang et al., “Complex Archaea That Bridge the Gap between Prokaryotes and Eukaryotes,” Nature 521 (May 14, 2015): 173–79, doi:10.1038/nature14447; Katarzyna Zaremba-Niedzwiedzka et al., “Asgard Archaea Illuminate the Origin of Eukaryotic Cellular Complexity,” Nature 541 (January 19, 2017): 353–58, doi:10.1038/nature21031.
    4. Laura Villanueva, Stefan Schouten, and Jaap S. Sinninghe Damsté, “Phylogenomic Analysis of Lipid Biosynthetic Genes of Archaea Shed Light on the ‘Lipid Divide,’” Environmental Microbiology 19 (January 2017): 54–69, doi:10.1111/1462-2920.13361.
    5. Imachi et al., “Isolation of an Archaeon.”
  • How Can DNA Survive for 75 Million Years? Implications for the Age of the Earth

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 15, 2020

    My family’s TV viewing habits have changed quite a bit over the years. It doesn’t seem that long ago that we would gather around the TV, each week at the exact same time, to watch an episode of our favorite show, broadcast live by one of the TV networks. In those days, we had no choice but to wait another week for the next installment in the series.

    Now, thanks to the availability of streaming services, my wife and I find ourselves binge-watching our favorite TV programs from beginning to end, in one sitting. I’m almost embarrassed to admit this, but we rarely sit down to watch TV with the idea that we are going to binge watch an entire season at a time. Usually, we just intend to take a break and watch a single episode of our favorite program before we get back to our day. Inevitably, however, we find ourselves so caught up with the show we are watching that we end up viewing one episode after another, after another, as the hours of our day melt away.

    One program we couldn’t stop watching was Money Heist (available through Netflix). This Spanish TV series is a crime drama that was originally intended to span two seasons. (Because of its popularity, Netflix ordered two more seasons.) Money Heist revolves around a group of robbers led by a brilliant strategist called the Professor. The Professor and his brother, nicknamed Berlin, devise an ambitious, audacious plan to take control of the Royal Mint of Spain in order to print, and then escape with, 2.5 billion euros.

    Because their plan is so elaborate, it takes the team of robbers five months to prepare for their multi-day takeover of the Royal Mint. As you might imagine, their scheme consists of a sequence of ingenious, well-timed, and difficult-to-execute steps requiring everything to come together in the just-right way for their plan to succeed and for the robbers to make it out of the mint with a treasure trove of cash.

    Recently a team of paleontologists uncovered their own treasure trove—a haul of soft tissue materials from the 75-million-year-old fossilized skull fragments of a juvenile duck-billed dinosaur (Hypacrosaurus stebingeri).1 Included in this cache of soft tissue materials were the remnants of the dinosaur’s original DNA—the ultimate paleontological treasure. What a steal!

    This surprising discovery has people asking: How is it possible that DNA could survive for such a long period of time?

    Common wisdom states that DNA shouldn’t survive for more than 1 million years, much less 75 million. Thus, young-earth creationists (YECs) claim that this soft-tissue discovery provides the most compelling reason to think that the earth is young and that the fossil record resulted from a catastrophic global deluge (Noah’s flood).

    But is their claim valid?

    Hardly. The team that made the soft-tissue discovery propose a set of mechanisms and processes that could enable DNA to survive for 75 million years. All it takes is the just-right set of conditions and a sequence of well-timed, just-right events all coming together in the just-right way and DNA will persist in fossil remains.

    Baby Dinosaur Discovery

    The team of paleontologists who made this discovery—co-led by Mary Schweitzer at North Carolina State University and Alida M. Bailleul of the Chinese Academy of Sciences—unwittingly stumbled upon these DNA remnants as part of another study. They were investigating pieces of fossilized skull and leg fragments of a juvenile Hypacrosaurus recovered from a nesting site. Because of the dinosaur’s young age, the researchers hoped to extend the current understanding of dinosaur growth by carrying out a detailed microscopic characterization of these fossil pieces. In one of the skull fragments they observed well-defined and well-preserved calcified cartilage that was part of a growth plate when the juvenile was alive.

    A growth plate is a region in a growing skeleton where bone replaces cartilage. At this chondro-osseous junction, chondrocytes (cells found in cartilage) can be found within lacunae (spaces in the matrix of bone tissues). Here, chondrocytes secrete an extracellular matrix made up of type II collagen and glycosaminoglycans. These cells rapidly divide and grow (a condition called hypertrophy). Eventually, the cells die, leaving the lacunae empty. Afterwards, bone fills in the cavities.

    A growth plate. Image credit: Wikipedia.

    The team of paleontologists detected lacunae in the translucent, well-preserved cartilage of the dinosaur skull fragment. A more careful examination of the spaces revealed several cell-like structures sharing the same lacunae. The team interpreted these cell-like structures as the remnants of chondrocytes. In some instances, the cell-like structures appeared to be doublets, presumably resulting from the final stages of cell division. In the doublets, they observed darker regions that appeared to be the remnants of nuclei and, within the nuclei, dark-colored materials that were elongated and aligned to mirror each other. They interpreted these features as the leftover remnants of chromosomes, which would form condensed structures during the later stages of cell division.

    Given the remarkable degree of preservation, the investigators wondered if any biomolecular remnants persisted within these microscopic structures. To test this idea, they exposed a piece of the fossil to Alcian blue, a dye that stains cartilage of extant animals. The fact that the fossilized cartilage picked up the stain indicated to the research team that soft tissue materials still persisted in the fossils.

    Using an antibody binding assay (an analytic test), the research team detected the remnants of collagen II in the lacunae. Moreover, as a scientific first, the researchers isolated the cell-like remnants of the original chondrocytes. Exposing the chondrocyte remnants to two different dyes (PI and DAPI) produced staining in the cell interior near the nuclei. These two dyes both intercalate between the base pairs that form DNA’s interior region. This step indicated the presence of DNA remnants in the fossils, specifically in the dark regions that appear to be the nuclei.

    Implications of This Find

    This discovery adds to the excitement of previous studies that describe soft tissue remnants in fossils. These types of finds are money for paleontologists because they open up new windows into the biology of extinct life. According to Bailleul:

    “These exciting results add to growing evidence that cells and some of their biomolecules can persist in deep-time. They suggest DNA can preserve for tens of millions of years, and we hope that this study will encourage scientists working on ancient DNA to push current limits and to use new methodology in order to reveal all the unknown molecular secrets that ancient tissues have.”2

    Those molecular secrets are even more exciting and surprising for paleontologists because kinetic and modeling studies indicate that DNA should have completely degraded within the span of 1 million years.
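    A rough back-of-the-envelope calculation shows why. If DNA decay is modeled as a simple first-order process with a half-life on the order of 500 years (a figure reported for DNA in bone at moderate burial temperatures; the numbers here are illustrative only), the surviving fraction after time \(t\) is

    \[
    \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad \frac{N(75\ \mathrm{Myr})}{N_0} = \left(\frac{1}{2}\right)^{75{,}000{,}000/500} = 2^{-150{,}000},
    \]

    a number that is zero for all practical purposes. Even granting far more generous half-lives, essentially no DNA should remain detectable after tens of millions of years, which is why this find demands special preservation mechanisms.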

    The YEC Response

    The surprising persistence of DNA in the dinosaur fossil remains is like bars of gold for YECs, and they don’t want to hoard this treasure for themselves. YECs assert that this find is the “last straw” for the notion of deep time (the view that Earth is 4.5 billion years old and life has existed on it for upwards of 3.8 billion years). For example, YEC author David Coppedge insists that “something has to give. Either DNA can last that long, or dinosaur bones are not that old.”3 He goes on to remind us that “creation research has shown that there are strict upper limits on the survival of DNA. It cannot be tens of millions of years old.”4 For YECs, this type of discovery becomes prima facie evidence that the fossil record must be the result of a global flood that occurred only a few thousand years ago.

    Yet, in my book Dinosaur Blood and the Age of the Earth, I explain why there is absolutely no reason to think that the radiometric dating techniques used to determine the ages of geological formations and fossils are unreliable. The reliability of radiometric dating methods means that there must be mechanisms that work together to promote DNA’s survival in fossil remains. Fortunately, we don’t have to wait for the next season of our favorite program to be released by Netflix to learn what those mechanisms and processes might be.


    Preservation Mechanisms for Soft Tissues in Fossils

    Even though common wisdom says that DNA can’t survive for tens of millions of years, a word of caution is in order. When I worked in R&D for a Fortune 500 company, I participated in a number of stability studies. I quickly learned an important lesson: the stability of chemical compounds can’t be predicted. The stability profile for a material only applies to the specific set of conditions used in the study. Under a different set of conditions chemical stability can vary quite extensively, even if the conditions differ only slightly from the ones employed in the study.

    So, even though researchers have performed kinetic and modeling studies on DNA during fossilization, it’s best to exercise caution before we apply them to the Hypacrosaurus fossils. To say it differently, the only way to know what the DNA stability profile should be in the Hypacrosaurus fragments is to study it under the precise set of taphonomic (burial, decay, preservation) conditions that led to fossilization. And, of course, this type of study isn’t realistic.

    This limitation doesn’t mean that we can’t produce a plausible explanation for DNA’s survival for 75 million years in the Hypacrosaurus fossil fragments. Here are some clues as to why and how DNA persisted in the young dinosaur’s remains:

    • The fossilized cartilage and chondrocytes appear to be exceptionally well-preserved. For this reason, it makes sense to think that soft tissue material could persist in these remains. So, while we don’t know the taphonomic conditions that contributed to the fossilization process, it is safe to assume that these conditions came together in the just-right way to preserve remnants of the biomolecules that make up the soft tissues, including DNA.
    • Soft tissue material is much more likely to survive in cartilage than in bone. The extracellular matrix that makes up cartilage has no vascularization (channels), which makes it less porous and gives it less surface area than bone. Both properties inhibit groundwater and microorganisms from gaining access to the bulk of the soft tissue materials in the cartilage. At the growth plate, cartilage actually has a higher mineral-to-organic ratio than bone. Minerals inhibit the activity of environmental enzymes and microorganisms. Minerals also protect the biomolecules that make up the organic portion of cartilage by serving as adsorption sites that stabilize even fragile molecules. In addition, minerals can form cross-links with biomolecules, and cross-linking slows down the degradation of biopolymers. Finally, because the chondrocytes in the cartilage lacunae were undergoing rapid cell division at the time of the creature’s death, they consumed most of the available oxygen in their local environment. This consumption would have created localized hypoxia (oxygen deficiency) that minimized oxidative damage to the tissue in the lacunae.
    • The preserved biomolecules are not the original, unaltered materials but fragmented remnants that have undergone chemical alteration. Even with the molecules in this altered, fragmented state, many of the assays designed to detect the original, unaltered materials will still produce positive results. For example, the antibody binding assays the research team used to detect collagen II could easily detect small fragmented pieces of collagen. These assays depend upon the binding of antibodies to the target molecule, and the antibody binding site consists of a relatively small region of the molecular target. This feature of antibody binding means that antibodies designed to target collagen II will also bind to peptide fragments only a few amino acids long, as long as those fragments derive from collagen II.
    Antibody binding to an antigen. Image credit: Wikipedia.

    Similarly, the dyes used to detect DNA can bind to double-stranded regions of DNA as short as six base pairs. This feature means that the dye molecules will bind as readily to relatively small fragments derived from the original material as to intact DNA molecules.

    • The biochemical properties of collagen II and condensed chromosomes explain the persistence of this protein and DNA. Collagen is a heavily cross-linked material. Cross-linking imparts a high degree of stability to proteins, accounting for their long-term durability in fossil remains.

    In the later stage of cell division, chromosomes (which consist of DNA and proteins) exist in a highly compact, condensed phase. In this phase, chromosomal DNA would be protected and much more resistant to chemical breakdown than if the chromosomes existed in a more diffuse state, as is the case in other stages of the cell cycle.

    In other words, a confluence of factors worked together to create a set of conditions that allowed small pieces of collagen II and DNA to survive long enough to become entombed within a mineral encasement. At that point in the preservation process, the materials can survive for indefinite periods of time.

    More Historical Heists to Come

    Nevertheless, some people find it easier to believe that a team of robbers could walk out of the Royal Mint of Spain with 2.5 billion euros than to think that DNA could persist in 75-million-year-old fossils. Their disbelief causes them to question the concept of deep time. Yet, it is possible to devise a scientifically plausible scheme to explain DNA’s survival for tens of millions of years, if several factors all work together in the just-right way. This appears to be the case for the duck-billed dinosaur specimen characterized by Schweitzer and Bailleul’s team.

    As this latest study demonstrates, if the just-right sequence of events occurs in the just-right way with the just-right timing, scientists have the opportunity to walk out of the fossil record vault with the paleontological steal of the century.

    It is exciting to think that more discoveries of this type are just around the corner. Stay tuned!

    Resources

    Responding to Young Earth Critics

    Mechanism of Soft Tissue Preservation

    Recovery of a Wide Range of Soft Tissue Materials in Fossils

    Detection of Carbon-14 in Fossils

    Endnotes
    1. Alida M. Bailleul et al., “Evidence of Proteins, Chromosomes and Chemical Markers for DNA in Exceptionally Preserved Dinosaur Cartilage,” National Science Review, nwz206 (January 12, 2020), doi:10.1093/nsr/nwz206, https://academic.oup.com/nsr/advance-article/doi/10.1093/nsr/nwz206/5762999.
    2. Science China Press, “Cartilage Cells, Chromosomes and DNA Preserved in 75 Million-Year-Old Baby Duck-Billed Dinosaur,” Phys.org, posted February 28, 2020, https://phys.org/news/2020-02-cartilage-cells-chromosomes-dna-million-year-old.html.
    3. David F. Coppedge, “Dinosaur DNA Found!”, Creation-Evolution Headlines (website), posted February 28, 2020, https://crev.info/2020/02/dinosaur-dna-found/.
    4. Coppedge, “Dinosaur DNA Found!”
  • No Joke: New Pseudogene Function Smiles on the Case for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 01, 2020

    Time to confess. I now consider myself an evolutionary creationist. I have no choice. The evidence for biological evolution is so overwhelming…

    …Just kidding! April Fool’s!

    I am still an old-earth creationist. Even though the evolutionary paradigm is the prevailing framework in biology, I am skeptical about facets of it. I am more convinced than ever that a creation model approach is the best way to view life’s origin, design, and history. That’s not to say that there isn’t evidence for common descent; there is. Still, even with this evidence, I prefer old-earth creationism for three reasons.

    • First, a creation model approach can readily accommodate the evidence for common descent within a design framework.
    • Second, the evolutionary paradigm struggles to adequately explain many of the key transitions in life’s history.
    • Third, the impression of design in biology is overwhelming—and it’s becoming more so every day.

    And that is no joke.

    Take the human genome as an example. When it comes to understanding its structure and function, we are in our infancy. As we grow in our knowledge and insight, it becomes increasingly apparent that the structural and functional features of the human genome (and the genomes of other organisms) display more elegance and sophistication than most life scientists could have ever imagined—at least, those operating within the evolutionary framework. On the other hand, the elegance and sophistication of genomes is expected for creationists and intelligent design advocates. To put it simply, the more we learn about the human genome, the more it appears to be the product of a Mind.

    In fact, the advances in genomics over the last decade have forced life scientists to radically alter their views of genome biology. When the human genome was first sequenced in 2000, biologists considered most of its sequence elements to be nonfunctional, useless DNA. Now biologists recognize that virtually every class of these so-called junk DNA sequences serves key functional roles.

    If most of the DNA sequence elements in the human genome were truly junk, then I’d agree that it makes sense to view them as evolutionary leftovers, especially because these junk DNA sequences appear in corresponding locations of the human and primate genomes. It is for these reasons that biologists have traditionally interpreted these shared sequences as the most convincing evidence for common descent.

    However, now that we have learned that these sequences are functional, I think it is reasonable to regard them as the handiwork of a Creator, intentionally designed to contribute to the genome’s biology. In this framework, the shared DNA sequences in the human and primate genomes reflect common design, not common descent.

    Still, many biologists reject the common design interpretation, while continuing to express confidence in the evolutionary model. Their certainty reflects a commitment to methodological naturalism, but there is another reason for their confidence. They argue that the human genome (and the genomes of other organisms) display other architectural and operational features that the evolutionary framework explains best—and, in their view, these features tip the scales toward the evolutionary interpretation.

    Yet, researchers continue to make discoveries about junk DNA that counterbalance the evidence for common descent, including these structural and functional features. Recent insights into pseudogene biology nicely illustrate this trend.

    Pseudogenes

    Most life scientists view pseudogenes as the remnants of once-functional genes. Along these lines, biologists have identified three categories of pseudogenes (unitary, duplicated, and processed) and proposed three distinct mechanisms to explain the origin of each class. These mechanisms produce distinguishing features that allow investigators to identify certain DNA sequences as pseudogenes. However, a pre-commitment to the evolutionary paradigm can influence many biologists to declare too quickly that pseudogenes are nonfunctional based on their sequence characteristics.1

    The Mechanisms of Pseudogene Formation. Image credit: Wikipedia.

    As the old adage goes: theories guide, experiments decide. A growing body of experimental data indicates that pseudogenes from all three classes have utility.

    A number of research teams have demonstrated that the cell’s machinery transcribes processed pseudogenes and that, in turn, these transcripts are translated into proteins. Both duplicated and unitary pseudogenes are also transcribed. However, except for a few rare cases, these transcripts are not translated into proteins. Most duplicated and unitary pseudogene transcripts serve a regulatory role instead, as described by the competitive endogenous RNA hypothesis.
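
    The competitive endogenous RNA idea is easy to see with a toy calculation. If a pseudogene transcript carries the same microRNA binding sites as its “intact” sister gene, it acts as a decoy that soaks up part of a limited microRNA pool, leaving more of the sister gene’s mRNA free of repression. The numbers and the deliberately crude proportional-binding rule below are invented for illustration; this is not a published model.

    ```python
    # Toy illustration of the competitive endogenous RNA (ceRNA) idea.
    def free_mrna_fraction(mirna_pool: float, mrna: float, decoy: float) -> float:
        """Fraction of the sister gene's mRNA left unrepressed when a fixed
        miRNA pool spreads itself proportionally across all binding sites."""
        total_sites = mrna + decoy
        bound_to_mrna = mirna_pool * (mrna / total_sites)
        return max(0.0, (mrna - bound_to_mrna) / mrna)

    mirna, mrna = 800.0, 1000.0  # invented molecule counts
    for decoy in (0.0, 500.0, 2000.0):  # pseudogene transcript counts
        print(f"pseudogene transcripts = {decoy:6.0f} -> "
              f"free mRNA fraction = {free_mrna_fraction(mirna, mrna, decoy):.2f}")
    ```

    The more pseudogene transcripts present, the larger the fraction of the sister gene’s transcripts that escape microRNA repression, which is the regulatory role the hypothesis describes.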

    In other words, the experimental support for pseudogene function seemingly hinges on the transcription of these sequences. That observation leads to a question: What about pseudogene sequences that aren’t transcribed? A number of pseudogenic sequences seemingly sit dormant in genomes. They aren’t transcribed and, presumably, have no utility whatsoever.

    For many life scientists, this supports the evolutionary account for pseudogene origins, making it the preferred explanation over any model that posits the intentional design of pseudogene sequences. After all, why would a Creator introduce mutationally damaged genes that serve no function? Isn’t it better to explain the presence of functional processed pseudogenes as the result of neofunctionalization, whereby evolutionary mechanisms co-opt processed pseudogenes and use them as the raw material to evolve DNA sequence elements into new genes?

    Or, perhaps, is it better to view the transcripts of regulatory unitary and duplicated pseudogenes as the functional remnants of the original genes whose transcripts played a role in regulatory networks with other RNA transcripts? Even though these pseudogenes no longer direct protein production, they can still take part in the regulatory networks comprised of RNA transcripts.

    Are Untranscribed Pseudogenes Really Untranscribed?

    Again, remember that support for the evolutionary interpretation of pseudogenes rests on the belief that some pseudogenes are not transcribed. What happens to this support if these DNA sequences are transcribed, meaning we simply haven’t detected or identified their transcripts experimentally?

    As a case in point, in a piece for Nature Reviews Genetics, a team of collaborators from Australia argue that failure to detect pseudogene transcripts experimentally does not confirm the absence of transcription.2 For example, the transcripts of a pseudogene transcribed at a low level may fall below the experimental detection limit. Such a pseudogene would appear inactive to researchers when, in fact, the opposite is the case. Additionally, pseudogene expression may be tissue-specific or may take place only at certain points in growth and development. If an assay doesn’t take these possibilities into account, then failure to detect pseudogene transcripts could simply mean that the experimental protocol is flawed.
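
    The detection-limit point can be made concrete. In a sequencing experiment, the number of reads landing on a given transcript behaves roughly like a Poisson draw whose mean is set by expression level and sequencing depth, so a weakly expressed pseudogene can easily yield zero reads and be scored as silent. The numbers below are invented for illustration.

    ```python
    import math

    # Probability of detecting no reads at all for a transcript, assuming a
    # Poisson-distributed read count with the given mean.
    def p_zero_reads(transcripts_per_cell: float, reads_per_transcript: float) -> float:
        mean_reads = transcripts_per_cell * reads_per_transcript
        return math.exp(-mean_reads)

    for level in (10.0, 1.0, 0.05):  # transcripts per cell (invented levels)
        print(f"{level:5.2f} transcripts/cell -> "
              f"P(zero reads) = {p_zero_reads(level, 2.0):.3f}")
    ```

    At the lowest expression level in this toy example, the transcript goes completely undetected in roughly nine out of ten experiments.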

    The similarity between the DNA sequences of pseudogenes and their corresponding “sister” genes creates another complication. It can be hard to experimentally distinguish a pseudogene transcript from the transcript of its “intact” sister gene. This limitation means that, in some instances, pseudogene transcripts may be misidentified as transcripts of the “intact” gene. Again, this can lead researchers to conclude, mistakenly, that the pseudogene isn’t transcribed.
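
    Here is a minimal illustration of that ambiguity, using invented toy sequences: a short read that matches the pseudogene and its sister gene equally well cannot reveal which locus produced it; only a read spanning one of the few distinguishing positions is informative.

    ```python
    # Toy sequences: the pseudogene differs from the gene at one position.
    gene       = "ATGGCTAGCTTGACCGATCGGTACGATCGA"
    pseudogene = "ATGGCTAGCTTGACCGATCGGTACGATCGT"

    def loci_matching(read: str) -> list[str]:
        """Return every locus that contains the read exactly."""
        hits = []
        if read in gene:
            hits.append("gene")
        if read in pseudogene:
            hits.append("pseudogene")
        return hits

    print(loci_matching("TTGACCGATCGG"))  # ['gene', 'pseudogene'] -> ambiguous
    print(loci_matching("TACGATCGT"))     # ['pseudogene'] -> informative
    ```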

    Are Untranscribed Pseudogenes Really Nonfunctional?

    These very real experimental challenges notwithstanding, there are pseudogenes that indeed are not transcribed, but it would be wrong to conclude that they have no role in gene regulation. For example, a large team of international collaborators demonstrated that a pseudogene sequence contributes to the specific three-dimensional architecture of chromosomes. By doing so, this sequence exerts influence over gene expression, albeit indirectly.3

    Another research team determined that a different pseudogene plays a role in maintaining chromosomal stability. In laboratory experiments, they discovered that deleting the DNA region that harbors this pseudogene increases chromosomal recombination events that result in the deletion of pieces of DNA. This deletion is catastrophic and leads to DiGeorge/velocardiofacial syndrome.4

    To be clear, these two studies focused on single pseudogenes, so we need to be careful about extrapolating the results to all untranscribed pseudogenes. Nevertheless, at minimum, these findings open up the possibility that other untranscribed pseudogene sequences function in the same way. If history is anything to go by when it comes to junk DNA, these two discoveries are most likely harbingers of what is to come. Simply put, we continue to uncover unexpected functions for pseudogenes (and other classes of junk DNA).

    Common Design or Common Descent?

    Not that long ago, shared nonfunctional, junk DNA sequences in the human and primate genomes were taken as prima facie evidence for our shared evolutionary history with the great apes. At the time, there was no way to genuinely respond to the challenge junk DNA posed to creation models, other than to express the belief that we would one day discover function for junk DNA sequences.

    Subsequent discoveries have fulfilled that key scientific prediction, made by creationists and intelligent design proponents alike. The initial discoveries involved single, isolated pseudogenes. Later studies demonstrated that pseudogene function is pervasive, leading to new scientific ideas, such as the competitive endogenous RNA hypothesis, that connect the sequence similarity of pseudogenes and “intact” genes to pseudogene function. Researchers are now beginning to identify functional roles for untranscribed pseudogenes. I predict that it is only a matter of time before biologists concede that the utility of untranscribed pseudogenes is pervasive and commonplace.

    The creation model interpretation of shared junk DNA sequences becomes stronger and stronger with each step forward, which leads me to ask, When are life scientists going to stop fooling around and give a creation model approach a seat at the biology table?

    Endnotes
    1. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (December 17, 2019): 191–201, doi:10.1038/s41576-019-0196-1.
    2. Cheetham, Faulkner, and Dinger, 191–201.
    3. Peng Huang et al., “Comparative Analysis of Three-Dimensional Chromosomal Architecture Identifies a Novel Fetal Hemoglobin Regulatory Element,” Genes & Development 31, no. 16 (August 15, 2017): 1704–13, doi:10.1101/gad.303461.117.
    4. Laia Vergés et al., “An Exploratory Study of Predisposing Genetic Factors for DiGeorge/Velocardiofacial Syndrome,” Scientific Reports 7 (January 6, 2017): id. 40031, doi:10.1038/srep40031.
  • Does Evolutionary Bias Create Unhealthy Stereotypes about Pseudogenes?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 18, 2020

    Truth be told, we all hold to certain stereotypes whether we want to admit it or not. Though unfair, more often than not, these stereotypes cause little real damage.

    Yet, there are instances when stereotypes can be harmful—even deadly. As a case in point, researchers have shown that stereotyping disrupts the healthcare received by members of so-called disadvantaged groups, such as African Americans, Latinos, and the poor.1

    Healthcare providers are frequently guilty of bias towards underprivileged people. Often, the stereotyping is unconscious and unintentional. Still, this bias compromises the medical care received by people in these ethnic and socioeconomic groups.

    Underprivileged patients are also guilty of stereotyping. It is not uncommon for these patients to perceive themselves as the victims of prejudice, even when their healthcare providers are genuinely unbiased. As a result, these patients don’t trust healthcare workers and, consequently, withhold information that is vital for a proper diagnosis.

    Fortunately, psychologists have developed best practices that can reduce stereotyping by both healthcare practitioners and patients. Hopefully, by implementing these practices, the impact of stereotyping on the quality of healthcare can be minimized over time.

    Recently, a research team from Australia identified another form of stereotyping that holds the potential to negatively impact healthcare outcomes.2 In this case, the impact of this stereotyping isn’t limited to disadvantaged people; it affects all of us.

    A Bias Against Pseudogenes

    These researchers have uncovered a bias in the way life scientists view the human genome (and the genomes of other organisms). Too often they regard the human genome as a repository of useless, nonfunctional DNA that arises as a vestige of evolutionary history. Because of this view, life scientists and the biomedical research community eschew studying regions of the human genome they deem to be junk DNA. This posture is not unreasonable. It doesn’t make sense to invest precious scientific resources to study nonfunctional DNA.

    Many life scientists are unaware of their bias. Unfortunately, this stereotyping hinders scientific advance by delaying discoveries that could be translated into the clinical setting. Quite often, supposed junk DNA has turned out to serve a vital purpose. Failure to recognize this function not only compromises our understanding of genome biology, but also hinders biomedical researchers from identifying defects in these genomic regions that contribute to genetic diseases and disorders.

    As psychologists will point out, acknowledging bias is the first step to solving the problems that stereotyping causes. This is precisely what these researchers have done by publishing an article in Nature Reviews Genetics.3 The team focused on DNA sequence elements called pseudogenes. Traditionally, life scientists have viewed pseudogenes as the remnants of once-functional genes. Biologists have identified three categories of pseudogenes: (1) unitary, (2) duplicated, and (3) processed.

    Figure 1: Mechanisms for Formation of Duplicated and Processed Pseudogenes. Image credit: Wikipedia.

    Researchers categorize DNA sequences as pseudogenes based on structural features. Such features indicate to the investigators that these sequence elements were functional genes at one time in evolutionary history, but eventually lost function due to mutations or other biochemical processes, such as reverse transcription and DNA insertion. Once a DNA sequence is labeled a pseudogene, bias sets in and researchers just assume that it lacks function—not because it has been experimentally demonstrated to be nonfunctional, but because of the stereotyping that arises out of the evolutionary paradigm.

    The authors of the piece acknowledge that “the annotation of genomics regions as pseudogenes constitutes an etymological signifier that an element has no function and is not a gene. As a result, pseudogene-annotated regions are largely excluded from functional screen and genomic analyses.”4 In other words, the “pseudogene” moniker biases researchers to such a degree that they ignore these sequence elements as they study genome structure and function, without ever doing the hard experimental work to determine whether the elements are actually nonfunctional.

    This approach is clearly misguided and detracts from scientific discovery. As the authors admit, “However, with a growing number of instances of pseudogene-annotated regions later found to exhibit biological function, there is an emerging risk that these regions of the genome are prematurely dismissed as pseudogenic and therefore regarded as void of function.”5

    Discovering Function Despite Bias

    The harmful effects of this bias become evident as biomedical researchers unexpectedly stumble upon function for pseudogenes time and time again, not because of the evolutionary paradigm, but despite it. These authors point out that many processed pseudogenes are transcribed and, of those, many are translated to produce proteins. Many unitary and duplicated pseudogenes are also transcribed. Some are translated into proteins, but the majority are not. Instead, they play a role in gene regulation, as described by the competitive endogenous RNA hypothesis.

    Still, there are some pseudogenes that aren’t transcribed and, thus, could rightly be deemed nonfunctional. The researchers point out, however, that the current experimental approaches for identifying transcribed regions are less than ideal, and many of these methods may fail to detect pseudogene transcripts. They also note that even a pseudogene that isn’t transcribed may still serve a functional role (e.g., contributing to chromosome three-dimensional structure and stability).

    This Nature Reviews Genetics article raises a number of questions and concerns for me as a biochemist:

    • How widespread is this bias?
    • If this type of stereotyping exists toward pseudogenes, does it exist for other classes of junk DNA?
    • How well do we really understand genome structure and function?
    • Do we have the wrong perspective on the genome, one that stultifies scientific advance?
    • Does this bias delay the understanding and alleviation of human health concerns?

    Is the Evolutionary Paradigm the Wrong Framework to Study Genomes?

    Based on this article, I think it is safe to conclude that we really don’t understand the molecular biology of genomes. We are living in the midst of a scientific revolution that is radically changing our view of genome structure and function. The architecture and operations of genomes appear to be far more elegant and sophisticated than anyone ever imagined—at least within the confines of the evolutionary paradigm.

    This insight also leads me to question if the evolutionary paradigm is the proper framework for thinking about genome structure and function. From my perspective, treating biological systems as the Creator’s handiwork provides a superior approach to understanding the genome. A creation model approach promotes scientific advance, particularly when the rationale for the structure and function of a particular biological system is not apparent. This expectation forces researchers to keep an open mind and drives further study of seemingly nonfunctional, purposeless systems with the full anticipation that their functional roles will eventually be uncovered.

    Over the last several years, I have raised concerns about the bias life scientists have harbored as they have worked to characterize the human genome (and genomes of other organisms). It is gratifying to me to see that there are life scientists who, though committed to the evolutionary paradigm, are beginning to recognize this bias as well.

    The first step to addressing the problem of stereotyping—in any sector of society—is to acknowledge that it exists. Often, this step is the hardest one to take. The next step is to put in place structures to help overcome its harmful influence. Could it be that part of the solution to this instance of scientific stereotyping is to grant a creation model approach access to the scientific table?

    Resources

    Pseudogene Function

    The Evolutionary Paradigm Hinders Scientific Advance

    Endnotes
    1. For example, see Joshua Aronson et al., “Unhealthy Interactions: The Role of Stereotype Threat in Health Disparities,” American Journal of Public Health 103 (January 1, 2013): 50–56, doi:10.2105/AJPH.2012.300828.
    2. Seth W. Cheetham, Geoffrey J. Faulkner, and Marcel E. Dinger, “Overcoming Challenges and Dogmas to Understand the Functions of Pseudogenes,” Nature Reviews Genetics 21 (March 2020): 191–201, doi:10.1038/s41576-019-0196-1.
    3. Cheetham, Faulkner, and Dinger, 191–201.
    4. Cheetham, Faulkner, and Dinger, 191–201.
    5. Cheetham, Faulkner, and Dinger, 191–201.
  • New Genetic Evidence Affirms Human Uniqueness

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 04, 2020

    It’s a remarkable discovery—and a bit gruesome, too.

    It is worth learning a bit about some of its unseemly details because this find may have far-reaching implications that shed light on our origins as a species.

    In 2018, a group of locals discovered the remains of a two-year-old male puppy in the frozen mud (permafrost) in the eastern part of Siberia. The remains date to 18,000 years in age. Remarkably, the skeleton, teeth, head, fur, lashes, and whiskers of the specimen are still intact.

    Of Dogs and People

    The Russian scientists studying this find (affectionately dubbed Dogor) are excited by the discovery. They think Dogor can shed light on the domestication of wolves into dogs. Biologists believe that this transition occurred around 15,000 years ago. Is Dogor a wolf? A dog? Or a transitional form? To answer these questions, the researchers have isolated DNA from one of Dogor’s ribs, which they think will provide them with genetic clues about Dogor’s identity—and clues concerning the domestication process.

    Biologists study the domestication of animals because this process played a role in helping to establish human civilization. But biologists are also interested in animal domestication for another reason. They think this insight will tell us something about our identity as human beings.

    In fact, in a separate study, a team of researchers from the University of Milan in Italy used insights about the genetic changes associated with the domestication of dogs, cats, sheep, and cattle to identify genetic features that make human beings (modern humans) stand apart from Neanderthals and Denisovans.1 They conclude that modern humans share some of the same genetic characteristics as domesticated animals, accounting for our unique and distinct facial features (compared to other hominins). They also conclude that our high level of cooperativeness and lack of aggression can be explained by these same genetic factors.

    This work in comparative genomics demonstrates that significant anatomical and behavioral differences exist between humans and hominins, supporting the concept of human exceptionalism. Though the University of Milan researchers carried out their work from an evolutionary perspective, I believe their insights can be recast as scientific evidence for the biblical conception of human nature; namely, creatures uniquely made in God’s image.

    Biological Changes that Led to Animal Domestication

    Biologists believe that during the domestication process, many of the same biological changes took place in dogs, cats, sheep, and cattle. For example, they think that domestication produced mild deficits in neural crest cells. In other words, once animals are domesticated, they produce fewer, less active neural crest cells. These stem cells play a role in neural development; thus, neural crest cell deficits tend to make animals friendlier and less aggressive. The deficits also impact physical features, yielding smaller skulls and teeth, floppy ears, and shorter, curlier tails.

    Life scientists studying the domestication process have identified several genes of interest. One of these is BAZ1B. This gene plays a role in the maintenance of neural crest cells and controls their migration during embryological development. Presumably, changes in the expression of BAZ1B played a role in the domestication process.

    Neural Crest Deficits and Williams Syndrome

    As it turns out, there are two genetic disorders in modern humans that involve neural crest cells: Williams-Beuren syndrome (also called Williams syndrome) and Williams-Beuren region duplication syndrome. These genetic disorders involve the deletion or duplication, respectively, of a region of chromosome 7 (7q11.23). This chromosomal region harbors 28 genes. Craniofacial defects and altered cognitive and behavioral traits characterize these disorders. Specifically, people with these syndromes have cognitive limitations, smaller skulls, and elf-like faces, and they display excessive friendliness.

    Among the 28 genes impacted by the two disorders is the human version of BAZ1B. This gene codes for a type of protein called a transcription factor. (Transcription factors play a role in regulating gene expression.)

    The Role of BAZ1B in Neural Crest Cell Biology

    To gain insight into the role BAZ1B plays in neural crest cell biology, the European research team developed induced pluripotent stem cell lines from (1) four patients with Williams syndrome, (2) three patients with Williams-Beuren region duplication syndrome, and (3) four people without either disorder. Then, they coaxed these cells in the laboratory to develop into neural crest cells.

    Using a technique called RNA interference, they down-regulated BAZ1B in all three types of neural crest cells. By doing this, the researchers learned that changes in the expression of this gene altered the migration rates of the neural crest cells. Specifically, they discovered that neural crest cells developed from patients with Williams-Beuren region duplication syndrome migrated more slowly than control cells (generated from test subjects without either syndrome) and neural crest cells derived from patients with Williams syndrome migrated more rapidly than control cells.

    The discovery that the BAZ1B gene influences neural crest cell migration is significant because these cells have to migrate to precise locations in the developing embryo to give rise to distinct cell types and tissues, including those that form craniofacial features.

    Because BAZ1B encodes a transcription factor, altering its expression also alters the expression of the genes under its control. The team discovered that down-regulating BAZ1B impacted 448 genes, many of which play a role in craniofacial development. By querying databases of genes that correlate with genetic disorders, the researchers also learned that, when defective, some of the impacted genes are known to cause disorders involving altered facial development and intellectual disabilities.

    Lastly, the researchers determined that the BAZ1B protein (again, a transcription factor) targets genes that influence the development of dendrites and axons (structures in neurons that transmit signals between nerve cells).

    BAZ1B Gene Expression in Modern and Archaic Humans

    With these findings in place, the researchers wondered if differences in BAZ1B gene expression could account for anatomical and cognitive differences between modern humans and archaic humans—hominins such as Neanderthals and Denisovans. To carry out this query, the researchers compared the genomes of modern humans to Neanderthals and Denisovans, paying close attention to DNA sequence differences in genes under the influence of BAZ1B.

    This comparison uncovered differences in the regulatory regions of genes targeted by the BAZ1B transcription factor, including genes that control neural crest cell activities and craniofacial anatomy. In other words, the researchers discovered significant differences in gene expression between modern humans and Neanderthals and Denisovans, and these differences strongly suggest that anatomical and cognitive differences existed between the groups.

    Did Humans Domesticate Themselves?

    The researchers interpret their findings as evidence for the self-domestication hypothesis—the idea that we domesticated ourselves after the evolutionary lineage that led to modern humans split from the Neanderthal/Denisovan line (around 600,000 years ago). In other words, just as modern humans domesticated dogs, cats, cattle, and sheep, we domesticated ourselves, leading to changes in our anatomical features that parallel changes (such as friendlier faces) in the features of animals we domesticated. Along with these anatomical changes, our self-domestication led to the high levels of cooperativeness characteristic of modern humans.

    On one hand, this is an interesting account that does seem to have some experimental support. But on the other, it is hard to escape the feeling that the idea of self-domestication as the explanation for the origin of modern humans is little more than an evolutionary just-so story.

    It is worth noting that some evolutionary biologists find this account unconvincing. One is William Tecumseh Fitch III—an evolutionary biologist at the University of Vienna. He is skeptical of the precise parallels between animal domestication and human self-domestication. He states, “These are processes with both similarities and differences. I also don’t think that mutations in one or a few genes will ever make a good model for the many, many genes involved in domestication.”2

    Adding to this skepticism is the fact that nobody has anything beyond a speculative explanation for why humans would domesticate themselves in the first place.

    Genetic Differences Support the Idea of Human Exceptionalism

    Regardless of the mechanism that produced the genetic differences between modern and archaic humans, this work can be enlisted in support of human uniqueness and exceptionalism.

    Though the claim of human exceptionalism is controversial, a minority of scientists operating within the scientific mainstream embrace the idea that modern humans stand apart from all other extant and extinct creatures, including Neanderthals and Denisovans. These anthropologists argue that the following suite of capacities uniquely possessed by modern humans accounts for our exceptional nature:

    • symbolism
    • open-ended generative capacity
    • theory of mind
    • capacity to form complex social systems

    As human beings, we effortlessly represent the world with discrete symbols. We denote abstract concepts with symbols. And our ability to represent the world symbolically has interesting consequences when coupled with our abilities to combine and recombine those symbols in a countless number of ways to create alternate possibilities. Our capacity for symbolism manifests in the form of language, art, music, and even body ornamentation. And we desire to communicate the scenarios we construct in our minds with other human beings.

    But there is more to our interactions with other human beings than a desire to communicate. We want to link our minds together. And we can do this because we possess a theory of mind. In other words, we recognize that other people have minds just like ours, allowing us to understand what others are thinking and feeling. We also have the brain capacity to organize people we meet and know into hierarchical categories, allowing us to form and engage in complex social networks. Forming these relationships requires friendliness and cooperativeness.

    In effect, these qualities could be viewed as scientific descriptors of the image of God, if one adopts a resemblance view for the image of God.

    This study demonstrates that, at a genetic level, modern humans appear to be uniquely designed to be friendlier, more cooperative, and less aggressive than other hominins—in part accounting for our capacity to form complex hierarchical social structures.

    To put it differently, the unique capability of modern humans to form complex, social hierarchies no longer needs to be inferred from the fossil and archaeological records. It has been robustly established by comparative genomics in combination with laboratory studies.

    A Creation Model Perspective on Human Origins

    This study not only supports human exceptionalism but also affirms RTB’s human origins model.

    RTB’s biblical creation model identifies hominins such as Neanderthals and the Denisovans as animals created by God. These extraordinary creatures possessed enough intelligence to assemble crude tools and even adopt some level of “culture.” However, the RTB model maintains that these hominins were not spiritual creatures. They were not made in God’s image. RTB’s model reserves this status exclusively for Adam and Eve and their descendants (modern humans).

    Our model predicts many biological similarities will be found between the hominins and modern humans, but so too will significant differences. The greatest distinction will be observed in cognitive capacity, behavioral patterns, technological development, and culture—especially artistic and religious expression.

    The results of this study fulfill these two predictions. Or, to put it another way, the RTB model’s interpretation of the hominins and their relationship to modern humans aligns with “mainstream” science.

    But what about the similarities between the genetic fingerprint of modern humans and the genetic changes responsible for animal domestication that involve BAZ1B and genes under its influence?

    Instead of viewing these features as traits that emerged through parallel and independent evolutionary histories, the RTB human origins model regards the shared traits as reflecting shared designs. In this case, through the process of domestication, modern humans stumbled upon the means (breeding through artificial selection) to effect genetic changes in wild animals that resemble some of the designed features of our genome that contribute to our unique and exceptional capacity for cooperation and friendliness.

    It is true: studying the domestication process does, indeed, tell us something exceptionally important about who we are.

    Endnotes
    1. Matteo Zanella et al., “Dosage Analysis of the 7q11.23 Williams Region Identifies BAZ1B as a Major Human Gene Patterning the Modern Human Face and Underlying Self-Domestication,” Science Advances 5, no. 12 (December 4, 2019): eaaw7908, doi:10.1126/sciadv.aaw7908.
    2. Michael Price, “Early Humans Domesticated Themselves, New Genetic Evidence Suggests,” Science (December 4, 2019), doi:10.1126/science.aba4534.
  • Ancient DNA Indicates Modern Humans Are One-of-a-Kind

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 19, 2020

    The wonderful thing about tiggers
    Is tiggers are wonderful things!
    Their tops are made out of rubber
    Their bottoms are made out of springs!
    They’re bouncy, trouncy, flouncy, pouncy
    Fun, fun, fun, fun, fun!
    But the most wonderful thing about tiggers is
    I’m the only one!1

    With eight grandchildren and counting (number nine will be born toward the end of February), I have become reacquainted with children’s stories. Some of the stories my grandchildren want to hear are new, but many of them are classics. It is fun to see my grandchildren experiencing the same stories and characters I enjoyed as a little kid.

    Perhaps my favorite children’s book of all time is A. A. Milne’s (1882–1956) Winnie-the-Pooh. And of all the characters that populated Pooh Corner, my favorite character is the ineffable Tigger—the self-declared one-of-a-kind.

    A. A. Milne. Credit: Wikipedia.

    For many people (such as me), human beings are like Tigger. We are one-of-a-kind among creation. As a Christian, I take the view that we are unique and exceptional because we alone have been created in God’s image.

    For many others, the Christian perspective on human nature is unpopular and offensive. Who are we to claim some type of special status? They insist that humans aren’t truly unique and exceptional. We are not fundamentally different from other creatures. If anything, we differ only in degree, not kind. Naturalists and others assert that there is no evidence that human beings bear God’s image. In fact, some would go so far as to claim that creatures such as Neanderthals were quite a bit like us. They maintain that these hominins were “exceptional,” just like us. Accordingly, if we are one-of-a-kind it is because, like Tigger, we have arrogantly declared ourselves to be so, when in reality we are no different from any of the other characters who make their home at Pooh Corner.

    Despite this pervasive and popular challenge to human exceptionalism (and the image-of-God concept), there is mounting evidence that human beings stand apart from all extant creatures (such as the great apes) and extinct creatures (such as Neanderthals). This growing evidence can be marshaled to make a scientific case that as human beings we, indeed, are image bearers.

    As a case in point, many archeological studies affirm human uniqueness and exceptionalism. (See the Resources section for a sampling of some of this work.) These studies indicate that human beings alone possess a suite of characteristics that distinguish us from all other hominins. I regard these qualities as scientific descriptors of the image of God:

    • Capacity for symbolism
    • Ability for open-ended manipulation of symbols
    • Theory of mind
    • Capacity to form complex, hierarchical social structures

    Other studies have identified key differences between the brains of modern humans and Neanderthals. (For a sample of this evidence see the Resources section.) One key difference relates to skull shape. Neanderthals (and other hominins) possessed an elongated skull. In contradistinction, our skull shape is globular. The globularity allows for the expansion of the parietal lobe. This is significant because an expanded parietal lobe explains a number of unique human characteristics:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    Again, I connect these scientific qualities to the image of God.

    Now, two recent studies add to the case for human exceptionalism. They involve genetic comparisons of modern humans with both Neanderthals and Denisovans. Through the recovery and sequencing of ancient DNA, we have high-quality genomes for these hominins that we can analyze and compare to the genomes of modern humans.

    While the DNA sequences of protein-coding genes in modern human genomes and the genomes of these two extinct hominins are quite similar, both studies demonstrate that gene expression is dramatically different. That difference accounts for anatomical differences between humans and these two hominins and suggests that significant cognitive differences exist as well.

    Differences in Gene Regulation

    To characterize gene expression patterns in Neanderthals and Denisovans and compare them to modern humans, researchers from Vanderbilt University (VU) used statistical methods to develop a mathematical model that predicts gene expression profiles from genomic DNA sequences.2 They built the model using DNA sequences and gene expression data (measured from the RNA produced by transcription) for a set of human genomes. To ensure that the model could be used to assess gene expression in Neanderthals and Denisovans, the researchers paid close attention to the expression pattern of genes in the human genome that were introduced when modern humans and Neanderthals presumably interbred, comparing their expression to that of human genes not of Neanderthal origin.
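
    To make the strategy concrete, here is a minimal sketch of the general idea: train a regularized regression model that maps the genotypes near a gene to that gene’s measured expression in modern humans, then apply the trained model to an archaic genotype. The data are synthetic and the code is not the VU team’s actual pipeline; it only illustrates the kind of model involved.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    n_people, n_variants = 300, 50

    # Synthetic training data: genotypes (0, 1, or 2 copies of each variant)
    # and expression levels driven by a handful of causal variants plus noise.
    genotypes = rng.integers(0, 3, size=(n_people, n_variants)).astype(float)
    true_weights = np.zeros(n_variants)
    true_weights[:5] = [0.8, -0.5, 0.3, 0.6, -0.4]
    expression = genotypes @ true_weights + rng.normal(0, 0.5, n_people)

    # Fit a regularized linear model on the modern-human training data.
    model = ElasticNet(alpha=0.05, l1_ratio=0.5)
    model.fit(genotypes, expression)

    # Apply the trained model to a hypothetical "archaic" genotype vector.
    archaic = rng.integers(0, 3, size=(1, n_variants)).astype(float)
    print("predicted expression:", float(model.predict(archaic)[0]))
    ```

    Comparing many such predictions between modern and archaic genomes is, in broad strokes, how one arrives at a list of genes with divergent predicted expression.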

    The Process of Gene Expression. Credit: Shutterstock.

    With their model in hand, the researchers analyzed the expression profile for nearly 17,000 genes from the Altai Neanderthal. Their model predicts that 766 genes in the Neanderthal genome had a different expression profile than the corresponding genes in modern humans. As it turns out, the differentially expressed genes in the Neanderthal genomes failed to be incorporated into the human genome after interbreeding took place, suggesting to the researchers that these genes are responsible for key anatomical and physiological differences between modern humans and Neanderthals.

    The VU investigators determined that these 766 differentially expressed genes play roles in reproduction, forming skeletal structures, and the functioning of cardiovascular and immune systems.

    Then, the researchers expanded their analysis to include two other Neanderthal genomes (from the Vindija and Croatian specimens) and the Denisovan genome. The researchers learned that the gene expression profiles of the three Neanderthal genomes were more similar to one another than they were to either the gene expression patterns of modern human and Denisovan genomes.

    This study clearly demonstrates that significant differences existed in the regulation of gene expression in modern humans, Neanderthals, and Denisovans and that these differences account for biological distinctives between the three hominin species.

    Differences in DNA Methylation

    In another study, researchers from Israel compared gene expression profiles in modern human genomes with those from Neanderthals and Denisovans using a different technique. This method assesses DNA methylation.3 (Methylation of DNA downregulates gene expression, turning genes off.)

    Methylation of DNA also influences how the molecule degrades. Because of this influence, researchers can infer the methylation pattern of ancient DNA by characterizing the damage to DNA fragments isolated from fossil remains.
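
    The logic, in schematic form: over long timescales, methylated cytosines deaminate to thymine, while unmethylated cytosines deaminate to uracil, which standard ancient-DNA library preparation removes. At a given CpG position, then, the fraction of surviving reads showing a C-to-T conversion tracks how methylated the site was in life. The read counts and deamination rate below are invented, and the sketch illustrates the reasoning rather than the Israeli team’s actual algorithm.

    ```python
    # Schematic estimate of premortem methylation at a CpG site from the
    # fraction of reads showing a C->T conversion, given an assumed overall
    # deamination rate for the specimen.
    def methylation_score(c_reads: int, t_reads: int, deamination_rate: float) -> float:
        observed_ct = t_reads / (c_reads + t_reads)
        return min(1.0, observed_ct / deamination_rate)

    # Two hypothetical CpG sites, assuming a ~5% deamination rate:
    print(methylation_score(c_reads=97, t_reads=3, deamination_rate=0.05))  # 0.6
    print(methylation_score(c_reads=99, t_reads=1, deamination_rate=0.05))  # 0.2
    ```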

    Using this technique, the researchers measured the methylation patterns of the genomes of two Neanderthals (Altai and Vindija) and a Denisovan and compared them with genomes recovered from the remains of three modern humans dating to 45,000, 8,000, and 7,000 years in age, respectively. They discovered 588 genes in modern human genomes with a unique DNA methylation pattern, indicating that these genes are expressed differently in modern humans than in Neanderthals and Denisovans. Among the 588 genes, the researchers discovered some that influence the structure of the pelvis, facial morphology, and the larynx.

    The researchers think that differences in gene expression may explain the anatomical differences between modern humans and Neanderthals. They also think that this result indicates that Neanderthals lacked the capacity for speech.

    What Is the Relationship between Modern Humans and Neanderthals?

    These two genetic studies add to the extensive body of evidence from the fossil record, which indicates that Neanderthals are biologically distinct from modern humans. For a variety of reasons, some Christian apologists and Intelligent Design proponents classify Neanderthals and modern humans into a single group, arguing that the two are equivalent. But these two studies comparing gene regulation profiles make it difficult to maintain that perspective.

    Modern Humans, Neanderthals, and the RTB Human Origins Model

    RTB’s human origins model regards Neanderthals (and other hominins) as creatures made by God, without any evolutionary connection to modern humans. These extraordinary creatures walked erect and possessed some level of intelligence, which allowed them to cobble together tools and even adopt some level of “culture.” However, our model maintains that the hominins were not spiritual beings made in God’s image. RTB’s model reserves this status exclusively for modern humans.

    Based on our view, we predict that biological similarities will exist among the hominins and modern humans to varying degrees. In this regard, we consider the biological similarities to reflect shared designs, not a shared evolutionary ancestry. We also expect biological differences because, according to our model, the hominins would belong to different biological groups from modern humans.

    We also predict that significant cognitive differences would exist between modern humans and the other hominins. These differences would be reflected in brain anatomy and behavior (inferred from the archeological record). According to our model, these differences reflect the absence of God’s image in the hominins.

    The results of these two studies affirm both sets of predictions that flow from the RTB human origins model. The differences in gene regulation between modern humans and Neanderthals are precisely what our model predicts, and they seem to account for the anatomical differences between Neanderthals and modern humans observed in fossil remains.

    The difference in the regulation of genes affecting the larynx is also significant for our model and the idea of human exceptionalism. One of the controversies surrounding Neanderthals relates to their capacity for speech and language. Yet, it is difficult to ascertain from fossil remains if Neanderthals had the anatomical structures needed for the vocalization range required for speech. The differences in the expression profiles for genes that control the development and structure of the larynx in modern humans and Neanderthals suggest that Neanderthals lacked the capacity for speech. This result dovetails nicely with the differences in modern human and Neanderthal brain structure, which suggest that Neanderthals also lacked the neural capacity for language and speech. And, of course, it is significant that there is no conclusive evidence for Neanderthal symbolism in the archeological record.

    With these two innovative genetic studies, the scientific support for human exceptionalism continues to mount. And the wonderful thing about this insight is that it supports the notion that as human beings we are the only ones who bear God’s image and can form a relationship with our Creator.

    Resources

    Behavioral Differences between Humans and Neanderthals

    Biological Differences between Humans and Neanderthals

    Endnotes
    1. Richard M. Sherman and Robert B. Sherman, composers, “The Wonderful Thing about Tiggers” (song), released December 1968.
    2. Laura L. Colbran et al., “Inferred Divergent Gene Regulation in Archaic Hominins Reveals Potential Phenotypic Differences,” Nature Ecology & Evolution 3 (November 2019): 1598–1606, doi:10.1038/s41559-019-0996-x.
    3. David Gokhman et al., “Reconstructing the DNA Methylation Maps of the Neandertal and the Denisovan,” Science 344, no. 6183 (May 2, 2014): 523–27, doi:10.1126/science.1250368; David Gokhman et al., “Extensive Regulatory Changes in Genes Affecting Vocal and Facial Anatomy Separate Modern from Archaic Humans,” bioRxiv, preprint (October 2017), doi:10.1101/106955.
  • Cave Art Tells the Story of Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Feb 05, 2020

    Comic books intrigue me. They are a powerful storytelling vehicle. The combination of snapshot-style imagery, along with narration and dialogue, allows the writer and artist to depict action and emotion in a way that isn’t possible using the written word alone. Comic books make it easy to depict imaginary worlds. And unlike film, comics engage the reader in a deeper, more personal way. The snapshot format requires the reader to make use of their imagination to fill in the missing details. In this sense, the reader becomes an active participant in the storytelling process.

    Figure 1: Speech Bubbles on a Comic Strip Background. Credit: Shutterstock.

    In America, comics burst onto the scene in the 1930s, but the oldest comics (at least in Europe) trace their genesis to Rodolphe Töpffer (1799–1846). Considered by many to be “the father of comics,” Töpffer was a Swiss teacher, artist, and author who became well-known for his illustrated books—works that bore similarity to modern-day comics.

    Figure 2: Rodolphe Töpffer, Self Portrait, 1840. Credit: Wikipedia.

    Despite his renown, Töpffer wasn’t the first comic book writer and artist. That claim to fame belongs to long-forgotten artists from prehistory. In fact, recent work by Australian and Indonesian researchers indicates that the use of comics as a storytelling device dates to earlier than 44,000 years ago.

    Seriously!

    These investigators discovered and characterized cave art from a site on the Indonesian island of Sulawesi that depicts a pig and buffalo hunt. Researchers interpret this mural to be the oldest known recorded story1—a comic book story on a cave wall.

    This find, and others like it, provide important insight into our origins as human beings. From my perspective as a Christian apologist, this discovery is important for another reason. I see it as affirming the biblical teaching about humanity: God made human beings in his image.

    The Find

    Leading up to this discovery, archeologists had already identified and dated art on cave walls in Sulawesi and Borneo. This art, which includes hand stencils and depictions of animals, dates to older than 40,000 years and is highly reminiscent of the cave art of comparable age found in Europe.

    Figure 3: Hand Stencils from a Cave in Southern Sulawesi. Credit: Wikipedia.

    In December 2017, an archeologist from Indonesia discovered the hunting mural in a cave (now called Leang Bulu’ Sipong 4) in the southern part of Sulawesi. The panel presents the viewer with an ensemble of pigs and small buffalo (called anoas), endemic to Sulawesi. Most intriguing about the artwork is the depiction of smaller human-like figures with animal features such as tails and snouts. In some instances, the figures appear to be holding spears and ropes. Scholars refer to these human-animal depictions as therianthropes.

    Figure 4: Illustration of a Pig Deer Found in a Cave in Southern Sulawesi. Credit: Wikipedia.

    Dating the Find

    Dating cave art can be notoriously difficult. One approach is to directly date the charcoal pigments used to make the art using radiocarbon methods. Unfortunately, the dates measured by this technique can be suspect because the charcoal used to make the art can be substantially older than the artwork itself.

    Recently, archeologists have developed a new approach to date cave art. This method measures the levels of uranium and thorium in calcite deposits that form on top of the artwork. Calcite is continuously deposited on cave walls due to hydrological activity in the cave. As water runs down the cave walls, calcium carbonate precipitates onto the cave wall surface. Trace amounts of radioactive uranium are included in the calcium carbonate precipitates. This uranium decays into thorium, hence the ratio of uranium to thorium provides a measure of the calcite deposit’s age and, in turn, yields a minimum age for the artwork.

    To be clear, this dating method has been the subject of much controversy. Some archeologists argue that the technique is unreliable because the calcite deposits are an open system. Once a calcite deposit forms, water continues to flow over its surface, dissolving part of the deposit along with the trace amounts of uranium and thorium it contains. Because uranium is more soluble than thorium, uranium leaches out preferentially, leaving behind an artificially elevated level of thorium relative to uranium. As a result, a measured uranium-thorium ratio can make cave art appear older than it actually is.

    To ensure that the method worked as intended, the researchers dated only calcite deposits that weren’t porous (porosity is a sign that a deposit has been partially re-dissolved), and they made multiple measurements moving from the surface of each deposit toward its interior. If this sequence of measurements produced a chronologically consistent set of ages, the researchers felt confident in the integrity of the calcite samples. Using this method, the researchers determined that the cave painting of the pig and buffalo hunt is older than 43,900 years.
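
    For readers who want to see the arithmetic, here is a minimal sketch in Python of the age calculation under the simplest closed-system assumptions (no initial thorium, and uranium-234 in secular equilibrium with uranium-238). The half-life value and the helper names are illustrative choices of mine, not taken from the study.

        import math

        # Decay constant of thorium-230 (half-life roughly 75,600 years)
        LAMBDA_TH230 = math.log(2) / 75_600.0

        def age_from_activity_ratio(th230_u238):
            """Age in years from the (230Th/238U) activity ratio.
            The ratio grows toward 1 as 230Th accumulates:
            ratio = 1 - exp(-lambda * t), so t = -ln(1 - ratio) / lambda."""
            if not 0.0 < th230_u238 < 1.0:
                raise ValueError("activity ratio must lie between 0 and 1")
            return -math.log(1.0 - th230_u238) / LAMBDA_TH230

        def ages_are_consistent(ages_surface_to_interior):
            """The integrity check described above: ages measured from the
            deposit's surface inward should increase monotonically, since
            layers closer to the artwork formed earlier."""
            return all(a <= b for a, b in zip(ages_surface_to_interior,
                                              ages_surface_to_interior[1:]))

        # An activity ratio of about 0.33 corresponds to roughly 44,000 years.
        print(round(age_from_activity_ratio(0.33)))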

    Corroborating evidence gives the archeologists added confidence in this result. For example, independently dated archeological finds from the Sulawesi cave site indicate that modern humans occupied the caves between 40,000 and 50,000 years ago, in agreement with the measured age of the cave art.

    The research team also noted that the animals and the therianthropes in the mural appear to have been created at the same time. This point matters because therianthropes don’t appear in European cave paintings until around 10,000 years ago, which raises the possibility that the therianthropes were added to the painting millennia after the animals. However, the researchers don’t think this is the case, for at least three reasons. First, the same artistic style was used to depict the animals and the therianthropes. Second, the technique and pigment used to create the figures are the same. And third, the degree of weathering is the same throughout the panel. None of these features would be expected if the therianthropes were a late addition to the mural.

    Interpreting the Find

    The researchers find the presence of therianthropes in 44,000+ year-old cave art significant. It indicates that humans in Sulawesi not only possessed the capacity for symbolism, but, more importantly, had the ability to conceive of things that did not exist in the material world. That is to say, they had a sense of the supernatural.

    Some archeologists believe that the cave art reflects shamanic beliefs and visions. If so, the therianthropes in the painting may represent spirit animal helpers who ensured the success of the hunt. The size of the therianthropes supports this interpretation. These animal-human hybrids are depicted as much smaller than the pigs and buffalo, even though, on Sulawesi, both the pig and buffalo species in question were much smaller than modern humans. This inversion of the true relative sizes suggests that the figures represent something other than ordinary human hunters.

    Because this artwork depicts a hunt involving therianthropes, the researchers see rich narrative content in the display. It seems to tell a story that likely reflected the mythology of the Sulawesi people. You could say it’s a comic book on a cave wall.

    Relationship between Cave Art in Europe and Asia

    Cave art in Europe has been well-known and carefully investigated by archeologists and anthropologists for nearly a century. Now archeologists have access to a growing archeological record in Asia.

    Art found at these sites is of the same quality and character as the European cave art. However, it is older. This discovery means that modern humans most likely had the capacity to make art even before beginning their migrations around the world from out of Africa (around 60,000 years ago).

    As noted, the discovery of therianthropes in 44,000+ year-old art on Sulawesi is intriguing because these types of figures don’t appear in European cave paintings until around 10,000 years ago. However, archeologists have discovered the lion-man statue at a cave site in Germany. This artifact, which depicts a lion-human hybrid, dates to around 40,000 years ago. In other words, therianthropes were part of the artwork of the first Europeans, too, indicating that modern humans in Europe also had the capacity to envision imaginary worlds and held belief in a supernatural realm.

    Capacity for Art and the Image of God

    For many people, our ability to create and contemplate art serves as a defining feature of humanity—a quality that reflects our capacity for sophisticated cognitive processes. So, too, does our capacity for storytelling. As humans, we seem to be obsessed with both. Art and telling stories are manifestations of symbolism and open-ended generative capacity. Through art (as well as music and language), we express and communicate complex ideas and emotions. We accomplish this feat by representing the world—and even ideas—with symbols. And, we can manipulate symbols, embedding them within one another to create alternate possibilities.

    As a Christian, I believe that our capacity to make art and to tell stories is an outworking of the image of God. As such, the appearance of art (as well as other artifacts that reflect our capacity for symbolism) serves as a diagnostic for the image of God in the archeological record. That record provides the means to characterize the mode and tempo of the appearance of behaviors that reflect the image of God. If the biblical account of human origins is true, then I would expect artistic expression to be unique to modern humans and to appear at the same time that we make our first appearance as a species.

    So, when did art (and symbolic capacity) first appear? Did art emerge suddenly? Did it appear gradually? Is artistic expression unique to human beings or did other hominins, such as Neanderthals, produce art too? Answers to these questions are vital to our case for human exceptionalism and, along with it, the image of God.

    When Did the Capacity for Art First Appear?

    Again, the simultaneous appearance of cave art in Europe and Asia indicates that the capacity for artistic expression (and, hence, symbolism) dates back to the time in prehistory before humans began to migrate around the world from out of Africa (around 60,000 years ago). This conclusion gains support from the recent discovery of a silcrete flake, etched with a portion of an abstract drawing, from a layer in Blombos Cave that dates to about 73,000 years ago. (Blombos Cave is located around 150 miles east of Cape Town, South Africa.)2

    Linguist Shigeru Miyagawa believes that artistic expression emerged in Africa earlier than 125,000 years ago. Archeologists have discovered rock art produced by the San people that dates to 72,000 years ago, and this art shares certain elements with European cave art. Because the San diverged from the rest of the modern human lineage around 125,000 years ago, the ancestral population that gave rise to both lines must have possessed the capacity for artistic expression before that time.3

    It is also significant that the globular brain shape of modern humans first appears in the fossil record around 130,000 years ago. As I have written about previously, globular brain shape allows expansion of the parietal lobe, which is responsible for many of our capacities:

    • Perception of stimuli
    • Sensorimotor transformation (which plays a role in planning)
    • Visuospatial integration (which provides hand-eye coordination needed for making art)
    • Imagery
    • Self-awareness
    • Working and long-term memory

    In other words, the evidence indicates that our capacity for symbolism emerged at the time that our species first appears in the fossil record. Some archeologists claim that Neanderthals displayed the capacity for symbolism as well. If this claim proves true, then human beings don’t stand apart from other creatures. We aren’t special.

    Did Neanderthals Have the Capacity to Create Art?

    Claims of Neanderthal artistic expression abound in popular literature and appear in scientific journals. However, a number of studies question these claims. When taken as a whole, the evidence indicates that Neanderthals were cognitively inferior to modern humans.

    In short, when the evidence is considered in total, only human beings (modern humans) possess the capability for symbolism, open-ended generative capacity, and theory of mind—in my view, scientific descriptors of the image of God. The archeological record affirms the biblical view of human nature. It is also worth noting that our symbolic capacity seems to arise at the same time that modern humans appear in the fossil record, an observation I would expect given the biblical account of human origins.

    Like the comics that intrigue me, this narrative resonates on a personal level. It seems as if the story told in the opening pages of the Old Testament is true.

    Resources

    Cave Art and the Image of God

    The Modern Human Brain

    Could Neanderthals Make Art?

    Endnotes
    1. Maxime Aubert et al., “Earliest Hunting Scene in Prehistoric Art,” Nature 576 (December 11, 2019): 442–45, doi:10.1038/s41586-019-1806-y.
    2. Christopher S. Henshilwood et al., “An Abstract Drawing from the 73,000-Year-Old Levels at Blombos Cave, South Africa,” Nature 562 (September 12, 2018): 115–18, doi:10.1038/s41586-018-0514-3.
    3. Shigeru Miyagawa, Cora Lesure, and Vitor A. Nóbrega, “Cross-Modality Information Transfer: A Hypothesis about the Relationship among Prehistoric Cave Paintings, Symbolic Thinking, and the Emergence of Language,” Frontiers in Psychology 9 (February 20, 2018): 115, doi:10.3389/fpsyg.2018.00115.
  • But Do Watches Replicate? Addressing a Logical Challenge to the Watchmaker Argument

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 22, 2020

    Were things better in the past than they are today? It depends who you ask.

    Without question, there are some things that were better in years gone by. And, clearly, there are some historical attitudes and customs that, today, we find hard to believe our ancestors considered to be an acceptable part of daily life.

    It isn’t just attitudes and customs that change over time. Ideas change, too—some for the better, some for the worse. Consider the way doing science has evolved, particularly the study of biological systems. Was the way we approached the study of biological systems better in the past than it is today?

    It depends who you ask.

    As an old-earth creationist and intelligent design proponent, I think the approach biologists took in the past was better than today for one simple reason. Prior to Darwin, teleology was central to biology. In the late 1700s and early to mid-1800s, life scientists viewed biological systems as the product of a Mind. Consequently, design was front and center in biology.

    As part of the Darwinian revolution, teleology was cast aside. Mechanism replaced agency and design was no longer part of the construct of biology. Instead of reflecting the purposeful design of a Mind, biological systems were now viewed as the outworking of unguided evolutionary mechanisms. For many people in today’s scientific community, biology is better for it.

    Prior to Darwin, the ideas shaped by thinkers (such as William Paley) and biologists (such as Sir Richard Owen) took center stage. Today, their ideas have been abandoned and are often lampooned.

    But, advances in my areas of expertise (biochemistry and origins-of-life research) justify a return to the design hypothesis, indicating that there may well be a role for teleology in biology. In fact, as I argue in my book The Cell’s Design, the latest insights into the structure and function of biomolecules bring us full circle to the ideas of William Paley (1743-1805), revitalizing his Watchmaker argument for God’s existence.

    In my view, many examples of molecular-level biomachinery stand as strict analogs to human-made machinery in terms of architecture, operation, and assembly. The biomachines found in the cell’s interior reveal a diversity of form and function that mirrors the diversity of designs produced by human engineers. The one-to-one relationship between the parts of man-made machines and the molecular components of biomachines is startling (e.g., the flagellum’s hook). I believe Paley’s case continues to gain strength as biochemists continue to discover new examples of biomolecular machines.

    The Skeptics’ Challenge

    Despite the powerful analogy that exists between machines produced by human designers and biomolecular machines, many skeptics continue to challenge the revitalized watchmaker argument on logical grounds by arguing in the same vein as David Hume.1 These skeptics assert that significant and fundamental differences exist between biomachines and human creations.

    In a recent interaction on Twitter, a skeptic raised just such an objection. Here is what he wrote:

    “Do [objects and machines designed by humans] replicate with heritable variation? Bad analogy, category mistake. Same one Paley made with his watch on the heath centuries ago.”

    In other words, biological systems replicate, whereas devices and artifacts made by human beings don’t. In the skeptic’s view, this difference is so fundamental that it undermines the analogy between human designs and biological systems (in general) and biomolecular machines (specifically), invalidating the conclusion that life must stem from a Mind.

    This is not the first time I have encountered this objection. Still, I don’t find it compelling because it fails to take into account man-made machines that do, indeed, replicate.

    Von Neumann’s Universal Self-Constructor

    In the 1940s, mathematician, physicist, and computer scientist John von Neumann (1903–1957) designed a hypothetical machine called a universal constructor. This machine is a conceptual apparatus that can take materials from the environment and build any machine, including itself. The universal constructor requires instructions to build the desired machines and to build itself. It also requires a supervisory system that can switch back and forth between using the instructions to build other machines and copying the instructions prior to the replication of the universal constructor.
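
    Von Neumann’s deepest insight here is that the instructions must be used in two distinct modes: interpreted once to build the machine, and copied uninterpreted to equip the offspring. A toy way to see this dual use of information is a quine—a program that prints its own source code. The following minimal Python example is my own illustration, not von Neumann’s construction:

        # The string 's' plays both roles: it is interpreted as a template
        # (via the % operator) and copied verbatim as data (via %r, which
        # reproduces the string literal). The two lines print themselves.
        s = 's = %r\nprint(s %% s)'
        print(s % s)

    The cell exhibits the same division of labor: the genome is interpreted (transcribed and translated) to build molecular machinery and, separately, copied (replicated) for the daughter cells.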

    Von Neumann’s universal constructor is a conceptual apparatus, but today researchers are actively trying to design and build self-replicating machines.2 Much work needs to be done before self-replicating machines are a reality. Nevertheless, it seems likely that machines will one day be able to reproduce, making copies of themselves. To put it another way, reproduction isn’t necessarily a quality that distinguishes machines from biological systems.

    It is interesting to me that a description of von Neumann’s universal constructor bears remarkable similarity to a description of a cell. In fact, in the context of the origin-of-life problem, astrobiologists Paul Davies and Sara Imari Walker have noted the analogy between the cell’s information systems and von Neumann’s universal constructor.3 Davies and Walker think that this analogy is key to solving the origin-of-life problem. I would agree. However, Davies and Walker support an evolutionary origin of life, whereas I maintain that the analogy between cells and von Neumann’s universal constructor strengthens the revitalized Watchmaker argument and, in turn, the scientific case for a Creator.

    In other words, the reproduction objection to the Watchmaker argument has little going for it. Self-replication is not a basis for viewing biomolecular machines as fundamentally dissimilar to machines created by human designers. Instead, self-replication stands as one more machine-like attribute of biochemical systems. It also highlights the sophistication of biological systems compared to systems produced by human designers. We are a long way from creating machines as sophisticated as those found inside the cell. Nevertheless, as we continue to move in that direction, I think the case for a Creator will become even more compelling.

    Who knows? With insights such as these maybe one day we will return to the good old days of biology, when teleology was paramount.

    Resources

    Biomolecular Machines and the Watchmaker Argument

    Responding to Challenges to the Watchmaker Argument

    Endnotes
    1. “Whenever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty.” David Hume, “Dialogues Concerning Natural Religion,” in Classics of Western Philosophy, 3rd ed., ed. Steven M. Cahn (1779; repr., Indianapolis: Hackett, 1990), 880.
    2. For example, Daniel Mange et al., “Von Neumann Revisited: A Turing Machine with Self-Repair and Self-Reproduction Properties,” Robotics and Autonomous Systems 22 (1997): 35–58, https://doi.org/10.1016/S0921-8890(97)00015-8; Jean-Yves Perrier, Moshe Sipper, and Jacques Zahnd, “Toward a Viable, Self-Reproducing Universal Computer,” Physica D: Nonlinear Phenomena 97, no. 4 (October 15, 1996): 335–52, https://doi.org/10.1016/0167-2789(96)00091-7; Umberto Pesavento, “An Implementation of von Neumann’s Self-Reproducing Machine,” Artificial Life 2, no. 4 (Summer 1995): 337–54, https://doi.org/10.1162/artl.1995.2.4.337.
    3. Sara Imari Walker and Paul C. W. Davies, “The Algorithmic Origins of Life,” Journal of the Royal Society Interface 10 (2013), doi:10.1098/rsif.2012.0869.
  • The Flagellum’s Hook Connects to the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jan 08, 2020

    What would you say is the most readily recognizable scientific icon? Is it DNA, a telescope, or maybe a test tube?

    Figure 1: Scientific Icons. Image credit: Shutterstock

    Marketing experts recognize the power of icons. When used well, icons prompt consumers to instantly identify a brand or product. They can also communicate a powerful message with a single glance.

    Though many skeptics question if it’s science at all, the intelligent design movement has identified a powerful icon that communicates its message. Today, when most people see an image of the bacterial flagellum, they immediately think: Intelligent Design.

    This massive protein complex powerfully communicates sophisticated engineering that could only come from an Intelligent Agent, and along these lines it serves as a powerful piece of evidence for a Creator’s handiwork. Careful study of its molecular architecture and operation provides detailed evidence that an Intelligent Agent must be responsible for biochemical systems and, hence, the origin of life. And, as it turns out, the more we learn about the bacterial flagellum, the more evident it becomes that a Creator must have played a role in the origin and design of life—at least at the biochemical level—as new research from Japan illustrates.1

    The Bacterial Flagellum

    This massive protein complex looks like a whip extending from the bacterial cell surface. Some bacteria have only a single flagellum; others possess several flagella. Rotation of the flagellum (or flagella) allows the bacterial cell to navigate its environment in response to various chemical signals.

    Figure 2: Typical Bacteria with Flagella. Image credit: Shutterstock

    An ensemble of 30 to 40 different proteins makes up the typical bacterial flagellum. These proteins function in concert as a literal rotary motor. The flagellum’s components include a rotor, stator, drive shaft, bushing, universal joint, and propeller. It is essentially a molecular-sized electrical motor directly analogous to human-produced rotary motors. The rotation is powered by positively charged hydrogen ions flowing through the motor proteins embedded in the inner membrane.
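
    As a back-of-the-envelope illustration of the energetics (using generic textbook values that I am assuming, not figures from the research discussed here), the energy delivered by each proton crossing the membrane is set by the proton motive force:

        # Energy released per proton crossing the membrane: E = e * pmf
        E_CHARGE = 1.602e-19   # elementary charge, in coulombs
        PMF = 0.15             # assumed proton motive force, ~150 millivolts
        KT = 4.1e-21           # thermal energy at ~25 C, in joules

        energy_per_proton = E_CHARGE * PMF   # ~2.4e-20 J
        print(f"{energy_per_proton:.1e} J per proton, "
              f"about {energy_per_proton / KT:.0f} times kT")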

    Figure 3: The Bacterial Flagellum. Image credit: Wikipedia

    The Bacterial Flagellum and the Revitalized Watchmaker Argument

    Typically, when intelligent design proponents/creationists use the bacterial flagellum to make the case for a Creator, they focus the argument on its irreducibly complex nature. I prefer a different tack. I like to emphasize the eerie similarity between rotary motors created by human designers and nature’s bacterial flagella.

    The bacterial flagellum is just one of a large number of protein complexes with machine-like attributes. (I devote an entire chapter to biomolecular machines in my book The Cell’s Design.) Collectively, these biomolecular machines can be deployed to revitalize the Watchmaker argument.

    Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so too, life requires a Creator. Following Paley’s line of reasoning, a machine is emblematic of systems produced by intelligent agents. Biomolecular machines display the same attributes as human-crafted machines. Therefore, if the work of intelligent agents is necessary to explain the genesis of machines, shouldn’t the same be true for biochemical systems?

    Skeptics inspired by atheist philosopher David Hume have challenged this simple, yet powerful, analogy. They argue that the analogy would be compelling only if there is a high degree of similarity between the objects that form the analogy. Skeptics have long argued that biochemical systems and machines are too dissimilar to make the Watchmaker argument work.

    However, the striking similarity between the machine parts of the bacterial flagellum and human-made machines causes this objection to evaporate. New work on flagella by Japanese investigators lends yet more support to the Watchmaker analogy.

    New Insights into the Structure and Function of the Flagellum’s Universal Joint

    The flagellum’s universal joint (sometimes referred to as the hook) transfers the torque generated by the motor to the propeller. The research team wanted to develop a deeper understanding of the relationship between the molecular structure of the hook and how the structural features influence its function as a universal joint.

    Composed of nearly 100 copies (monomers) of a protein called FlgE, the hook is a curved, tube-like structure with a hollow interior. FlgE monomers stack on top of each other to form a protofilament. Eleven protofilaments organize to form the hook’s tube, with the long axes of the protofilaments aligned along the long axis of the hook.

    Each FlgE monomer consists of three domains, called D0, D1, and D2. The researchers discovered that when the FlgE monomers stack to form a protofilament, the D0, D1, and D2 domains of each of the monomers align along the length of the protofilament to form three distinct regions in the hook. These layers have been labeled the tube layer, the mesh layer, and the spring layer.

    During the rotation of the flagellum, the protofilaments experience compression and extension. The movement of the domains, which changes their spatial arrangement relative to one another, mediates the compression and extension. These domain movements allow the hook to function as a universal joint that maintains a rigid tube shape against a twisting “force,” while concurrently transmitting torque from the motor to the flagellum’s filament as it bends along its axis.
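
    One way to picture this behavior (a toy geometric model of my own, not the researchers’ analysis) is to assign each of the 11 protofilaments an angular position around the tube. When the hook bends, protofilaments on the inside of the bend compress and those on the outside extend; as the motor spins the hook, every protofilament cycles through compression and extension once per revolution:

        import math

        PROTOFILAMENTS = 11

        def protofilament_strains(rotation_angle, bend_direction=0.0):
            """Illustrative strain (arbitrary units) for each protofilament:
            positive values extend (outside of the bend), negative values
            compress (inside of the bend). Rotating the hook sweeps every
            protofilament smoothly through the full cycle."""
            return [math.cos(2 * math.pi * i / PROTOFILAMENTS
                             + rotation_angle - bend_direction)
                    for i in range(PROTOFILAMENTS)]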

    Regardless of one’s worldview, it is hard not to marvel at the sophisticated and elegant design of the flagellum’s hook!

    The Bacterial Flagellum and the Case for a Creator

    If the Watchmaker argument is valid, it seems reasonable to think that the more we learn about protein complexes such as the bacterial flagellum, the more machine-like they should appear. This work by the Japanese biochemists bears out that expectation. The more we characterize biomolecular machines, the more reason we have to think that life stems from a Creator’s handiwork.

    The dynamic properties of the hook assembly add to the Watchmaker argument (when applied to the bacterial flagellum). This structure is more sophisticated and ingenious than the design of a typical universal joint crafted by human engineers. The elegance and ingenuity of the hook are exactly the attributes I would expect if a Creator played a role in the origin and design of life.

    Message received, loud and clear.

    Resources

    The Bacterial Flagellum and the Case for a Creator

    Can Intelligent Design Be Part of the Scientific Construct?

    Endnotes
    1. Takayuki Kato et al., “Structure of the Native Supercoiled Flagellar Hook as a Universal Joint,” Nature Communications 10 (2019): 5295, doi:10.1038/s4146.
