Terlecky’s Corner: Sickle Cell & Gene Therapy

Terlecky’s Corner: Installment 12

Gene Therapy for Sickle Cell Anemia: Repairing Hemoglobin Subunit Assembly

This month’s announcement[1] from the Food and Drug Administration (FDA) that it has approved two therapies addressing the molecular defect underlying sickle cell anemia is a major step forward, not only for treatment of the disease, but also as evidence of how far the science of gene editing has come.

Sickle cell anemia is an inherited blood disorder affecting some 100,000 individuals in the US and nearly 8 million worldwide. Persons of sub-Saharan African descent manifest the disease to the greatest extent, with those of Indian, Hispanic, or Middle Eastern backgrounds also disproportionately affected. The pathology is devastating – misshapen red blood cells occlude blood vessels, compromising flow and inhibiting oxygen delivery. Pain develops frequently in oxygen-deprived tissues. Other complications include an enhanced susceptibility to infections, various eye problems, organ damage, and increased risks of pulmonary/heart disease and stroke.

At the molecular level – sickle cell anemia (in its most common form) is the result of a faulty hemoglobin protein inside red blood cells. Hemoglobin – the vehicle for oxygen delivery in our bodies – is composed of four subunits, two alpha- and two beta-globin proteins, with each complexing an iron-containing heme prosthetic group. Hemoglobin is a marvel of protein biochemistry – a paradigm for allosteric (in this case, oxygen-binding) cooperativity. That is, when the first oxygen molecule binds to hemoglobin, binding of the second oxygen is “cooperatively” enhanced; and similarly for oxygen additions three and four. The fully loaded hemoglobin then leaves the lungs and travels through the blood to tissues whereupon oxygen is released. This is the normal circumstance.
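
To put a number on this cooperativity, oxygen binding is commonly summarized with the Hill equation. The sketch below is illustrative; the parameter values are standard textbook figures rather than data from the approval announcement or the articles cited here.

```latex
% Fractional saturation Y as a function of oxygen partial pressure pO2.
% For adult hemoglobin, P50 (the pressure at half-saturation) is ~26 mmHg
% and the Hill coefficient n is ~2.8-3.0 (cooperative binding);
% a non-cooperative binder such as myoglobin has n = 1.
\[
  Y \;=\; \frac{(p\mathrm{O_2})^{n}}{(P_{50})^{n} + (p\mathrm{O_2})^{n}},
  \qquad n_{\mathrm{Hb}} \approx 2.8\text{--}3.0, \quad n_{\mathrm{Mb}} = 1
\]
```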

In sickle cell anemia, amino acid 6 of the beta-globin subunit is altered – from glutamic acid to valine. Any protein biochemist will readily recognize that such a substitution (a charged residue for a hydrophobic one) could dramatically change the molecule’s folding and/or functional properties. Such is the case with the beta-globin protein – which, by virtue of the newly exposed hydrophobic surface, now interacts inappropriately with beta-globin subunits of neighboring hemoglobin molecules. As a result, the protein’s higher-order structure is compromised – the hemoglobin molecules aberrantly polymerize into long fibers, and the resultant deformed (~sickle-shaped) red blood cells cause the aforementioned vaso-occlusive manifestations.
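
At the DNA level, the substitution reflects a single-nucleotide change in the sixth codon of the beta-globin gene, commonly written as GAG to GTG (glutamic acid to valine). The short sketch below translates the familiar first seven codons of the mature beta-globin chain in their normal and sickle forms; it is an illustration of the point mutation, not an analysis of the full HBB sequence.

```python
# Minimal sketch: translate the first seven codons of the mature beta-globin
# chain before and after the sickle substitution (GAG -> GTG in codon 6).
CODON_TABLE = {
    "GTG": "Val", "CAC": "His", "CTG": "Leu",
    "ACT": "Thr", "CCT": "Pro", "GAG": "Glu",
}

normal = "GTGCACCTGACTCCTGAGGAG"  # Val-His-Leu-Thr-Pro-Glu-Glu
sickle = "GTGCACCTGACTCCTGTGGAG"  # single A -> T change in codon 6

def translate(seq):
    return [CODON_TABLE[seq[i:i + 3]] for i in range(0, len(seq), 3)]

print(translate(normal))  # ['Val', 'His', 'Leu', 'Thr', 'Pro', 'Glu', 'Glu']
print(translate(sickle))  # ['Val', 'His', 'Leu', 'Thr', 'Pro', 'Val', 'Glu']
```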

To best understand the genetic strategies employed in the newly approved therapies, some mention of fetal hemoglobin is warranted. Fetal hemoglobin, like the adult version, is a tetrameric protein with two alpha-subunits; here, however, they are complexed with two gamma-globin subunits rather than beta-globin subunits. Shortly after birth, a switch occurs – gamma-globin synthesis is reduced and beta-globin synthesis is turned on. Beta-globin then replaces gamma-globin in complexing with alpha-globin chains to create the adult hemoglobin molecule.

Interestingly, some patients with sickle cell disease continue to make fetal hemoglobin and enjoy a milder disease course. In fact, the previously FDA-approved drug hydroxyurea, which helps boost fetal hemoglobin levels, has shown efficacy in treating the disease. (On the downside, concerns about toxicity and uneven effectiveness across patient populations have limited hydroxyurea’s more universal adoption.) Nevertheless, the anti-sickling properties of fetal hemoglobin’s gamma-globin chain have been recognized.

Scientists compared gamma-globin to beta-globin and tested variously altered beta-globin derivatives that would confer gamma-globin’s anti-sickling property. One such alteration is a threonine to glutamine change at position 87. It is lentivirus-mediated expression of beta-globin(T87Q) in hematopoietic stem cells that constitutes the basis of Bluebird Bio’s Lyfgenia® (lovotibeglogene autotemcel) therapy – approved by the FDA on December 8th. The idea is that the non-sickling beta-globin(T87Q) subunits will complex with alpha-subunits (and heme prosthetic groups) and result in a fully functional hemoglobin molecule.

Casgevy® (exagamglogene autotemcel) from Vertex Pharmaceuticals takes a different approach. It employs the CRISPR/Cas9 (gene editing) system to eliminate production of a protein called B-cell lymphoma/leukemia 11A (BCL11A) in red blood cell precursors. BCL11A is a transcription factor that represses gamma-globin expression, enforcing the switch from gamma-globin to beta-globin. As described above, this switch occurs during human development – around the time of birth. Because the strategy restores production of (non-sickling) gamma-globin, functional hemoglobin is once again made.
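
Without presuming Casgevy’s actual guide RNA or target site, the general CRISPR/Cas9 targeting logic can be sketched: Cas9 is directed by a roughly 20-nucleotide “protospacer” sequence that must lie immediately upstream of an “NGG” PAM motif, where the enzyme then introduces a double-strand break. The toy scan below searches a made-up DNA fragment (not the real BCL11A sequence) for such candidate sites.

```python
import re

def find_cas9_sites(dna, protospacer_len=20):
    """Toy scan: return (protospacer, PAM, PAM position) for forward-strand
    SpCas9 'NGG' PAM sites in a DNA string."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= protospacer_len:  # need room for a full protospacer
            protospacer = dna[pam_start - protospacer_len:pam_start]
            sites.append((protospacer, dna[pam_start:pam_start + 3], pam_start))
    return sites

# Hypothetical 60-bp fragment; NOT the actual BCL11A erythroid enhancer.
fragment = "ATGCTAGCTAGGATCGATCGTACGATCGATCGGTACGTAGCTAGCATCGATCGATAGGCT"
for protospacer, pam, pos in find_cas9_sites(fragment):
    print(f"guide: {protospacer}  PAM: {pam}  at position {pos}")
```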

To varying degrees, both strategies appear to work – hence the FDA’s approval. There is risk – as the intricate therapeutic approaches require: i. removal of hematopoietic stem cells; ii. the genetic alterations as outlined; iii. conditioning of the patient for receipt of the genetically engineered replacement cells; and iv. the cells’ reintroduction. Also, as might be expected from the complex nature of the steps involved, even as one-time therapies, they are both extremely expensive. That said, how can a price be placed on enjoying even modest relief from the pain and suffering associated with sickle-cell disease?

SRT – December 2023

[1] https://www.fda.gov/news-events/press-announcements/fda-approves-first-gene-therapies-treat-patients-sickle-cell-disease

 

Preparing Drugs Ahead of Viral Disease Outbreak

Last month’s announcement[1] from the National Institute of Allergy and Infectious Diseases that it was funding 9 research consortia – called “Antiviral Drug Discovery Centers for Pathogens of Pandemic Concern” – was welcome news. The idea that a concerted effort will be made to create COVID-19 antivirals, as well as ones targeting a range of (viral) families in anticipation of the next outbreak, is inspired. Bringing together academic researchers with pharmaceutical/industrial partners focused on multidisciplinary approaches is a real strength of the envisioned program. Congratulations to Dr. David Perlin (from the Hackensack Meridian Health Research Institute’s Center for Discovery and Innovation) and his collaborators for being selected as part of the program’s drug development initiative.[2]

Complementing the power of antivirals, with their ability to alter the course of disease and to reduce or prevent viral spread, are vaccines – designed to prevent infection altogether. The following discussion focuses on steps to accelerate development of just such antiviral vaccines.

Viruses
Image Source: Innovative Genomics Institute

Let us be clear – viruses have long been, and will continue to be, a plague on human health and well-being. Whether they be extant (e.g., SARS-CoV-2, Ebola, West Nile), newly mutated variants, or newly emerged through zoonotic (i.e., animal-to-human) transmission, infectious viruses will continue to do what they have done for thousands of years – copy and spread their genomes and compromise human health. How do we get out in front of this ever-present onslaught? The answer is to prepare now.

Dr. Florian Krammer at the Icahn School of Medicine at Mount Sinai suggests that some 50-100 viruses should be identified and targeted for vaccine development.[3] The choice of which viruses to pursue would be based on infective potential, transmissibility, and accompanying symptoms/pathology. Such a curated list of potentially dangerous pathogens could be informed by recently developed approaches involving machine learning/artificial intelligence. Georgetown University researcher Dr. Colin Carlson and team have been working on just such approaches and have launched VIRION, a database (still in alpha testing) designed to help with the curation process. Powerful algorithms coupled with predictive modeling and detailed analytics allow researchers, for the first time, to predict which viruses have enhanced potential to infect humans.
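
To make the prioritization idea concrete, here is a deliberately simple sketch that ranks hypothetical viruses by a weighted score of the kinds of traits mentioned above. The virus names, trait values, and weights are invented for illustration and bear no relation to VIRION’s data or Dr. Carlson’s actual models.

```python
# Toy prioritization sketch: rank hypothetical viruses by a weighted score of
# illustrative traits (all values and weights are made up).
candidates = {
    "Virus A": {"infectivity": 0.9, "transmissibility": 0.7, "severity": 0.4},
    "Virus B": {"infectivity": 0.5, "transmissibility": 0.9, "severity": 0.6},
    "Virus C": {"infectivity": 0.3, "transmissibility": 0.4, "severity": 0.9},
}
weights = {"infectivity": 0.4, "transmissibility": 0.4, "severity": 0.2}

def risk_score(traits):
    return sum(weights[k] * traits[k] for k in weights)

# Print the candidates from highest to lowest score.
for name, traits in sorted(candidates.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{name}: {risk_score(traits):.2f}")
```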

Vaccine development in response to the COVID-19 pandemic proceeded at a pace unseen in modern medicine. Vaccine platforms are now in place such that even tighter timelines between virus identification and vaccine production may be realized. But every day – especially early in an outbreak – is critical and could mean the difference between life and death; so how can the program be maximally accelerated? Perhaps, as Dr. Krammer suggests, once viruses (and viral families) are identified, the process of vaccine development could commence. Not waiting for an actual viral outbreak across human populations is crucial.

Vaccine
Image Source: Flickr

mRNA-based vaccine development, which worked so well in the context of SARS-CoV-2, could once again be brought to bear. Moderna’s mRNA Access program[4] would be particularly helpful here – assisting in the identification of appropriate antigen(s), the design of relevant mRNA coding sequences, and other stability, expression, and production parameters associated with its (mRNA) vaccine platform. Once candidate vaccines were developed and tested pre-clinically, they could be evaluated in FDA-approved phase 1 and 2 (drug) testing protocols. Having the results of such clinical trials would position the vaccines for rapid deployment in phase 3 testing when circumstances warranted. Once it became clear that a (related) virus had been identified and an outbreak was imminent, scaled-up production, distribution, and inoculation efforts could be rapidly initiated. What might have taken years in the past, and took roughly a year for the COVID-19 vaccine, could now be accelerated to, Dr. Krammer predicts, 3-4 months after identification of the relevant viral strain. The value of such preparedness in terms of reducing and/or eliminating the disease burden is incalculable. There are many hurdles (e.g., regulatory, monetary, coordination) that would need to be overcome to effect such a strategy – but the impact could truly be life-saving on a world-wide scale.

SRT – June 2022

[1] https://www.nih.gov/news-events/news-releases/nih-announces-antiviral-drug-development-awards
[2] https://njbiz.com/65m-grant-funds-joint-academic-pharma-drug-accelerator/
[3] Krammer, F. (2020). Pandemic vaccines: how are we going to be better prepared next time? Med, 1(1), 28-32.
[4] https://mrna-access.modernatx.com

mRNA Technology: A Shot in the Arm for Development of New Drug Therapies

As millions the world over receive mRNA-based vaccinations for COVID-19, there is hope that the virus and its attendant wanton destruction may soon be in our collective rear-view mirrors. Other vaccine approaches, for example those employing viral vectors, are making their way into the armamentarium of anti-COVID-19 treatment options – the news just keeps getting better. The focus here is on mRNA technology – how did we get to this point, and what does it mean for the future?

mRNA vaccine COVID
Image Source: MIT News

The central dogma of molecular biology – loosely defined – states that DNA instructs mRNA creation, which directs protein synthesis. Ultimately, of course, it is the protein or enzyme created that is the molecule missing or defective in disease, or needed to create immunity. DNA/gene-based therapies have existed for some time, and recent advances have begun to overcome the early technical problems encountered. The use of protein biologics – molecules produced in living cell “factories” – has also emerged as a viable option to treat protein/enzyme deficiencies or to introduce specifically designed functional antibodies. However, as a protein biochemist who has developed protocols for purifying enzymatically active protein biologics, I can assure you the process is exquisitely complex, time consuming, and costly. The approach can and has worked – it is simply a matter of committing the time and resources to empirically determining/optimizing the purification protocols.

Another option has emerged – specifically, the development of mRNA technologies as a mechanism to induce protein/enzyme expression. Again, as pointed out above, it is not that the role of mRNA in protein synthesis was unclear; rather, there were technical problems attendant to the approach. Let’s consider some of these previous limitations and how they were overcome to allow mRNA to be an efficient messenger of protein synthesis in humans.

mRNA is exquisitely unstable. RNases – enzymes that break down mRNAs – are very efficient and ever-present. mRNA molecules do not readily enter cells, and even when they can be delivered, their mere presence often elicits an immune response. Couple this with relatively low protein yields from the cell’s translation machinery, and the need for repeated dosing is manifest. So, what has changed?

First, Karikó and coauthors showed that employing specifically modified nucleosides in the design and synthesis of an mRNA molecule would render it far less immunogenic.[1] A great first step! Next, the sequence of the mRNA coding region (the area that encodes the information for the protein itself) could be designed to take advantage of what was known about (protein) translation. That is, some codons (~mRNA sequences that encode specific amino acids) are translated more efficiently than others – resulting in greater overall protein yields. Recall that most amino acids are encoded by more than one codon; that is, the genetic code is degenerate. Detailed structural analyses of mRNAs also yielded new information about the importance of the 5’ and 3’ untranslated regions for the molecule’s overall stability and translational efficiency. A more complete understanding of mRNAs’ 5’ cap and 3’ poly(A) tail served to further extend the ability to preserve the molecule’s integrity.
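
As a toy illustration of the codon-choice idea, the sketch below maps each amino acid of a short peptide to a single “preferred” synonymous codon. The preference table is a placeholder for illustration only, not a real human codon-usage table or any manufacturer’s optimization scheme.

```python
# Toy codon-optimization sketch: pick one "preferred" synonymous codon per
# amino acid (table is illustrative, not a real codon-usage table).
PREFERRED_CODON = {
    "M": "AUG", "V": "GUG", "H": "CAC", "L": "CUG",
    "T": "ACC", "P": "CCU", "E": "GAG", "K": "AAG",
}

def design_mrna(peptide):
    """Return a toy mRNA coding region for a short peptide, codon by codon."""
    return "".join(PREFERRED_CODON[aa] for aa in peptide)

# Hypothetical short peptide (one-letter amino acid codes).
print(design_mrna("MVHLTPEE"))  # AUGGUGCACCUGACCCCUGAGGAG
```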

Next, it was necessary to design a delivery system – a mechanism that would both protect the mRNA molecule and assure its entry into cells. Many approaches were tested – lipid nanoparticles emerged as an efficient option. Once encapsulated and introduced into tissues, the mRNAs are internalized into cells by endocytosis – basically an engulfment of the lipid vesicle by the cell’s plasma membrane. Once inside the cell, the mRNA escapes the nascent endosome, emerges into the cytoplasm, and begins directing protein synthesis. The cell itself thus makes the protein.

Where does the technology go from here? The answer, quite simply, is that mRNA therapy could potentially be a suitable approach to treat many human diseases. Single-enzyme deficiencies underlie a large number of lysosomal storage diseases (e.g., Tay-Sachs and inclusion-cell (I-cell) disease), inherited metabolic diseases (e.g., Gaucher disease and Hunter syndrome), and peroxisomal diseases (e.g., acyl-CoA oxidase and D-bifunctional protein deficiencies). Arginase and the cystic fibrosis transmembrane conductance regulator are two additional proteins whose missing or defective activities cause disease (arginase deficiency and cystic fibrosis, respectively) and whose replacement is being sought through mRNA therapies. Designing and synthesizing appropriate mRNAs is relatively straightforward, as is lipid nanoparticle encapsulation. Cold-chain handling of the resultant therapeutic remains a requirement – but what a small price to pay for what could be life-changing medicines. It would not be inappropriate to say “the sky is the limit” with respect to the potential of mRNA-based protein/enzyme replacement therapeutics.

SRT – February 2021

Reference:

[1] K. Karikó, M. Buckstein, H. Ni, and D. Weissman, Immunity (2005) doi: 10.1016/j.immuni.2005.06.008. PMID: 16111635

Opioid Abuse in the COVID-19 Era

Somewhat lost in the worldwide COVID-19 health crisis is the continued destruction of lives through opioid abuse. Metaphorically speaking, it is as if the COVID-19 tsunami landed on a beach already flooded by the storm of the opioid abuse epidemic. One crisis does not mitigate the effects of the other; indeed, early data point to increasing opioid use, with already unacceptable consequences only looking to get worse.

Perhaps COVID-19’s effects on substance abuse were predictable – the pandemic has impacted people in numerous ways, many independent of actual viral infection. Social distancing requirements result in increased isolation and alienation. Economic turmoil has caused widespread unemployment (or reduced employment), leading many people to experience deep financial stress and anxiety. For those battling past abusive/addictive behaviors, the pandemic is a relapse catalyst – setting in motion a return to highly destructive actions, attitudes, and decisions. Opportunities to speak with healthcare professionals, therapists, faith-based counselors, or other support personnel are severely curtailed. These conditions facilitate the surge in opioid use and abuse being witnessed across the nation.

Opioids

What could possibly be done when two such health crises collide? We can only begin to attempt an answer here:

First, we must remove the stigma associated with opioid abuse. It should be recognized that opioid and related substance abuse/addiction represents a disease state – involving biological, environmental, and behavioral factors. It is not about moral failings, but neural networks; less about poor decision making, and more about limited perceived options. Individuals experiencing addiction deserve respect and an understanding of the toll of dependency; to marginalize them is demeaning and counterproductive.

COVID-19-related social distancing mandates also reduce the diversions available to potential opioid users/abusers, and fewer people are around to witness and help prevent or treat potential overdoses.

In healthcare, there is an emergent consensus on the effectiveness of medication-assisted treatment (MAT), defined by the Substance Abuse and Mental Health Services Administration (a division of the U.S. Department of Health and Human Services) as “the use of FDA-approved medications, in combination with counseling and behavioral therapies, to provide a ‘whole-patient’ approach to the treatment of substance use disorders.”[1] MAT is not replacing one drug with another – this would be an incomplete understanding of the therapy program. The use of buprenorphine, methadone, and naltrexone, in combination with counseling and social support/behavioral interventions, has dramatically altered the landscape of opioid use disorder treatment. MAT works – and works well. The problem is the limited number of physicians trained and licensed to administer MAT. Physicians must receive a Drug Addiction Treatment Act of 2000 (DATA 2000) waiver (also known as an “X” waiver) to prescribe the requisite drugs and deliver the appropriate behavioral therapies. Physicians may become waivered after 8 hours of didactic training; medical students require 8 hours of specialized training coupled with a clinical experience demonstrating MAT’s use with opioid use disorder patients. Only a small percentage (<10%) of practicing physicians in this country possess the waiver; of those, less than half actually deliver MAT. Some 40% of counties in the U.S. do not have a waivered physician. Tens of thousands of citizens die from opioid use disorder every year – we must increase the number of X-waivered physicians and encourage more of them to practice the therapy.

If there is any silver lining to the COVID-19 crisis, it is the enabling of telemedicine. For those sheltering in place but requiring access to the health system, telemedicine offers a world of new possibilities. Every attempt must be made to promote digital literacy in vulnerable populations to maximize the impact of this technology.

Recognizing the devastation wrought by COVID-19’s impact on opioid abuse disorders, several local and state jurisdictions across the country are trying to help. As per recommendations made by the American Medical Association[2],[3], several changes are afoot. Buprenorphine may now be prescribed to patients by phone or telemedicine encounter. Methadone is being prescribed in amounts that will last almost a month. These lifesaving drugs are being delivered directly to patients in their homes. The process to have prescriptions refilled has also been streamlined – for example, no toxicology or other testing is required. These developments enable care without the risk of exposure to COVID-19 inherent in in-person visits. Finally, naloxone is being recognized as the true overdose wonder drug – and is being far more liberally distributed.

These are incredibly difficult times – with a global viral-based pandemic intersecting with a devastating substance abuse epidemic. However, as with all crises, good ideas, critical reasoning, and evidence-based decision making will chart a course for real change and true improvement. It cannot happen too quickly for all those affected by opioid use disorders.

 

SRT – July 2020

References:
[1] Medication-Assisted Treatment (MAT). (2020, April 30). Retrieved July 06, 2020, from https://www.samhsa.gov/medication-assisted-treatment

[2] COVID-19 policy recommendations for OUD, pain, harm reduction. (2020, July 2). Retrieved July 06, 2020, from https://www.ama-assn.org/delivering-care/public-health/covid-19-policy-recommendations-oud-pain-harm-reduction

[3] Taking action on opioid use disorder, pain &amp; harm reduction during COVID-19. (2020, July 2). Retrieved July 06, 2020, from https://www.ama-assn.org/delivering-care/opioids/taking-action-opioid-use-disorder-pain-harm-reduction-during-covid-19

Caenorhabditis elegans – the Quintessential Biological Model

The nematode Caenorhabditis elegans (C. elegans) has proven itself time and time again to be an organism of immense value to biomedical researchers. Important studies employing the biological model appear regularly in top-tier journals across a wide array of research areas, reminding anyone perusing the scientific literature of the worm’s immense power. Consider, for example, two related papers that appeared recently: one regarding selective autophagy and lifespan[1], and the other focused on caloric restriction and how its anti-aging effects are elicited from a cellular/metabolic perspective[2].

C. elegans 3D model
The C. elegans 3D model. VirtualWorm project.

In the former, Kumsta and colleagues show that the (C. elegans) protein p62/SQST-1 (~p62) plays an important role in recognizing cellular proteins, macromolecular structures, and even intracellular organelles (e.g., mitochondria) that have been earmarked for destruction. Such recognition leads to trafficking of the p62-substrate complex to intracellular degradative centers where the actual destruction takes place. Importantly, worms genetically engineered to overexpress p62 enjoy not only an efficiently operating “selective autophagy” pathway, but also a 25% increase in lifespan. Precisely which proteins constitute the repertoire recognized by p62 remains to be determined, but the existence of selective autophagy as a cellular mechanism suggests myriad potential applications in targeting for destruction those proteins or structures identified as toxic and associated with human disease. Cellular quality control is critical – and surveillance systems, including those mediated by p62, that help maintain proteostasis (i.e., the integrity of the cell’s proteome) are essential.

The article by Weir and colleagues suggests that the anti-aging effects of caloric restriction are elicited, at least in part, through maintenance of mitochondrial network integrity and an interplay with functional (i.e., fatty acid-metabolizing) peroxisomes. AMP-activated protein kinase (AMPK) acts similarly to dietary restriction, eliciting many equivalent effects – including those on longevity. These studies raise the follow-up question: are the anti-aging therapeutics of the future those that both assure structurally sound mitochondria, whose metabolic (read: fat-metabolizing) functions are carefully coordinated with peroxisomes, and activate appropriate metabolic cascades – including those involving AMPK? The data generated with C. elegans and presented in this interesting (Cell Metabolism) paper certainly support such conclusions.


A final word or two regarding C. elegans. Dr. Sydney Brenner performed pioneering work in the 1960s and 1970s establishing the organism as a powerful model for biomedical studies. Among the work done was a description of the worm’s neuronal circuitry. For these and related studies, Dr. Brenner, together with Drs. H. Robert Horvitz and John Sulston, was awarded the 2002 Nobel Prize in Physiology or Medicine. C. elegans was the first multicellular eukaryote to have its genome sequenced; the developmental fate of every one of its 959 somatic cells is known; and all its neural connections have been identified. The latter, known as a “connectome”, is available for no other animal at present. The worm has been used in studies involving myriad topics in cell biology, with results impacting all aspects of human health, disease, and aging.

The organism made big news when it was revealed in 2003 that nematodes brought aboard the shuttle Columbia for experimental purposes, had survived the tragic fiery crash of the spacecraft. Upon reentry into the earth’s atmosphere, the creatures were exposed to astonishingly harsh temperatures, centrifugal/gravitational forces, and atmospheric conditions; yet they returned alive. If worms could survive such conditions – could other microorganisms also do so?  Over the course of time, have microorganisms hitched rides on asteroids, comets, meteors and the like and traveled across the heavens – transferring life forms? Hmm…

SRT – January 2020

References:

[1] Kumsta C, Chang JT, Lee R, et al. The autophagy receptor p62/SQST-1 promotes proteostasis and longevity in C. elegans by inducing autophagy. Nat Commun. 2019;10(1):5648. Published 2019 Dec 11. doi:10.1038/s41467-019-13540-4

[2] Weir HJ, Yao P, Huynh FK, et al. Dietary Restriction and AMPK Increase Lifespan via Mitochondrial Network and Peroxisome Remodeling. Cell Metab. 2017;26(6):884–896.e5. doi:10.1016/j.cmet.2017.09.024

A Bioethicist’s Response to “When is a Brain Really Dead?”

Today’s post is a response by Dr. Bryan Pilkington to Dr. Stan Terlecky’s July 15, 2019 Terlecky’s Corner post entitled, “When is a Brain Really Dead?” Dr. Pilkington is an Associate Professor at Seton Hall University.

Last month’s Terlecky’s Corner post raises the question, “When is a brain really dead?” The reanimation – to borrow Terlecky’s phrase and his caution about its use – of porcine brains is both exciting and concerning. The excitement is due, as Terlecky notes well, to the possible benefits of such work: “potentially impacting several health scourges of our time including Alzheimer’s disease, Parkinson’s disease, and other age-related neurodegenerative disorders.”[1] He also notices that ethicists will be busy sorting through these and future studies, especially if the human brain gets involved.

Brain

This is how the usual dialectic goes when addressing ethical questions about emerging medical technologies. Undoubtedly, there will be lectures and papers from ethicists raising the usual kinds of questions, many titled with some version of, “We can, but should we?” These are not endeavors without merit; ethical analysis often lags behind research, and in many cases, something has to be created in order to have something about which to raise questions. In fact, one of the important tasks of ethicists in this realm is to serve as watchdogs: to give caution, to aid researchers in thinking through possible concerns, and in some instances – at least with respect to IRB oversight – to say “no.”[2] However, though this is a useful and important role, it comes at a cost. Many times, those raising these kinds of concerns are termed “the ethics police”[3], and this image has hurt collaboration on research and made more challenging the interprofessional work that medical research and healthcare practice require.

So what should be said in response to this kind of research? What are the answers to Terlecky’s list of ethical questions? The questions require longer answers than the space of a blog post admits, but I’ll take a shot at two sets of questions, suggesting a model for answering them and those remaining.

The first set comprises a list of real and immediate questions about one of the practical consequences of this research: organ donation. Will donation rates plummet? Will the organ shortage increase because there will be holdouts who think reanimation is possible? Will public trust diminish as this research moves forward?[4] Will families request that additional healthcare resources be spent on loved ones who meet the legal definition of brain death? An answer which avoids the problematic policing approach but takes seriously the importance of these questions is to engage in an extended conversation with researchers, ethicists, healthcare practitioners and administrators, and members of their communities about new technology, how it might be used, and its far-reaching implications. We must not shy away from these hard questions nor from recognizing the potential value of certain kinds of research, but we must also keep in mind the possible negative externalities that could result. This approach raises more questions. Should this research be halted if it damages public trust? Should it be stopped if fewer organs are donated? These questions lead to further questions. If the organ shortage is a primary concern, is it appropriate to connect it to this research? The conversation I am suggesting must consider that, as well. Some have argued that the sale of organs should be allowed and that this would alleviate the shortage[5]; others have raised concerns about the commodification of human beings if such sale is legalized[6] – the breadth of the needed conversation is wide, as answers to questions about organ sale are connected to ethical concerns raised about porcine brain experimentation.

A second set of questions hovers around the definition of death and the sources upon which we rely to answer those questions. Terlecky helpfully asks about the legal, ethical, and spiritual determinations of death. Where, or in what, do we root our conceptions of life, of human flourishing, and of death? Are they religious? Are they legalistic? Are they rooted in metaphysical conceptions of the person that we learned in our undergraduate philosophy courses? The kind of conversation I am suggesting is most effectively held when we bring the deep and rich traditions that inform our thought to bear on the subject matter under discussion. It is not a simple task to work through various traditions and try to understand how others think about reanimated porcine brains, human brains, and death, but that is what is needed. As physician and ethicist Lauris Kaldjian recently asked of his colleagues,[7] did you take a course about the existential questions in medical school? Though rhetorical, the suggestion is powerful. How should we respond to this research? In the same way we should respond to all research that raises significant ethical questions: by practically reasoning together.

 

References:

[1] Terlecky, S. 2019, July 15. “When is a Brain Really Dead?”

[2] Evans, J. 2012. The History and Future of Bioethics: A Sociological Account. Oxford, United Kingdom: Oxford University Press.

[3] Klitzman, R. 2015. The Ethics Police? The Struggle to Make Human Research Safe. Oxford: Oxford University Press.

[4] Moschella, M. 2018. Brain death and organ donation: A crisis of public trust. Christian Bioethics 24(2):133–50.

[5] Cherry, M. 2005. Kidney for Sale by Owner: Human Organs, Transplantation, and the Market. Georgetown University Press.

[6] Pilkington, B. 2018. A Market in Human Flesh: Ramsey’s Argument on Organ Sale, 50 Years Later. Christian Bioethics 24(2):133–50.

[7] Kaldjian, L. (Personal communication during lecture in Grand Rapids, Michigan, March 25, 2019).

When is a Brain Really Dead?

When is a brain really dead? The answer to this question was made far more complicated by the recent work of Dr. Nenad Sestan and colleagues at Yale University. Their astonishing paper entitled “Restoration of brain circulation and cellular functions post-mortem,” appeared in the journal Nature[1] this April. In it, the authors were able to demonstrate that brains taken from slaughtered animals (pigs in this case), could be “reanimated” 4 hours later in the laboratory – and made at least partially functional for some 6 hours thereafter. I use the word reanimated with some trepidation – it implies the brains were dead and somehow brought back to life. That is not quite the story – rather, the organ turns out to be far more resilient than we previously realized and the research team simply identified a way to tap into that inherent resiliency.

A brief description of the research study: 32 brains from slaughtered pigs were delivered to the research team on ice. Within 4 hours, the scientists carefully perfused the brains using a proprietary surgical procedure, pumping apparatus, and oxygen- and nutrient-rich solution, collectively termed BrainEx. The team then analyzed the brains for specific cellular, metabolic, and electrical activities. What they found was incredible. The BrainEx perfusion system restored a number of brain functions, including glucose and O2 utilization with concomitant CO2 production (indicative of metabolic function), inducible inflammatory responses (suggesting an active immune system), active microcirculation (evidence of structural integrity), and electrical activity (with neuronal firing).

Figure of porcine brain connected to perfusion system
Connection of the porcine brain to the perfusion system via arterial lines. The pulse generator (PG) transforms continuous flow to pulsatile perfusion. Source: Figure 1B, Nature 568, 336–343 (2019).

With respect to the last point, it should be noted the investigators were well aware of the ethical concern that full restoration of brain function could potentially lead to a state of “consciousness.” What it would mean for a disembodied brain to attempt to operate without peripheral sensory input, and what the organ might remember, was simply too much to consider; the brains were treated pharmacologically to assure no coordinated higher level cognitive activity was possible. Said another way, the renewed brains could not begin to think.

Overall, the results suggested that, to a first approximation, functional activity of the otherwise dead brain had been restored. Importantly, from an experimental standpoint, control perfusates were without effect – the brains degenerated much like untreated specimens.

Against the backdrop of these remarkable results are the inevitable ethical questions that follow. Returning to the one posed above – when is a brain really dead? If our understanding of brain death requires a deeper examination, how then do we determine legally, ethically, and/or spiritually, when a person truly dies? What are the implications of this work for organ donation? Is brain death as currently defined sufficient to permit the harvesting of a person’s organs? Will (and should) donations ebb as people begin to question the legitimacy of declarations of death? Also, could future brain reanimation experiments include restoration of conscious thought? Certainly the ability to control the brain in this manner permits the testing of drugs in new ways – potentially impacting several health scourges of our time including Alzheimer’s disease, Parkinson’s disease, and other age-related neurodegenerative disorders.

Naturally, science requires replication, expansion, and more careful delineation of what is, and is not possible with the technology described. But what enormous doors have been opened for neuroscientists, neurologists, neuropharmacologists, cognitive scientists, and the many others interested in the structure and function of the brain. I suspect ethicists will also be very busy sorting through these and the inevitable follow-up studies – especially as applications involving the human brain are contemplated.

SRT – July 2019

[1] Vrselja Z, et al., Nature 568, 336-343 (2019). PMID: 30996318

Cardiac Exosomes to the Rescue

In an extremely exciting study that appeared last month in the journal Nature Communications[1], scientists from the University of Alabama at Birmingham and Huazhong University of Science and Technology in Wuhan, China, showed that after myocardial infarction (~heart attack), heart tissue releases tiny membrane-bound vesicles called exosomes, specifically loaded with genetic material designed to promote cardiac tissue repair. Such restoration is mediated by bone marrow progenitor cells that have been released from their sequestered existence in the bone marrow by the information carried in the newly arrived exosomes.

Several hundred thousand Americans suffer heart attacks each year, making it critical for the medical and scientific communities to understand this potentially transformative study. To set the stage, let’s begin with exosomes – tiny membrane-enclosed vesicles released from many cell types. Thought to mediate communication/cell-cell signaling activities, exosomes leave cells loaded with proteins, lipids, DNA, mRNA, and microRNAs, among other potentially bioactive molecules. Exosomes are synthesized within the endosome-lysosome system of cells – emerging cargo-loaded and poised to deliver their contents locally, or at a distance. It is perhaps not surprising that several pharmaceutical companies – recognizing the potential drug delivery platform that exosomes represent – are pursuing the tiny vesicles in several biomedical contexts.

Once released, how the nascent exosomes home to specific target tissues and cells is unclear. Perhaps the proteins assembled into their membranes – distinct to each type of exosome – play a role in this discrimination. Also not well understood is the nature of the exosome-target cell interaction. Is it a receptor-ligand-like binding event that triggers signaling and results in fusion and integration of exosomal contents? Or is it a more basic fusion of two membranes held in close apposition? What about a straightforward endocytic process whereby the exosome is taken up and released within the newly formed endosome?

Back to the study at hand – the investigative team found that the exosomes released from damaged heart muscle contain specific microRNAs (called myocardial microRNAs, or myo-miRs). Recall that microRNAs are short, non-coding RNAs with the capacity to (negatively) regulate gene expression. The myo-miRs under consideration here do just that – they very effectively block activity of the chemokine receptor CXCR4 in bone marrow progenitor cells. This inhibition allows the stem-like cells to escape their seclusion, enter the bloodstream, and travel to the injured heart…“exosomes to the rescue.” There, repair processes are initiated.
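
The repression mechanism can be pictured with a toy “seed match” search: a microRNA’s seed region (roughly nucleotides 2-8) base-pairs with complementary sites in a target mRNA, typically in its 3' untranslated region. The sequences below are invented placeholders, not the actual myo-miRs or the CXCR4 transcript.

```python
# Toy microRNA seed-match sketch (sequences are hypothetical).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna, utr):
    """Return positions in the UTR matching the reverse complement of the
    miRNA seed (nucleotides 2-8)."""
    seed = mirna[1:8]
    target = "".join(COMPLEMENT[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

mirna = "UAGCAGCACAUCAUGGUUUACA"     # invented 22-nt microRNA
utr = "AAUGCUGCUAAGCAUGUGCUGCUAGGA"  # invented 3' UTR fragment

print(seed_match_sites(mirna, utr))  # [2, 16]
```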

Knowing this cell biology, there must be ways to better harness exosomes to improve cardiac repair. Investigators are sure to pursue this quickly. Almost as assuredly, scientists will also be looking for other examples of exosomes and their bioactive contents as biomarkers for pathology, and as potential rescue mechanisms for disease’s damaging wrath.

SRT – April 2019

[1] Cheng M, et al., Nat Commun. (2019) doi: 10.1038/s41467-019-08895-

Unexpected Role for Mitochondria in Bacterial Killing

Just when we thought we understood the cell biology of antimicrobial defense in macrophages, a study like the one detailed by Abuaita and colleagues in their Cell Host & Microbe article comes along.[1] Not only are the molecular mechanisms of pathogen killing updated, but new cellular signaling and trafficking pathways are implicated. There is a lot of new and important science included in this exciting paper.

Let’s begin by remembering that macrophages are immune cells whose role is to internalize and degrade invading microorganisms. They do so by endocytosing the pathogenic microbes and sequestering them in a newly developed phagosome. It is there that the destructive processes are initiated; final elimination of residual components occurs in the lysosome – a related degradative organelle that works in close concert with the nascent phagosome.

Graphical abstract of Abuaita et al, 2018.

What Abuaita and co-authors describe is a process by which certain bacteria, including the potentially dangerous methicillin-resistant Staphylococcus aureus, trigger more than simply a phagocytic event. Rather, it is clear that the phagosome somehow recognizes the microorganism within its midst and launches a far more lethal and complex biochemical reaction. The cascade begins with the infected macrophage mounting a stress response – one initiated by the phagosome but further elaborated by the endoplasmic reticulum. Specifically, the endoplasmic reticulum turns on one of the most elegantly choreographed and well-described quality-control systems in the cell, called the unfolded protein response, or UPR. The critical sensor that needs to be tripped to turn on the pathway is the endoplasmic reticulum membrane protein IRE1α. It is at this point that the cell biology really gets exciting.

The response is not confined to the endoplasmic reticulum and its well-understood downstream nuclear effector pathways; rather, the signal is somehow transmitted to mitochondria. In response, the organelle is prompted to produce reactive oxygen species – hydrogen peroxide in particular. Mitochondria are known to produce reactive oxygen species, including hydrogen peroxide, as part of their normal energy-generating metabolism. What is different here is that a defense mechanism initiated in the phagosome signals the endoplasmic reticulum, which then communicates with the mitochondria – a truly elegant and previously unrecognized relay system.

As interesting as the new interorganellar signaling events are, there is more – specifically, the fascinatingly novel trafficking pathways that were identified.

The newly synthesized mitochondrial hydrogen peroxide is packaged in vesicles which are shed from the organelle. The authors even identify a critical component of this release step – the ubiquitin ligase, Parkin. These newly created, hydrogen peroxide-loaded mitochondrial vesicles then migrate through the cell, destined to fuse with pathogen-infected phagosomes – releasing their toxic contents and supplementing the already initiated bacterial killing process. Just in case the newly delivered hydrogen peroxide does not provide a sufficient amount of toxic reactive oxygen, some mitochondrial vesicles actually encapsulate the hydrogen peroxide-synthesizing enzyme, superoxide dismutase-2.

What we have here is the cell marshalling its degradative armamentarium in a manner we never imagined to help fight methicillin-resistant Staphylococcus aureus and other invading pathogens. It seems clear that this is only the tip of the iceberg – interorganelle communication/signaling networks and elaborately coordinated effector apparatuses in cells are certain to exist in ways we can only begin to imagine. In this case, the goal is to fight infection; in others, it may very well be to thwart the effects of aging, to counter disease, or neutralize the effects of mutations, toxins, or other environmental insults. The more we know about cell biology, the more we realize how much we do not know; it is amazing how the field continues to captivate our attention.

SRT – February 2019

[1] B.H. Abuaita et al., Cell Host & Microbe (2018) doi: 10.1016/j.chom.2018.10.005. PMID: 30449314 (Request article via ILL).

The Value of Negative Results

Every investigator hopes the results they obtain support the hypothesis they put forth. However, more often than not, this does not happen – the data acquired do not conform to what was expected. What to do then with the “negative” results?

To be clear, the results we are talking about were obtained through carefully considered, well executed, and appropriately controlled (read: presence of positive and negative controls) experiments. They simply do not extend that which was predicted; in some cases, they may even call into question the underlying – often already published – results that led to the hypothesis guiding the study.

Of course, it may be that the work that was to be extended was never solid in the first place. That is, it is possible that the prevailing view in a field is based on incorrect and/or irreproducible results. Indeed, a number of studies show that an alarmingly large percentage of high-profile published results are not reproducible[1],[2]. In the field of cancer, the pharmaceutical giants Amgen and Bayer Healthcare were unable to replicate the findings of a large number of studies published in elite journals. The implications are profound: how can companies that rely on published research to define molecular targets and develop therapeutic drugs do so against a backdrop of irreproducibility?

Source: Baker M. 1,500 scientists lift the lid on reproducibility. Nature; 2016.

Why is this happening? Explanations offered include, among others, pressures to publish, financial considerations, poor statistical analyses, insufficiently detailed protocols/technical complexity, selective reporting, and inadequate reagent authentication. The National Institutes of Health is aware of these problems and is calling on investigators seeking support to include in their proposals detailed descriptions of i. the scientific premise of the proposal; ii. experimental design specifications; iii. how biological variability will be considered; and iv. how biological and chemical reagents will be authenticated[3]. This is in addition to newly required statements demonstrating that a detailed plan for data analysis is in place.
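
One back-of-the-envelope way to see how limited statistical power and selective reporting combine to produce an unreliable literature is to compute the positive predictive value (PPV) of a “significant” finding. The numbers below are illustrative assumptions, not figures taken from the studies cited above.

```python
# Illustrative arithmetic (assumed inputs, not data from the cited papers):
# PPV of a statistically significant result given the prior probability that
# the hypothesis is true, the statistical power, and the alpha level.
def ppv(prior, power, alpha=0.05):
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior, power in [(0.1, 0.8), (0.1, 0.3), (0.5, 0.8)]:
    print(f"prior={prior}, power={power}: PPV = {ppv(prior, power):.2f}")
```

Under these assumed numbers, even a well-powered test of a long-shot hypothesis yields a substantial fraction of false positives, which is one more reason negative and confirmatory results deserve a public airing.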

Although the (negative) results obtained do not confirm or extend other studies, the point here is that they are still very much of value. Other scientists in the field would welcome knowing what was done and what was observed. The obvious benefit is that this would prevent others from wasting time, energy, and resources on approaches that are not fruitful, and would help focus the field by better defining which results and models are, and are not, reliable.

Despite the inherent value of such results, there is a pervasive publication bias – the perception that only positive data are worthy of publication. Non-confirmatory or negative results are often not disseminated, at great cost to the scientific community.

One approach to assuring sufficient promulgation of the conclusions of a study is for appropriate journals to agree, in advance, to publish the results regardless of whether they confirm or refute the underlying hypotheses that initiated the study. This would provide a mechanism for a field to enjoy a wealth of otherwise unreported information about what works, and what does not, in particular investigators’ laboratories. Many clinical trials operate this way, with final results disseminated regardless of the effects seen on patients. A second approach would be to develop journals that will consider negative, confirmatory, and non-confirmatory results, data notes, and virtually any valid scientific or technical finding in a particular field. F1000Research is a journal that adheres to just such guidelines. The idea that all results are embraced is extremely attractive; after all, science is, at its core, simply a search for the truth.

SRT – November 2018

[1] M. Baker, Nature (2016) doi: 10.1038/533452a. PMID: 27225100 (article)
[2] C.G. Begley and L.M. Ellis, Nature (2012) doi: 10.1038/483531a. PMID: 22460880 (article)
[3] National Institutes of Health – New Grant Guidelines; what you need to know. https://grants.nih.gov/reproducibility/documents/grant-guideline.pdf