The next frontier in reproductive tourism? Genetic modification

By Rosa Castro, Duke University.

The birth of the first baby conceived using a technique called mitochondrial replacement, which uses DNA from three people to “correct” an inherited genetic mutation, was announced on Sept. 27.

Mitochondrial replacement, or donation, allows women who carry mitochondrial diseases to avoid passing them on to their children. These diseases range from mild to life-threatening. No cures exist, and only a few drugs are available to treat their symptoms.

There are no international rules regulating this technique, and just one country, the United Kingdom, explicitly regulates the procedure. The situation is similar for other assisted reproductive techniques: some countries permit them and others don’t.

I study the intended and unintended consequences of regulating, prohibiting or authorizing the use of new technologies. One of these unintended consequences is “medical tourism,” where people travel from their home countries to places where practices such as commercial surrogacy or embryo selection are allowed.

Medical tourism for assisted reproductive technologies raises a host of legal and ethical questions. While new reproductive technologies like mitochondrial replacement promise significant benefits, the absence of regulations means that some of these questions, including those related to safety and risks, remain unanswered even as people begin to use them.

Mitochondria power our cells.

How does mitochondrial replacement work?

We each inherit our mitochondria – which provide the energy our cells need to function – and the tiny fraction of our DNA they contain, only from our mothers. Some of that mitochondrial DNA might be defective, carrying mutations or errors that can lead to mitochondrial diseases.

The mother of the baby born using this technique carried one of these diseases. The disease, known as Leigh Syndrome, is a neurological disorder that typically leads to death during childhood. Before having this baby, the couple had two children who died as a result of the disease.

Mitochondrial replacement is done in a lab, as part of in vitro fertilization. It works by “substituting” the defective mitochondria of the mother’s egg with healthy mitochondria obtained from a donor. The child is genetically related to the mother, but has the donor’s mitochondrial DNA.

It involves three germ cells: an egg from the mother, an egg from a healthy donor and the sperm from the father. While the term “three-parent” child is often used in news stories, it is a highly controversial one.

To some, the tiny fraction of DNA contained in the mitochondria provided by a donor is not sufficient to make the donor a “second mother.” The U.K., the only country that has regulated the technique, takes this position. Ultimately, the DNA replaced is a tiny fraction of a person’s genes, and it is unrelated to the characteristics that we associate with genetic kinship.

There is some discussion as to whether mitochondrial replacement is a so-called “germline modification” – a genetic modification that can be inherited. Many countries, including the U.K., have either banned or taken a restrictive stance on technologies that could alter germ cells and cause inherited changes affecting future generations. But a great number of countries, including Japan and India, have ambiguous or unenforceable regulations on germline modification.

Mitochondrial replacement results in a germline change, but that change is passed to future generations only if the child is a girl. She would pass the donor’s mitochondrial DNA to her offspring, and in turn her female descendants will pass it to their children. If the child is a boy, he wouldn’t pass the mitochondrial DNA on to his offspring.
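The inheritance logic described above can be sketched in a few lines of code (a toy illustration, not from the article; the function name and the lineage encoding are my own):

```python
def inherits_donor_mtdna(lineage):
    """Return True if a descendant carries the donor's mitochondrial DNA.

    `lineage` lists the sexes ("F" or "M") of the ancestors linking the
    treated child to the descendant, starting with the treated child.
    Because mtDNA is inherited only from mothers, the donor's mtDNA
    reaches the descendant only if every link in the chain is female.
    """
    return all(sex == "F" for sex in lineage)

# The treated child is a girl: her children inherit the donor mtDNA.
print(inherits_donor_mtdna(["F"]))       # True
# The treated child is a boy: his children do not.
print(inherits_donor_mtdna(["M"]))       # False
# A girl's son carries it himself, but his own children do not.
print(inherits_donor_mtdna(["F", "M"]))  # False
```

Any male anywhere in the chain is a dead end for the donor’s mitochondrial DNA, which is exactly why limiting the technique to male embryos stops the change from being inherited.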

Because the mitochondrial modification is heritable only through girls, the U.S. National Academies of Sciences, Engineering, and Medicine recently recommended that use of this technique be limited to male embryos, in which the change would not be heritable. The U.K. considered but then rejected this approach.

A thorny ethical and regulatory debate

In the U.S., the FDA claimed jurisdiction to regulate mitochondrial replacement but then halted further discussions. A rider included in the 2016 Congressional Appropriations Act precludes the FDA from considering mitochondrial replacement.

While the technique has been given the green light in the U.K., the nation’s Human Fertilisation and Embryology Authority is gathering more safety-related information before granting the first licenses for mitochondrial replacement to clinics.

Experts have predicted that once the authority starts granting authorizations, people seeking mitochondrial replacement will travel to the U.K.

At the moment, with no global standard dictating the use of mitochondrial replacement, couples (and experts willing to use these technologies) are going to countries where the procedure is allowed.

This has happened with other technologies such as embryo selection and commercial surrogacy, with patients traveling abroad to seek out assisted reproduction services or technologies that are either prohibited, unavailable, of lower quality or more expensive in their own countries.

The first documented case of successful mitochondrial replacement involved U.S. physicians assisting a Jordanian couple in Mexico. Further reports of the use of mitochondrial replacement in Ukraine and China have followed.

In this Nov. 3, 2015 photo, a newborn baby is transferred to an ambulance at the Akanksha Clinic, one of the most organized clinics in the surrogacy business, in Anand, India.
Allison Joyce/AP

The rise of medical tourism has been accompanied by sporadic scandals and waves of tighter regulation in countries such as India, Nepal and Thailand, which have been leading destinations for couples seeking assisted reproduction services.

Intended parents and children born with the help of assisted reproduction outside of their home countries have faced problems related to family ties, citizenship and their relationship with donors – especially with the use of commercial surrogacy.

Mitochondrial replacement and new gene editing technologies add further questions related to the safety and long-term effects of these procedures.

Gene modification complicates reproductive tourism

Mitochondrial replacement and technologies that create germline modifications, such as gene editing with CRISPR-Cas9, are relatively new. Many of the legal and ethical questions they raise have yet to be answered.

What if children born as a result of these techniques suffer unknown adverse effects? And could these technologies affect the way in which we think about identity, kinship and family ties in general? One technique to replace mutated mitochondria involves creating embryos that will later be discarded. How should the use and disposal of embryos be regulated? What about the interests of the egg donors? Should they be paid?

Some of these problems could be avoided through a solid regulatory system in the U.S. and other countries. But as long as patients continue to seek medical treatments in “havens” for ethically dubious or risky procedures, many of these problems will persist.

Regulatory authorities around the world are debating how to better regulate these genetic modification technologies. Governments need to start considering not only the ethical and safety effects of their choices but also how these choices drive medical tourism.

Rosa Castro, Postdoctoral Associate in Science and Society, Duke University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Eat Lots of Fiber or Microbes Will Eat Your Colon

It sounds like the plot of a 1950s science fiction movie: normal, helpful bacteria begin to eat their host from within, because they don’t get what they want.

But that’s exactly what happens when microbes inside the digestive system don’t get the natural fiber that they rely on for food.

Starved, they begin to munch on the natural layer of mucus that lines the gut, eroding it to the point where dangerous invading bacteria can infect the colon wall.

For a new study, researchers looked at the impact of fiber deprivation on the guts of specially raised mice. The mice were born and raised with no gut microbes of their own, then received a transplant of 14 bacteria that normally grow in the human gut. Because the researchers knew the full genetic signature of each one, they were able to track its activity over time.

Fiber, fiber, fiber

The findings, published in the journal Cell, have implications for understanding not only the role of fiber in a normal diet, but also the potential of using fiber to counter the effects of digestive tract disorders.

“The lesson we’re learning from studying the interaction of fiber, gut microbes, and the intestinal barrier system is that if you don’t feed them, they can eat you,” says Eric Martens, associate professor of microbiology at the University of Michigan Medical School.

Researchers used the gnotobiotic, or germ-free, mouse facility and advanced genetic techniques to determine which bacteria were present and active under different conditions. They studied the impact of diets with different fiber content—and those with no fiber. They also infected some of the mice with a bacterial strain that does to mice what certain strains of Escherichia coli can do to humans—cause gut infections that lead to irritation, inflammation, diarrhea, and more.

The result: the mucus layer stayed thick, and the infection didn’t take full hold in mice that received a diet that was about 15 percent fiber from minimally processed grains and plants. But when the researchers substituted a diet with no fiber in it, even for a few days, some of the microbes in their guts began to munch on the mucus.

They also tried a diet that was rich in prebiotic fiber – purified forms of soluble fiber similar to what some processed foods and supplements currently contain. This diet resulted in erosion of the mucus layer similar to that seen with no fiber at all.

The researchers also saw that the mix of bacteria changed depending on what the mice were being fed, even day by day. Some species of bacteria in the transplanted microbiome were more common—meaning they had reproduced more—in low-fiber conditions, others in high-fiber conditions.

And the four bacteria strains that flourished most in low-fiber and no-fiber conditions were the only ones that make enzymes that are capable of breaking down the long molecules called glycoproteins that make up the mucus layer.

In addition to looking at the abundance of bacteria based on genetic information, the researchers could see which fiber-digesting enzymes the bacteria were making. They detected more than 1,600 different enzymes capable of degrading carbohydrates – similar to the complexity in the normal human gut.

Mucus layer

Just like the mix of bacteria, the mix of enzymes changed depending on what the mice were being fed, with even occasional fiber deprivation leading to more production of mucus-degrading enzymes.

Images of the mucus layer, and the “goblet” cells of the colon wall that produce the mucus constantly, showed the layer was thinner the less fiber the mice received. While mucus is constantly being produced and degraded in a normal gut, the change in bacteria activity under the lowest-fiber conditions meant that the pace of eating was faster than the pace of production—almost like an overzealous harvesting of trees outpacing the planting of new ones.

When the researchers infected the mice with Citrobacter rodentium—the E. coli-like bacteria—they observed that these dangerous bacteria flourished more in the guts of mice fed a fiber-free diet. Many of those mice began to show severe signs of illness and lost weight.

When the scientists looked at samples of their gut tissue, they saw not only a much thinner or even patchy mucus layer – they also saw inflammation across a wide area. Mice that had received a fiber-rich diet before being infected also had some inflammation, but across a much smaller area.

“To make it simple, the ‘holes’ created by our microbiota while eroding the mucus serve as wide open doors for pathogenic micro-organisms to invade,” says former postdoctoral fellow Mahesh Desai, now a principal investigator at the Luxembourg Institute of Health.

The researchers will next look at the impact of different prebiotic fiber mixes, and of diets with intermittent natural fiber content over a longer period. They also want to look for biomarkers that could reveal the status of the mucus layer in human guts – such as the abundance of mucus-digesting bacteria strains – and to study the effect of low fiber on chronic diseases such as inflammatory bowel disease.

“While this work was in mice, the take-home message from this work for humans amplifies everything that doctors and nutritionists have been telling us for decades: Eat a lot of fiber from diverse natural sources,” says Martens.

“Your diet directly influences your microbiota, and from there it may influence the status of your gut’s mucus layer and tendency toward disease. But it’s an open question of whether we can cure our cultural lack of fiber with something more purified and easy to ingest than a lot of broccoli.”

Source: Republished as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.


How many genes does it take to make a person?

By Sean Nee, Pennsylvania State University.

We humans like to think of ourselves as on the top of the heap compared to all the other living things on our planet. Life has evolved over three billion years from simple one-celled creatures through to multicellular plants and animals coming in all shapes and sizes and abilities. In addition to growing ecological complexity, over the history of life we’ve also seen the evolution of intelligence, complex societies and technological invention, until we arrive today at people flying around the world at 35,000 feet discussing the in-flight movie.

It’s natural to think of the history of life as progressing from the simple to the complex, and to expect this to be reflected in increasing gene numbers. We fancy ourselves leading the way with our superior intellect and global domination; the expectation was that since we’re the most complex creature, we’d have the most elaborate set of genes.

This presumption seems logical, but the more researchers figure out about various genomes, the more flawed it seems. About a half-century ago the estimated number of human genes was in the millions. Today we’re down to about 20,000. We now know, for example, that bananas, with their 30,000 genes, have 50 percent more genes than we do.

As researchers devise new ways to count not just the genes an organism has, but also the ones it has that are superfluous, there’s a clear convergence between the number of genes in what we’ve always thought of as the simplest lifeforms – viruses – and the most complex – us. It’s time to rethink the question of how the complexity of an organism is reflected in its genome.

The converging estimated number of genes in a person versus a giant virus. Human line shows average estimate with dashed line representing estimated number of genes needed. Numbers shown for viruses are for MS2 (1976), HIV (1985), giant viruses from 2004 and average T4 number in the 1990s.
Sean Nee, CC BY

Counting up the genes

We can think of all our genes together as the recipes in a cookbook for us. They’re written in the letters of the bases of DNA – abbreviated as ACGT. The genes provide instructions on how and when to assemble the proteins that you’re made of and that carry out all the functions of life within your body. A typical gene requires about 1000 letters. Together with the environment and experience, genes are responsible for what and who we are – so it’s interesting to know how many genes add up to a whole organism.

When we’re talking about numbers of genes, we can cite actual counts for viruses but only estimates for human beings, for an important reason. One challenge in counting genes in eukaryotes – which include us, bananas and yeasts like Candida – is that our genes are not lined up like ducks in a row.

Our genetic recipes are arranged as if the cookbook’s pages have all been ripped out and mixed up with three billion other letters, about 50 percent of which actually describe inactivated, dead viruses. So in eukaryotes it’s hard to count up the genes that have vital functions and separate them from what’s extraneous.

Megavirus has over a thousand genes, Pandoravirus has even more. Chantal Abergel, CC BY-SA

In contrast, counting genes in viruses – and bacteria, which can have 10,000 genes – is relatively easy. This is because the raw material of genes – nucleic acids – is relatively expensive for tiny creatures, so there is strong selection to delete unnecessary sequences. In fact, the real challenge for viruses is discovering them in the first place. It is startling that all major virus discoveries, including HIV, have not been made by sequencing at all, but by old methods such as magnifying them visually and looking at their morphology. Continuing advances in molecular technology have taught us the remarkable diversity of the virosphere, but can only help us count the genes of something we already know exists.

Flourishing with even fewer

The number of genes we actually need for a healthy life is probably even lower than the current estimate of 20,000 in our entire genome. One author of a recent study has reasonably extrapolated that the count for essential genes for human beings may be much lower.

These researchers looked at thousands of healthy adults, looking for naturally occurring “knockouts,” in which the functions of particular genes are absent. All our genes come in two copies – one from each parent. Usually, one active copy can compensate if the other is inactive, and it is difficult to find people with both copies inactivated because inactivated genes are naturally rare.

Knockout genes are fairly easy to study in lab mice, using modern genetic engineering techniques to inactivate both copies of particular genes of our choice, or even remove them altogether, and see what happens. But human studies require populations of people living in communities with 21st-century medical technologies and known pedigrees suited to the genetic and statistical analyses required. Icelanders are one useful population, and the British-Pakistani people of this study are another.

This research found over 700 genes which can be knocked out with no obvious health consequences. For instance, one surprising discovery was that the PRDM9 gene – which plays a crucial role in the fertility of mice – can also be knocked out in people with no ill effects.

Extrapolating the analysis beyond the human knockouts study leads to an estimate that only 3,000 human genes are actually needed to build a healthy human. This is in the same ballpark as the number of genes in “giant viruses.” Pandoravirus, recovered from 30,000-year-old Siberian ice in 2014, is the largest virus known to date and has 2,500 genes.

So what genes do we need? We don’t even know what a quarter of human genes actually do, and this is advanced compared to our knowledge of other species.

Complexity arises from the very simple

But whether the final number of human genes is 20,000 or 3,000 or something else, the point is that when it comes to understanding complexity, size really does not matter. We’ve known this for a long time in at least two contexts, and are just beginning to understand the third.

Alan Turing, the mathematician and WWII code breaker, established the theory of multicellular development. He studied simple mathematical models, now called “reaction-diffusion” processes, in which a small number of chemicals – just two in Turing’s model – diffuse and react with each other. Governed by simple reaction rules, these models can reliably generate very complex yet coherent structures. So the biological structures of plants and animals do not require complex programming.
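A short simulation makes the point concrete. The sketch below uses the Gray-Scott model, a standard modern reaction-diffusion system in the spirit of Turing’s idea rather than his original equations; the grid size and parameter values are conventional demonstration settings, not figures from this article.

```python
import numpy as np

def laplacian(Z):
    # Discrete 5-point Laplacian with periodic (wrap-around) boundaries
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    # Two chemicals, U and V, diffuse and react (U + 2V -> 3V);
    # U is fed in at rate f and V is removed at rate f + k.
    U = np.ones((n, n))
    V = np.zeros((n, n))
    r = n // 8
    # Seed a small square of V in the center to break the uniform state
    U[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.50
    V[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V

U, V = gray_scott()
print(V.shape)  # (64, 64)
```

From nothing but two diffusing chemicals and a simple local reaction rule, the initially uniform grid self-organizes into a coherent pattern of spots – no detailed blueprint required.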

The simple building blocks of neurons together generate immense complexity.
UCI Research/Ardy Rahman, CC BY-NC

Similarly, it is obvious that the 100 trillion connections in the human brain, which are what really make us who we are, cannot possibly be genetically programmed individually. The recent breakthroughs in artificial intelligence are based on neural networks; these are computer models of the brain in which simple elements – corresponding to neurons – establish their own connections through interacting with the world. The results have been spectacular in applied areas such as handwriting recognition and medical diagnosis, and Google has invited the public to play games with and observe the dreams of its AIs.

Microbes go beyond basic

So it’s clear that a single cell does not need to be very complicated for large numbers of them to produce very complex outcomes. Hence, it shouldn’t come as a great surprise that human gene numbers may be of the same size as those of single-celled microbes like viruses and bacteria.

What is coming as a surprise is the converse – that tiny microbes can have rich, complex lives. There is a growing field of study – dubbed “sociomicrobiology” – that examines the extraordinarily complex social lives of microbes, which stand up in comparison with our own. My own contributions to these areas concern giving viruses their rightful place in this invisible soap opera.

We have become aware in the last decade that microbes spend over 90 percent of their lives as biofilms, which may best be thought of as biological tissue. Indeed, many biofilms have systems of electrical communication between cells, like brain tissue, making them a model for studying brain disorders such as migraine and epilepsy.

Biofilms can also be thought of as “cities of microbes,” and the integration of sociomicrobiology and medical research is making rapid progress in many areas, such as the treatment of cystic fibrosis. The social lives of microbes in these cities – complete with cooperation, conflict, truth, lies and even suicide – are fast becoming a major study area in evolutionary biology in the 21st century.

Just as the biology of humans becomes starkly less outstanding than we had thought, the world of microbes gets far more interesting. And the number of genes doesn’t seem to have anything to do with it.

Sean Nee, Research Professor of Ecosystem Science and Management, Pennsylvania State University

This article was originally published on The Conversation. Read the original article.


Why teen brains need later school start time

By Kyla Wahlstrom, University of Minnesota.

Millions of high schoolers have to wake up early as they start another academic year. It is not uncommon to hear comments from parents such as,

“I have a battle every morning to get my teenager out of bed and off to school. It’s a hard way to start every day.”

Sleep deprivation in teenagers as a result of early school start has been a topic of concern and debate for nearly two decades. School principals, superintendents and school boards across the country have struggled with the question of whether their local high school should start later.

So, are teenagers just lazy?

I have been researching the impact of later high school start times for 20 years. Research findings show that teens’ inability to get out of bed before 8 a.m. is a matter of human biology, not a matter of attitude.

At issue here are the sleep patterns of the teenage brain, which are different from those of younger children and adults. Due to the biology of human development, the sleep mechanism in teens does not allow the brain to awaken naturally before about 8 a.m. This often comes into conflict with school schedules in many communities.

History of school timing

In the earliest days of American education, all students attended a single school with a single starting time. In fact, as late as 1910, half of all children attended one-room schools. As schools and districts grew in size from the 1890s through the 1920s, staggered starting times became the norm across the country.

In cities and large towns, high school students went first, followed by middle schoolers and then elementary students.

Here’s what research shows

Research findings during the 1980s started to cast a new light on teenagers’ sleep patterns.

Researcher Mary Carskadon and others at Brown University found that the human brain has a marked shift in its sleep/wake pattern during adolescence.

Researchers around the world corroborated those findings. At the onset of puberty, nearly all humans (and most mammals) experience a delay of sleep timing in the brain. As a result, the adolescent body does not begin to feel sleepy until about 10:45 p.m.

At the same time, medical researchers also found that sleep patterns of younger children enabled them to rise early and be ready for learning much earlier than adolescents.

In other words, the biology of the teenage brain is in conflict with early school start times, whereas sleep patterns of most younger children are in sync with schools that start early.

Biology of teenage brain

So, what exactly happens to the teenage brain during the growth years?

In teens, the secretion of the sleep hormone melatonin begins at about 10:45 p.m. and continues until about 8 a.m. This means teenagers are unable to fall asleep until melatonin secretion begins, and they are not able to awaken easily until the secretion stops.

What happens to the brain during the teenage years?

These changes in the sleep/wake pattern of teens are dramatic and beyond their control. Just expecting teens to go to bed earlier is not a solution.

I have interviewed hundreds of teens who all said that if they went to bed early, they were unable to sleep – they just stared at the ceiling until sleep set in around 10:45 p.m.

According to the National Sleep Foundation, the sleep requirement for teenagers is between 8-10 hours per night. That indicates that the earliest healthy wake-up time for teens should not be before 7 a.m.

A recent research study that I led shows that it takes an average of 54 minutes from the time teens wake up until they leave the house for school. With nearly half of all high schools in the U.S. starting before 8:00 a.m., and over 86 percent starting before 8:30 a.m., leaving home by 7:54 a.m. would be a challenge for most teens in America.
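The arithmetic above is easy to check directly. This toy sketch uses only the figures cited in the article; the specific date is arbitrary, since only the clock times matter.

```python
from datetime import datetime, timedelta

# Article's figures: sleep onset ~10:45 p.m., earliest healthy wake-up
# ~7 a.m., and an average of 54 minutes from waking to leaving home.
sleep_onset = datetime(2024, 1, 1, 22, 45)
wake = datetime(2024, 1, 2, 7, 0)
departure = wake + timedelta(minutes=54)

hours_slept = (wake - sleep_onset).total_seconds() / 3600
print(hours_slept)                     # 8.25 -> just inside the 8-10 hour range
print(departure.strftime("%I:%M %p"))  # 07:54 AM
```

Even at the minimum healthy sleep duration, a typical teen can’t leave home until 7:54 a.m. – which is why start times before 8:30 a.m. are so hard to reconcile with adolescent biology.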

What happens with less sleep

Studies on sleep in general, and on sleep in teens in particular, have revealed the serious negative consequences of lack of adequate sleep. Teens who are sleep-deprived – defined as obtaining less than eight hours per night – are significantly more likely to use cigarettes, drugs and alcohol.


The incidence of depression among teens significantly rises with less than nine hours of sleep. Feelings of sadness and hopelessness increase from 19 percent up to nearly 52 percent in teens who sleep four hours or less per night.

Teen car crashes, the primary cause of death for teenagers, are found to significantly decline when teens obtain more than eight hours of sleep per night.

What changes with later start time?

Results from schools that switched to a late start time are encouraging. Not only does the teens’ use of drugs, cigarettes, and alcohol decline, their academic performance improves significantly with later start time.

The Edina (Minnesota) School District was the first district in the country to make the change. The decision followed a 1996 recommendation from the Minnesota Medical Association.

Research showed significant benefits for teens from that school as well as others with later start times.

For example, the crash rate for teens in Jackson Hole, Wyoming in 2013 dropped by 70 percent in the first year after the district adopted a later high school start.

Schools that have made a change have found a difference.

At this point, hundreds of schools across the country in 44 states have been able to make the shift. The National Sleep Foundation had a count of over 250 high schools having made a change to a later start as early as 2007.

Furthermore, since 2014, major national health organizations have taken a policy stand to support later starting times for high schools. The American Academy of Pediatrics, the American Medical Association and the Centers for Disease Control and Prevention have all issued statements supporting high school start times of 8:30 a.m. or later.

Challenges and benefits

However, there are many schools and districts across the U.S. that are resisting delaying the starting time of their high schools. There are many reasons.

Issues such as changing transportation routes and altering the timing for other grade levels often head the list of factors making the later start difficult. Schools are also concerned about afterschool sports and activities.

Such concerns are valid. However, there could be creative ways of finding solutions. We already know that schools that were able to make the change found solutions that show “out of the box” thinking. For example, schools adopted mixed-age busing, coordinated with public transport systems and expanded afterschool child care.

I do understand that there are other realistic concerns that need to be addressed in making the change. But, in the end, communities that value maximum development for all of their children will be willing to grapple with solutions.

After all, our children’s ability to move into healthy adult lives tomorrow depends on what we as adults are deciding for them today.

Kyla Wahlstrom, Senior Research Fellow, University of Minnesota

This article was originally published on The Conversation. Read the original article.


Parasitic flies, zombified ants, predator beetles – insect drama on Mexican coffee plantations

By Kate Mathis, University of Arizona.

Ants are voracious predators and often very good at defending plants from herbivores. People have taken advantage of this quirk for centuries. In fact, using ants in orange groves is one of the first recorded pest control practices, dating back to A.D. 304 in China.

In southern Mexico, Azteca ants are frequently found on coffee plantations. They live in giant nests built into the sides of the hardwood trees farmers plant to shade the delicate coffee plants below. The ants feast on sugary nectar, either directly from extrafloral nectary structures on the shade trees or indirectly from the honeydew excreted by aphids living on the coffee plants. In return, the ants remove other insects to protect the plants. The Azteca ants are highly territorial and very aggressive, which makes them great at controlling coffee pests. They’re particularly skilled at eliminating one of coffee’s most damaging pests, the coffee berry borer.

My colleagues and I are studying how these ants are at the center of a complex web of organisms that are important to coffee management. In the process, we serendipitously discovered a brand new species that may be integral to the ants’ success.

Coffee plantation Finca Irlanda in Chiapas, Mexico.
Kate Mathis, CC BY-ND

Ants must deal with parasitizing flies

For the last six years, I’ve been examining the dynamics between these beneficial ants and one of their most deadly natural enemies, phorid fly parasitoids.

On the surface, it’s hard to imagine that phorid flies could have a big impact on a mighty Azteca colony made up of millions of workers. For one, these flies are small, approximately the size of a pinhead. Also, it takes only a fraction of a second for an adult phorid fly to parasitize an ant by laying its egg in the ant’s body.

But these parasitoids are definitely bad news for the ants. Once a phorid fly injects its egg into the ant, the fly larva slowly makes its way into the ant’s head, ultimately consuming the contents and killing the ant in the process. Then it decapitates the ant and uses the head as a pupal case. Once fully mature, the adult fly will emerge from the ant’s mouth parts to begin the cycle again.

This gruesome process isn’t even the worst of it for the ants. Phorid flies aren’t just a death sentence for a parasitized individual ant, they also negatively affect the function of the ant colony as a whole. When ant workers discover phorid flies nearby, they freeze in place or hide, preventing them from collecting food or properly maintaining their nest.

Enter: A mysterious beetle

It was during an experiment in the field, watching phorid flies parasitize ants, that I first noticed the beetles.

I had hypothesized that phorid flies usurp the ants’ own complex chemical communication system to locate their victims; I was testing extracts from different ant glands to determine what chemicals the flies might use as a beacon to find their ant hosts.

I had dissected and extracted the pygidial gland sac of the ants, which houses their alarm pheromone. The ants secrete this chemical blend whenever they’re injured or discover an intruder to the nest. (It’s quite pungent, and smells vaguely of blue cheese.) I was finding that the phorid flies are attracted to these alarm compounds. Shortly after the chemicals are released into the air, the flies arrive to inspect the immediate area where the scent is strongest. Annoyingly, I was also finding that tiny beetles were apt to crash the party, landing in my observation area with the ants.

Beetles discovered in phorid rearing chambers after eating ants.
Kate Mathis, CC BY-ND

A year or so later, I was working on another phorid fly-rearing experiment in my tiny lab space at the Finca Irlanda research station in Chiapas, Mexico. My lab consisted of a few shelves and a small work table in a poorly screened-in porch area. The experiments involved taking parasitized ants and keeping them alive in tiny chambers with small air holes until the phorid flies could fully develop and hatch.

But everything was going extremely poorly. Over and over again, I would check on my parasitized ants only to find them missing, and in their place, once again, the tiny beetles. It seemed these intruders were entering my lab, accessing the rearing chambers via the air holes and eating the ants.

This is when I realized something interesting was happening and started formulating questions. What were these beetles? Were they finding the ants the same way the phorid flies do? Were they eating only parasitized ants? If so, why?

Predation that could help the colony

Since Azteca ants are so aggressive, it seemed unlikely that a beetle would be able to effectively prey on healthy worker ants twice its size. Myrmecophilic (“ant associated” or literally “ant-loving”) beetles use a wide range of strategies to live closely with such dangerous creatures as ants. Some mimic the smell or look of ants, others use particularly swift movements to outmaneuver them and others use ant-repellent secretions to create a protective force field around themselves. In each of these cases, the beetles take some kind of resource from the ant, whether it’s food from the colony’s stores, or safe nesting space, or simply eating the ants themselves.

Beetles attacking parasitized ants (painted white) while ignoring healthy ants (painted green).

It occurred to me that the myrmecophilic beetles associated with Azteca might be exploiting the state of the parasitized ants in order to prey on them. Even more intriguingly, this might be a case where predation is ultimately not a bad thing for the ants as a group. Parasitized ants are already almost certainly going to die. And their deaths result in more phorid flies, which is bad for the ant colony.

But if a beetle eats a parasitized ant, the developing phorid fly is also consumed. By eating only parasitized ants, these beetles may be reducing the number of phorid flies that successfully develop – which could actually benefit the ant colony.

So I got to work conducting experiments that would untangle what’s going on. I used synthetic versions of the Azteca alarm pheromone chemicals to confirm the beetles were indeed using the alarm pheromone to find the ants, regardless of whether they were in my screened-in lab space or the center of a field of coffee. I set out various traps of parasitized, healthy or injured ants to see if the beetles would prey on only the parasitized ants (they did).

Painted ants and beetles in the behavioral observation arena. Large beetle visitor on the notebook was not included in the experiments.
Kate Mathis, CC BY-ND

I also took parasitized ants and healthy ants, painted them different colors and placed them in an arena with the beetles to observe what they would do. Healthy ants were highly aggressive toward the beetles, whereas the parasitized ants were extremely docile. When the beetles tried to attack healthy ants, they were swiftly rebuffed. But when they attacked parasitized ants, the ant essentially stood still as the beetle ate it alive.

Meanwhile, specimens of the beetles were being transported to a beetle expert for identification. As it turned out, they were a completely new species, the first from their genus to ever be recorded from Mexico. With my collaborators, I chose to name the species Myrmedonota xipe for the Aztec god Xipe Totec. This deity was worshiped via human sacrifices in an act meant to symbolize the casting-off of the old to bring new growth and prosperity to all – an apt metaphor for the beetles’ role in Azteca ant colonies.

When many people think of agriculture, they imagine only the farmer’s crop. But my colleagues’ and my work shows that a complex web of interactions between many species of insects can provide important ecosystem services, like pest control, in agroecosystems. This particular story shows just a piece of the puzzle, where the Azteca ants benefit the coffee, and the beetles help keep the phorid flies from undermining that benefit.

Kate Mathis, Research Associate in Ecology & Evolutionary Biology, University of Arizona

This article was originally published on The Conversation. Read the original article.

We’ve been wrong about the origins of life for 90 years

By Arunas L Radzvilavicius, UCL.

For nearly nine decades, science’s favorite explanation for the origin of life has been the “primordial soup”. This is the idea that life began from a series of chemical reactions in a warm pond on Earth’s surface, triggered by an external energy source such as a lightning strike or ultraviolet (UV) light. But recent research adds weight to an alternative idea, that life arose deep in the ocean within warm, rocky structures called hydrothermal vents.

A study published last month in Nature Microbiology suggests the last common ancestor of all living cells fed on hydrogen gas in a hot iron-rich environment, much like that within the vents. Advocates of the conventional theory have been sceptical that these findings should change our view of the origins of life. But the hydrothermal vent hypothesis, which is often described as exotic and controversial, explains how living cells evolved the ability to obtain energy, in a way that just wouldn’t have been possible in a primordial soup.

Under the conventional theory, life supposedly began when lightning or UV rays caused simple molecules to join together into more complex compounds. This culminated in the creation of information-storing molecules similar to our own DNA, housed within the protective bubbles of primitive cells. Laboratory experiments confirm that trace amounts of molecular building blocks that make up proteins and information-storing molecules can indeed be created under these conditions. For many, the primordial soup has become the most plausible environment for the origin of first living cells.

But life isn’t just about replicating information stored within DNA. All living things have to reproduce in order to survive, but replicating the DNA, assembling new proteins and building cells from scratch require tremendous amounts of energy. At the core of life are the mechanisms of obtaining energy from the environment, storing and continuously channelling it into cells’ key metabolic reactions.

Did life evolve around deep-sea hydrothermal vents?
U.S. National Oceanic and Atmospheric Administration/Wikimedia Commons

Where this energy comes from and how it gets there can tell us a whole lot about the universal principles governing life’s evolution and origin. Recent studies increasingly suggest that the primordial soup was not the right kind of environment to drive the energetics of the first living cells.

It’s classic textbook knowledge that all life on Earth is powered by energy supplied by the sun and captured by plants, or extracted from simple compounds such as hydrogen or methane. Far less known is the fact that all life harnesses this energy in the same, quite peculiar way.

This process works a bit like a hydroelectric dam. Instead of directly powering their core metabolic reactions, cells use energy from food to pump protons (positively charged hydrogen atoms) into a reservoir behind a biological membrane. This creates what is known as a “concentration gradient”, with a higher concentration of protons on one side of the membrane than the other. The protons then flow back through molecular turbines embedded within the membrane, like water flowing through a dam. This generates high-energy compounds that are then used to power the rest of the cell’s activities.

Life could have evolved to exploit any of the countless energy sources available on Earth, from heat or electrical discharges to naturally radioactive ores. Instead, all life forms are driven by proton concentration differences across cells’ membranes. This suggests that the earliest living cells harvested energy in a similar way and that life itself arose in an environment in which proton gradients were the most accessible power source.

Vent hypothesis

Recent studies based on sets of genes that were likely to have been present within the first living cells trace the origin of life back to deep-sea hydrothermal vents. These are porous geological structures produced by chemical reactions between solid rock and water. Alkaline fluids from the Earth’s crust flow up the vent towards the more acidic ocean water, creating natural proton concentration differences remarkably similar to those powering all living cells.

The studies suggest that in the earliest stages of life’s evolution, chemical reactions in primitive cells were likely driven by these non-biological proton gradients. Cells then later learned how to produce their own gradients and escaped the vents to colonise the rest of the ocean and eventually the planet.

While proponents of the primordial soup theory argue that electrostatic discharges or the Sun’s ultraviolet radiation drove life’s first chemical reactions, modern life is not powered by any of these volatile energy sources. Instead, at the core of life’s energy production are ion gradients across biological membranes. Nothing even remotely similar could have emerged within the warm ponds of primeval broth on Earth’s surface. In these environments, chemical compounds and charged particles tend to get evenly diluted instead of forming gradients or non-equilibrium states that are so central to life.

Deep-sea hydrothermal vents represent the only known environment that could have created complex organic molecules with the same kind of energy-harnessing machinery as modern cells. Seeking the origins of life in the primordial soup made sense when little was known about the universal principles of life’s energetics. But as our knowledge expands, it is time to embrace alternative hypotheses that recognise the importance of the energy flux driving the first biochemical reactions. These theories seamlessly bridge the gap between the energetics of living cells and non-living molecules.

Arunas L Radzvilavicius, UCL

This article was originally published on The Conversation. Read the original article.

The future of genetic enhancement is not in the West

By G. Owen Schaefer, National University of Singapore.

Would you want to alter your future children’s genes to make them smarter, stronger or better-looking? As the state of the science brings prospects like these closer to reality, an international debate has been raging over the ethics of enhancing human capacities with biotechnologies such as so-called smart pills, brain implants and gene editing. This discussion has only intensified in the past year with the advent of the CRISPR-Cas9 gene editing tool, which raises the specter of tinkering with our DNA to improve traits like intelligence, athleticism and even moral reasoning.

So are we on the brink of a brave new world of genetically enhanced humanity? Perhaps. And there’s an interesting wrinkle: It’s reasonable to believe that any seismic shift toward genetic enhancement will not be centered in Western countries like the U.S. or the U.K., where many modern technologies are pioneered. Instead, genetic enhancement is more likely to emerge out of China.

Attitudes toward enhancement

Numerous surveys among Western populations have found significant opposition to many forms of human enhancement. For example, a recent Pew study of 4,726 Americans found that most would not want to use a brain chip to improve their memory, and a plurality view such interventions as morally unacceptable.

Public expresses more worry than enthusiasm about each of these potential human enhancements.

A broader review of public opinion studies found significant opposition in countries like Germany, the U.S. and the U.K. to selecting the best embryos for implantation based on nonmedical traits like appearance or intelligence. There is even less support for editing genes directly to improve traits in so-called designer babies.

Opposition to enhancement, especially genetic enhancement, has several sources. The above-mentioned Pew poll found that safety is a big concern – in line with experts who say that tinkering with the human genome carries significant risks. These risks may be accepted when treating medical conditions, but less so for enhancing nonmedical traits like intelligence and appearance. At the same time, ethical objections often arise. Scientists can be seen as “playing God” and tampering with nature. There are also worries about inequality, creating a new generation of enhanced individuals who are heavily advantaged over others. “Brave New World” is a dystopia, after all.

However, those studies have focused on Western attitudes. There has been much less polling in non-Western countries. There is some evidence that opposition to enhancement in Japan is similar to that in the West. Other countries, such as China and India, are more positive toward enhancement. In China, this may be linked to more generally approving attitudes toward old-fashioned eugenics programs such as selective abortion of fetuses with severe genetic disorders, though more research is needed to fully explain the difference. This has led Darryl Macer of the Eubios Ethics Institute to posit that Asia will be at the forefront of expansion of human enhancement.

Restrictions on gene editing

In the meantime, the biggest barrier to genetic enhancement will be broader statutes banning gene editing. A recent study found bans on germline genetic modification – that is, those that are passed on to descendants – are in effect throughout Europe, Canada and Australia. China, India and other non-Western countries, however, have laxer regulatory regimes – restrictions, if they exist, are often in the form of guidelines rather than statutes.

The U.S. may appear to be an exception to this trend. It lacks legal restriction of gene editing; however, federal funding of germline gene editing research is prohibited. Because most geneticists rely on government grants for their research, this acts as a significant restriction on germline editing studies.

By contrast, it was Chinese government funding that led China to be the first to edit the genes of human embryos using the CRISPR-Cas9 tool in 2015. China has also been leading the way in using CRISPR-Cas9 for non-germline genetic modifications of human tissue cells for use in treatment of cancer patients.

There are, then, two primary factors contributing to the emergence of genetic enhancement technologies – research to develop the technologies and popular opinion to support their deployment. In both areas, Western countries are well behind China.

Different countries have different expectations about working with human genes.
Michael Dalder/Reuters

What makes China a probable petri dish

A further, more political factor may be at play. Western democracies are, by design, sensitive to popular opinion. Elected politicians will be less likely to fund controversial projects, and more likely to restrict them. By contrast, countries like China that lack direct democratic systems are thereby less sensitive to opinion, and officials can play an outsize role in shaping public opinion to align with government priorities. This would include residual opposition to human enhancement, even if it were present. International norms are arguably emerging against genetic enhancement, but in other arenas China has proven willing to reject international norms in order to promote its own interests.

Indeed, if we set ethical and safety objections aside, genetic enhancement has the potential to bring about significant national advantages. Even marginal increases in intelligence via gene editing could have significant effects on a nation’s economic growth. Certain genes could give some athletes an edge in intense international competitions. Other genes may have an effect on violent tendencies, suggesting genetic engineering could reduce crime rates.

Many of these potential benefits of enhancement are speculative, but as research advances they may move into the realm of reality. If further studies bear out the reliability of gene editing in improving such traits, China is well-poised to become a leader in the area of human enhancement.

Does this matter?

Aside from a preoccupation with being the best in everything, is there reason for Westerners to be concerned by the likelihood that genetic enhancement is apt to emerge out of China?

If the critics are correct that human enhancement is unethical, dangerous or both, then yes, emergence in China would be worrying. From this critical perspective, the Chinese people would be subject to an unethical and dangerous intervention – a cause for international concern. Given China’s human rights record in other areas, it is questionable whether international pressure would have much effect. In turn, enhancement of its population may make China more competitive on the world stage. An unenviable dilemma for opponents of enhancement could emerge – fail to enhance and fall behind, or enhance and suffer the moral and physical consequences.

Conversely, if one believes that human enhancement is actually desirable, this trend should be welcomed. As Western governments hem and haw, delaying development of potentially great advances for humanity, China leads the way forward. Its increased competitiveness, in turn, would pressure Western countries to relax restrictions and thereby allow humanity as a whole to progress – becoming healthier, more productive and generally capable.

Either way, this trend is an important development. We will see if it is sustained – public opinion in the U.S. and other countries could shift, or funding could dry up in China. But for now, it appears that China holds the future of genetic enhancement in its hands.

G. Owen Schaefer, Research Fellow in Biomedical Ethics, National University of Singapore

This article was originally published on The Conversation. Read the original article.

How old is too old for a safe pregnancy?

By Hannah Brown, University of Adelaide.

This week, an Australian woman delivered a baby at the age of 62 after having in vitro fertilisation (IVF) abroad.

Few women can naturally conceive a baby later in life without the help of IVF – and these are rarely first pregnancies. These women go through menopause later, and have lower risks of heart disease, osteoporosis and dementia.

But does that mean that it’s safe to start a family later in life? Are there other risks and complications associated with pregnancy and childbirth in your 50s and 60s – or even your 40s?

Changing demographics

A woman’s reproductive capacity has a finite lifespan. Her eggs initially grow when she is inside her mother’s womb, and are stored inside her ovaries until she begins to menstruate. Each month, more than 400 eggs are lost by attrition until the four million she originally had are gone, and menopause begins.

Social and financial pressures are driving many Australian women who want to have children to wait until later in life. The number of women having babies in their 30s or later has almost doubled in the past 25 years in Australia, from 23% in 1991 to 43% in 2011.

Around one in 1,000 births occur to women 45 years or older. This rate is likely to increase as new technologies emerge, including egg donation.

What are the risks?

Women aged over 30 are more than twice as likely as those under 30 to suffer from life-threatening high blood pressure (pre-eclampsia) during pregnancy (5% compared with 2%), and twice as likely to have gestational diabetes (5-10% compared with 1-2.5%).

More than half of women aged over 40 will require their baby to be delivered by caesarean section.

Increasing maternal age increases the chance of dying during pregnancy or childbirth. Mothers in their 40s and 50s are also between three and six times more likely than their younger counterparts to die in the six weeks following the birth of the baby, from complications associated with the pregnancy such as bleeding and clots.

Mothers aged over 40 are more than twice as likely to suffer a stillbirth. And for a woman aged 40, the risk of miscarriage is greater than the chance of a live birth.

Finally, babies born to older mothers are 1.5-2 times more likely to be born too soon (before 36 weeks) and to be born small (low birthweight). Low birthweight and prematurity carry immediate risks for the baby, including problems with lung development, as well as longer-term risks of obesity and diabetes in adulthood.

Postmenopausal pregnancy

Through advances in IVF, it is possible to use a donor egg or embryo from a younger, fertile woman to help a woman who has undergone menopause become pregnant.

But this comes with greater risks. Pregnancy puts extra stress and strain on the heart and blood vessels and emerging evidence suggests older mothers are more likely to suffer a stroke later in life.

When is pregnancy safest?

While there are no specific age cut-offs for IVF treatment in Australia, many clinics stop treatment at 50. At 30, the chance of conceiving each month (without IVF) is about 20%. At 40 it’s around 5% and this declines throughout the decade.
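These monthly figures compound over time. As a rough illustration (assuming, simplistically, that each month is an independent trial with a fixed per-month probability, which real fertility is not), the chance of conceiving within a year can be sketched as:

```python
# Rough cumulative chance of conceiving within a year, assuming each month
# is an independent trial with a fixed probability. An illustration of how
# the quoted monthly figures compound, not clinical guidance.
def chance_within_year(monthly_p, months=12):
    """Probability of at least one success across independent monthly trials."""
    return 1 - (1 - monthly_p) ** months

print(f"At 30 (20% per month): {chance_within_year(0.20):.0%}")  # ~93%
print(f"At 40 (5% per month):  {chance_within_year(0.05):.0%}")  # ~46%
```

Even under this idealized model, the gap between a 20% and a 5% monthly chance widens into a large difference over a year of trying.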

A wealth of scientific knowledge says that risks to the baby and mother during pregnancy are lowest in your 20s. Women in their 20s are less likely to have health risks and conditions such as obesity and diabetes which negatively influence pregnancy.

As a woman ages, her egg quality also declines. Poor egg quality is directly associated with genetic errors that result in both miscarriage and birth defects.

So while it’s possible to conceive later in life, it’s a risky decision.

Hannah Brown, Post-doctoral Fellow in Reproductive Epigenetics, University of Adelaide

This article was originally published on The Conversation. Read the original article.

Are pop stars destined to die young?

By Greg Hall, Case Western Reserve University.

Prince’s autopsy has determined that the artist died of an accidental overdose of the synthetic opioid fentanyl. The news comes on the heels of the death of former Megadeth drummer Nick Menza, who collapsed on stage and died in late May.

Indeed, it seems as though before we can even finish mourning the loss of one pop star, another falls. There’s no shortage of groundbreaking artists who die prematurely, whether it’s Michael Jackson, Elvis Presley or Hank Williams.

As a physician, I’ve begun to wonder: Is being a superstar incompatible with a long, healthy life? Are there certain conditions that are more likely to cause a star’s demise? And finally, what might be some of the underlying reasons for these early deaths?

To find out the answer to each of these questions, I analyzed the 252 individuals who made Rolling Stone’s list of the 100 greatest artists of the rock & roll era.

More than their share of accidents

To date, 82 of the 252 members of this elite group have died.

There were six homicides, which occurred for a range of reasons, from the psychiatric obsession that led to the shooting of John Lennon to the planned “hits” on rappers Tupac Shakur and Jam Master Jay. There’s still a good deal of controversy about the shooting of Sam Cooke by a female hotel manager (who was likely protecting a prostitute who had robbed Cooke). Al Jackson Jr., the renowned drummer with Booker T & the MGs, was shot in the back five times in 1975 by a burglar in a case that still baffles authorities.

An accident can happen to anyone, but these artists seem to have more than their share. There were numerous accidental overdoses – Sid Vicious of the Sex Pistols at age 21, David Ruffin of the Temptations at 50, The Drifters’ Rudy Lewis at 27, and country great Gram Parsons, who was found dead at 26.

And while your odds of dying in a plane crash are about one in five million, if you’re on Rolling Stone’s list, those odds jump to one in 84: Buddy Holly, Otis Redding and Ronnie Van Zant of the Lynyrd Skynyrd Band all died in airplane accidents while on tour.
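The one-in-84 figure is straightforward arithmetic, assuming exactly three plane-crash deaths among the 252 artists counted:

```python
# Back-of-the-envelope check of the plane-crash odds quoted above,
# assuming exactly three such deaths among the 252 listed artists.
LIST_SIZE = 252
PLANE_CRASH_DEATHS = 3  # Buddy Holly, Otis Redding, Ronnie Van Zant

odds_denominator = LIST_SIZE // PLANE_CRASH_DEATHS
print(f"Roughly 1 in {odds_denominator} list members died in a plane crash")
# Compare with roughly 1 in 5,000,000 for the general population.
```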

The 27 Club: Robert Johnson, Brian Jones, Jimi Hendrix, Janis Joplin, Jim Morrison, Kurt Cobain and Amy Winehouse all died at 27 years old.
High Star Madrid

A drink, a smoke and a jolt

Among the general population, liver-related diseases are behind only 1.4 percent of deaths. Among Rolling Stone’s 100 Greatest Artists, however, the rate is three times that.

It’s likely tied to the elevated alcohol and drug use among artists. Liver bile duct cancers – which are extremely rare – happened to two of the top 100, with Ray Manzarek of The Doors and Tommy Ramone of the Ramones both succumbing prematurely to a cancer that normally affects one in 100,000 people a year.

The vast majority of those on Rolling Stone’s list were born in the 1940s and reached maturity during the 1960s, when tobacco smoking peaked. So not surprisingly, a significant portion of artists died from lung cancer: George Harrison of the Beatles at age 58, Carl Wilson of the Beach Boys at 51, Richard Wright of Pink Floyd at 65, Eddie Kendricks of the Temptations at 52 and Obie Benson of the Four Tops at 69. Throat cancer – also linked with smoking – caused the deaths of country great Carl Perkins at 65 and Levon Helm of The Band at 71.

The Doors’ keyboardist Ray Manzarek died of a rare liver bile duct cancer. Tom Hill/CNN

A good number from the list had heart attacks or heart failure, such as Ian Stewart of the Rolling Stones at 47 and blues greats Muddy Waters at 70, Howlin Wolf at 65, Roy Orbison at 52 and Jackie Wilson at 49.

We recently saw The Eagles’ Glenn Frey succumb to pneumonia, but so did soul singer Jackie Wilson at age 49, nine years after a massive heart attack. James Brown complained of a persistent cough and declining health before he passed at 73, with the cause of death listed as congestive heart failure as a result of pneumonia.

Currently, the U.S. is in the midst of an opioid abuse epidemic, with heroin and prescription drug overdoses happening at historic rates.

But for rock stars, opioid abuse is nothing new. Elvis Presley, Jimi Hendrix, Janis Joplin, Sid Vicious, Gram Parsons, Whitney Houston (who didn’t make the list), Michael Jackson and now Prince all died from accidental opioid overdoses.

Two key findings

One of the two shocking findings of this analysis deals with life expectancy. Among those who have died, the average age at death was 49, the same as the average life expectancy in Chad, the country with the lowest life expectancy in the world. The average American male has a life expectancy of about 76 years.

Factoring in their birth year and a life expectancy of 76 years, only 44 should have died by now. Instead, 82 have. (Incidentally, of the 44 we would have expected to be dead by now, 19 are still alive.)
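The expected-versus-observed comparison can be sketched as follows. The birth years below are hypothetical placeholders rather than the actual Rolling Stone data, and a flat 76-year life expectancy with 2016 as the analysis year are assumptions:

```python
# Sketch of the expected-vs-observed death calculation described above.
# Assumptions: a flat 76-year life expectancy, 2016 as the analysis year,
# and hypothetical sample birth years (not the real dataset).
LIFE_EXPECTANCY = 76
ANALYSIS_YEAR = 2016

def expected_dead(birth_years, year=ANALYSIS_YEAR, lifespan=LIFE_EXPECTANCY):
    """Count people whose birth year plus the average lifespan has already passed."""
    return sum(1 for born in birth_years if born + lifespan <= year)

sample_birth_years = [1935, 1938, 1941, 1948, 1952]
print(expected_dead(sample_birth_years))  # -> 2 (those born in 1935 and 1938)
```

Applied to the real birth years of all 252 list members, this method yields the 44 "expected" deaths cited above, against 82 observed.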

The second shocking discovery was the sobering and disproportionate occurrence of alcohol- and drug-related deaths.

There was Kurt Cobain’s gunshot suicide while intoxicated and Duane Allman’s drunk driving motorcycle crash. Members of legendary bands like The Who (John Entwistle, 57, and Keith Moon, 32), The Doors (Jim Morrison, 27), The Byrds (Gene Clark, 46, and Michael Clarke, 47) and The Band (Rick Danko, 55, and Richard Manuel, 42) all succumbed to alcohol or drugs.

Others – The Grateful Dead’s Jerry Garcia and country star Hank Williams – steadily declined from substance abuse while their organs deteriorated. Their official causes of death were heart-related. In truth, the cause may have been more directly related to substance abuse.

In all, alcohol and drugs accounted for at least one in 10 of these great artists’ deaths.

Does a quest for fame lead to an early demise?

Many have explored the root causes behind these premature deaths.

One answer may come from dysfunctional childhoods: experiencing physical or sexual abuse, having a depressed parent or having a family broken up by tragedy or divorce. An article published in the British Medical Journal found that “adverse childhood experiences” may act as a motivator to become successful and famous as a way to move past childhood trauma.

The authors noted an increased incidence of these adverse childhood experiences among famous artists. Unfortunately, the same adverse experiences also predispose people to depression, drug use, risky behaviors and premature death.

A somewhat similar hypothesis is proposed by self-determination theory, which addresses human motivation through the lens of “intrinsic” versus “extrinsic” life aspirations. People who have intrinsic goals seek inward happiness and contentment. On the other hand, people who possess extrinsic goals focus on material success, fame and wealth – the exact sort of thing attained by these exceptional artists. According to research, people who have extrinsic goals tend to have had less-involved parents and are more likely to experience bouts of depression.

A good deal of research has also explored the fine line between creative genius and mental illness across a wide range of disciplines. They include authors (Virginia Woolf and Ernest Hemingway), scholars (Aristotle and Isaac Newton), classical composers (Beethoven, Schumann, and Tchaikovsky), painters (Van Gogh), sculptors (Michelangelo) and contemporary musical geniuses.

Psychiatrist Arnold Ludwig, in his meta-analysis of over 1,000 people, “The Price of Greatness: Resolving the Creativity and Madness Controversy,” concluded that artists, compared to other professions, were much more likely to have mental illnesses, and were prone to being afflicted with them for longer periods of time.

Meanwhile, Cornell psychiatrist William Frosch, author of “Moods, madness, and music: Major affective disease and musical creativity,” was able to connect the creativity of groundbreaking musical artists to their psychiatric disorders. According to Frosch, their mental illnesses were behind their creative output.

My review also confirmed a greater incidence of mood disorders among these Great 100 rock stars. Numerous studies have shown that depression, bipolar disorder and related diagnoses come with an increased risk for premature death, suicide and addiction.

By tracing the links between genius and mental illness, between mental illness and substance abuse, and between substance abuse and health problems or accidental death, you can see why so many great artists seem almost destined for a premature or drug-induced demise.

Greg Hall, Assistant Clinical Professor, Case Western Reserve University

This article was originally published on The Conversation. Read the original article.


Americans want a say in what happens to their donated blood and tissue in biobanks

By Raymond G. De Vries, University of Michigan and Tom Tomlinson, Michigan State University.

The last time you went to a hospital, you probably had to fill out forms listing the medications you are taking and updating your emergency contacts. You also might have been asked a question about what is to be done with “excess tissues or specimens” that may be removed during diagnosis or treatment. Are you willing to donate these leftover bits of yourself (stripped of your name, of course) for medical research?

If you are inclined to answer, “Sure, why not?” you will join the majority of Americans who would agree to donate, allowing your leftovers, such as blood or unused bits from biopsies or even embryos, to be sent to a “biobank” that collects specimens and related medical information from donors.

But what, exactly, will be done with your donation? Can the biobank guarantee that information about your genetic destiny will not find its way to insurance companies or future employers? Could, for example, a pharmaceutical company use it to develop and patent a new drug that will be sold back to you at an exorbitant price?

These questions may soon become a lot more real for many of us.

Precision medicine, a promising new approach to treating and preventing disease, will require thousands, or even millions, of us to provide samples for genetic research. So how much privacy are we willing to give up in the name of cutting-edge science? And do we care about the kinds of research that will be done with our donations?

President Barack Obama makes remarks highlighting investments to improve health and treat disease through precision medicine on January 30, 2015.
Larry Downing/Reuters

Precision medicine needs you

In January 2015, President Obama announced his “Precision Medicine Initiative” (PMI), asking for US$215 million to move medical care from a “one size fits all” approach to one that tailors treatments to each person’s genetic makeup. In his words, precision medicine is “one of the greatest opportunities for new medical breakthroughs that we have ever seen,” allowing doctors to provide “the right treatments at the right time, every time, to the right person.”

The PMI is now being implemented, and a critical part of the initiative is the creation of a “voluntary national research cohort” of one million people who will provide the “data” researchers need to make this big jump in medical care. And yes, those “data” will include blood, urine and information from your electronic health records, all of which will help scientists find the link between genes, illness and treatments.

Recognizing that there may be some reluctance to donate, the drafters of the initiative bent over backwards to assure future donors that their privacy will be “rigorously protected.” But privacy is not the only thing donors are worrying about.

Together with our colleagues at the Center for Bioethics and Social Sciences in Medicine at the University of Michigan and the Center for Ethics and Humanities in the Life Sciences at Michigan State University, we asked the American public about their willingness to donate blood and tissue to researchers.

Data from our national survey – published in the Journal of the American Medical Association – reveal that while most Americans are willing to donate to biobanks, they have serious concerns about how we ask for their consent and about how their donations may be used in future research.

What are you consenting to?

We asked our respondents – a sample representative of the U.S. population – if they would be willing to donate to a biobank using the current method of “blanket consent” where donors are asked to agree that their tissue can be used for any research study approved by the biobank, “without further consent from me.”

A healthy majority – 68 percent – agreed. But when we asked if they would still be willing to give blanket consent if their specimens might be used “to develop patents and earn profits for commercial companies,” that number dropped to 55 percent. Only 57 percent agreed to donate if there was a possibility their donation would be used to develop vaccines against biological weapons, research that might first require creating biological weapons. And less than 50 percent of our sample agreed to donate if told their specimen may be used “to develop more safe and effective abortion methods.”

You may think that some of these scenarios are far-fetched, but we consulted with a biobank researcher who reviewed all of our scenarios and confirmed that such research could be done with donations to biobanks, or associated data. And some scenarios are real. For instance, biobanked human embryos have been used to confirm how mifepristone, a drug used in medication abortion, works.

Trust in science is important

Should we take these moral concerns about biobank research seriously? Yes, because progress in science and medicine depends on public trust in the research enterprise. If scientists violate that trust they risk losing public support – including funding – for their work.

Henrietta Lacks. Oregon State University/Flickr, CC BY-SA

Witness the story of the Havasupai tribe of Arizona. Researchers collected DNA from members of the tribe in an effort to better understand their high rate of diabetes. That DNA was then used, without informing those who donated, for a study tracing the migration of Havasupai ancestors. The findings of that research undermined the tribal story of its origins. The result? The tribe banished all researchers.

Rebecca Skloot’s best-seller, “The Immortal Life of Henrietta Lacks,” revealed the way tissues and blood taken for clinical uses can be used for purposes unknown to the donors.

In the early 1950s, Ms. Lacks was unsuccessfully treated for cervical cancer. Researchers harvested her cells without her knowledge, and after her death they used these cells to develop the HeLa cell line. Because of their unique properties, HeLa cells have become critical to medical research. They have been used to secure more than 17,000 patents, but neither she nor her family members were compensated.

In a similar case, blood cells from the spleen of a man named John Moore, taken as part of his treatment for leukemia, were used to create a patented cell line for fighting infection. Moore sued for his share of the profits generated by the patent, but his suit was dismissed by local, state and federal courts. As a result of these and similar cases, nearly all biobank consent forms now include a clause indicating that donations might be used to develop commercial products and that the donor has no claim on the proceeds.

Researchers can ill afford to undermine public trust in their work. In our sample we found that lack of trust in scientists and scientific research was the strongest predictor of unwillingness to donate to a biobank.

Those who ask you to donate some of yourself must remember that it is important not only to protect your privacy but also to ensure that your decision to do good for others does not violate your sense of what is good.

The “Proposed Privacy and Trust Principles” issued by the PMI in 2015 are a hopeful sign. They call for transparency about “how [participant] data will be used, accessed, and shared,” including “the types of studies for which the individual’s data may be used.” The PMI soon will be asking us to donate bits of ourselves, and if these principles are honored, they will go a long way toward building the trust that biobanks – and precision medicine – need to succeed.

Raymond G. De Vries, Co-Director, Center for Bioethics and Social Sciences in Medicine, University of Michigan and Tom Tomlinson, Chair Professor, Michigan State University

This article was originally published on The Conversation. Read the original article.
