UCLA reports in a news release that a 25-year-old man recovering from a coma has made remarkable progress following a treatment to jump-start his brain using ultrasound. The technique uses sonic stimulation to excite the neurons in the thalamus, an egg-shaped structure that serves as the brain’s central hub for processing information.
“It’s almost as if we were jump-starting the neurons back into function,” said Martin Monti, the study’s lead author and a UCLA associate professor of psychology and neurosurgery. “Until now, the only way to achieve this was a risky surgical procedure known as deep brain stimulation, in which electrodes are implanted directly inside the thalamus,” he said. “Our approach directly targets the thalamus but is noninvasive.”
Monti said the researchers expected the positive result, but he cautioned that the procedure requires further study on additional patients before they determine whether it could be used consistently to help other people recovering from comas.
“It is possible that we were just very lucky and happened to have stimulated the patient just as he was spontaneously recovering,” Monti said.
A report on the treatment is published in the journal Brain Stimulation. This is the first time the approach has been used to treat severe brain injury.
The technique, called low-intensity focused ultrasound pulsation, was pioneered by Alexander Bystritsky, a UCLA professor of psychiatry and biobehavioral sciences in the Semel Institute for Neuroscience and Human Behavior and a co-author of the study. Bystritsky is also a founder of Brainsonix, a Sherman Oaks, California-based company that provided the device the researchers used in the study.
That device, about the size of a coffee cup saucer, creates a small sphere of acoustic energy that can be aimed at different regions of the brain to excite brain tissue. For the new study, researchers placed it by the side of the man’s head and activated it 10 times for 30 seconds each, in a 10-minute period.
Monti said the device is safe because it emits only a small amount of energy — less than a conventional Doppler ultrasound.
Before the procedure began, the man showed only minimal signs of being conscious and of understanding speech — for example, he could perform small, limited movements when asked. By the day after the treatment, his responses had improved measurably. Three days later, the patient had regained full consciousness and full language comprehension, and he could reliably communicate by nodding his head “yes” or shaking his head “no.” He even made a fist-bump gesture to say goodbye to one of his doctors.
“The changes were remarkable,” Monti said.
The technique targets the thalamus because, in people whose mental function is deeply impaired after a coma, thalamus performance is typically diminished. And medications that are commonly prescribed to people who are coming out of a coma target the thalamus only indirectly.
Under the direction of Paul Vespa, a UCLA professor of neurology and neurosurgery at the David Geffen School of Medicine at UCLA, the researchers plan to test the procedure on several more people beginning this fall at the Ronald Reagan UCLA Medical Center. Those tests will be conducted in partnership with the UCLA Brain Injury Research Center and funded in part by the Dana Foundation and the Tiny Blue Dot Foundation.
If the technology helps other people recovering from coma, Monti said, it could eventually be used to build a portable device — perhaps incorporated into a helmet — as a low-cost way to help “wake up” patients, perhaps even those who are in a vegetative or minimally conscious state. Currently, there is almost no effective treatment for such patients, he said.
Yet early on, communities where fracking spread raised doubts. Nearby residents reported a variety of common symptoms and sources of stress. Public health professionals trumpeted their concerns, and epidemiologists launched health studies of the industry. States like Pennsylvania, where almost 10,000 wells have been drilled since 2005, continued development. But other states, including Maryland and New York, have not permitted drilling because of the potential for environmental and health impacts.
Tensions between economic development, energy policy and environmental and health concerns are common in public health’s history. Often, economic and energy development trump environmental and health concerns, leaving public health playing “catch-up.”
Indeed, only recently have rigorous studies of the health impacts of unconventional natural gas development been completed. We have published three studies, which evaluated birth outcomes, asthma exacerbations, and symptoms including nasal and sinus problems, fatigue and migraine headaches. These, together with other studies, form a growing body of evidence that unconventional natural gas development is having detrimental effects on health. Not unexpectedly, the oil and gas industry has countered our findings with pointed criticism.
Which exposures and health outcomes to study?
The process of fracking involves vertical and horizontal drilling, often to depths of more than 10,000 feet below the surface, followed by the injection of millions of gallons of water, chemicals and sand at high pressures. The liquids create fissures that release the natural gas in the shale rock.
The potential exposures from this process vary during the different phases of well development and have different scales of impact: Vibration may affect only people very close to wells, whereas stress from, for example, concerns about possible water contamination may have a wider reach. Other sources of stress can include an influx of temporary workers, industrial development in what used to be a rural area, heavy truck traffic and concerns about declining home prices.
We have now completed several health studies in partnership with the Geisinger Health System, which provides primary care to over 450,000 patients in Pennsylvania, including many residing in fracking areas. Geisinger has used an electronic health record system since 2001, allowing us to get detailed health data from all patient encounters, including diagnoses, tests, procedures, medications and other treatments during the same time frame as fracking developed.
For our first electronic health record-based studies, we selected adverse birth outcomes and asthma exacerbations. These outcomes are important and common, have short latencies, and prompt patients to seek care, so they are well documented in the electronic health record.
We studied over 8,000 mother-child pairs and 35,000 asthma patients. In our symptom study, we obtained questionnaires from 7,847 patients about nasal, sinus and other health symptoms. Because symptoms are subjective, they are not well-captured by an electronic health record and are better ascertained by questionnaire.
In all studies, we assigned patients measures of unconventional natural gas development activity. These were calculated using distance from the patient’s home to the well, well depth and production, and dates and duration of the different phases.
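The published metrics were more detailed than this, but the core idea, weighting each well by its size and production while discounting by distance from the home, can be sketched in a few lines. The inverse-distance-squared weighting and the well parameters below are illustrative assumptions for the sketch, not the exact published formula:

```python
from math import hypot

def activity_index(home, wells):
    """Illustrative unconventional-gas-activity metric for one patient.

    Each well contributes (depth * production) weighted by the inverse
    square of its distance from the home, so nearby, deeper and more
    productive wells dominate the score.
    """
    total = 0.0
    for well in wells:
        dist = hypot(well["x"] - home["x"], well["y"] - home["y"])
        total += well["depth"] * well["production"] / dist ** 2
    return total

# Hypothetical example: the same well contributes four times as much
# to the index when it sits half as far from the patient's home.
home = {"x": 0.0, "y": 0.0}
wells_near = [{"x": 1.0, "y": 0.0, "depth": 2.0, "production": 3.0}]
wells_far = [{"x": 2.0, "y": 0.0, "depth": 2.0, "production": 3.0}]
```

In the actual studies, the metric also incorporated the dates and durations of the different well phases; the sketch above keeps only the spatial weighting to show the principle.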
Our findings and how confident we are in them
In the birth outcome study, we found increased odds of preterm birth and suggestive evidence for reduced birth weight among women with higher unconventional natural gas development activity (those closer to more and bigger unconventional wells), compared with women with lower unconventional natural gas development activity during pregnancy.
In the asthma study, we found that asthma patients with higher unconventional natural gas development activity had increased odds of asthma hospitalizations, emergency department visits and new prescriptions of a medication used for mild asthma attacks, compared to those with lower activity. Finally, in our study of symptoms, we found patients with higher unconventional natural gas development activity had higher odds of nasal and sinus, migraine headache and fatigue symptoms compared to those with lower activity. In each analysis, we controlled for other risk factors for the outcome, including smoking, obesity and comorbid conditions.
Psychosocial stress, exposure to air pollution including truck traffic, sleep disruption and changes to socioeconomic status are all biologically plausible pathways for unconventional natural gas development to affect health. We hypothesize that stress and air pollution are the two primary pathways, but in our studies, we cannot yet determine which are responsible for the associations we observed.
As epidemiologists, we can rarely prove from our data that an exposure caused a health outcome. We do, however, perform additional analyses to test whether our findings are robust and to rule out the possibility that another factor we did not include was the actual cause.
In our studies, we looked at differences by county to understand whether there were just differences in the people who live in counties with and without fracking. And we repeated our studies with other health outcomes we would not expect to be affected by the fracking industry. In no analyses did we find results that suggested to us that our primary findings were likely to be biased, which gives us confidence in our results.
Other research groups have published on pregnancy and birth outcomes and symptoms, and the evidence suggests that the fracking industry may be affecting health in a range of ways. Over time, the body of evidence has gotten clearer, more consistent and concerning. However, we would not expect all studies to exactly agree, because, for example, the drilling practices, underlying health conditions and other factors likely differ in different study areas.
How has the industry responded?
Often the industry states that unconventional natural gas development has improved air quality. When describing emissions for the entire United States, this may be true. However, such statements ignore studies that suggest fracking has worsened local air quality in areas undergoing unconventional natural gas development.
A common retort by the industry is that rates of the health outcome studied – whether it’s asthma or preterm birth – are lower in fracking areas than in areas without fracking, or that the rate of the outcome is decreasing over time.
A study of increases or decreases in rates of a disease across years, calculated for groups of people, is called an ecologic study. Ecologic studies are less informative than studies with data on individual people because relationships can exist at the group level that do not exist among individuals. This is called the ecologic fallacy. For example, ecologic studies show a negative association between county-level average radon levels and lung cancer rates, but studies of individuals show strong positive associations between exposure to radon gas and lung cancer.
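The radon example can be reproduced with a toy calculation: when a group-level confounder (here, smoking) runs opposite to the exposure of interest, county rates and the individual-level effect point in different directions. The numbers below are invented purely to illustrate the fallacy:

```python
# Toy illustration of the ecologic fallacy. Individual risk rises with
# both radon and smoking (smoking more strongly), but the high-radon
# county happens to have far fewer smokers.
BASE, RADON_EFFECT, SMOKING_EFFECT = 0.01, 0.02, 0.05

def county_rate(radon_prev, smoking_prev):
    """Expected disease rate in a county, given exposure prevalences."""
    return BASE + radon_prev * RADON_EFFECT + smoking_prev * SMOKING_EFFECT

# High-radon county: 80% radon-exposed, only 10% smokers.
rate_high_radon = county_rate(radon_prev=0.8, smoking_prev=0.1)
# Low-radon county: 20% radon-exposed, 60% smokers.
rate_low_radon = county_rate(radon_prev=0.2, smoking_prev=0.6)

# Group level: the high-radon county shows the LOWER disease rate,
# even though every individual's risk increases by RADON_EFFECT
# when exposed to radon.
```

This is exactly why individual-level data matter: within either county, radon-exposed people are at higher risk, yet a county-by-county comparison suggests the opposite.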
One reason we used individual-level data in our peer-reviewed studies was to avoid the problem of the ecologic fallacy. So the rates highlighted by industry do not provide any evidence that our findings are invalid.
It’s worth noting that the fracking industry’s practices have improved. One example is the flaring of wells, which is a source of air, noise and light pollution, and has decreased dramatically in recent years. Drilling has also substantially slowed because of the dramatic decline in natural gas prices.
We must monitor the industry with ongoing health studies and perform more detailed exposure measurements by, for example, measuring noise and air pollution levels. If we understand why we are seeing associations between the fracking industry and health problems, then we can better inform patients and policymakers.
In the meantime, we would advise careful deliberation about future decisions about the industry to balance energy needs with environmental and public health considerations.
In a startling discovery that raises fundamental questions about human behavior, researchers at the University of Virginia School of Medicine have determined that the immune system directly affects — and even controls — creatures’ social behavior, such as their desire to interact with others. So could immune system problems contribute to an inability to have normal social interactions? The answer appears to be yes, and that finding could have great implications for neurological conditions such as autism-spectrum disorders and schizophrenia.
“The brain and the adaptive immune system were thought to be isolated from each other, and any immune activity in the brain was perceived as sign of a pathology. And now, not only are we showing that they are closely interacting, but some of our behavior traits might have evolved because of our immune response to pathogens,” explained Jonathan Kipnis, PhD, chairman of UVA’s Department of Neuroscience. “It’s crazy, but maybe we are just multicellular battlefields for two ancient forces: pathogens and the immune system. Part of our personality may actually be dictated by the immune system.”
Evolutionary Forces at Work
It was only last year that Kipnis, the director of UVA’s Center for Brain Immunology and Glia, and his team discovered that meningeal vessels directly link the brain with the lymphatic system. That overturned decades of textbook teaching that the brain was “immune privileged,” lacking a direct connection to the immune system. The discovery opened the door for entirely new ways of thinking about how the brain and the immune system interact.
The follow-up finding is equally illuminating, shedding light on both the workings of the brain and on evolution itself. The relationship between people and pathogens, the researchers suggest, could have directly affected the development of our social behavior, allowing us to engage in the social interactions necessary for the survival of the species while developing ways for our immune systems to protect us from the diseases that accompany those interactions. Social behavior is, of course, in the interest of pathogens, as it allows them to spread.
The UVA researchers have shown that a specific immune molecule, interferon gamma, seems to be critical for social behavior and that a variety of creatures, such as flies, zebrafish, mice and rats, activate interferon gamma responses when they are social. Normally, this molecule is produced by the immune system in response to bacteria, viruses or parasites. Blocking the molecule in mice using genetic modification made regions of the brain hyperactive, causing the mice to become less social. Restoring the molecule restored the brain connectivity and behavior to normal. In a paper outlining their findings, the researchers note the immune molecule plays a “profound role in maintaining proper social function.”
“It’s extremely critical for an organism to be social for the survival of the species. It’s important for foraging, sexual reproduction, gathering, hunting,” said Anthony J. Filiano, PhD, Hartwell postdoctoral fellow in the Kipnis lab and lead author of the study. “So the hypothesis is that when organisms come together, you have a higher propensity to spread infection. So you need to be social, but [in doing so] you have a higher chance of spreading pathogens. The idea is that interferon gamma, in evolution, has been used as a more efficient way to both boost social behavior while boosting an anti-pathogen response.”
Understanding the Implications
The researchers note that a malfunctioning immune system may be responsible for “social deficits in numerous neurological and psychiatric disorders.” But exactly what this might mean for autism and other specific conditions requires further investigation. It is unlikely that any one molecule will be responsible for disease or the key to a cure, the researchers believe; instead, the causes are likely to be much more complex. But the discovery that the immune system — and possibly germs, by extension — can control our interactions raises many exciting avenues for scientists to explore, both in terms of battling neurological disorders and understanding human behavior.
“Immune molecules are actually defining how the brain is functioning. So, what is the overall impact of the immune system on our brain development and function?” Kipnis said. “I think the philosophical aspects of this work are very interesting, but it also has potentially very important clinical implications.”
Researchers at the University of California, Riverside are bringing their idea for a ‘Window to the Brain’ transparent skull implant closer to reality through the findings of two studies that are forthcoming in the journals Lasers in Surgery and Medicine and Nanomedicine: Nanotechnology, Biology and Medicine.
The implant under development, which literally provides a ‘window to the brain,’ will allow doctors to deliver minimally invasive, laser-based treatments to patients with life-threatening neurological disorders, such as brain cancers, traumatic brain injuries, neurodegenerative diseases and stroke. The recent studies highlight both the biocompatibility of the implant material and its ability to endure bacterial infections.
The Window to the Brain project is a multi-institution, interdisciplinary partnership led by Guillermo Aguilar, professor of mechanical engineering in UCR’s Bourns College of Engineering, and Santiago Camacho-López, from the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE) in Mexico.
The project began when Aguilar and his team developed a transparent version of the material yttria-stabilized zirconia (YSZ) — the same ceramic product used in hip implants and dental crowns. By using this as a window-like implant, the team hopes doctors will be able to aim laser-based treatments into patients’ brains on demand and without having to perform repeated craniotomies, which are highly invasive procedures used to access the brain.
The toughness of YSZ, which is more impact resistant than the glass-based materials developed by other researchers, also makes it the only transparent skull implant that could conceivably be used in humans. The two recent studies further support YSZ as a promising alternative to currently available cranial implants.
Published July 8 in Lasers in Surgery and Medicine, the most recent study demonstrates how the use of transparent YSZ may allow doctors to combat bacterial infections, which are a leading reason for cranial implant failure. In lab studies, the researchers treated E. coli infections by aiming laser light through the implant, without having to remove it and without damaging the surrounding tissues.
“This was an important finding because it showed that the combination of our transparent implant and laser-based therapies enables us to treat not only brain disorders, but also to tackle bacterial infections that are common after cranial implants. These infections are especially challenging to treat because many antibiotics do not penetrate the blood brain barrier,” said Devin Binder, M.D., a neurosurgeon and neuroscientist in UCR’s School of Medicine and a collaborator on the project.
Another recent study, published in the journal Nanomedicine: Nanotechnology, Biology and Medicine, explored the biocompatibility of YSZ in an animal model, where it integrated into the host tissue without causing an immune response or other adverse effects.
“The YSZ was actually found to be more biocompatible than currently available materials, such as titanium or thermo-plastic polymers, so this was another piece of good news in our development of transparent YSZ as the material of choice for cranial implants,” Aguilar said.
Are pomegranates really the superfood we’ve been led to believe will counteract the aging process? Up to now, scientific proof has been fairly weak. And some controversial marketing tactics have led to skepticism as well. A team of scientists from EPFL and the company Amazentis wanted to explore the issue by taking a closer look at the secrets of this plump pink fruit. They discovered that a molecule in pomegranates, transformed by microbes in the gut, enables muscle cells to protect themselves against one of the major causes of aging. In nematodes and rodents, the effect is nothing short of amazing. Human clinical trials are currently underway, but these initial findings have already been published in the journal Nature Medicine.
As we age, our cells increasingly struggle to recycle their powerhouses. Called mitochondria, these inner compartments become unable to carry out their vital function and thus accumulate in the cell. This degradation affects the health of many tissues, including muscles, which gradually weaken over the years. A buildup of dysfunctional mitochondria is also suspected of playing a role in other diseases of aging, such as Parkinson’s disease.
One molecule plays David against the Goliath of aging
The scientists identified a molecule that, all by itself, managed to re-establish the cell’s ability to recycle the components of the defective mitochondria: urolithin A. “It’s the only known molecule that can relaunch the mitochondrial clean-up process, otherwise known as mitophagy,” says Patrick Aebischer, co-author on the study. “It’s a completely natural substance, and its effect is powerful and measurable.”
The team started out by testing their hypothesis on the usual suspect: the nematode C. elegans. It’s a favorite test subject among aging experts, because after just 8-10 days it’s already considered elderly. The lifespan of worms exposed to urolithin A increased by more than 45% compared with the control group.
These initial encouraging results led the team to test the molecule on animals that have more in common with humans. In the rodent studies, like with C. elegans, a significant reduction in the number of mitochondria was observed, indicating that a robust cellular recycling process was taking place. Older mice, around two years of age, showed 42% better endurance while running than equally old mice in the control group.
Human testing underway
Before heading out to stock up on pomegranates, however, it’s worth noting that the fruit doesn’t itself contain the miracle molecule, but rather its precursor. That molecule is converted into urolithin A by the microbes that inhabit the intestine. Because of this, the amount of urolithin A produced can vary widely, depending on the species of animal and the flora present in the gut microbiome. Some individuals don’t produce any at all. If you’re one of the unlucky ones, it’s possible that pomegranate juice won’t do you any good.
For those without the right microbes in their guts, however, the scientists are already working on a solution. The study’s co-authors founded a start-up company, Amazentis, which has developed a method to deliver finely calibrated doses of urolithin A. The company is currently conducting first clinical trials testing the molecule in humans in European hospitals.
Darwin at your service: parallel evolution makes good dinner partners
According to study co-author Johan Auwerx, it would be surprising if urolithin A weren’t effective in humans. “Species that are evolutionarily quite distant, such as C. elegans and the rat, react to the same substance in the same way. That’s a good indication that we’re touching here on an essential mechanism in living organisms.”
Urolithin A’s function is the product of tens of millions of years of parallel evolution between plants, bacteria and animals. According to Chris Rinsch, co-author and CEO of Amazentis, this evolutionary process explains the molecule’s effectiveness: “Precursors to urolithin A are found not only in pomegranates, but also in smaller amounts in many nuts and berries. Yet for it to be produced in our intestines, the bacteria must be able to break down what we’re eating. When, via digestion, a substance is produced that is of benefit to us, natural selection favors both the bacteria involved and their host. Our objective is to follow strict clinical validations, so that everyone can benefit from the result of these millions of years of evolution.”
The EPFL scientists’ approach provides a whole new palette of opportunities to fight the muscular degeneration that takes place as we age, and possibly also to counteract other effects of aging. By helping the body to renew itself, urolithin A could well succeed where so many pharmaceutical products, most of which have tried to increase muscle mass, have failed. Auwerx, who has also published a recent discovery about the anti-aging effects of another molecule in the journal Science, emphasizes the game-changing importance of these studies. “The nutritional approach opens up territory that traditional pharma has never explored. It’s a true shift in the scientific paradigm.”
Strands of cow cartilage substitute for ink in a 3D bioprinting process that may one day create cartilage patches for worn-out joints.
“Our goal is to create tissue that can be used to replace large amounts of worn out tissue or design patches,” says Ibrahim T. Ozbolat, associate professor of engineering science and mechanics at Penn State. “Those who have osteoarthritis in their joints suffer a lot. We need a new alternative treatment for this.”
The 3D printer lays down rows of cartilage strands in any pattern the researchers choose.
Cartilage is a good tissue to target for scale-up bioprinting because it is made up of only one cell type and has no blood vessels within the tissue. It is also a tissue that cannot repair itself. Once cartilage is damaged, it remains damaged.
Previous attempts at growing cartilage began with cells embedded in a hydrogel—a substance composed of polymer chains and about 90 percent water—that is used as a scaffold to grow the tissue.
“Hydrogels don’t allow cells to grow as normal,” says Ozbolat. “The hydrogel confines the cells and doesn’t allow them to communicate as they do in native tissues.” This leads to tissues that do not have sufficient mechanical integrity. Degradation of the hydrogel also can produce toxic compounds that are detrimental to cell growth.
Ozbolat and his research team developed a method to produce larger scale tissues without using a scaffold.
How it works
They create a tiny tube, 3 to 5 hundredths of an inch in diameter, made of alginate, an algae extract. They inject cartilage cells into the tube and allow them to grow for about a week and adhere to each other.
Because cells do not stick to alginate, they can remove the tube and are left with a strand of cartilage. The cartilage strand substitutes for ink in the 3D printing process. Using a specially designed prototype nozzle that can hold and feed the cartilage strand, the 3D printer lays down rows of cartilage strands in any pattern the researchers choose.
After about half an hour, the cartilage patch self-adheres enough to move to a petri dish. The researchers put the patch in nutrient media to allow it to further integrate into a single piece of tissue. Eventually the strands fully attach and fuse together.
“We can manufacture the strands in any length we want,” says Ozbolat. “Because there is no scaffolding, the process of printing the cartilage is scalable, so the patches can be made bigger as well. We can mimic real articular cartilage by printing strands vertically and then horizontally to mimic the natural architecture.”
Supply your own cartilage?
The artificial cartilage produced by the team is very similar to native cow cartilage, though its mechanical properties are inferior to those of natural cartilage. They are, however, better than those of cartilage made using hydrogel scaffolding.
Natural cartilage forms with pressure from the joints, and Ozbolat thinks that mechanical pressure on the artificial cartilage will improve the mechanical properties.
If this process is eventually applied to human cartilage, each individual treated would probably have to supply their own source material to avoid tissue rejection. The source could be existing cartilage or stem cells differentiated into cartilage cells.
Researchers from Harvard University and the University of Iowa worked on the project, which was funded by the National Science Foundation, Grow Iowa Value Funds, and the China Scholarship Fund. The researchers report their results in the current issue of Scientific Reports.
Sepsis is a severe health problem sparked by your body’s reaction to infection. When you get an infection, your body fights back, releasing chemicals into the bloodstream to kill the harmful bacteria or viruses. When this process works the way it is supposed to, your body takes care of the infection and you get better. With sepsis, the chemicals from your body’s own defenses trigger inflammatory responses, which can impair blood flow to organs, like the brain, heart or kidneys. This in turn can lead to organ failure and tissue damage.
At its most severe, the body’s response to infection can cause dangerously low blood pressure. This is called septic shock.
Sepsis can result from any type of infection. Most commonly, it starts as a pneumonia, urinary tract infection or intra-abdominal infection such as appendicitis. It is sometimes referred to as “blood poisoning,” but this is an outdated term. Blood poisoning is an infection present in the blood, while sepsis refers to the body’s response to any infection, wherever it is.
Once a person is diagnosed with sepsis, she will be treated with antibiotics, IV fluids and support for failing organs, such as dialysis or mechanical ventilation. This usually means a person needs to be hospitalized, often in an ICU. Sometimes the source of the infection must be removed, as with appendicitis or an infected medical device.
It can be difficult to distinguish sepsis from other diseases that can make one very sick, and there is no lab test that can confirm sepsis. Many conditions can mimic sepsis, including severe allergic reactions, bleeding, heart attacks, blood clots and medication overdoses. Sepsis requires particularly prompt treatment, so getting the diagnosis right matters.
The revolving door of sepsis care
As recently as a decade ago, doctors believed that sepsis patients were out of the woods if they could just survive to hospital discharge. But that isn’t the case – 40 percent of sepsis patients go back into the hospital within just three months of heading home, creating a “revolving door” that gets costlier and riskier each time, as patients get weaker and weaker with each hospital stay. Sepsis survivors also have an increased risk of dying for months to years after the acute infection is cured.
Post-Intensive Care Syndrome and frequent hospital readmissions mean that we have dramatically underestimated how much sepsis care costs. On top of the US$5.5 billion we now spend on initial hospitalization for sepsis, we must add untold billions in rehospitalizations, nursing home and professional in-home care, and unpaid care provided by devoted spouses and families at home.
Unfortunately, progress in improving sepsis care has lagged behind improvements in cancer and heart care, as attention has shifted to the treatment of chronic diseases. However, sepsis remains a common cause of death in patients with chronic diseases. One way to help reduce the death toll of these chronic diseases may be to improve our treatment of sepsis.
Rethinking sepsis identification
Raising public awareness increases the likelihood that patients will get to the hospital quickly when they are developing sepsis. This in turn allows prompt treatment, which lowers the risk of long-term problems.
Beyond increasing public awareness, doctors and policymakers are also working to improve the care of sepsis patients in the hospital.
For instance, a new sepsis definition was released by several physician groups in February 2016. The goal of this new definition is to better distinguish people with a healthy response to infection from those who are being harmed by their body’s response to infection.
As part of the sepsis redefinition process, the physician groups also developed a new prediction tool called qSOFA. This instrument identifies patients with infection who are at high risk of death or prolonged intensive care. The tool uses just three factors: thinking much less clearly than usual, quick breathing and low blood pressure. Patients with infection and two or more of these factors are at high risk of sepsis. In contrast to prior methods of screening patients at high risk of sepsis, the new qSOFA tool was developed by examining millions of patient records.
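The scoring logic is simple enough to sketch in a few lines. The numeric cutoffs below (respiratory rate of 22 breaths per minute or more, systolic blood pressure of 100 mmHg or less) are the thresholds published with the 2016 definitions; the code is an illustration of how the three factors combine, not clinical guidance:

```python
def qsofa_score(altered_mentation, resp_rate, systolic_bp):
    """qSOFA: one point per criterion met (illustrative sketch only)."""
    score = 0
    if altered_mentation:   # thinking much less clearly than usual
        score += 1
    if resp_rate >= 22:     # quick breathing, in breaths per minute
        score += 1
    if systolic_bp <= 100:  # low blood pressure, in mmHg
        score += 1
    return score

def high_risk(score):
    """Two or more factors flag a patient with infection as high risk."""
    return score >= 2

# Hypothetical patient: confused and breathing quickly, normal pressure.
patient_score = qsofa_score(True, 24, 110)
```

Because all three inputs can be assessed at the bedside without lab results, the tool can be applied quickly, which is the point of a screening instrument for a condition where prompt treatment matters.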
Life after sepsis
Even with great inpatient care, some survivors will still have problems after sepsis, such as memory loss and weakness.
Doctors are wrestling with how to best care for the growing number of sepsis survivors in the short and long term. This is no easy task, but there are several exciting developments in this area.
The Society of Critical Care Medicine’s THRIVE initiative is now building a network of support groups for patients and families after critical illness. THRIVE will forge new ways for survivors to work with each other, like how cancer patients provide each other advice and support.
As medical care is increasingly complex, many doctors contribute to a patient’s care for just a week or two. Electronic health records let doctors see how the sepsis hospitalization fits into the broader picture – which in turn helps doctors counsel patients and family members on what to expect going forward.
The high number of repeat hospitalizations after sepsis suggests another opportunity for improving care. We could analyze data about patients with sepsis to target the right interventions to each individual patient.
Better care through better policy
In 2012, New York state passed regulations to require every hospital to have a formal plan for identifying sepsis and providing prompt treatment. It is too early to tell if this is a strong enough intervention to make things better. However, it serves as a clarion call for hospitals to end the neglect of sepsis.
The Centers for Medicare & Medicaid Services (CMS) are also working to improve sepsis care. Starting in 2017, CMS will adjust hospital payments by quality of sepsis treatment. Hospitals with good report cards will be paid more, while hospitals with poor marks will be paid less.
To judge the quality of sepsis care, CMS will require hospitals to publicly report compliance with National Quality Forum’s “Sepsis Management Bundle.” This includes a handful of proven practices such as heavy-duty antibiotics and intravenous fluids.
While policy fixes are notorious for producing unintended consequences, the reporting mandate is certainly a step in the right direction. It would be even better if the mandate focused on helping hospitals work collaboratively to improve their detection and treatment of sepsis.
Right now, sepsis care varies greatly from hospital to hospital, and patient to patient. But as data, dollars and awareness converge, we may be at a tipping point that will help patients get the best care, while making the best use of our health care dollars.
This is an updated version of an article originally published on July 1, 2015.
“Cure” is a word that’s dominated the rhetoric in the war on cancer for decades. But it’s a word that medical professionals tend to avoid. While the American Cancer Society reports that cancer treatment has improved markedly over the decades and the five-year survival rate is impressively high for many cancers, oncologists still refrain from declaring their cancer-free patients cured. Why?
Patients are declared cancer-free (also called complete remission) when there are no more signs of detectable disease.
However, minuscule clusters of cancer cells below the detection level can remain in a patient’s body after treatment. Moreover, such small clusters of straggler cells may undergo metastasis, where they escape from the initial tumor into the bloodstream and ultimately settle in a distant site, often a vital organ such as the lungs, liver or brain.
When a colony of these metastatic cells reaches a detectable size, the patient is diagnosed with recurrent metastatic cancer. About one in three breast cancer patients diagnosed with early-stage cancer later develop metastatic disease, usually within five years of initial remission.
By the time metastatic cancer becomes evident, it is much more difficult to treat than when it was originally diagnosed.
What if these metastatic cells could be detected earlier, before they established a “foothold” in a vital organ? Better yet, could these metastatic cancer cells be intercepted, preventing them from lodging in a vital organ in the first place?
My colleagues and I have been developing an implant that aims to do just that. The implant is a tiny porous polymer disc (basically a miniature sponge, no larger than a pencil eraser) that can be inserted just under a patient’s skin. Implantation triggers the immune system’s “foreign body response,” and the implant starts to soak up immune cells that travel to it. If the implant can catch mobile immune cells, then why not mobile metastatic cancer cells?
We gave implants to mice specially bred to model metastatic breast cancer. When the mice had palpable tumors but no evidence of metastatic disease, the implant was removed and analyzed.
Cancer cells were indeed present in the implant, while the other organs (potential destinations for metastatic cells) still appeared clean. This means that the implant can be used to spot previously undetectable metastatic cancer before it takes hold in an organ.
For patients with cancer in remission, an implant that can detect tumor cells as they move through the body would be a diagnostic breakthrough. But having to remove it to see if it has captured any cancer cells is not the most convenient or pleasant detection method for human patients.
Detecting cancer cells with noninvasive imaging
There could be a way around this, though: a special imaging method under development at Northwestern University called Inverse Spectroscopic Optical Coherence Tomography (ISOCT). ISOCT detects molecular-level differences in the way cells in the body scatter light. And when we scan our implant with ISOCT, the light scatter pattern looks different when it’s full of normal cells than when cancer cells are present. In fact, the difference is apparent when even as few as 15 out of the hundreds of thousands of cells in the implant are cancer cells.
There’s a catch – ISOCT cannot penetrate deep into tissue. That means it is not a suitable imaging technology for finding metastatic cells buried deep in internal organs. However, when the cancer cell detection implant is located just under the skin, it may be possible to detect cancer cells trapped in it using ISOCT. This could offer an early warning sign that metastatic cells are on the move.
This early warning could prompt doctors to monitor their patients more closely or perform additional tests. Conversely, if no cells are detected in the implant, a patient still in remission could be spared from unneeded tests.
The ISOCT results show that noninvasive imaging of the implant is feasible. But it’s a method still under development, and thus it’s not widely available. To make scanning easier and more accessible, we’re working to adapt more ubiquitous imaging technologies like ultrasound to detect tiny quantities of tumor cells in the implant.
Not just detecting, but quarantining cancer
Besides providing a way to detect tiny numbers of cancer cells before they can form new tumors in other parts of the body, our implant offers an even more intriguing possibility: diverting metastatic cells away from vital organs, and sequestering them where they cannot cause any damage.
In our mouse studies, we found that metastatic cells got caught in the implant before they were apparent in vital organs. When metastatic cells eventually made their way into the organs, the mice with implants still had significantly fewer tumor cells in their organs than implant-free controls. Thus, the implant appears to provide a therapeutic benefit, most likely by taking the metastatic cells it catches out of the circulation, preventing them from lodging anywhere vital.
Interestingly, we have not seen cancer cells leave the implant once trapped, or form a secondary tumor in the implant. Ongoing work aims to learn why this is. Whether the cells can stay safely immobilized in the implant or if it would need to be removed periodically will be important questions to answer before the implant could be used in human patients.
What the future may hold
For now, our work aims to make the implant more effective at drawing and detecting cancer cells. Since we tested the implant with metastatic breast cancer cells, we also want to see if it will work on other types of cancer. Additionally, we’re studying the cells the implant traps, and learning how the implant interacts with the body as a whole. This basic research should give us insight into the process of metastasis and how to treat it.
In the future (and it might still be far off), we envision a world where recovering cancer patients can receive a detector implant to stand guard for disease recurrence and prevent it from happening. Perhaps the patient could even scan their implant at home with a smartphone and get treatment early, when the disease burden is low and the available therapies may be more effective. Better yet, perhaps the implant could continually divert all the cancer cells away from vital organs on its own, like Iron Man’s electromagnet that deflects shrapnel from his heart.
This solution is still not a “cure.” But it would transform a formidable disease that one out of three cancer survivors would otherwise ultimately die from into a condition with which they could easily live.
Overweight people who took a capsule for eight weeks that contained two compounds found in red grapes and oranges saw improvements in blood sugar levels and artery function, researchers report.
“This is an incredibly exciting development and could have a massive impact on our ability to treat these diseases,” says Paul Thornalley, a professor in systems biology at the University of Warwick Medical School. “As well as helping to treat diabetes and heart disease, it could defuse the obesity time bomb.”
When participants received both compounds—trans-resveratrol (tRES) in red grapes and hesperetin (HESP) in oranges—at pharmaceutical doses, the compounds acted in tandem to decrease blood glucose, improve the action of insulin, and boost the health of arteries.
After eight weeks on the treatment, researchers noted an improvement in insulin resistance in trial participants that was similar to improvements seen six months after bariatric surgery.
The compounds work by increasing a protein called glyoxalase 1 (Glo1) in the body that neutralizes a damaging sugar-derived compound called methylglyoxal (MG).
For the study, researchers increased Glo1 expression in cell culture and then tested the formulation in a randomized, placebo-controlled crossover clinical trial.
Thirty-two overweight and obese people between the ages of 18 and 80 who had a BMI between 25 and 40 took part in the trial. They were given the supplement in capsule form once a day for eight weeks. They were asked to maintain their usual diet, and their food intake was monitored via a dietary questionnaire. They were also asked not to alter their daily physical activity.
Changes in their blood sugar levels were assessed through blood samples, artery health was measured by artery wall flexibility, and other outcomes were assessed by analysis of blood markers.
Pregnancy sounds like the ultimate form of animal cooperation – mothers share their own bodies to grow and support their children’s prenatal development. But in reality, embryos use every trick in the book to take more than their fair share. Mothers, in turn, marshal their best defensive tactics.
Ultimately, it’s an evolutionary arms race. Offspring continually evolve strategies to steal resources, while mothers evolve strategies to defend their resources. Natural selection will favor embryos that are able to steal resources, but this will impose costs on the mother.
My colleagues and I are interested in how the mechanisms of this battle could have evolved. We recently investigated some differences between closely related animals that carry their young and others that lay eggs to figure out how hormones evolved to be expressed in the placenta. By understanding the processes that support conflict, we can identify how this conflict arose, and the impacts that it might hold for human health.
Placenta as a combat zone
During pregnancy, mothers support their offspring by providing nourishment across a placenta. Formed from both the embryo’s and mother’s tissue, this organ facilitates the exchange of materials between the two. The placenta is responsible for transferring oxygen and nutrients to the baby, while taking away waste products like carbon dioxide and urea.
By secreting hormone signals across the placenta to be received by the mother’s body, embryos can alter the amount of food they’re provided. In a truly cooperative world, offspring would release these “gimme more!” hormones only if they were undernourished. But embryos actually produce these hormones demanding more of the mother almost constantly throughout pregnancy.
Mothers’ bodies fend off these hormonal demands with defenses including the development of physical barriers between the embryo and the maternal blood supply, and the production of enzymes that can break down excessive levels of embryo-produced hormones.
But where did the tools embryos use to wage this battle come from? That’s the question my colleagues and I recently investigated.
Hunting for the origins of the conflict
Placentas are not limited to mammals. They’re also found in reptiles and fishes like the seahorse.
We know that live-bearing animals evolve from egg-laying ones, but we were curious about the role of parent-offspring conflict in this process. Did placental control of pregnancy evolve via novel hormones? Or did it rely on genes that were already present in the ancestral populations?
We know each of these groups evolved pregnancy independently, because each is more closely related to an egg-laying species than they are to each other. For example, the first mammals were egg-laying and some of them are still around today – Australia’s platypus, for instance. Similarly, each of the live-bearing lizard species we studied has closely related egg-laying relatives.
By studying both the live-bearing and egg-laying relatives of these animals we can understand the things that are necessary for the transition.
We compared the list of hormones produced by these animals’ placental tissues to a similar tissue from two egg-laying animals: the chicken and an egg-laying population of the southeastern slider lizard. These species don’t have placentas because they lay eggs rather than carrying their unborn young internally. But placentas evolved from a membrane that lines the internal surface of developing eggs. This embryonic membrane supports the exchange of gases between the embryo and the world outside its egg.
When we compared the genes found in the embryonic membrane of species with and without a placenta, the lists largely matched. This finding shows that the hormones used by embryos to manipulate their mothers evolved a long time ago, in an ancestor of both reptiles and mammals. When pregnancy evolved, the mechanisms to initiate conflict between the mother and embryo were already in place.
While we don’t know the function of these hormones in egg-laying species, we can speculate. The embryonic membrane is the first living point of contact between an embryo and the outside world. These hormones may alter the development of embryos in response to some environmental stimulus, such as temperature or disease.
Mom vs. dad at the placenta battlefield
Why are mothers and embryos at odds, anyway?
After all, animals have two major evolutionary goals: to survive and to produce fertile offspring to spread their genes. Individuals maximize the fitness of their genes by producing as many healthy offspring as they can over their lifetime. So it seems reasonable that mothers would want to support their offspring to give them the best chance of survival – as long as it doesn’t put mom herself at risk.
But remember, offspring contain genes from both parents. If a father can alter the development of his offspring in a way that allows it to take advantage of the mother, even if it imposes a cost on her, it would give him and his genes a fitness advantage.
This is particularly critical when females mate with multiple males. In this case, a father may be the parent of just one or a few of the many offspring a female produces over her lifetime. He wants his offspring to have an edge over others fathered by competitors.
In this way, the goals of the father’s genes may not overlap with the goals of the mother’s. It’s the differences between the goals of the mom and dad genes that are the ultimate cause of mother-embryo conflict during pregnancy.
Ways to control development beyond genes
As a result of the ongoing battle across the placenta, some animals have evolved strategies to affect the development of their children in ways that do not include changes to the genes they pass on.
For instance, males and females can mark the genes of their sperm and eggs in different ways so the effect of the gene depends on which parent passed it on. Scientists call this phenomenon – when a gene’s outcome in an individual depends on whether it was inherited from the mother or father – genomic imprinting.
Genomic imprinting is one mechanism by which the placenta battle can be waged.
The gene that produces insulin-like growth factor 2 (IGF2) is an example. It controls placental growth: more of the hormone results in a bigger placenta and more nutrients being transferred to the offspring, while lower production results in smaller offspring.
When the mother makes egg cells, she modifies the IGF2 gene by adding molecules that ultimately change the structure of DNA. With this alteration, the genes encoded by the DNA cannot be expressed. So in normal offspring, the maternal copy of this gene isn’t expressed, while the paternal copy is. Mom is working to make sure the embryo doesn’t greedily take more resources than it needs, while Dad is happy to see the embryo garner more than strictly necessary.
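The parent-of-origin logic described above can be captured in a toy model. The helper function and the set of imprinted genes here are hypothetical constructs for illustration; only the IGF2 behavior (maternal copy silenced, paternal copy expressed) is taken from the text:

```python
def expressed_alleles(gene: str, maternally_imprinted: set) -> list:
    """Toy model of genomic imprinting: return which parental copies
    of `gene` are expressed in the offspring.

    If the gene is maternally imprinted, the maternal copy is
    silenced and only the paternal copy is expressed.
    """
    copies = []
    if gene not in maternally_imprinted:  # maternal copy silenced if imprinted
        copies.append("maternal")
    copies.append("paternal")             # paternal copy expressed in this model
    return copies

# IGF2 is maternally imprinted in mammals, per the article.
IMPRINTED = {"IGF2"}

expressed_alleles("IGF2", IMPRINTED)   # → ["paternal"]
expressed_alleles("ACTB", IMPRINTED)   # hypothetical non-imprinted gene → both copies
```

The point of the model is the asymmetry: for an imprinted gene, which parent a copy came from, not just its DNA sequence, determines whether it is expressed.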
My research group wanted to identify whether genomic imprinting is present in the reptiles that have a placenta. In research published in the journal Development Genes and Evolution, we looked at the genes that are imprinted in the placenta of mammals, and checked for imprinting of those same genes in the southern grass skink.
It turned out none of the mammalian imprinted genes are imprinted in this lizard, suggesting some fundamental differences between the role of conflict in mammalian and reptile pregnancy. The war in mammal placentas is waged using genomic imprinting, whereas in reptiles, it appears that mothers and fathers must use other tools.
Together our studies suggest that the genes responsible for conflict in animals that exhibit pregnancy were present in the embryonic membranes of the most recent common ancestor of mammals and reptiles, which lived more than 300 million years ago. It looks like conflict between mother and child is baked into these lineages, and is likely to arise any time pregnancy evolves in animals.
While the processes that underpin conflict are well understood, many questions remain. How does the process of conflict contribute to the evolution of a complex organ like a placenta? I’m interested in how this internal conflict interacts with the environment in natural ecosystems. For example, how does the availability of resources affect how the mother provides those resources to her offspring? My ongoing research seeks to understand how resource availability affects what embryos receive through the placenta, and the genetics that underpin this organ’s function.