The next frontier in medical sensing: Threads coated in nanomaterials

By Sameer Sonkusale, Tufts University.

Doctors have various ways to assess your health. For example, they measure your heart rate and blood pressure to indirectly assess your heart function, or test a blood sample directly for iron content to diagnose anemia. But there are plenty of situations in which that sort of monitoring just isn’t possible.

Testing the health of muscle and bone in contact with a hip replacement, for example, requires a complicated – and expensive – procedure. And if problems are found, it’s often too late to truly fix them. The same is true when dealing with deep wounds or internal incisions from surgery.

In my engineering lab at Tufts University, we asked ourselves whether we could make sensors that could be seamlessly embedded in body tissue or organs – and yet could communicate to monitors outside the body in real time. The first concern, of course, would be to make sure that the materials wouldn’t cause infection or an immune response from the body. The sensors would also need to match the mechanical properties of the body part they would be embedded in: soft for organs and stretchable for muscle. And, ideally, they would be relatively inexpensive to make in large quantities.

Our search for materials we might use led us to a surprising candidate – threads, just like what our clothes are made of. Thread has many advantages. It is abundant, easy to make and very inexpensive. Threads can be made flexible and stretchable – and even from materials that aren’t rejected by the body. In addition, doctors are very comfortable working with threads: They routinely use sutures to stitch up open wounds. What if we could embed sensor functions into threads?

Finding the right sensor

Today’s medical sensors are typically rigid and flat – which limits them to monitoring surfaces such as the scalp or skin. But most organs and tissues are three-dimensional, heterogeneous, multilayered biological structures. To monitor them, we need something much more like a thread.

Nanomaterials can be organic or inorganic, inert or bioactive, and can be designed with physical and chemical properties that are useful for medical sensing. For example, carbon nanotubes are amazingly versatile – their electrical conductivity can be tuned, which has made them the basis of next-generation sensors and electronic transistors. They can even detect single molecules of DNA and proteins. The organic nanomaterial polyaniline has a similarly broad range of applications, notably because its conductivity depends on the strength of the acid or base it is in contact with.

Making the materials

To make sensing threads, we start with cotton and other conventional threads, dip them in liquids containing different nanomaterials, and rapidly dry them. Depending on the properties of the nanomaterial we use, these can monitor mechanical or chemical activity.

For example, coating stretchable rubber fiber with carbon nanotubes and silicone can make threads that can sense and measure physical strain. As they stretch, the threads’ electrical properties change in ways we can monitor externally. This can be used to monitor wound healing or muscle strain experienced due to artificial implants. After an implant, abnormal strain could be a sign of slow healing, or even improper placement of the device. Threads monitoring strain levels can send a message to both patient and doctor so that treatment can be modified appropriately.
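To make that concrete, here is a minimal sketch of how a stretch reading might be extracted from such a thread, assuming it behaves like an ordinary resistive strain gauge. The gauge factor and resistance values below are invented for illustration, not measurements from our threads.

```python
# Minimal sketch: treating a stretchable conductive thread as a resistive
# strain gauge. The gauge factor GF is illustrative, not a measured value.

def strain_from_resistance(r_measured_ohms, r_unstrained_ohms, gauge_factor=3.0):
    """Estimate strain from the relative change in thread resistance.

    For a resistive strain sensor, delta_R / R0 ~= GF * strain, so
    strain ~= (R - R0) / (R0 * GF).
    """
    delta_r = r_measured_ohms - r_unstrained_ohms
    return delta_r / (r_unstrained_ohms * gauge_factor)

# Example: a thread that reads 1,060 ohms against a 1,000-ohm baseline.
strain = strain_from_resistance(1060.0, 1000.0)
print(f"Estimated strain: {strain:.1%}")   # ~2.0% elongation
```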

Monitoring the electricity flow between one cotton thread coated with carbon nanotubes and polyaniline nanofibers, and another coated with silver and silver chloride, allows us to measure acidity, which can be a sign of infection.

To help people who need to monitor their blood sugar levels, we can coat a thread with glucose oxidase, which reacts with glucose to generate an electrical signal indicating how much sugar is in the patient’s blood. Similarly, coating conductive threads with other nanomaterials sensitive to specific elements or chemicals can help doctors measure potassium and sodium levels or other metabolic markers in your blood.
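As a rough illustration of how such an electrochemical reading becomes a number a doctor can use, the sketch below applies a simple two-point linear calibration to a sensor current. The currents and concentrations are hypothetical placeholders, not values from a real device.

```python
# Hypothetical two-point linear calibration for an enzymatic (glucose oxidase)
# thread sensor: measured current is assumed proportional to glucose
# concentration over the range of interest. All numbers are invented.

def make_linear_calibration(current_low, conc_low, current_high, conc_high):
    """Return a function mapping sensor current (nA) to concentration (mg/dL)."""
    slope = (conc_high - conc_low) / (current_high - current_low)

    def current_to_concentration(current_na):
        return conc_low + slope * (current_na - current_low)

    return current_to_concentration

# Calibrate against two known standards (illustrative numbers only).
to_glucose = make_linear_calibration(current_low=50.0, conc_low=70.0,
                                     current_high=250.0, conc_high=180.0)
print(f"{to_glucose(150.0):.0f} mg/dL")  # midpoint reading -> 125 mg/dL
```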

Multiple uses

Beyond sensing abilities, many thread materials, such as cotton, have another useful property: wicking. They can move liquid along their length via capillary action without needing a pump, the way melted wax flows up a candlewick to feed the flame.

Liquid flowing in threads sutured into skin.
Nano Lab, Tufts University, CC BY-ND

We used cotton threads to transport interstitial fluid, which fills in the gaps between cells, from the places it normally exists toward sensing threads located elsewhere. The sensing threads send their electronic signals to an external device housed in a flexible patch, along with a button battery and a small antenna. There, the signals are amplified, digitized and transmitted wirelessly to a smartphone or any Wi-Fi connected device.

These combined transport, sensing and transmission systems are so small that they can be powered with a tiny battery sitting on top of the skin, or could draw energy from glucose in the patient’s blood. That could allow doctors to keep a continuous eye on patients’ health remotely and unobtrusively.

Smart threads can monitor wounds using a suite of physical and chemical sensors made using threads and passing information to a skin-surface transmitter.
Nano Lab, Tufts University, CC BY-ND

This type of integrated, wireless monitoring has several advantages over current systems. First, the patient can move around freely, rather than being confined to a hospital bed. In addition, real-time data-gathering provides much more accurate information than periodic testing at a hospital or doctor’s office. And it reduces the cost of health care by moving treatment, monitoring and diagnosis out of the hospital.

So far our testing of nano-infused threads has been in sterile lab environments in rodents. The next step is to perform more tests in animals, particularly to monitor how well the threads do in living tissue over long periods of time. Then we’d move toward testing in humans.

Now that we’ve begun exploring the possibilities of threads, potential uses seem to be everywhere. Diabetic patients can have trouble with wounds resisting healing, which can lead to infection and even amputation. A few choice stitches using sensing threads could let doctors detect these problems at extremely early stages – much sooner than we can today – and take action to prevent them from worsening. Sensing threads can even be woven into bandages, wound dressings or hospital bed sheets to monitor patients’ progress and raise alarms before problems get out of control.

Sameer Sonkusale, Professor of Electrical and Computer Engineering, Tufts University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

What wind, currents and geography tell us about how people first settled Oceania

By Alvaro Montenegro, The Ohio State University.

Just look at a map of Remote Oceania – the region of the Pacific that contains Hawaii, New Zealand, Samoa, French Polynesia and Micronesia – and it’s hard not to wonder how people originally settled on these islands. They’re mostly small and located many hundreds to thousands of kilometers away from any large landmass as well as from each other. As our species colonized just about every region of the planet, these islands seem to be the last places our distant ancestors reached.

A comprehensive body of archaeological, linguistic, anthropological and genetic evidence suggests that people started settling there about 3,400 years before present (BP). While we have a relatively clear picture of when many of the major island groups were colonized, there is still considerable debate as to precisely where these settlers originated and the strategies and trajectories they used as they voyaged.

In new experiments, my colleagues and I investigated how environmental variability and Oceania’s geographical setting would have influenced the colonization process. We built computer seafaring simulations and analyzed wind, precipitation and land distribution data over this region of the Pacific. We wanted to understand how seasonal and climate variability in weather and currents might lead to some potential routes being favored over others. How would these factors, including the periodic El Niño and La Niña patterns, affect even the feasibility of different sailing strategies? Did they play a role in the puzzling 2,000-year pause we see in eastward expansion? Could they have provided incentives to migration?

Standing questions about Oceania’s settlement

While the archaeological record contains no concrete information on the sailing capabilities of these early voyagers, their navigational prowess is undeniable. Settlement required trips across thousands of kilometers of open ocean toward very small targets. Traditional Pacific vessels such as double-hulled voyaging canoes and outrigger canoes would be able to make these potentially harrowing journeys, but at this point we have no way of knowing what kind of boat technology those early settlers used.

And colonization occurred in the opposite direction of mean winds and currents, which in this area of the Pacific flow on average from east to west. Scientists think the pioneers came from west to east, with western Melanesia and eastern Maritime Southeast Asia being the most likely source areas. But there’s still considerable debate as to exactly where these settlers came from, where they traveled and how.

Among the many intriguing aspects of the colonization process is the fact that it occurred in two rapid bursts separated by an almost 2,000-year-long hiatus. Starting around 3,400 BP, the region between the source areas and the islands of Samoa and Tonga was mostly occupied over a period of about 300 years. Then there was a pause in expansion; regions farther to the east such as Hawaii, Rapa Nui and Tahiti were only colonized sometime between about 1,100 and 800 BP. New Zealand, to the west of Samoa and Tonga but located far to the south, was occupied during this second expansion period. What might have caused that millennia-long lag?

Contemporary replica of a waʻa kaulua, a Polynesian double-hulled voyaging canoe.
Shihmei Barger 舒詩玫, CC BY-NC-ND

Simulating sailing conditions

The goal of our simulations was to take into account what we know about the real-world sailing conditions these intrepid settlers would have encountered at the time they were setting out. We know the general sailing performance of traditional Polynesian vessels – how fast these boats move given a particular wind speed and direction. We ran the simulation using observed present-day wind and current data – our assumption was that today’s conditions would be very close to those from 3,000 years ago and offer a better representation of variability than paleoclimate models.

The simulations compute how far one of these boats would have traveled daily based on winds and currents. We simulated departures from several different areas and at different times of year.

First we considered what would happen if the boats were sailing downwind; the vessels have no specified destination and are allowed to sail only in the direction in which the wind is blowing. Then we ran directed sailing experiments; in these, the boats are still influenced by currents and winds, but are forced to move a minimum daily distance, no matter the environmental conditions, toward a predetermined target. We still don’t know what type of vessels were used or how the sailors navigated; we just ran the model assuming they had some way to voyage against the wind, whether via sails or paddling.
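The sketch below is not the model we used, but it conveys the basic daily update in a downwind versus directed sailing experiment: each simulated day the vessel drifts with the current, picks up some fraction of the wind, and, in the directed case, is also forced to make a minimum distance toward a target. All speeds, fractions and wind statistics here are invented for illustration.

```python
import math
import random

def step_downwind(pos, wind_kmday, current_kmday, boat_fraction=0.3):
    """One day of downwind sailing: drift with the current plus a fraction of the wind."""
    x, y = pos
    return (x + current_kmday[0] + boat_fraction * wind_kmday[0],
            y + current_kmday[1] + boat_fraction * wind_kmday[1])

def step_directed(pos, target, wind_kmday, current_kmday, min_progress_km=30.0):
    """One day of directed sailing: same drift, plus forced progress toward the target."""
    x, y = step_downwind(pos, wind_kmday, current_kmday)
    dx, dy = target[0] - x, target[1] - y
    dist = math.hypot(dx, dy)
    if dist > 0:
        x += min_progress_km * dx / dist
        y += min_progress_km * dy / dist
    return (x, y)

# Example: ten days of variable winds (mostly westward, occasionally reversing)
# and a weak westward current, heading for a target 500 km to the east.
random.seed(1)
pos = (0.0, 0.0)
for day in range(10):
    wind = (random.gauss(-40.0, 60.0), random.gauss(0.0, 20.0))  # km/day
    pos = step_directed(pos, target=(500.0, 0.0), wind_kmday=wind,
                        current_kmday=(-10.0, 0.0))
print(f"Position after 10 days: ({pos[0]:.0f} km east, {pos[1]:.0f} km north)")
```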

One goal of our analysis was to describe how variations in winds and precipitation associated with the annual seasons and with the El Niño and La Niña weather patterns could have affected voyaging. We focused on conditions that would have favored or motivated movement from west to east, opposite to the mean winds, but in the general direction of the real migratory flow.

We also used land distribution data to determine “shortest hop” trajectories. These are the routes that would be formed if eastward displacement took place by a sequence of crossings in which each individual crossing always reaches the closest island to the east of the departure island.
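In code, the shortest-hop idea reduces to a simple greedy rule, sketched here with made-up island coordinates rather than the real geography we used: from each island, hop to the closest island that lies farther east, and repeat until no eastward neighbor remains.

```python
import math

def shortest_hop_path(start, islands):
    """Greedy eastward path: each hop goes to the closest island further east."""
    path = [start]
    current = start
    while True:
        eastward = [i for i in islands if i[1][0] > current[1][0] and i not in path]
        if not eastward:
            return path
        current = min(eastward, key=lambda i: math.dist(i[1], current[1]))
        path.append(current)

# Hypothetical islands as (name, (x_east_km, y_north_km)) pairs.
islands = [("A", (0, 0)), ("B", (300, 80)), ("C", (350, -200)),
           ("D", (700, 0)), ("E", (1200, 100))]
print([name for name, _ in shortest_hop_path(islands[0], islands)])  # ['A', 'B', 'C', 'D', 'E']
```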

What did the environmental data suggest?

After conducting thousands of voyaging simulations and calculating hundreds of shortest-hop trajectories, patterns started to emerge.

While the annually averaged winds in the region are to the west, there is significant variability, and eastward winds blow quite frequently in some seasons. The occurrence and intensity of these eastward winds increase during El Niño years. So downwind sailing, especially if conducted during particular times of the year (June-November in areas north of the equator and December-February in the Southern Hemisphere), can be an effective way to move eastward. It could be used to reach islands in the region of the first colonization pulse. Trips by downwind sailing become even more feasible under El Niño conditions.

Though many do believe early settlers were able to sail efficiently against the wind, our simulations suggest that even just following the winds and currents would be one way human beings conceivably could have traveled east in this area. (Moving eastward in the area east of Samoa does require sailing against the wind, though.)

Filled red lines depict all shortest-hop paths with starts from central and southern Philippines, Maluku and Solomon departure areas.
Using seafaring simulations and shortest-hop trajectories to model the prehistoric colonization of Remote Oceania. Montenegro et al., PNAS 2016, doi:10.1073/pnas.1612426113

Our shortest-hop analysis points to two “gateway islands” – eastward expansion into large areas of Oceania would require passage through them. Movement into Micronesia would have to go through Yap. Expansion into eastern Polynesia would mean traveling through Samoa. This idea of gateway islands that would have to be colonized first opens new possibilities for understanding the process of settling Oceania.

As for that 2,000-year-long pause in migration, our simulation provided us with a few ideas about that, too. The area near Samoa is marked by an increase in distance between islands. And no matter what time of year, El Niño or not, you need to move against the wind to travel eastward around Samoa. So it makes sense that the pause in the colonization process was related to the development of technological advances that would allow more efficient against-the-wind sailing.

And finally, we think our analysis suggests some incentives to migration, too. In addition to changes to wind patterns that facilitate movement to the east, the El Niño weather pattern also causes drier conditions over western portions of Micronesia and Polynesia every two to seven years. It’s possible to imagine El Niño leading to tougher conditions, such as crop-damaging drought. El Niño weather could simultaneously have provided a reason to want to strike out for greener pastures and a means for eastward exploration and colonization. On the flip side, changes in winds and precipitation associated with La Niña could have encouraged migration to Hawaii and New Zealand.

Synthesis of results. Filled and dashed arrows refer to crossings that, according to simulations, are viable under downwind and directed sailing, respectively.
Using seafaring simulations and shortest-hop trajectories to model the prehistoric colonization of Remote Oceania. Montenegro et al., PNAS 2016, doi:10.1073/pnas.1612426113

Overall, our results lend weight to various existing theories. El Niño and La Niña have been proposed as potential migration influences before, but we’ve provided a much more detailed view in both space and time of how this could have taken place. Our simulations strengthen the case for a lack of technology being the cause of the pause in migration, and for downwind sailing as a viable strategy for the first colonization pulse around 3,400 BP.

In the future, we hope to create new models – turning to time-series of environmental data instead of the statistical descriptions we used this time – to see if they produce similar results. We also want to develop experiments that would evaluate sailing strategies not in the context of discovery and colonization but of exchange networks. Are the islands along “easier” pathways between distant points also places where the archaeology shows a diverse set of artifacts from different regions? There’s still plenty to figure out about how people originally undertook these amazing voyages of exploration and expansion.

Alvaro Montenegro, Assistant Professor of Geography and Director of the Atmospheric Sciences Program, The Ohio State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Turning diamonds’ defects into long-term 3-D data storage

By Siddharth Dhomkar, City College of New York and Jacob Henshaw, City College of New York.

With the amount of data storage required for our daily lives growing and growing, and currently available technology being almost saturated, we’re in desperate need of a new method of data storage. The standard magnetic hard disk drive (HDD) – like what’s probably in your laptop computer – has reached its limit, holding a maximum of a few terabytes. Standard optical disk technologies, like compact disc (CD), digital video disc (DVD) and Blu-ray disc, are restricted by their two-dimensional nature – they just store data in one plane – and also by a physical law called the diffraction limit, based on the wavelength of light, that constrains our ability to focus light to a very small volume.
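For a sense of scale, the diffraction limit can be estimated with the standard Abbe formula: roughly the wavelength divided by twice the numerical aperture of the lens. The sketch below plugs in the commonly quoted Blu-ray values (a 405-nanometer laser and a 0.85 numerical aperture) purely as an illustration, not as a product specification.

```python
# Back-of-envelope diffraction limit: the smallest spot a lens can focus light
# to is roughly lambda / (2 * NA), which bounds how densely bits can be packed
# on an optical disc. Numbers are the commonly quoted Blu-ray values.

def abbe_spot_nm(wavelength_nm, numerical_aperture):
    """Approximate diffraction-limited spot diameter in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

spot = abbe_spot_nm(wavelength_nm=405.0, numerical_aperture=0.85)
print(f"Minimum spot size: ~{spot:.0f} nm")        # ~240 nm
print(f"Rough bit area:    ~{spot**2:.0f} nm^2")   # spot-diameter-squared scale
```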

And then there’s the lifetime of the memory itself to consider. HDDs, as we’ve all experienced in our personal lives, may last only a few years before things start to behave strangely or just fail outright. DVDs and similar media are advertised as having a storage lifetime of hundreds of years. In practice this may be cut down to a few decades, assuming the disk is not rewritable. Rewritable disks degrade on each rewrite.

Without better solutions, we face financial and technological catastrophes as our current storage media reach their limits. How can we store large amounts of data in a way that’s secure for a long time and can be reused or recycled?

In our lab, we’re experimenting with a perhaps unexpected memory material you may even be wearing on your ring finger right now: diamond. On the atomic level, these crystals are extremely orderly – but sometimes defects arise. We’re exploiting these defects as a possible way to store information in three dimensions.

Focusing on tiny defects

One approach to improving data storage has been to continue in the direction of optical memory, but extend it to multiple dimensions. Instead of writing the data to a surface, write it to a volume; make your bits three-dimensional. The data are still limited by the physical inability to focus light to a very small space, but you now have access to an additional dimension in which to store the data. Some methods also polarize the light, giving you even more dimensions for data storage. However, most of these methods are not rewritable.

Here’s where the diamonds come in.

The orderly structure of a diamond, but with a vacancy and a nitrogen replacing two of the carbon atoms. Zas2000

A diamond is supposed to be a pure well-ordered array of carbon atoms. Under an electron microscope it usually looks like a neatly arranged three-dimensional lattice. But occasionally there is a break in the order and a carbon atom is missing. This is what is known as a vacancy. Even further tainting the diamond, sometimes a nitrogen atom will take the place of a carbon atom. When a vacancy and a nitrogen atom are next to each other, the composite defect is called a nitrogen vacancy, or NV, center. These types of defects are always present to some degree, even in natural diamonds. In large concentrations, NV centers can impart a characteristic red color to the diamond that contains them.

This defect is having a huge impact in physics and chemistry right now. Researchers have used it to detect the unique nuclear magnetic resonance signatures of single proteins and are probing it in a variety of cutting-edge quantum mechanical experiments.

Nitrogen vacancy centers have a tendency to trap electrons, but the electron can also be forced out of the defect by a laser pulse. For many researchers, the defects are interesting only when they’re holding on to electrons. So for them, the fact that the defects can release the electrons, too, is a problem.

But in our lab, we instead look at these nitrogen vacancy centers as a potential benefit. We think of each one as a nanoscopic “bit.” If the defect has an extra electron, the bit is a one. If it doesn’t have an extra electron, the bit is a zero. This electron yes/no, on/off, one/zero property opens the door for turning the NV center’s charge state into the basis for using diamonds as a long-term storage medium.

Starting from a blank ensemble of NV centers in a diamond (1), information can be written (2), erased (3), and rewritten (4).
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Turning the defect into a benefit

Previous experiments with this defect have demonstrated some properties that make diamond a good candidate for a memory platform.

First, researchers can selectively change the charge state of an individual defect so it either holds an electron or not. We’ve used a green laser pulse to assist in trapping an electron and a high-power red laser pulse to eject an electron from the defect. A low-power red laser pulse can help check if an electron is trapped or not. If left completely in the dark, the defects maintain their charged/discharged status virtually forever.

The NV centers can encode data on various levels.
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Our method is still diffraction limited, but is 3-D in the sense that we can charge and discharge the defects at any point inside of the diamond. We also gain a sort of fourth dimension. Since the defects are so small and our laser is diffraction limited, we are technically charging and discharging many defects in a single pulse. By varying the duration of the laser pulse in a single region we can control the number of charged NV centers and consequently encode multiple bits of information.
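A toy version of that multi-level idea is sketched below: a write pulse whose duration sets the fraction of charged NV centers in a spot, and a readout that quantizes the measured fraction back to a symbol. The specific mapping from pulse length to charge fraction is invented for illustration and is not our calibrated protocol.

```python
# Toy multi-level encoding: instead of one bit per laser spot (charged vs. not
# charged), several bits are encoded in how many NV centers end up charged,
# which is set by the write-pulse duration. The mapping below is invented.

LEVELS = 4  # 4 distinguishable charge fractions -> 2 bits per spot

def write_pulse_us(symbol, max_pulse_us=10.0):
    """Map a symbol (0..LEVELS-1) to a green-laser write-pulse duration."""
    if not 0 <= symbol < LEVELS:
        raise ValueError("symbol out of range")
    return max_pulse_us * symbol / (LEVELS - 1)

def read_symbol(charged_fraction):
    """Quantize a measured charged-NV fraction back to the nearest symbol."""
    return round(charged_fraction * (LEVELS - 1))

# Encode the 2-bit symbol 2 (binary 10), then decode a slightly noisy readout.
print(f"write pulse: {write_pulse_us(2):.1f} us")   # ~6.7 us
print(f"decoded symbol: {read_symbol(0.62)}")        # -> 2
```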

Though one could use natural diamonds for these applications, we use artificially lab-grown diamonds. That way we can efficiently control the concentration of nitrogen vacancy centers in the diamond.

All these improvements add up to roughly a 100-fold enhancement in bit density relative to current DVD technology. That means we can encode all the information from a DVD into a diamond that takes up about one percent of the space.

Past just charge, to spin as well

If we could get beyond the diffraction limit of light, we could improve storage capacities even further. We have one novel proposal on this front.

A human cell, imaged on the right with a super-resolution microscope.
Dr. Muthugapatti Kandasamy, CC BY-NC-ND

Nitrogen vacancy centers have also been used in the execution of what is called super-resolution microscopy to image things that are much smaller than the wavelength of light. However, since the super-resolution technique works on the same principles of charging and discharging the defect, it would cause unintentional alterations to the pattern one wants to encode. Therefore, we can’t use it as is for memory storage applications, and we’d need to back up the already written data somehow during a read or write step.

Here we propose the idea of what we call charge-to-spin conversion: we temporarily encode the charge state of the defect in the spin state of the defect’s host nitrogen nucleus. Spin is a fundamental property of any elementary particle; it’s similar to its charge, and can be imagined as a very tiny magnet permanently attached to the particle.

While the charges are being adjusted to read or write the information as desired, the previously written information is well protected in the nitrogen spin state. Once the charge manipulation is complete, the information can be converted back from the nitrogen spin to the charge state through another mechanism, which we call spin-to-charge conversion.

With these advanced protocols, the storage capacity of a diamond would surpass what existing technologies can achieve. This is just a beginning, but these initial results provide a potential way of storing huge amounts of data in a brand-new way. We look forward to transforming this beautiful quirk of physics into a vastly useful technology.

Siddharth Dhomkar, Postdoctoral Associate in Physics, City College of New York and Jacob Henshaw, Teaching Assistant in Physics, City College of New York

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Relax, the expansion of the universe is still accelerating

By Tamara Davis, The University of Queensland.

There’s been a whirlwind of commentary of late speculating that the acceleration of the expanding universe might not be real after all.

It follows the publication this month of a new look at supernovae in our universe, which the researchers say give only a “marginal detection” of the acceleration of the universe.

This seems to be a big deal, because the 2011 Nobel Prize was awarded to the leaders of two teams that used supernovae to discover that the expansion of the universe is speeding up.

But never have I seen such a storm in a teacup. The new analysis, published in Scientific Reports, barely changes the original result, but puts a different (and in my opinion misleading) spin on it.

So why does this new paper claim that the detection of acceleration is “marginal”?

Well, it is marginal if you only use a single data set. After all, most big discoveries are initially marginal. If they were more obvious, they would have been discovered sooner.

The evidence, so far

The supernova data alone could, at only a slight stretch, be consistent with a universe that neither accelerates nor decelerates. This has been known since the original discovery, and is not under dispute.

But if you also add one more piece of information – for example, that matter exists – then there’s nothing marginal about it. New physics is clearly required.

In fact, if the universe didn’t accelerate or decelerate at all, which is an old proposal revisited in this new paper, new physics would still be required.

These days the important point is that if you take all of the supernova data and throw it in the bin, we still have ample evidence that the universe’s expansion accelerates.

For example, in Australia we did a project called WiggleZ, which over five years made a survey of the positions of almost a quarter of a million galaxies.

The pattern of galaxies isn’t actually random, so we used this pattern to effectively lay grid paper over the universe and measure how its size changes with time.

Using this data alone shows the expanding universe is accelerating, and it is independent of any supernova information. The Nobel Prize was awarded only after this and many other observational techniques confirmed the supernova findings.

Something missing in the universe

Another example is the Cosmic Microwave Background (CMB), which is the leftover afterglow from the big bang and is one of the most precise observational measurements of the universe ever made. It shows that space is very close to flat.

Meanwhile observations of galaxies show that there simply isn’t enough matter or dark matter in the universe to make space flat. About 70% of the universe is missing.

So when observations of supernovae found that 70% of the universe is made up of dark energy, that solved the discrepancy. The supernovae were actually measured before the CMB, so essentially predicted that the CMB would measure a flat universe, a prediction that was confirmed beautifully.

So the evidence for some interesting new physics is now overwhelming.

I could go on, but everything we know so far supports the model in which the universe accelerates. For more detail see this review I wrote about the evidence for dark energy.

What is this ‘dark energy’?

One of the criticisms the new paper levels at standard cosmology is that the conclusion that the universe is accelerating is model dependent. That’s fair enough.

Usually cosmologists are careful to say that we are studying “dark energy”, which is the name we give to whatever is causing the apparent acceleration of the expansion of the universe. (Often we drop the “apparent” in that sentence, but it is there by implication.)

“Dark energy” is a blanket term we use to cover many possibilities, including that vacuum energy causes acceleration, or that we need a new theory of gravity, or even that we’ve misinterpreted general relativity and need a more sophisticated model.

The key feature that is not in dispute is that there is some significant new physics apparent in this data. There is something that goes beyond what we know about how the universe works – something that needs to be explained.

So let’s look at what the new paper actually did. To do so, let’s use an analogy.

Margins of measurement

Imagine you’re driving a car down a 60km/h limit road. You measure your speed to be 55km/h, but your odometer has some uncertainty in it. You take this into account, and are 99% sure that you are travelling between 51km/h and 59km/h.

Now your friend comes along and analyses your data slightly differently. She measures your speed to be 57km/h. Yes, it is slightly different from your measurement, but still consistent because your odometer is not that accurate.

But now your friend says: “Ha! You were only marginally below the speed limit. There’s every possibility that you were speeding!”

In other words, the answer didn’t change significantly, but the interpretation given in the paper takes the extreme of the allowed region and says “maybe the extreme is true”.

For those who like detail, the three standard deviation limit of the supernova data is big enough (just) to include a non-accelerating universe. But that is only if there is essentially no matter in the universe and you ignore all other measurements (see figure, below).
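To see why “just inside three standard deviations” is a weak statement, here is a toy calculation in the spirit of the odometer analogy above, assuming a Gaussian uncertainty sized so that the 99% interval spans roughly 51-59 km/h; the uncertainty value is chosen only to match that example.

```python
import math

# Toy version of the odometer analogy: a measurement of 55 km/h with a
# Gaussian uncertainty of ~1.55 km/h (so the 99% interval is roughly
# 51-59 km/h). How plausible is the "extreme" value of 60 km/h?

def two_sided_p_value(measured, hypothesised, sigma):
    """Probability of a deviation at least this large if the hypothesis were true."""
    z = abs(measured - hypothesised) / sigma
    return math.erfc(z / math.sqrt(2.0))

sigma = 1.55   # ~2.58 * sigma gives the ~4 km/h half-width of a 99% interval
print(f"z for 60 km/h: {abs(55 - 60) / sigma:.1f} sigma")
print(f"p-value: {two_sided_p_value(55, 60, sigma):.4f}")
# ~3.2 sigma, p ~ 0.001: "just inside three sigma" is a long way from likely.
```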

This is a reproduction of Figure 2 from the new research paper with annotations added. The contours encircle the values of the matter density and dark energy (in the form of a cosmological constant) that best fit the supernova data (in units of the critical density of the universe). The contours show one, two, and three standard deviations. The best fit is marked by a cross. The amount of matter measured by other observations lies approximately around the orange line. The contours lie almost entirely in the accelerating region, and the tiny patch that is not yet accelerating will nevertheless accelerate in the future.
Image modified by Samuel Hinton, Author provided

Improving the analysis

This new paper is trying to do something laudable. It is trying to improve the statistical analysis of the data.

As we get more and more data and the uncertainty on our measurement shrinks, it becomes more and more important to take into account every last detail.

In fact, with the Dark Energy Survey we have three people working full-time on testing and improving the statistical analysis we use to compare supernova data to theory.

We recognise the importance of improved statistical analysis because we’re soon going to have about 3,000 supernovae with which to measure the acceleration far more precisely than the original discoveries, which only had 52 supernovae between them. The sample that this new paper re-analyses contains 740 supernovae.

One final note about the conclusions in the paper. The authors suggest that a non-accelerating universe is worth considering. That’s fine. But you and I, the Earth, the Milky Way and all the other galaxies should gravitationally attract each other.

So a universe that just expands at a constant rate is actually just as strange as one that accelerates. You still have to explain why the expansion doesn’t slow down due to the gravity of everything it contains.

So even if the non-acceleration claim made in this paper is true, the explanation still requires new physics, and the search for the “dark energy” that explains it is just as important.

Healthy scepticism is vital in research. There is still much debate over what is causing the acceleration, and whether it is just an apparent acceleration that arises because our understanding of gravity is not yet complete.

Indeed that is what we as professional cosmologists spend our entire careers investigating. What this new paper and all the earlier papers agree on is that there is something that needs to be explained.

The supernova data show something genuinely weird is going on. The solution might be acceleration, or a new theory of gravity. Whatever it is, we will continue to search for it.

Tamara Davis, Professor, The University of Queensland

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Deepwater Horizon: scientists are still trying to unravel mysteries of the spill

By Tony Gutierrez, Heriot-Watt University.

The film Deepwater Horizon, starring Mark Wahlberg, captures the chaos and drama that ensued after the massive fireball that engulfed the oil rig of the same name in 2010, which killed 11 people and injured many others. The film dramatises the gruesome ordeal, which saw more than 100 crew members battle to survive the inferno of sweltering heat and mayhem.

But the scientific frenzy that followed in the ensuing months was almost as dramatic. And now, just over six years on, there is still much that science has yet to uncover about the spill in the hope of preparing us for the next big one.

Where, for instance, did all the oil actually end up? And what did all the chemicals dumped in the ocean to break up the oil do to the marine life that survived the spill? These questions and more remain unanswered.

The Deepwater Horizon disaster stands in the record books as the largest oil spill in US history. Following the blowout on April 20, 2010, 4.1m barrels (0.7m tonnes) of crude oil leaked into the Gulf of Mexico over a period of almost three months. Only the 1979 Ixtoc-I oil spill, also in the Gulf of Mexico, ranks in the same league.

Oil in the Gulf of Mexico two months after the spill. But much of the oil remained far below the surface.
NASA, CC BY

It was a particularly complex and challenging spill to deal with and study, for several reasons. The exploratory well below Deepwater Horizon was itself an extraordinary engineering feat, the deepest the oil and gas industry had ever drilled in the ocean. The spill occurred about 1.5km below the sea surface, again, the deepest in history. And, because it occurred in such deep water, a good chunk of the oil didn’t rise to the surface as in a usual spill – instead, an unprecedented “oil plume” formed below the surface and lingered there for months.

Rarely has a human-made disaster ever stopped the clock on the research programmes of so many scientists in a nation. Scientists from universities and government-funded agencies all over the US put their work on hold in order to turn their attention to Deepwater Horizon. More than 400 scientific peer-reviewed papers have now been published on the spill, and they’ve revealed a lot of important information.

Within weeks of the spill occurring scientists reported the formation of a massive plume of crude oil a kilometre below the surface that stretched for about 30km and was 300 metres high. It was difficult to track, but nonetheless was intensively studied as researchers realised they had a unique opportunity. Within this oil cloud, scientists also showed that certain types of oil-degrading bacteria had bloomed and that these microbes played a fundamental role in degrading the oil in the deep as well as on sea surface oil slicks of the Gulf.

Research also demonstrated that the oil caused lasting damage to Gulf coast marshes, and that it affected the spawning habitat of Bluefin tuna along the south-east coast of North America.

The oil slick reached these brown pelicans in Louisiana.
Bevil Knapp / EPA

After the spill, scientists noticed huge quantities of lightly-coloured mucus-like particles or blobs on the sea surface in and around the spill site. These “blobs” could be barely big enough to see, or large enough to fit in your hand. Nothing of this magnitude had been observed before, although there is evidence that similar particles had also formed during the Ixtoc-I spill.

It turned out this was caused by oil sticking to “marine snow” – small specks of dead plankton, bacteria, the mucus they produce, and so on, that clump together near the surface and then fall through the ocean just as real snow falls through the sky. As this “marine oil snow” sank through the water, it took with it a large proportion of the oil from the sea surface and eventually settled on the seabed.

Mysteries remain

Just over six years later, scientists are still trying to understand the full extent of Deepwater Horizon’s impacts on the seabed, beaches and marshes of the Gulf of Mexico. This is actually not a great length of time for science to fully understand a massive and complex spill like this, so it’s no wonder that some things still remain a mystery.

We know that a lot of the oil from the leaky well that reached the surface made it to the Gulf coast, for instance, and caused acute damage to coastal ecosystems. But we do not know where the deepwater oil plume ended up or what its impact was. Likewise we still don’t know the longer-term impact of the expansive surface oil slicks in the Gulf.

We also need to better understand the impacts of the chemicals that were used to disperse the oil after the spill. Around 7m litres of a dispersant called Corexit were sprayed into the sea by planes or ships. However, given that oil dispersant is essentially a strong household soap, these chemicals posed a problem for coral and other marine organisms in the Gulf, including the same oil-degrading bacteria that are so critical in the natural bio-degradation process after the spill. Research shows the dispersant used was probably counter-productive.

There is still much to be learnt about what Corexit and other dispersants do to marine life in the longer term. This is important as dispersants are a first line of response to combat oil spills at sea.

Research on the spill is likely to continue for the next few decades. With many oil and gas reservoirs coming to the end of their lives the industry is expanding into the Arctic and other challenging environments, and exploring ever-deeper ocean waters. Another spill like Deepwater Horizon cannot be discounted. What science has already uncovered, and what it will do in years to come, is crucial and should help to better prepare us to deal with the next big one.

Tony Gutierrez, Associate Professor of Microbiology, Heriot-Watt University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Astronomers Investigate Color Changes in Saturn’s Atmospheric Hexagon

Scientists are investigating potential causes for the change in color of the region inside the north-polar hexagon on Saturn. The color change is thought to be an effect of Saturn’s seasons. In particular, the change from a bluish color to a more golden hue may be due to the increased production of photochemical hazes in the atmosphere as the north pole approaches summer solstice in May 2017.

These two natural color images from NASA’s Cassini spacecraft show the changing appearance of Saturn’s north polar region between 2012 and 2016. Credit: NASA/JPL-Caltech/Space Science Institute/Hampton University.

Researchers think the hexagon, which is a six-sided jetstream, might act as a barrier that prevents haze particles produced outside it from entering. During the polar winter night between November 1995 and August 2009, Saturn’s north polar atmosphere became clear of aerosols produced by photochemical reactions — reactions involving sunlight and the atmosphere. Since the planet experienced equinox in August 2009, the polar atmosphere has been basking in continuous sunshine, and aerosols are being produced inside of the hexagon, around the north pole, making the polar atmosphere appear hazy today.

Other effects, including changes in atmospheric circulation, could also be playing a role. Scientists think seasonally shifting patterns of solar heating probably influence the winds in the polar regions.

Both images were taken by the Cassini wide-angle camera.

Source: News release on NASA.gov used under public domain rights and in compliance with the NASA Media Guidelines.

Now, Check Out:

Exoplanet Orbiting Nearest Star Could Be Habitable

A rocky extrasolar planet with a mass similar to Earth’s was recently detected around Proxima Centauri, the nearest star to our sun. This planet, called Proxima b, is in an orbit that would allow it to have liquid water on its surface, thus raising the question of its habitability. In a study to be published in The Astrophysical Journal Letters, an international team led by researchers at the Marseille Astrophysics Laboratory (CNRS/Aix-Marseille Université) has determined the planet’s dimensions and properties of its surface, which actually favor its habitability.

This artist’s impression shows a view of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri, the closest star to the Solar System. The double star Alpha Centauri AB also appears in the image to the upper-right of Proxima itself. Proxima b is a little more massive than the Earth and orbits in the habitable zone around Proxima Centauri, where the temperature is suitable for liquid water to exist on its surface.

The team says Proxima b could be an “ocean planet,” with an ocean covering its entire surface, the water perhaps similar to that of subsurface oceans detected inside icy moons around Jupiter and Saturn. The researchers also show that Proxima b’s composition might resemble Mercury’s, with a metal core making up two-thirds of the mass of the planet. These results provide the basis for future studies to determine the habitability of Proxima b.

Proxima Centauri, the star nearest the sun, has a planetary system consisting of at least one planet. The new study analyzes and supplements earlier observations. These new measurements show that this planet, named Proxima Centauri b or simply Proxima b, has a mass close to that of Earth (1.3 times Earth’s mass) and orbits its star at a distance of 0.05 astronomical units (one tenth of the sun-Mercury distance). Contrary to what one might think, such a small distance does not imply a high temperature on the surface of Proxima b because the host star, Proxima Centauri, is a red dwarf with a mass and radius that are only one-tenth that of the Sun, and a brightness a thousand times smaller than the sun’s. Hence Proxima b is in the habitable zone of its star and may harbor liquid water at its surface.
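A back-of-envelope calculation shows why the small orbit need not mean a hot planet. Using only the round figures above – a star roughly a thousand times fainter than the sun and an orbital distance of 0.05 astronomical units – the standard equilibrium-temperature scaling (here with zero albedo and no greenhouse effect, both simplifying assumptions) gives a value below Earth’s equivalent zero-albedo figure of about 278 kelvin.

```python
# Equilibrium-temperature sketch using only round numbers from the text:
# luminosity ~ 1/1000 of the sun's and an orbital distance of 0.05 AU.
# Assumes zero albedo and no greenhouse warming, so it is only indicative.

EARTH_EQ_TEMP_K = 278.5   # Earth's zero-albedo equilibrium temperature

def equilibrium_temp_k(luminosity_solar, distance_au):
    """Temperature scales as L^(1/4) and falls off as 1/sqrt(distance)."""
    return EARTH_EQ_TEMP_K * luminosity_solar ** 0.25 / distance_au ** 0.5

print(f"{equilibrium_temp_k(0.001, 0.05):.0f} K")  # ~220 K, cooler than Earth's 278 K equivalent
```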

However, very little is known about Proxima b, particularly its radius. It is therefore impossible to know what the planet looks like, or what it is made of. The radius measurement of an exoplanet is normally done during transit, when it eclipses its star. But Proxima b is not known to transit.

There is another way to estimate the radius of a planet. If we know its mass, we can simulate the behavior of the constituent materials. This is the method used by a French-American team of researchers from the Marseille Astrophysics Laboratory (CNRS/Aix-Marseille University) and the Department of Astronomy at Cornell University. With the help of a model of internal structure, they explored the different compositions that could be associated with Proxima b and deduced the corresponding values for the radius of the planet. They restricted their study to the case of potentially habitable planets, simulating dense and solid planets, formed with the metallic core and rocky mantle found in terrestrial planets in our solar system. They also allowed the incorporation of a large mass of water in their composition.

These assumptions allow a wide variety of compositions for Proxima b. The radius of the planet may vary between 0.94 and 1.40 times the radius of the Earth (3,959 miles, or 6,371 kilometers).

At the lower end, the study shows that Proxima b has a minimum radius of 3,722 miles (5,990 kilometers), and the only way to get this value is to have a very dense planet, consisting of a metal core with a mass equal to 65 percent of the planet, the rest being rocky mantle (formed of silicate). The boundary between these two materials would then lie at a depth of about 932 miles (1,500 kilometers). With such a composition, Proxima b is very close to the planet Mercury, which also has a massive metal core. This first case does not exclude the presence of water on the surface of the planet, as on Earth, where water does not exceed 0.05 percent of the mass of the planet.

In contrast, Proxima b could also have a radius of 5,543 miles (8,920 kilometers), provided that it is composed of 50 percent rock surrounded by 50 percent water. In this case, Proxima b would be covered by a single liquid ocean 124 miles (200 kilometers) deep. Below that, the pressure would be so strong that liquid water would turn to high-pressure ice before reaching the boundary with the mantle at a depth of 1,926 miles (3,100 kilometers). In these extreme cases, a thin gas atmosphere could cover the planet, as on Earth, making Proxima b potentially habitable.
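As a sanity check on those two extremes, the mean densities implied by the numbers quoted above (a mass of 1.3 Earths with radii of 5,990 and 8,920 kilometers) can be worked out directly; the Earth-mass constant is the standard figure, and the result is only a back-of-envelope comparison.

```python
import math

# Back-of-envelope densities for the two extreme compositions, using only
# numbers from the text: a 1.3 Earth-mass planet with a 5,990 km radius
# (Mercury-like) versus an 8,920 km radius (half rock, half water).

EARTH_MASS_KG = 5.97e24
mass_kg = 1.3 * EARTH_MASS_KG

def mean_density(mass_kg, radius_km):
    """Mean density in g/cm^3 for a sphere of the given mass and radius."""
    volume_m3 = (4.0 / 3.0) * math.pi * (radius_km * 1e3) ** 3
    return (mass_kg / volume_m3) / 1000.0

print(f"Dense, Mercury-like case: {mean_density(mass_kg, 5990):.1f} g/cm^3")  # ~8.6
print(f"Water-rich case:          {mean_density(mass_kg, 8920):.1f} g/cm^3")  # ~2.6
# For comparison, Earth's mean density is about 5.5 g/cm^3.
```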

Such findings provide important additional information to the different composition scenarios that have been proposed for Proxima b. Some involve a completely dry planet, while others permit the presence of a significant amount of water in its composition. The research team’s work included providing an estimate of the planet’s radius for each of these scenarios. These estimates also constrain the amount of water available on Proxima b, where water is prone to evaporation by ultraviolet and X-rays from the host star, which are much more violent than those from the sun.

Future observations of Proxima Centauri will refine this study. In particular, the measurement of stellar abundances of heavy elements (magnesium, iron, silicon) will decrease the number of possible compositions for Proxima b, allowing a more accurate determination of its radius.

Source: News release on NASA.gov used under public domain rights and in compliance with the NASA Media Guidelines

Now, Check Out:

Astounding Discovery May Invalidate Solar System Formation Theories

The discovery of two massive companions around one star in a close binary system—one so-called giant planet and one brown dwarf, or “failed star”—suggests that everything we know about the formation of solar systems might be wrong, say University of Florida astronomy professor Jian Ge and postdoctoral researcher Bo Ma.

The first, called MARVELS-7a, is 12 times the mass of Jupiter, while the second, MARVELS-7b, has 57 times the mass of Jupiter.

Astronomers believe that planets in our solar system formed from a collapsed disk-like gaseous cloud, with our largest planet, Jupiter, buffered from smaller planets by the asteroid belt. In the new binary system, HD 87646, the two giant companions are close to the minimum mass for burning deuterium and hydrogen, meaning that they have accumulated far more dust and gas than what a typical collapsed disk-like gaseous cloud can provide.

They were likely formed through another mechanism. The stability of the system despite such massive bodies in close proximity raises new questions about how protoplanetary disks form.

HD 87646’s primary star is 12 percent more massive than our sun, yet its secondary – a star about 10 percent less massive than our sun – lies only 22 astronomical units away, roughly the distance between the sun and Uranus in our solar system.

An astronomical unit is the mean distance between the center of the Earth and our sun, but in cosmic terms, is a relatively short distance. Within such a short distance, two giant companions are orbiting the primary star at about 0.1 and 1.5 astronomical units away.

For such large companion objects to be stable so close together defies our current popular theories on how solar systems form.

The planet-hunting Doppler instrument WM Keck Exoplanet Tracker, or KeckET, is unusual in that it can simultaneously observe dozens of celestial bodies. Ge says this discovery would not have been possible without a measurement capability such as KeckET to search for a large number of stars to discover a very rare system like this one.

The survey of HD 87646 occurred in 2006 during the pilot survey of the Multi-object APO Radial Velocity Exoplanet Large-area Survey (MARVELS) of the SDSS-III program, and Ge led the MARVELS survey from 2008 to 2012.

It has taken eight years of follow-up data collection through collaboration with over 30 astronomers at seven other telescopes around the world and careful data analysis to confirm what Ge calls a “very bizarre” finding.

The team will continue to analyze data from the MARVELS survey; their current findings appear online in the Astronomical Journal.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by

Now, Check Out:

How many genes does it take to make a person?

By Sean Nee, Pennsylvania State University.

We humans like to think of ourselves as on the top of the heap compared to all the other living things on our planet. Life has evolved over three billion years from simple one-celled creatures through to multicellular plants and animals coming in all shapes and sizes and abilities. In addition to growing ecological complexity, over the history of life we’ve also seen the evolution of intelligence, complex societies and technological invention, until we arrive today at people flying around the world at 35,000 feet discussing the in-flight movie.

It’s natural to think of the history of life as progressing from the simple to the complex, and to expect this to be reflected in increasing gene numbers. We fancy ourselves leading the way with our superior intellect and global domination; the expectation was that since we’re the most complex creature, we’d have the most elaborate set of genes.

This presumption seems logical, but the more researchers figure out about various genomes, the more flawed it seems. About a half-century ago the estimated number of human genes was in the millions. Today we’re down to about 20,000. We now know, for example, that bananas, with their 30,000 genes, have 50 percent more genes than we do.

As researchers devise new ways to count not just the genes an organism has, but also the ones it has that are superfluous, there’s a clear convergence between the number of genes in what we’ve always thought of as the simplest lifeforms – viruses – and the most complex – us. It’s time to rethink the question of how the complexity of an organism is reflected in its genome.

The converging estimated number of genes in a person versus a giant virus. Human line shows average estimate with dashed line representing estimated number of genes needed. Numbers shown for viruses are for MS2 (1976), HIV (1985), giant viruses from 2004 and average T4 number in the 1990s.
Sean Nee, CC BY

Counting up the genes

We can think of all our genes together as the recipes in a cookbook for us. They’re written in the letters of the bases of DNA – abbreviated as ACGT. The genes provide instructions on how and when to assemble the proteins that you’re made of and that carry out all the functions of life within your body. A typical gene requires about 1000 letters. Together with the environment and experience, genes are responsible for what and who we are – so it’s interesting to know how many genes add up to a whole organism.

When we’re talking about numbers of genes, we can display the actual count for viruses, but only the estimates for human beings for an important reason. One challenge counting genes in eukaryotes – which include us, bananas and yeast like Candida – is that our genes are not lined up like ducks in a row.

Our genetic recipes are arranged as if the cookbook’s pages have all been ripped out and mixed up with three billion other letters, about 50 percent of which actually describe inactivated, dead viruses. So in eukaryotes it’s hard to count up the genes that have vital functions and separate them from what’s extraneous.
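The article’s own round numbers make the point: at roughly 1,000 letters per gene, 20,000 genes account for well under one percent of a three-billion-letter genome, while dead viral sequence accounts for about half of it.

```python
# Rough arithmetic from the figures above: ~20,000 genes at ~1,000 DNA letters
# each, inside a genome of ~3 billion letters, about half of which is dead
# viral sequence. These are the article's round figures, not precise values.

genes = 20_000
letters_per_gene = 1_000
genome_letters = 3_000_000_000

coding_letters = genes * letters_per_gene     # ~20 million letters
dead_virus_letters = 0.5 * genome_letters     # ~1.5 billion letters

print(f"Gene 'recipes':  {coding_letters / genome_letters:.1%} of the genome")   # ~0.7%
print(f"Dead viral DNA: {dead_virus_letters / genome_letters:.0%} of the genome")  # 50%
```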

Megavirus has over a thousand genes, Pandoravirus has even more. Chantal Abergel, CC BY-SA

In contrast, counting genes in viruses – and bacteria, which can have 10,000 genes – is relatively easy. This is because the raw material of genes – nucleic acids – is relatively expensive for tiny creatures, so there is strong selection to delete unnecessary sequences. In fact, the real challenge for viruses is discovering them in the first place. It is startling that all major virus discoveries, including HIV, have not been made by sequencing at all, but by old methods such as magnifying them visually and looking at their morphology. Continuing advances in molecular technology have taught us the remarkable diversity of the virosphere, but can only help us count the genes of something we already know exists.

Flourishing with even fewer

The number of genes we actually need for a healthy life is probably even lower than the current estimate of 20,000 in our entire genome. One author of a recent study has reasonably extrapolated that the count for essential genes for human beings may be much lower.

These researchers examined thousands of healthy adults, looking for naturally occurring “knockouts,” in which the functions of particular genes are absent. All our genes come in two copies – one from each parent. Usually, one active copy can compensate if the other is inactive, and it is difficult to find people with both copies inactivated because inactivated genes are naturally rare.

Knockout genes are fairly easy to study with lab rats, using modern genetic engineering techniques to inactivate both copies of particular genes of our choice, or even remove them altogether, and see what happens. But human studies require populations of people living in communities with 21st century medical technologies and known pedigrees suited to the genetic and statistical analyses required. Icelanders are one useful population, and the British-Pakistani people of this study are another.

This research found over 700 genes which can be knocked out with no obvious health consequences. For instance, one surprising discovery was that the PRDM9 gene – which plays a crucial role in the fertility of mice – can also be knocked out in people with no ill effects.

Extrapolating the analysis beyond the human knockouts study leads to an estimate that only 3,000 human genes are actually needed to build a healthy human. This is in the same ballpark as the number of genes in “giant viruses.” Pandoravirus, recovered from 30,000-year-old Siberian ice in 2014, is the largest virus known to date and has 2,500 genes.

So what genes do we need? We don’t even know what a quarter of human genes actually do, and this is advanced compared to our knowledge of other species.

Complexity arises from the very simple

But whether the final number of human genes is 20,000 or 3,000 or something else, the point is that when it comes to understanding complexity, size really does not matter. We’ve known this for a long time in at least two contexts, and are just beginning to understand the third.

Alan Turing, the mathematician and WWII code breaker, also established the theory of multicellular development. He studied simple mathematical models, now called “reaction-diffusion” processes, in which a small number of chemicals – just two in Turing’s model – diffuse and react with each other. With simple rules governing their reactions, these models reliably generate very complex yet coherent structures that are easy to see. So the biological structures of plants and animals do not require complex programming.
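Here is a minimal sketch of that idea in Python. It uses the Gray–Scott reaction-diffusion model, a standard modern stand-in rather than Turing’s original equations, and every numerical parameter below (grid size, diffusion, feed and kill rates, step count) is an illustrative assumption. The point is only that two diffusing, reacting chemicals following simple local rules produce coherent spots and stripes that were never explicitly programmed.

```python
# A minimal Gray-Scott reaction-diffusion sketch (illustrative parameters only):
# two chemicals, U and V, diffuse across a grid and react locally, and a
# coherent pattern emerges from the simple rules.
import numpy as np

N = 128                      # grid size
Du, Dv = 0.16, 0.08          # diffusion rates (assumed values)
F, k = 0.035, 0.065          # feed and kill rates (assumed values)

U = np.ones((N, N))
V = np.zeros((N, N))
# Seed a small square of V in the middle to break the symmetry
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

def laplacian(Z):
    # Discrete Laplacian with wrap-around (periodic) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for step in range(10000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# After enough steps, V holds a spotted/striped pattern that no line of this
# code describes directly -- only the local reaction rules were specified.
print("V ranges from", V.min().round(3), "to", V.max().round(3))
```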

The simple building blocks of neurons together generate immense complexity.
UCI Research/Ardy Rahman, CC BY-NC

Similarly, it is obvious that the 100 trillion connections in the human brain, which are what really make us who we are, cannot possibly be genetically programmed individually. The recent breakthroughs in artificial intelligence are based on neural networks; these are computer models of the brain in which simple elements – corresponding to neurons – establish their own connections through interacting with the world. The results have been spectacular in applied areas such as handwriting recognition and medical diagnosis, and Google has invited the public to play games with and observe the dreams of its AIs.
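As a toy illustration – not the brain, and not any of Google’s systems – here is a tiny neural network in Python whose handful of simple units learn the XOR function purely by adjusting their connection weights from examples. The architecture (two inputs, four hidden units, one output), the learning rate and the iteration count are arbitrary choices for the sketch, and the exact numbers printed depend on the random starting weights.

```python
# A toy neural network: simple units adjust their own connections from examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)          # hidden-unit activity
    out = sigmoid(h @ W2 + b2)        # the network's current answers
    # Nudge every connection in the direction that shrinks the error
    d_out = (y - out) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out;  b2 += 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 += 0.5 * X.T @ d_h;    b1 += 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # close to [0, 1, 1, 0]: the behavior was learned, not programmed
```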

Microbes go beyond basic

So it’s clear that a single cell does not need to be very complicated for large numbers of them to produce very complex outcomes. Hence it shouldn’t come as a great surprise that human gene numbers may be in the same range as those of tiny microbes like viruses and bacteria.

What is coming as a surprise is the converse – that tiny microbes can have rich, complex lives. A growing field of study – dubbed “sociomicrobiology” – examines the extraordinarily complex social lives of microbes, which rival our own. My own contributions to this area concern giving viruses their rightful place in this invisible soap opera.

We have become aware in the last decade that microbes spend over 90 percent of their lives as biofilms, which may best be thought of as biological tissue. Indeed, many biofilms have systems of electrical communication between cells, like brain tissue, making them a model for studying brain disorders such as migraine and epilepsy.

Biofilms can also be thought of as “cities of microbes,” and the integration of sociomicrobiology and medical research is making rapid progress in many areas, such as the treatment of cystic fibrosis. The social lives of microbes in these cities – complete with cooperation, conflict, truth, lies and even suicide – are fast becoming a major study area in 21st-century evolutionary biology.

Just as human biology turns out to be far less exceptional than we had thought, the world of microbes gets far more interesting. And the number of genes doesn’t seem to have anything to do with it.

Sean Nee, Research Professor of Ecosystem Science and Management, Pennsylvania State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Alcoholism research: A virus could manipulate neurons to reduce the desire to drink

By Yifeng Cheng, Texas A&M University, and Jun Wang, Texas A&M University.

About 17 million adults and more than 850,000 adolescents in the United States had problems with alcohol in 2012. Long-term alcohol misuse can harm your liver, stomach, cardiovascular system and bones, as well as your brain.

Chronic heavy alcohol drinking can lead to a problem that we scientists call alcohol use disorder, which most people call alcohol abuse or alcoholism. Whatever name you use, it is a severe issue that affects millions of people and their families and causes economic burdens to our society.

Quitting alcohol, like quitting any drug, is hard to do. One reason may be that heavy drinking can actually change the brain.

Our research team at Texas A&M University Health Science Center has found that alcohol changes the way information is processed through specific types of neurons in the brain, encouraging the brain to crave more alcohol. Over time, the more you drink, the more striking the change.

In recent research we identified a way to mitigate these changes and reduce the desire to drink using a genetically engineered virus.

Alcohol changes your brain

Alcohol use disorders include alcohol abuse and alcohol dependence, and can be thought of as forms of addiction. Addiction is a chronic brain disease that causes abnormalities in the connections between neurons.

Heavy alcohol use can cause changes in a region of the brain, called the striatum. This part of the brain processes all sensory information (what we see and what we hear, for instance), and sends out orders to control motivational or motor behavior.

The striatum is a target for drugs.
Life Science Databases via Wikimedia Commons, CC BY-SA

The striatum, which is located in the forebrain, is a major target for addictive drugs and alcohol. Drug and alcohol intake can profoundly increase the level of dopamine, a neurotransmitter associated with pleasure and motivation, in the striatum.

The neurons in the striatum have higher densities of dopamine receptors as compared to neurons in other parts of the brain. As a result, striatal neurons are more susceptible to changes in dopamine levels.

There are two main types of neurons in the striatum: D1 and D2. While both receive sensory information from other parts of the brain, they have nearly opposite functions.

D1-neurons control “go” actions, which encourage behavior. D2-neurons, on the other hand, control “no-go” actions, which inhibit behavior. Think of D1-neurons like a green traffic light and D2-neurons like a red traffic light.

Dopamine affects these neurons in different ways. It promotes D1-neuron activity, turning the green light on, and suppresses D2-neuron function, turning the red light off. As a result, dopamine promotes “go” signals and inhibits “no-go” signals in reward-related behavior.

Alcohol, especially excessive amounts, can hijack this reward system because it increases dopamine levels in the striatum. As a result, your green traffic light is constantly switched on, and the red traffic light doesn’t light up to tell you to stop. This is why heavy alcohol use pushes you to drink to excess more and more.
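To make the traffic-light metaphor concrete, here is a deliberately cartoonish Python sketch. The function name, the linear formulas and all the numbers are our own illustrative assumptions, not the researchers’ actual model; it only shows how pushing dopamine up tilts the D1/D2 balance toward “go.”

```python
# A cartoon of the traffic-light picture above (illustrative numbers only, not a
# real neuroscience model): dopamine pushes D1 ("go") activity up and D2
# ("no-go") activity down, so higher dopamine tilts the balance toward "go".
def go_no_go_balance(dopamine, baseline=1.0, gain=0.8):
    d1 = baseline + gain * dopamine             # "green light": promoted by dopamine
    d2 = max(baseline - gain * dopamine, 0.0)   # "red light": suppressed by dopamine
    return d1 - d2                              # net drive toward the rewarded behavior

for level, label in [(0.2, "normal dopamine"), (1.5, "after heavy drinking")]:
    print(f"{label}: net 'go' drive = {go_no_go_balance(level):.2f}")
```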

These brain changes last a very long time. But can they be mitigated? That’s what we want to find out.

What’s in that bottle?
Lab rat via www.shutterstock.com.

Can we mitigate these changes?

We started by presenting mice with two bottles, one containing water and the other containing 20 percent alcohol by volume mixed into drinking water. The alcohol bottle was available every other day, and the mice could freely decide which to drink from. Gradually, most of the animals developed a drinking habit.

We then used a process called viral mediated gene transfer to manipulate the “go” or “no-go” neurons in mice that had developed a drinking habit.

Mice were infected with a genetically engineered virus that delivers a gene into the “go” or “no-go” neurons. That gene then drives the neurons to express a specific protein.

Once the protein was expressed, we injected the mice with a chemical that recognizes and binds to it. This binding can inhibit or promote activity in these neurons, letting us turn the green light off (by inhibiting “go” neurons) or turn the red light back on (by exciting “no-go” neurons).

Then we measured how much alcohol the mice were consuming after being “infected,” and compared it with what they were drinking before.

We found that either inhibiting the “go” neurons or turning on the “no-go” neurons successfully reduced alcohol drinking levels and preference for alcohol in the “alcoholic” mice.
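As a rough sketch of the kind of before/after comparison described here – with made-up intake numbers, not the study’s data – alcohol preference in a two-bottle choice test is commonly computed as alcohol intake divided by total fluid intake:

```python
# Sketch of the before/after comparison described above, with made-up numbers.
# "Preference" here is alcohol intake divided by total fluid intake, a common
# measure in two-bottle choice tests; the values are purely illustrative.
before = {"alcohol_ml": 6.0, "water_ml": 4.0}   # before manipulating the neurons
after  = {"alcohol_ml": 2.5, "water_ml": 7.5}   # after manipulating the neurons

def preference(intake):
    total = intake["alcohol_ml"] + intake["water_ml"]
    return intake["alcohol_ml"] / total

print(f"preference before: {preference(before):.2f}")   # 0.60
print(f"preference after:  {preference(after):.2f}")    # 0.25
```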

In another experiment in this study, we found that directly delivering a drug that excites the “no-go” neurons into the striatum can also reduce alcohol consumption. Similarly, in a previous experiment we found that directly delivering a drug that inhibits the “go” neurons has the same effect. Both results may help guide the development of clinical treatments for alcoholism.

What does this mean for treatment?

Most people with an alcohol use disorder can benefit from treatment, which can include a combination of medication, counseling and support groups. Although medications that help people stop drinking, such as naltrexone, can be effective, none of them accurately targets the specific neurons or circuits responsible for alcohol consumption.

Employing viruses to deliver specific genes into neurons has been tested in humans for disorders such as Parkinson’s disease. But while we’ve demonstrated that this process can reduce the desire to drink in mice, we’re not yet at the point of using the same method in humans.

Our finding provides insight for clinical treatment in humans in the future, but using a virus to treat alcoholism in humans is probably still a long way off.

Yifeng Cheng, Ph.D. Candidate, Texas A&M University Health Science Center, Texas A&M University, and Jun Wang, Assistant Professor of Neuroscience and Experimental Therapeutics, Texas A&M Health Science Center, Texas A&M University

This article was originally published on The Conversation. Read the original article.

Now, Check Out: