How forensic science can unlock the mysteries of human evolution

By Patrick Randolph-Quinney, University of Central Lancashire; Anthony Sinclair, University of Liverpool; Emma Nelson, University of Liverpool, and Jason Hall, University of Liverpool.

People are fascinated by the use of forensic science to solve crimes. Any science can be forensic when used in the criminal and civil justice system – biology, genetics and chemistry have been applied in this way. Now something rather special is happening: the scientific skill sets developed while investigating crime scenes, homicides and mass fatalities are being put to use outside the courtroom. Forensic anthropology is one field where this is happening.

Loosely defined, forensic anthropology is the analysis of human remains for the purpose of establishing identity in both living and dead individuals. In the case of the dead this often focuses on analyses of the skeleton. But any and all parts of the physical body can be analysed. The forensic anthropologist is an expert at assessing biological sex, age at death, living height and ancestral affinity from the skeleton.

Our newest research has extended forensic science’s reach from the present into prehistory. In the study, published in the Journal of Archaeological Science, we applied common forensic anthropology techniques to investigate the biological sex of artists who lived long before the invention of the written word.

We specifically focused on those who produced a type of art known as a hand stencil. We applied forensic biometrics to produce statistically robust results which, we hope, will offset some of the problems archaeological researchers have encountered in dealing with this ancient art form.

Sexing rock art

Ancient hand stencils were made by blowing, spitting or stippling pigment onto a hand while it was held against a rock surface. This left a negative impression on the rock in the shape of the hand.

Experimental production of a hand stencil. Jason Hall, University of Liverpool

These stencils are frequently found alongside pictorial cave art created during a period known as the Upper Palaeolithic, which started roughly 40,000 years ago.

Archaeologists have long been interested in such art. The presence of a human hand creates a direct, physical connection with an artist who lived millennia ago. Archaeologists have often focused on who made the art – not the individual’s identity, but whether the artist was male or female.

Until now, researchers have focused on hand size and finger length to assess an artist’s sex. The size and shape of the hand are influenced by biological sex, as sex hormones determine the relative lengths of the fingers during development – the basis of so-called 2D:4D ratios.

But ratio-based studies applied to rock art have proved difficult to replicate and have often produced conflicting results. The problem with focusing on hand size and finger length is that two differently shaped hands can have identical linear dimensions and ratios.

To overcome this we adopted an approach based on forensic biometric principles. This promises to be both more statistically robust and more open to replication between researchers in different parts of the world.

The study used a branch of statistics called Geometric Morphometric Methods. The underpinnings of this discipline date back to the early 20th century. More recently computing and digital technology have allowed scientists to capture objects in 2D and 3D before extracting shape and size differences within a common spatial framework.

In our study we used experimentally produced stencils from 132 volunteers. The stencils were digitised and 19 anatomical landmarks were applied to each image. These correspond to features on the fingers and palm that are the same between individuals, as depicted in figure 2. This produced a matrix of x-y coordinates for each hand, representing its shape much like a map reference system.

Figure 2. The 19 geometric morphometric landmarks applied to an experimentally produced hand stencil. Emma Nelson, University of Liverpool

We used a technique called Procrustes superimposition to translate, rotate and scale each hand outline into the same spatial framework. This made the differences between individuals and between the sexes objectively apparent.
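To make this step concrete, here is a minimal sketch in Python – not the study’s own code – showing a standard Procrustes superimposition of two made-up 19-landmark configurations using SciPy:

```python
# Minimal illustration of Procrustes superimposition (not the study's code).
# Each hand is represented by 19 (x, y) landmarks, as described in the article.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

# Two hypothetical landmark configurations; in the real study these would
# come from digitised hand stencils.
hand_a = rng.random((19, 2))
hand_b = hand_a * 1.3 + 0.5 + rng.normal(scale=0.02, size=(19, 2))  # scaled, shifted, noisy copy

# Procrustes translates, scales and rotates hand_b onto hand_a so that only
# genuine shape differences remain.
std_a, std_b, disparity = procrustes(hand_a, hand_b)
print(f"Shape disparity after superimposition: {disparity:.4f}")
```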

Procrustes also allowed us to treat shape and size as discrete entities, analysing them either independently or together. We then applied discriminant statistics to investigate which component of hand form could best be used to assess whether an outline came from a male or a female. Using a size proxy alone we were able to predict the sex of the hand in 83% of cases, but accuracy rose to over 90% when size and shape were combined.
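Purely as an illustration of the general approach, rather than the authors’ actual pipeline, a discriminant step of this kind could be set up along the following lines, with simulated landmark data and an invented size proxy standing in for the real stencils:

```python
# Illustrative discriminant analysis on superimposed landmarks (simulated data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_hands, n_landmarks = 132, 19

# Pretend these are Procrustes-aligned coordinates, flattened to one row per
# hand (19 landmarks x 2 coordinates = 38 features), plus a centroid-size proxy.
shape = rng.normal(size=(n_hands, n_landmarks * 2))
size = rng.normal(size=(n_hands, 1))
sex = rng.integers(0, 2, size=n_hands)   # 0 = female, 1 = male (made up)

for label, features in [("size only", size),
                        ("size + shape", np.hstack([size, shape]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), features, sex, cv=5).mean()
    print(f"{label}: cross-validated accuracy = {acc:.2f}")
# Random data will hover around chance; real aligned stencil data is what
# produced the 83% and 90%+ figures reported in the study.
```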

An analysis called Partial Least Squares was then used to treat the hand as discrete anatomical units – that is, the palm and fingers independently. Rather surprisingly, the shape of the palm was a much better indicator of the sex of the hand than the fingers. This runs counter to received wisdom.
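A hedged sketch of the palm-versus-fingers comparison follows; the landmark split and the data are invented, and a simple PLS-based classifier stands in for the study’s more sophisticated Partial Least Squares analysis:

```python
# Illustrative comparison of palm-only vs finger-only landmark blocks using a
# simple PLS-based classifier (PLS-DA). All data and the 8/11 split are assumed.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_hands = 132
palm = rng.normal(size=(n_hands, 8 * 2))      # hypothetical: 8 palm landmarks
fingers = rng.normal(size=(n_hands, 11 * 2))  # hypothetical: 11 finger landmarks
sex = rng.integers(0, 2, size=n_hands)

def pls_da_accuracy(block, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(block, labels, random_state=0)
    pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
    pred = (pls.predict(X_te).ravel() > 0.5).astype(int)  # threshold the PLS score
    return (pred == y_te).mean()

print("palm block accuracy:   ", pls_da_accuracy(palm, sex))
print("finger block accuracy: ", pls_da_accuracy(fingers, sex))
```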

The palm’s predictive power would allow us to estimate sex even in hand stencils with missing or obscured digits – a common issue in Palaeolithic rock art.

Palaeo-forensics

This study adds to the body of research that has already used forensic science to understand prehistory. Beyond rock art, forensic anthropology is helping to develop the emergent field of palaeo-forensics: the application of forensic analyses to the deep past.

For instance, we have been able to understand fatal falls in Australopithecus sediba from Malapa and primitive mortuary practices in the species Homo naledi from Rising Star Cave, both in South Africa.

All of this shows the synergy that arises when the palaeo, archaeological and forensic sciences are brought together to advance humans’ understanding of the past.

Patrick Randolph-Quinney, Senior Lecturer in Biological and Forensic Anthropology, University of Central Lancashire; Anthony Sinclair, Professor of Archaeological Theory and Method, University of Liverpool; Emma Nelson, Lecturer in Clinical Communication, University of Liverpool, and Jason Hall, Chief Archaeology Technician, University of Liverpool

This article was originally published on The Conversation. Read the original article.



Dengue virus antibodies may worsen a Zika infection

By Sharon Isern, Florida Gulf Coast University.

The World Health Organization declared in November that Zika was no longer a public health emergency of international concern.

That doesn’t mean concern over Zika has ended. Now that a link between Zika and microcephaly has been established, the virus is viewed as a long-term problem that requires constant attention.

While researchers have concluded that Zika infection can cause microcephaly and other birth defects such as eye damage in newborns, there are still many unanswered questions about the virus.

Earlier Zika outbreaks in Africa and Asia were gradual, continuous and associated with mild clinical outcomes, but the Zika outbreaks in the Pacific in 2013 and 2014 and the Americas in 2015 and 2016 have been explosive. They have been associated with severe disease, including birth defects in newborns and Guillain-Barre, a condition that can cause temporary paralysis in adults. Scientists are trying to figure out why.

I study flaviviruses, which include Zika and dengue, at Florida Gulf Coast University. Like other flavivirus researchers, I am turning to dengue to better understand Zika. Dengue, a close relative of Zika, is regularly found in places like Brazil, and is spread by the same mosquitoes, Aedes aegypti.

My colleagues and I wanted to find out whether having immunity to dengue from an earlier infection could make a Zika infection worse.

Aedes aegypti mosquitoes can transmit both Zika and dengue viruses.
AP Photo/Felipe Dana, File

A brief history of Zika

Zika was first isolated in Uganda in 1947. For decades Zika infections in humans were sporadic. Perhaps cases of Zika went underreported, since its symptoms are similar to those of other fever-causing diseases and most cases are asymptomatic.

By the 1980s, Zika had spread beyond Africa and had become endemic, or habitually present, in Asia. Many individuals living in these regions may be immune to the virus.

The first reported Zika outbreak outside of Africa and Asia occurred in the Pacific, in Micronesia, in 2007. To our knowledge there were no associations with microcephaly or Guillain-Barre reported at the time.

Zika transmission in the Pacific wasn’t reported again until 2013, when French Polynesia experienced an explosive outbreak. In 2014 further outbreaks were reported in New Caledonia, Easter Island and the Cook Islands. When French Polynesia experienced another outbreak in 2014, there were reports of Zika being transmitted to babies, most likely in utero, and complications associated with Guillain-Barre in adults.

Map showing countries affected by the Zika virus. Reuters

By early 2015 the virus had spread to the Americas, and the first locally acquired case of Zika in the region was confirmed in May 2015 by Brazil’s National Reference Laboratory. The Pan American Health Organization reported that, as of Dec. 29, Zika virus transmission had occurred in over 48 countries or territories in the Americas, with 177,614 confirmed cases of locally acquired Zika, over half a million suspected cases, and 2,525 confirmed cases of congenital syndrome associated with Zika infection.

Why did Zika start to cause explosive outbreaks? And why did it start to cause more health problems?

Dengue, a common connector

A few factors might explain the change. For instance, the difference may lie in the age at which a person is exposed to Zika.

If children are infected before they reach puberty, they become immune and cannot pass Zika along to their children. And in parts of the world where Zika is endemic, people are more likely to have been exposed and become immune while young.

The scenario we’ve seen in the Americas is different. Since Zika had not been reported in the region prior to 2015, people, including women of child-bearing age, had never been exposed to the virus. However, this doesn’t explain why certain people develop severe disease, whereas others do not.

Dengue virus is endemic in many parts of Asia, Africa and the Americas, infecting up to 100 million people globally each year. The areas of the Pacific and the Americas that experienced explosive Zika outbreaks have two things in common: they had not been exposed to Zika before, and dengue is endemic there. Dengue may provide a clue to why Zika has caused severe disease in these new outbreaks.

Could a prior dengue infection make Zika worse?

When a person is infected with a particular virus for the first time, the immune system springs into action, producing antibodies to destroy it. The next time a person encounters that virus, the body produces those antibodies again to fight back, preventing illness. This is called immunity.

There are four different kinds of dengue virus, called serotypes. Antibodies produced during infection with one dengue serotype confer lifelong immunity against only that particular serotype. If a person is infected with another serotype later on, the antibodies from the earlier infection will bind to the new virus type, but can’t prevent it from infecting cells.

Instead, the bound antibodies can transport the viruses to immune cells that are normally not infected by dengue. In other words, if a person is infected with one serotype of dengue and then gets infected with another serotype, the antibodies from the previous infection then make the new serotype infect cells that it otherwise wouldn’t. Then the virus can reproduce to very high numbers in these cells, leading to severe disease. This process is called antibody-dependent enhancement, or ADE.

Zika virus is closely related to dengue and has been shown to undergo ADE in response to other flavivirus antibodies. Zika virus antibodies in turn have been shown to have a similar effect on related viruses. And other researchers have shown that preexisting immunity to Zika can enhance dengue virus disease severity in animals.

A digitally colorized electron microscope image from the CDC shows the Zika virus, in red. Cynthia Goldsmith/CDC via AP

My lab studied the African strain of the Zika virus in 2015 before the connection with microcephaly was known.

Our results, posted in April to bioRxiv, a preprint server for biology, showed that antibodies from a prior dengue virus infection greatly enhanced Zika virus production in cell culture. Other groups have since independently verified our work. And, as we found in a more recent study, the same results hold true with a strain of Zika isolated in Puerto Rico by the CDC.

Our findings suggest that preexisting dengue virus antibodies may enhance Zika virus infection in patients, potentially making the infection more severe.

However, this correlation needs to be confirmed in the clinic. We do not know how many people infected with Zika in the outbreaks in the Pacific and in the Americas had prior dengue infections. But since the virus is endemic in those areas, it is possible that some people may have been infected with dengue before they were infected with Zika.

Zika’s spread may become limited

Like dengue and other mosquito-borne viruses, Zika spread is seasonal, and outbreaks occur when mosquitoes are abundant. As the United States heads into winter and South America heads into summer, what can we expect with regard to Zika and dengue and disease severity?

As more and more people acquire immunity to Zika, its spread will be limited to those who have never been exposed. Given enough time, new infections will be limited to the young. As long as infections occur prior to child-bearing years, microcephaly in newborns should not be a concern.

If these studies hold true in the clinic, cocirculation of Zika and dengue could increase the severity of both viruses. Vaccines to protect against Zika, and dengue for that matter, would need to be designed carefully so that vaccination with one does not enhance a natural infection with the other virus.

Public health emergency or not, there is still much to be learned about Zika virus and its associated consequences.

Sharon Isern, Professor of Biological Sciences, Florida Gulf Coast University

This article was originally published on The Conversation. Read the original article.


Why you can’t fry eggs (or testicles) with a cellphone

By Timothy J. Jorgensen, Georgetown University.

A minor craze in men’s underwear fashions these days seems to be briefs that shield the genitals from cellphone radiation. The sales claim is that these products protect the testicles from the harmful effects of the radio waves emitted by cellphones, and therefore help maintain a robust sperm count and high fertility. These undergarments may shield the testicles from radiation, but do male cellphone users really risk infertility?

The notion that electromagnetic radiation in the radio frequency range can cause male sterility, either temporary or permanent, has been around for a long time. As I describe in my book “Strange Glow: The Story of Radiation,” during World War II some enlisted men would consistently and inexplicably volunteer for radar duty just prior to their scheduled leave days. It turned out that a rumor had been circulating that exposure to radio waves from the radar equipment produced temporary sterility, which the soldiers saw as an employment benefit.

The military wanted to know whether there was any substance to the sterility rumor. So they asked Hermann Muller – a geneticist who won the Nobel Prize for showing that x-rays could cause sterility and genetic mutations – to evaluate the effects of radio waves in the same fruit fly experimental model he had used to show that x-rays impaired reproduction.

Muller could find no dose of radio waves that produced either sterility or genetic mutations, and concluded that radio waves did not present the same threat to fertility that x-rays did. Radio waves were different. But why? Aren’t both x-rays and radio waves electromagnetic radiation?

The electromagnetic spectrum, tiny wavelengths on the left, longer wavelengths on the right.
Inductiveload, CC BY-SA

Yes, they are – but they differ in one key factor: They have very different wavelengths. All electromagnetic radiation travels through space as invisible waves of energy. And it’s the specific wavelength of the radiation that determines all of its effects, both physical and biological. The shorter wavelengths carry higher amounts of energy than the longer wavelengths.

X-rays are able to damage cells and tissues precisely because their wavelengths are extremely short – one-millionth the width of a human hair – making them highly energetic and very harmful to cells. Radio waves, in contrast, carry little energy because their wavelengths are very long – about the length of a football field. Such long-wavelength radiation has energies far too low to damage cells. And it’s this big difference between the wavelengths of x-rays and radio waves that the infertility theorists fail to recognize.
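To put rough numbers on that gap, using standard textbook constants rather than anything from the studies discussed here: a photon’s energy is Planck’s constant times the speed of light divided by its wavelength, so the difference between x-rays and radio waves spans many orders of magnitude.

```python
# Rough photon-energy comparison, E = h*c / wavelength (illustrative values).
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

for name, wavelength_m in [("x-ray (~0.1 nm)", 0.1e-9),
                           ("cellphone radio wave (~30 cm)", 0.3),
                           ("broadcast radio wave (~100 m)", 100.0)]:
    energy_eV = h * c / wavelength_m / eV
    print(f"{name}: {energy_eV:.3g} eV per photon")

# X-ray photons come out around 10,000 eV -- far more than the few eV needed to
# break a chemical bond -- while radio-wave photons carry less than a millionth
# of a bond energy.
```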

X-rays, and other high-energy waves, produce sterility by killing off the testicular cells that make sperm – the “spermatogonia.” And x-ray doses must be extremely high to kill enough cells to produce sterility. Still, even when the doses are high, the sterility effect is usually temporary because the surviving spermatogonia are able to spawn replacements for their dead comrades, and sperm counts typically return to their normal levels within a few months.

So, if high doses of highly energetic x-rays are needed to kill enough cells to produce sterility, how can low doses of radio waves with energies too low to kill cells do it? Good question.

Don’t fall for the phone-cooking-egg hoax.

At this point you may be thinking that you’ve seen videos of cellphones cooking eggs. And you’ve even experienced your cellphone getting pretty warm when it’s used heavily. But this doesn’t show that cellphones put out a lot of radiation energy. The cooked egg video is a prank, and the phone gets hot because of the heat generated by the chemical reactions going on within the battery, not from radio waves.

Still you protest: What about those sporadic reports claiming that cellphones suppress sperm counts? For the moment, that’s all they are – sporadic reports, unconfirmed by other investigators. You can find all kinds of random assertions about the effects of radiation on health, both good and bad, most of which imply that there is some type of validated scientific evidence to support the claim. Why not believe all of them?

If we’ve learned anything over the years about scientific evidence, it’s that isolated findings from individual labs, reporting limited experimental data, do not a strong case make. Most of the very limited “scientific” reports of infertility caused by cellphones, often cited by anti-cellphone activists, come from outside the radiation biology community, and are published in lower-tier journals of questionable quality. Few, if any, of these reports make any attempt at actually measuring the radiation doses received from the cellphones (probably because they lack either the expertise or the equipment required to do it).

Human sperm, unconcerned by what’s in your pocket.
Doc. RNDr. Josef Reischig, CSc., CC BY-SA

And none actually measure fertility rates – the health endpoint of concern – but rather measure sperm counts and other sperm quality parameters and then infer that there will be an impact on fertility. In fact, sperm counts can vary widely between normally fertile individuals and even within the same individual from day to day. For example, men who frequently ejaculate have lower sperm counts, as you might expect, because they are regularly jettisoning sperm. (Men who ejaculate daily can have sperm counts 50 percent lower than men who don’t.) Perhaps the allegedly lower sperm counts of cellphone users just means that they are having more sex!

But seriously, the point is this: There are so many things that can affect sperm counts in big ways that minor fluctuations in sperm counts have no practical impact on whether a man will produce babies, even if it were true that cellphones can modestly suppress sperm counts.

It is clear that these infertility claims are not the consensus of the mainstream scientific community – a community that demands more rigorous evidence. There are many excellent laboratories around the world that study radiation effects, and it isn’t difficult to study infertility in fruit flies, mice and even people. (It’s fairly easy to find men willing to donate sperm samples.) If the sterility story were true, there would be a chorus of well-respected laboratories from around the world singing the cellphone infertility song, not just a few.

Guglielmo Marconi, inventor of the radio. Smithsonian Institution

The fact is, the current data suggesting that cellphones cause infertility are too weak to challenge the dogma of over 100 years of commercial experience with radio waves. Radio waves are not unique to cellphones. They have been used for telecommunication ever since Marconi first demonstrated in 1901 that they could carry messages across the entire Atlantic Ocean. Early radio workers received massive doses of radio waves, yet there is no indication they had any problems with their fertility. If they didn’t experience fertility problems with their high doses, how can the relatively low doses from cellphones have such an effect? Hard to understand.

Nevertheless, people can spend their money as they please and wear any underwear they want. But if you are still concerned about radio waves affecting your fertility, why not just carry your cellphone in your shirt pocket rather than your pants, and let your testicles be?

Timothy J. Jorgensen, Director of the Health Physics and Radiation Protection Graduate Program and Associate Professor of Radiation Medicine, Georgetown University

This article was originally published on The Conversation. Read the original article.


Static electricity’s tiny sparks

By Sebastian Deffner, University of Maryland, Baltimore County.

Static electricity is a ubiquitous part of everyday life. It’s all around us, sometimes funny and obvious, as when it makes your hair stand on end, sometimes hidden and useful, as when harnessed by the electronics in your cellphone. The dry winter months are high season for an annoying downside of static electricity – electric discharges like tiny lightning zaps whenever you touch door knobs or warm blankets fresh from the clothes dryer.

Static electricity is one of the oldest scientific phenomena people observed and described. The Greek philosopher Thales of Miletus gave the first account; in his sixth-century B.C. writings, he noted that if amber was rubbed hard enough, small dust particles would start sticking to it. Three hundred years later, Theophrastus followed up on Thales’ experiments by rubbing various kinds of stone and also observed this “power of attraction.” But neither of these natural philosophers found a satisfactory explanation for what they saw.

It took almost 2,000 more years before the English word “electricity” was first coined, based on the Latin “electricus,” meaning “like amber.” Some of the most famous experiments were conducted by Benjamin Franklin in his quest to understand the underlying mechanism of electricity – which is one of the reasons why his face smiles from the US$100 bill. People quickly recognized electricity’s potential usefulness.

The amazing flying boy relies on static electricity to wow the crowd. Frontispiece of Novi profectus in historia electricitatis, post obitum auctoris, by Christian August Hausen (1746)

Of course, in the 18th century people mostly made use of static electricity in magic tricks and other performances. For instance, Stephen Gray‘s “flying boy experiment” became a popular public demonstration: He’d use a rubbed glass tube to charge up the youth, suspended from silk cords, and then show how he could turn book pages via static electricity, or lift small objects just using the static attraction.

Building on Franklin’s insights – including his realization that electric charge comes in positive and negative flavors, and that total charge is always conserved – we nowadays understand at the atomic level what causes the electrostatic attraction, why it can cause mini lightning bolts and how to harness what can be a nuisance for use in various modern technologies.

What are these tiny sparks?

Static electricity comes down to the interactive force between electrical charges. At the atomic scale, negative charges are carried by tiny elementary particles called electrons. Most electrons are neatly packed inside the bulk of matter, whether it be a hard and lifeless stone or the soft, living tissue of your body. However, many electrons also sit right on the surface of any material. Each different material holds on to these surface electrons with its own different characteristic strength. If two materials rub against each other, electrons can be ripped out of the “weaker” material and find themselves on the material with stronger binding force.

This transfer of electrons – the charge buildup behind what we know as a spark of static electricity – happens all the time. Familiar examples are children sliding down a playground slide, feet shuffling along a carpet or someone removing wool gloves in order to shake hands.

But we notice its effect more frequently in the dry months of winter, when the air has very low humidity. Dry air is an electrical insulator, whereas moist air acts as a conductor. This is what happens: In dry air, electrons get trapped on the surface with the stronger binding force. Unlike when the air is moist, they can’t find their way to flow back to the surface where they came from, and they can’t make the distribution of charges uniform again.

A static electric spark occurs when an object with a surplus of negative electrons comes close to another object with less negative charge – and the surplus of electrons is large enough to make the electrons “jump.” The electrons flow from where they’ve built up – like on you after walking across a wool rug – to the next thing you contact that doesn’t have an excess of electrons – such as a doorknob.

You’ll feel the electrons jump.
Muhammed Ibrahim, CC BY-ND

When electrons have nowhere to go, the charge builds up on surfaces – until it reaches a critical maximum and discharges in the form of a tiny lightning bolt. Give the electrons a place to go – such as your outstretched finger – and you will most certainly feel the zap.

The power of the mini sparks

Though sometimes annoying, the amount of charge involved in static electricity is typically quite small and rather innocuous. The voltage can be about 100 times that of a typical power outlet. However, these huge voltages are nothing to worry about, since voltage is just a measure of the charge difference between objects. The “dangerous” quantity is current, which tells how many electrons are flowing per second. Since typically only a few electrons are transferred in a static electric discharge, these zaps are pretty harmless.
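A back-of-the-envelope estimate – using assumed, textbook-style values for human-body capacitance and static voltage rather than anything measured here – shows why a many-thousand-volt zap carries so little energy:

```python
# Back-of-the-envelope energy in a static shock, E = 1/2 * C * V^2.
# The capacitance and voltage below are assumed, typical-order values.
body_capacitance = 150e-12   # ~150 picofarads, a common human-body estimate
static_voltage = 12_000      # ~12 kV, roughly 100x a 120 V outlet

energy_joules = 0.5 * body_capacitance * static_voltage**2
print(f"Energy in the spark: about {energy_joules * 1000:.1f} millijoules")
# ~0.01 J -- a tiny fraction of the energy released by dropping an apple one metre (~1 J).
```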

Nevertheless, these little sparks can be fatal to sensitive electronics, such as the hardware components of a computer. Small currents carried by only a few electrons can be enough to accidentally fry them. That’s why workers in the electronics industry have to remain “grounded.” Being grounded just means maintaining a wired connection to the ground, which for the electrons looks like an empty highway “home.” Grounding yourself is easily done by touching a grounded metal component or holding a key in your hand. Metals are very good conductors, and so electrons are quite happy to go there.

A more serious threat is an electric discharge in the vicinity of flammable gases. This is why it’s advisable to ground yourself before touching the pumps at gas stations; you don’t want a stray spark to combust any stray gasoline fumes. Or you can invest in the kind of anti-static wristband widely used by workers in the electronic industries to safely ground individuals before they work on very sensitive electronic components. They prevent static buildups using a conductive ribbon that coils around your wrist.

In settings where a few electrons can do big damage, workers wear anti-static wrist straps.
Wristband image via www.shutterstock.com.

In everyday life, the best method to reduce charge buildup is running a humidifier to raise the amount of moisture in the air. Keeping your skin moist by applying moisturizer can also make a big difference. Dryer sheets prevent charges from building up as your clothes tumble dry by spreading a small amount of fabric softener over the cloth. These positively charged particles balance out loose electrons, neutralizing the net charge, which means your clothes won’t emerge from the dryer clingy and stuck to one another. You can rub fabric softener on your carpets to prevent charge buildup too. Last but not least, wearing cotton clothes and leather-soled shoes, rather than wool clothing and rubber-soled shoes, is the better choice if you’ve really had it with static electricity.

Harnessing static electricity

Despite the nuisance and possible dangers of static electricity, it definitely has its benefits.

Many everyday applications of modern technology crucially rely on static electricity. For instance, photocopiers and laser printers use electrostatic attraction to “glue” charged toner particles onto paper. Air fresheners not only make the room smell nice; they really do eliminate bad odors by discharging static electricity onto dust particles, breaking up the source of the smell.

Static electricity can attract and trap charged pollution particles before they’re emitted from factories.
Muhammed Ibrahim, CC BY-ND

Similarly, the smokestacks found in modern factories use charged plates to reduce pollution. As smoke particles move up the stack, they pick up negative charges from a metal grid. Once charged, they are attracted to plates on the other sides of the smokestack that are positively charged. Finally, the charged smoke particles are collected onto a tray from the collecting plates and can be disposed of.

Static electricity has also found its way into nanotechnology, where it is used, for instance, to pick up single atoms with laser beams. These atoms can then be manipulated for all kinds of purposes, such as in various computing applications. Another exciting application in nanotechnology is the control of nanoballoons, which through static electricity can be switched between an inflated and a collapsed state. These molecular machines could one day deliver medication to specific tissues within the body.

Two and a half millennia have passed since static electricity was first described. It is still a curiosity and a nuisance – but it has also proven to be important in our everyday lives.


This article was coauthored by Muhammed Ibrahim, a system engineer at an environmental software company. He is conducting collaborative research with Dr. Sebastian Deffner on reducing computational errors in quantum memories.

Sebastian Deffner, Assistant Professor of Physics, University of Maryland, Baltimore County

This article was originally published on The Conversation. Read the original article.


With legal pot comes a problem: How do we weed out impaired drivers?

By Igor Grant, University of California, San Diego.

On Nov. 8 voters in California, Maine, Massachusetts and Nevada approved ballot measures to legalize recreational cannabis. It is now legal in a total of eight states. And this creates potential problems for road safety. How do we determine who’s impaired and who’s not?

The effects of alcohol vary based on a person’s size and weight, metabolism rate, related food intake and the type and amount of beverage consumed. Even so, alcohol consumption produces fairly straightforward results: The more you drink, the worse you drive. Factors like body size and drinking experience can shift the correlation slightly, but the relationship is still pretty linear, enough to be able to confidently develop a blood alcohol content scale for legally determining drunk driving. Not so with marijuana.

We have a reliable and easy-to-use test to measure blood alcohol concentration. But right now we don’t have a fast, reliable test to gauge whether someone is too doped up to drive.

The need is urgent. The 2014 National Survey on Drug Use and Health reported that 10 million Americans said they had driven while under the influence of illicit drugs during the previous year. Second to alcohol, marijuana is the drug most frequently found in drivers involved in crashes.

But how do you know when you’re too stoned to drive? How can police tell?

My colleagues and I at the Center for Medicinal Cannabis Research at UC San Diego have received a US$1.8 million grant from the state of California to gather data about dosages, time and what it takes to impair driving ability – and then create a viable roadside sobriety test for cannabis.

A man smokes a marijuana joint at a party celebrating weed on April 20, 2016, in Seattle.
AP photos/Elaine Thompson

Testing for marijuana isn’t like a BAC test

Alcohol and marijuana both affect mental function, which means they can both impair driving ability.

Some elements of cannabis use are similar. Potency of strain affects potency of effect. Marijuana and its active ingredient – THC – alter brain function, affecting processes like attention, perception and coordination, which are necessary for a complex behavior like driving a car.

Regular users tend to become accustomed to the drug, particularly in terms of cognitive disruption and psychomotor skills. Because they are accustomed to the drug’s effects, they may function better than naïve users.

Smoked marijuana produces a rapid spike in THC concentrations in the blood, followed by a decline as the drug redistributes to tissues, including the brain. The psychological impact depends upon a host of variables.

Let’s say, for example, a person smokes a joint and gets into his car. THC levels in his blood are likely to be quite high, but his cognitive functions and driving skills may not yet be impaired because the drug hasn’t yet significantly impacted the brain. But another driver might use cannabis but wait a few hours before getting behind the wheel. Her THC blood levels are now quite low, but she’s impaired because drug concentrations remain high in her brain.

Six states have set limits for THC in drivers’ blood, and nine other states have zero-tolerance laws, making any presence of THC in a driver’s blood illegal.

But unlike alcohol, evidence of cannabis use can linger long after its effects have worn off, particularly if people are regular users or consume a lot in a single episode. Among chronic users, it may not clear out of their systems for weeks. Therefore, unlike blood alcohol concentration, the presence and amount of different cannabis compounds in the blood or urine do not necessarily tell you whether the driver is impaired due to marijuana.

This is why a quick and simple assessment of whether someone is driving while under the influence is difficult. And that is a necessity for any type of effective roadside sobriety test.

To create a fast and easy-to-use test, there are a few questions about marijuana that our team at UC San Diego has to answer.

How high is too high to drive?
Ignition key image via www.shutterstock.com.

How much marijuana is too much to drive?

Current blood, breath, saliva and urine tests have been challenged as unreliable in court, though they are used to prove that someone has ingested marijuana.

In California and elsewhere, the primary assessment of impairment is the law enforcement officer’s field sobriety test.

One specific challenge is determining the relationship of dose or potency, and time since consumption, to impairment. While there has been some research in this area, the studies have not comprehensively examined the issues of dose and time course of impairment. The lack of data is one of the big reasons for our work now.

Later this year, we will begin controlled experiments in which participants will smoke varying amounts of cannabis in varying strengths and then operate a driving simulator. We’ll look for impairment effects in the immediate period after exposure and over subsequent hours.

We’ll also investigate the relationship between THC and other cannabinoid levels in blood to different measures, such as saliva or exhaled breath. Roadside blood sampling is impractical, but perhaps there is an easier, reliable indicator of marijuana exposure.

Finally, there is the goal of finding the best way to assess impairment. A driver suspected of being high might be asked to follow with his finger a square moving around on a device’s screen, a test of critical tracking. Or she might perform tablet tests that more validly simulate the demands of driving.

The idea is to determine whether and how these measures – drug intake, biomarkers, objective cognitive performance and driving ability – correlate to produce an evidence-based, broadly applicable assessment standard and tool.

Igor Grant, Professor and Chair of the Department of Psychiatry and Director of the Center for Medicinal Cannabis Research, University of California, San Diego

This article was originally published on The Conversation. Read the original article.


The Future of Electronics is Light

By Arnab Hazari, University of Michigan.

For the past four decades, the electronics industry has been driven by what is called “Moore’s Law,” which is not a law but more an axiom or observation. Effectively, it suggests that electronic devices double in speed and capability about every two years. And indeed, every year tech companies come up with new, faster, smarter and better gadgets.

Specifically, Moore’s Law, as articulated by Intel cofounder Gordon Moore, is that “The number of transistors incorporated in a chip will approximately double every 24 months.” Transistors, tiny electrical switches, are the fundamental unit that drives all the electronic gadgets we can think of. As they get smaller, they also get faster and consume less electricity to operate.
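To see what that doubling rule implies, here is a quick, illustrative extrapolation starting from the 1971 Intel 4004, which had roughly 2,300 transistors:

```python
# Moore's law as a simple doubling rule: N(t) = N0 * 2**(years / 2).
start_year, start_count = 1971, 2_300     # Intel 4004, ~2,300 transistors
for year in (1981, 1991, 2001, 2011, 2016):
    doublings = (year - start_year) / 2
    print(f"{year}: ~{start_count * 2**doublings:,.0f} transistors")
# By the mid-2010s this extrapolation reaches the billions, roughly the scale of real chips.
```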

In the technology world, one of the biggest questions of the 21st century is: How small can we make transistors? If there is a limit to how tiny they can get, we might reach a point at which we can no longer continue to make smaller, more powerful, more efficient devices. It’s an industry with more than US$200 billion in annual revenue in the U.S. alone. Might it stop growing?

Getting close to the limit

At present, companies like Intel are mass-producing transistors 14 nanometers across – just seven times wider than a DNA molecule. They’re made of silicon, the second-most abundant element in the Earth’s crust. Silicon’s atomic size is about 0.2 nanometers.

Today’s transistors are about 70 silicon atoms wide, so the possibility of making them even smaller is itself shrinking. We’re getting very close to the limit of how small we can make a transistor.
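The arithmetic behind that figure, and how little room is left before transistors reach single-atom widths, can be laid out directly from the numbers above:

```python
# How many silicon atoms fit across a 14 nm transistor, and how much room is left to shrink.
import math

transistor_nm = 14.0
silicon_atom_nm = 0.2
atoms_across = transistor_nm / silicon_atom_nm
print(f"Atoms across today's transistor: about {atoms_across:.0f}")

# Count how many times the width could still be halved before hitting single-atom scale.
halvings_left = math.log2(atoms_across)
print(f"Halvings until a one-atom-wide transistor: about {halvings_left:.0f}")
```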

At present, transistors use electrical signals – electrons moving from one place to another – to communicate. But if we could use light, made up of photons, instead of electricity, we could make transistors even faster. My work, on finding ways to integrate light-based processing with existing chips, is part of that nascent effort.

Putting light inside a chip

A transistor has three parts; think of them as parts of a digital camera. First, information comes in through the lens, which is analogous to a transistor’s source. Then it travels through a channel from the image sensor to the wires inside the camera. And lastly, the information is stored on the camera’s memory card – the analogue of a transistor’s “drain,” where the information ultimately ends up.

Light waves can have different frequencies. maxhurtz

Right now, all of that happens by moving electrons around. To substitute light as the medium, we actually need to move photons instead. Subatomic particles like electrons and photons travel in a wave motion, vibrating up and down even as they move in one direction. The length of each wave depends on what it’s traveling through.

In silicon, the most efficient wavelength for photons is 1.3 micrometers. This is very small – a human hair is around 100 micrometers across. But electrons in silicon are even smaller – with wavelengths 50 to 1,000 times shorter than photons.

This means the equipment to handle photons needs to be bigger than the electron-handling devices we have today. So it might seem like it would force us to build larger transistors, rather than smaller ones.

However, for two reasons, we could keep chips the same size and deliver more processing power, shrink chips while providing the same power, or, potentially both. First, a photonic chip needs only a few light sources, generating photons that can then be directed around the chip with very small lenses and mirrors.

And second, light is much faster than electrons. On average photons can travel about 20 times faster than electrons in a chip. That means computers that are 20 times faster, a speed increase that would take about 15 years to achieve with current technology.

Scientists have demonstrated progress toward photonic chips in recent years. A key challenge is making sure the new light-based chips can work with all the existing electronic chips. If we’re able to figure out how to do it – or even to use light-based transistors to enhance electronic ones – we could see significant performance improvement.

When can I get a light-based laptop or smartphone?

We still have some way to go before the first consumer device reaches the market, and progress takes time. The transistor’s predecessor, the vacuum tube, arrived in 1907; the tubes were typically between one and six inches tall (on the order of 100 mm). By 1947 the first transistor of the current type – the kind that is now just 14 nanometers across – had been invented, and it was 40 micrometers long (about 3,000 times longer than today’s). And in 1971 the first commercial microprocessor (the powerhouse of any electronic gadget) was 1,000 times bigger than today’s when it was released.

The vast research efforts, and the consequent evolution, seen in the electronics industry are only just beginning in the photonics industry. As a result, current electronics can perform tasks that are far more complex than the best current photonic devices can. But as research proceeds, light’s capability will catch up to, and ultimately surpass, electronics’ speeds. However long it takes to get there, the future of photonics is bright.

Arnab Hazari, Ph.D. student in Electrical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.


Climate change is affecting all life on Earth – and that’s not good news for humanity

By Brett Scheffers, University of Florida and James Watson, The University of Queensland.

More than a dozen authors from different universities and nongovernmental organizations around the world have concluded, based on an analysis of hundreds of studies, that almost every aspect of life on Earth has been affected by climate change.

In more scientific parlance, we found in a paper published in Science that genes, species and ecosystems now show clear signs of impact. These responses to climate change include changes in species’ genomes (genetics), their shapes, colors and sizes (morphology), their abundance, and where they live and how they interact with each other (distribution). The influence of climate change can now be detected in the smallest, most cryptic processes all the way up to entire communities and ecosystems.

Some species are already beginning to adapt. The color of some animals, such as butterflies, is changing: dark-colored butterflies heat up faster than light-colored ones, so lighter butterflies have an edge in warmer temperatures. Salamanders in eastern North America and cold-water fish are shrinking in size, because being small is more favorable when it is hot than when it is cold. In fact, there are now dozens of examples globally of cold-loving species contracting their ranges and warm-loving species expanding theirs in response to changes in climate.

All of these changes may seem small, even trivial, but when every species is affected in different ways these changes add up quickly and entire ecosystem collapse is possible. This is not theoretical: Scientists have observed that the cold-loving kelp forests of southern Australia, Japan and the northwest coast of the U.S. have not only collapsed from warming but their reestablishment has been halted by replacement species better adapted to warmer waters.

Flood of insights from ancient flea eggs

Researchers are using many techniques, including one called resurrection ecology, to understand how species are responding to changes in climate by comparing species’ past traits with their current ones. And a small and seemingly insignificant organism is leading the way.

One hundred years ago, a water flea (genus Daphnia), a small creature the size of a pencil tip, swam in a cold lake of the upper northeastern U.S. looking for a mate. This small female crustacean later laid a dozen or so eggs in hopes of doing what Mother Nature intended – that she reproduce.

Water flea (Daphnia barbata). Photo credit: Joachim Mergeay

Her eggs are unusual in that they have a tough, hardened coat that protects them from lethal conditions such as extreme cold and droughts. These eggs have evolved to remain viable for extraordinary periods of time, and so they lie on the bottom of the lake awaiting the perfect conditions to hatch.

Now fast forward a century: A researcher interested in climate change has dug up these eggs, now buried under layers of sediment that accumulated over the years. She takes them to her lab and, amazingly, they hatch, allowing her to show that individuals from the past were built differently from those living in a much hotter world today. There is evidence of responses at every level, from genetics to physiology and up to the community level.

By combining numerous research techniques in the field and in the lab, we now have a definitive look at the breadth of climate change impacts for this animal group. Importantly, this example offers the most comprehensive evidence of how climate change can affect all processes that govern life on Earth.

From genetics to dusty books

The study of water fleas and resurrection ecology is just one of many ways that thousands of geneticists, evolutionary scientists, ecologists and biogeographers around the world are assessing if – and how – species are responding to current climate change.

Other state-of-the-art tools include drills that can sample gases trapped several miles beneath the Antarctic ice sheet to document past climates and sophisticated submarines and hot air balloons that measure the current climate.

Warmer temperatures are already affecting some species in discernible ways. Sea turtles nesting on dark sands, for instance, are more likely to produce female hatchlings because of higher temperatures.
levork/flickr, CC BY-SA

Researchers are also using modern genetic sampling to understand how climate change is influencing the genes of species, while resurrection ecology helps understand changes in physiology. Traditional approaches such as studying museum specimens are effective for documenting changes in species morphology over time.

Some rely on unique geological and physical features of the landscape to assess climate change responses. For example, dark sand beaches are hotter than light sand beaches because dark colors absorb more solar radiation. This means that sea turtles hatching on dark sand beaches are more likely to be female because of a process called temperature-dependent sex determination. So with higher temperatures, climate change will have an overall feminizing effect on sea turtles worldwide.

Wiping the dust off historical natural history volumes – written by the forefathers and foremothers of the field, who first documented species distributions in the late 1800s and early 1900s – also provides invaluable insights when those historical distributions are compared with present-day ones.

For example, Joseph Grinnell’s extensive field surveys in early 1900s California led to the study of how the range of birds there shifted based on elevation. In mountains around the world, there is overwhelming evidence that all forms of life, such as mammals, birds, butterflies and trees, are moving up towards cooler elevations as the climate warms.

How this spills over onto humanity

So what lessons can be taken from a climate-stricken nature and why should we care?

This global response has occurred with just a 1 degree Celsius increase in temperature since preindustrial times. Yet the most sensible forecasts suggest we will see an additional increase of 2-3 degrees Celsius over the next 50 to 100 years unless greenhouse gas emissions are rapidly cut.

All of this spells big trouble for humans because there is now evidence that the same disruptions documented in nature are also occurring in the resources that we rely on such as crops, livestock, timber and fisheries. This is because these systems that humans rely on are governed by the same ecological principles that govern the natural world.

Examples include reduced crop and fruit yields, increased consumption of crops and timber by pests and shifts in the distribution of fisheries. Other potential results include the decline of plant-pollinator networks and pollination services from bees.

Bleached coral, a result of stress from warming oceans that are also acidifying as they absorb CO2. Corals provide valuable services to people who rely on healthy fisheries for food.
Oregon State University, CC BY-SA

Further impacts on our health could stem from declines in natural systems such as coral reefs and mangroves, which provide natural defense to storm surges, expanding or new disease vectors and a redistribution of suitable farmland. All of this means an increasingly unpredictable future for humans.

This research has strong implications for global climate change agreements, which aim to keep total warming to 1.5C. If humanity wants our natural systems to keep delivering the nature-based services we rely on so heavily, now is not the time for nations like the U.S. to step away from global climate change commitments. Indeed, if this research tells us anything, it is that all nations must step up their efforts.

Humans need to do what nature is trying to do: recognize that change is upon us and adapt our behavior in ways that limit serious, long-term consequences.

Brett Scheffers, Assistant Professor, University of Florida and James Watson, Associate Professor, The University of Queensland

This article was originally published on The Conversation. Read the original article.


Here’s why ‘baby talk’ is good for your baby

By Catherine E. Laing, Duke University.

When we read, it’s very easy for us to tell individual words apart: In written language, spaces are used to separate words from one another. But this is not the case with spoken language – speech is a stream of sound, from which the listener has to separate words to understand what the speaker is saying.

This task isn’t difficult for adults who are familiar with the words of their language. But what about babies, who have almost no linguistic experience? How do they even begin to separate, or “segment,” individual words from the stream of language that they hear all around them all of the time?

As a researcher interested in early language production, I am fascinated by how babies begin acquiring knowledge of their language, and how parents and other caregivers can support them in this task.

Babies first start learning language by listening not to individual words, but to the rhythm and intonation of the speech stream – that is, the changes between high and low pitch, and the rhythm and loudness of syllables in speech. Parents often exaggerate these features of the language when talking with their infants, and this is important for early language learning.

Nevertheless, some may feel that using this exaggerated speech style is condescending, or unrealistic in comparison to adult speech, and as such does not set babies off to a good start.

Is “baby talk” really good for babies?

How babies learn

Even before a baby is born, the process of learning language has already begun. In the third trimester of pregnancy, when the infant’s ears are sufficiently developed, the intonation patterns of the mother’s speech are transmitted through the fluids in the womb.

Babies’ learning starts in the womb itself. brett jordan, CC BY

This is thought to be like listening to someone talking in a swimming pool: It’s difficult to make out the individual sounds, but the rhythm and intonation are clear. This has an important effect on language learning. By the time an infant is born, she already has a preference for her mother’s language. At this stage the infant is able to identify language through its intonation patterns.

For example, French and Russian speakers place emphasis on different parts of a word or sentence, so the rhythm of these two languages sounds different. Even at four days old, babies can use this information to distinguish their own language from an unfamiliar other language.

This means that the newly born infant is ready to start learning the language that surrounds her; she already has an interest in her mother’s language, and as her attention is drawn to this language she begins to learn more about the features and patterns within it.

Using a singsong voice

Intonation is also very important to infants’ language development in the first months of life. Adults tend to speak to babies using a special type of register that we know as “baby talk” or “motherese.” This typically involves a higher pitch than regular speech, with wide, exaggerated intonation changes.

Research has shown that babies prefer to listen to this exaggerated “baby talk” type of speech than typical adult-like speech: They pay more attention when a parent’s speech has a higher pitch and a wider pitch range compared to adult-like speech with less exaggerated pitch features.

For example, a mother might say the word “baby” in an exaggerated “singsong” voice, which holds an infant’s attention longer than it would in a monotonal adult-style voice. Words produced in this way also stand out more from the speech stream, making it easier for babies to pick out smaller chunks of language.

Across the vast stream of language that babies hear around them every day, these distinctive pitch features in baby talk help babies to “tune in” to a small part of the input, making language processing a more manageable task.

How infants process speech

Baby talk tends to be spoken at a slower rate, and key words often appear at the end of a phrase. For example, the sentence, “Can you see the doggie?” is preferable to “The doggie is eating a bone”: Babies will learn the word “doggie” more easily when it appears at the end of the phrase.

For the same reasons, words produced in isolation – separated from the rest of the phrase by pauses – are also easier for infants to learn. Research has shown that the first words that infants produce are often those that are heard most frequently in isolation in early development. Babies hear isolated words such as “bye bye” and “mummy” very frequently, and these are often some of the earliest words that they learn to produce.

How do babies learn language?
Dean Wissing, CC BY-SA

When a word is produced separately from running speech, the infant does not have to segment it from a stream of sounds, and so it is easier to determine where the word begins and where it ends.

Furthermore, infants have been found to recognize words more easily when they are produced more slowly than in typical adult speech. This is because when speech is slower, it is easier for infants to pick out the individual words and sounds, which may be produced more clearly than in faster speech. In addition, infants process language much more slowly than adults, and so it is believed that slower speech gives infants more time to process what they hear.

How reduplication helps

Word repetition is also beneficial in infants’ early word learning. Infants’ first words tend to be those which are produced most frequently in caregiver speech, such as “mummy,” “bottle” and “baby.”

Words with reduplication are easier for babies to learn. Sellers Patton, CC BY

The more often an infant hears a word, the easier it is to segment it from the speech stream. The infant develops a stronger mental representation of frequent words. Eventually she will be more likely to produce frequently heard words with fewer errors.

Furthermore, reduplicated words – that is, words which contain repetition, such as “woof woof” or “quack quack” – are typical of baby talk, and are known to have an advantage for early word learning.

Even newborn infants show stronger brain activation when they hear words that contain reduplication. This suggests that there may be a strong advantage for these words in human language processing. This is supported by evidence from slightly older infants, who have been found to learn reduplicated words more easily than non-reduplicated words.

How ‘baby talk’ helps infants

So, baby talk is not just a way of engaging with an infant on a social level – it has important implications for language learning from the very first moments of a newborn’s life. Features of baby talk present infants with information about their ambient language, and allow them to break the speech stream up into smaller chunks.

While baby talk is not essential to guiding infants’ language learning, the use of pitch modulations, repetition and slower speech all allow infants to process the patterns in their language more easily.

Speaking in such an exaggerated style might not seem conducive to language learning in the longer term, but ample research shows that this speech style actually provides an optimum input for language learning from the very first days of an infant’s life.

Catherine E. Laing, Postdoctoral Associate, Duke University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

The next frontier in reproductive tourism? Genetic modification

By Rosa Castro, Duke University.

The birth of the first baby born using a technique called mitochondrial replacement, which uses DNA from three people to “correct” an inherited genetic mutation, was announced on Sept. 27.

Mitochondrial replacement or donation allows women who carry mitochondrial diseases to avoid passing them on to their children. These diseases can range from mild to life-threatening. No cure exists, and only a few drugs are available to treat them.

There are no international rules regulating this technique. Just one country, the United Kingdom, explicitly regulates the procedure. It’s a similar situation with other assisted reproductive techniques. Some countries permit these techniques and others don’t.

I study the intended and unintended consequences of regulating, prohibiting or authorizing the use of new technologies. One of these unintended consequences is “medical tourism,” where people travel from their home countries to places where practices such as commercial surrogacy or embryo selection are allowed.

Medical tourism for assisted reproductive technologies raises a host of legal and ethical questions. While new reproductive technologies, like mitochondrial replacement, promise to bring significant benefits, the absence of regulations means that some of these questions, including those related to safety and risk, remain unanswered, even as people are starting to use them.

Mitochondria power our cells. Mitochondrion image via www.shutterstock.com.

How does mitochondrial replacement work?

We each inherit our mitochondria – which provide the energy our cells need to function – only from our mothers, along with the tiny fraction of our DNA that they contain. Some of that mitochondrial DNA might be defective, carrying mutations or errors that can lead to mitochondrial diseases.

The mother of the baby born using this technique carried one of these diseases. The disease, known as Leigh Syndrome, is a neurological disorder that typically leads to death during childhood. Before having this baby, the couple had two children who died as a result of the disease.

Mitochondrial replacement is done in a lab, as part of in vitro fertilization. It works by “substituting” the defective mitochondria of the mother’s egg with healthy mitochondria obtained from a donor. The child is genetically related to the mother, but has the donor’s mitochondrial DNA.

It involves three germ cells: an egg from the mother, an egg from a healthy donor and the sperm from the father. While the term “three-parent” child is often used in news stories, it is a highly controversial one.

To some, the tiny fraction of DNA contained in the mitochondria provided by a donor is not sufficient to make the donor a “second mother.” The U.K., the only country that has regulated the technique, takes this position. Ultimately, the DNA replaced is a tiny fraction of a person’s genes, and it is unrelated to the characteristics that we associate with genetic kinship.

There is some discussion as to whether mitochondrial replacement is a so-called “germline modification” – a genetic modification that can be inherited. Many countries, including the U.K., have either banned or taken a restrictive stance on technologies that could alter germ cells and cause inherited changes affecting future generations. But a great number of countries, including Japan and India, have ambiguous or unenforceable regulations about germline modification.

Mitochondrial replacement results in a germline change, but that change is passed to future generations only if the child is a girl. She would pass the donor’s mitochondrial DNA to her offspring, and in turn her female descendants would pass it to their children. If the child is a boy, he wouldn’t pass the mitochondrial DNA on to his offspring.

Because the mitochondrial modification is heritable only in girls, the U.S. National Academies of Sciences recently recommended that use of this technique be limited to male embryos, in which the change is not inheritable. The U.K. considered but then rejected this approach.

A thorny ethical and regulatory debate

In the U.S., the FDA claimed jurisdiction to regulate mitochondrial replacement but then halted further discussions. A rider included in the 2016 Congressional Appropriations Act precludes the FDA from considering mitochondrial replacement.

While the technique has been given the green light in the U.K., the nation’s Human Fertilisation and Embryology Authority is gathering more safety-related information before granting the first licenses for mitochondrial replacement to clinics.

Experts have predicted that once the authority starts granting licenses, people seeking mitochondrial replacement will travel to the U.K.

At the moment, with no global standard dictating the use of mitochondrial replacement, couples (and experts willing to use these technologies) are going to countries where the procedure is allowed.

This has happened with other technologies such as embryo selection and commercial surrogacy, with patients traveling abroad to seek out assisted reproduction services or technologies that are either prohibited, unavailable, of lower quality or more expensive in their own countries.

The first documented case of successful mitochondrial replacement involved U.S. physicians assisting a Jordanian couple in Mexico. Further reports of the use of mitochondrial replacement in Ukraine and China have followed.

In this Nov. 3, 2015 photo, a newborn baby is transferred to an ambulance at the Akanksha Clinic, one of the most organized clinics in the surrogacy business, in Anand, India. Allison Joyce/AP

The increasing trend of medical tourism has been followed by sporadic scandals and waves of tighter regulation in countries such as India, Nepal and Thailand, which have been leading destinations for couples seeking assisted reproduction services.

Intended parents and children born with the help of assisted reproduction outside of their home countries have faced problems related to family ties, citizenship and their relationship with donors – especially with the use of commercial surrogacy.

Mitochondrial replacement and new gene editing technologies add further questions related to the safety and long-term effects of these procedures.

Gene modification complicates reproductive tourism

Mitochondrial replacement and technologies such as gene editing with CRISPR-Cas9 that create germline modifications are relatively new. Many of the legal and ethical questions they raise have yet to be answered.

What if the children born as a result of these techniques suffer unknown adverse effects? And could these technologies affect the way in which we think about identity, kinship and family ties in general? One technique for replacing mutated mitochondria involves the creation of embryos that will later be discarded. How should the use and disposal of embryos be regulated? What about the interests of the egg donors? Should they be paid?

Some of these problems could be avoided through a solid regulatory system in the U.S. and other countries. But as long as patients continue to seek medical treatments in “havens” for ethically dubious or risky procedures, many of these problems will persist.

Regulatory authorities around the world are debating how to better regulate these genetic modification technologies. Governments need to start considering not only the ethical and safety effects of their choices but also how these choices drive medical tourism.

Rosa Castro, Postdoctoral Associate in Science and Society, Duke University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Cassini Getting Set for Dramatic “Ring-Grazing Orbits” of Saturn [Video]

A thrilling ride is about to begin for NASA’s Cassini spacecraft. Engineers have been pumping up the spacecraft’s orbit around Saturn this year to increase its tilt with respect to the planet’s equator and rings. And on Nov. 30, following a gravitational nudge from Saturn’s moon Titan, Cassini will enter the first phase of the mission’s dramatic endgame.

Launched in 1997, Cassini has been touring the Saturn system since arriving there in 2004 for an up-close study of the planet, its rings and moons. During its journey, Cassini has made numerous dramatic discoveries, including a global ocean within Enceladus and liquid methane seas on Titan.

Between Nov. 30 and April 22, Cassini will circle high over and under the poles of Saturn, diving every seven days — a total of 20 times — through the unexplored region at the outer edge of the main rings.

First Phase in Dramatic Endgame for Long-Lived Cassini Spacecraft

“We’re calling this phase of the mission Cassini’s Ring-Grazing Orbits, because we’ll be skimming past the outer edge of the rings,” said Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory, Pasadena, California. “In addition, we have two instruments that can sample particles and gases as we cross the ringplane, so in a sense Cassini is also ‘grazing’ on the rings.”

On many of these passes, Cassini’s instruments will attempt to directly sample ring particles and molecules of faint gases that are found close to the rings. During the first two orbits, the spacecraft will pass directly through an extremely faint ring produced by tiny meteors striking the two small moons Janus and Epimetheus. Ring crossings in March and April will send the spacecraft through the dusty outer reaches of the F ring.

“Even though we’re flying closer to the F ring than we ever have, we’ll still be more than 4,850 miles (7,800 kilometers) distant. There’s very little concern over dust hazard at that range,” said Earl Maize, Cassini project manager at JPL.

The F ring marks the outer boundary of the main ring system; Saturn has several other, much fainter rings that lie farther from the planet. The F ring is complex and constantly changing: Cassini images have shown structures like bright streamers, wispy filaments and dark channels that appear and develop over mere hours. The ring is also quite narrow — only about 500 miles (800 kilometers) wide. At its core is a denser region about 30 miles (50 kilometers) wide.

So Many Sights to See

Cassini’s ring-grazing orbits offer unprecedented opportunities to observe the menagerie of small moons that orbit in or near the edges of the rings, including best-ever looks at the moons Pandora, Atlas, Pan and Daphnis.

Grazing the edges of the rings also will provide some of the closest-ever studies of the outer portions of Saturn’s main rings (the A, B and F rings). Some of Cassini’s views will have a level of detail not seen since the spacecraft glided just above them during its arrival in 2004. The mission will begin imaging the rings in December along their entire width, resolving details smaller than 0.6 mile (1 kilometer) per pixel and building up Cassini’s highest-quality complete scan of the rings’ intricate structure.

The mission will continue investigating small-scale features in the A ring called “propellers,” which reveal the presence of unseen moonlets. Because of their airplane propeller-like shapes, scientists have given some of the more persistent features informal names inspired by famous aviators, including “Earhart.” Observing propellers at high resolution will likely reveal new details about their origin and structure.

And in March, while coasting through Saturn’s shadow, Cassini will observe the rings backlit by the sun, in the hope of catching clouds of dust ejected by meteor impacts.

Preparing for the Finale

During these orbits, Cassini will pass as close as about 56,000 miles (90,000 kilometers) above Saturn’s cloud tops. But even with all their exciting science, these orbits are merely a prelude to the planet-grazing passes that lie ahead. In April 2017, the spacecraft will begin its Grand Finale phase.

After nearly 20 years in space, the mission is drawing near its end because the spacecraft is running low on fuel. The Cassini team carefully designed the finale to conduct an extraordinary science investigation before sending the spacecraft into Saturn, in order to protect the planet’s potentially habitable moons.

During its grand finale, Cassini will pass as close as 1,012 miles (1,628 kilometers) above the clouds as it dives repeatedly through the narrow gap between Saturn and its rings, before making its mission-ending plunge into the planet’s atmosphere on Sept. 15. But before the spacecraft can leap over the rings to begin its finale, some preparatory work remains.

To begin with, Cassini is scheduled to perform a brief burn of its main engine during the first super-close approach to the rings on Dec. 4. This maneuver is important for fine-tuning the orbit and setting the correct course to enable the remainder of the mission.

“This will be the 183rd and last currently planned firing of our main engine. Although we could still decide to use the engine again, the plan is to complete the remaining maneuvers using thrusters,” said Maize.

Saturn’s rings were named alphabetically in the order they were discovered. The narrow F ring marks the outer boundary of the main ring system. Credits: NASA/JPL-Caltech/Space Science Institute

To further prepare, Cassini will observe Saturn’s atmosphere during the ring-grazing phase of the mission to more precisely determine how far it extends above the planet. Scientists have observed Saturn’s outermost atmosphere to expand and contract slightly with the seasons since Cassini’s arrival. Given this variability, the forthcoming data will be important for helping mission engineers determine how close they can safely fly the spacecraft.

Source: NASA.gov news release used in accordance with the NASA Media Guidelines.

Next, Check Out: