With legal pot comes a problem: How do we weed out impaired drivers?

By Igor Grant, University of California, San Diego.

On Nov. 8 voters in California, Maine, Massachusetts and Nevada approved ballot measures to legalize recreational cannabis. It is now legal in a total of eight states. And this creates potential problems for road safety. How do we determine who’s impaired and who’s not?

The effects of alcohol vary based on a person’s size and weight, metabolism rate, related food intake and the type and amount of beverage consumed. Even so, alcohol consumption produces fairly straightforward results: The more you drink, the worse you drive. Factors like body size and drinking experience can shift the correlation slightly, but the relationship is still pretty linear, enough to be able to confidently develop a blood alcohol content scale for legally determining drunk driving. Not so with marijuana.

We have a reliable and easy-to-use test to measure blood alcohol concentration. But right now we don’t have a fast, reliable test to gauge whether someone is too doped up to drive.

The need is urgent. The 2014 National Survey on Drug Use and Health reported that 10 million Americans said they had driven while under the influence of illicit drugs during the previous year. Second to alcohol, marijuana is the drug most frequently found in drivers involved in crashes.

But how do you know when you’re too stoned to drive? How can police tell?

My colleagues and I at the Center for Medicinal Cannabis Research at UC San Diego have received a US$1.8 million grant from the state of California to gather data about dosages, time and what it takes to impair driving ability – and then create a viable roadside sobriety test for cannabis.

A man smokes a marijuana joint at a party celebrating weed on April 20, 2016, in Seattle.
AP photos/Elaine Thompson

Testing for marijuana isn’t like a BAC test

Alcohol and marijuana both affect mental function, which means they can both impair driving ability.

Some elements of cannabis use are similar. Potency of strain affects potency of effect. Marijuana and its active ingredient – THC – alter brain function, affecting processes like attention, perception and coordination, which are necessary for a complex behavior like driving a car.

Regular users tend to become accustomed to the drug, particularly in terms of cognitive disruption and psychomotor skills. Because they are accustomed to the drug’s effects, they may function better than naïve users at the same dose.

Smoked marijuana produces a rapid spike in THC concentrations in the blood, followed by a decline as the drug redistributes to tissues, including the brain. The psychological impact depends upon a host of variables.

Let’s say, for example, a person smokes a joint and gets into his car. THC levels in his blood are likely to be quite high, but his cognitive functions and driving skills may not yet be impaired because the drug hasn’t yet significantly impacted the brain. But another driver might use cannabis but wait a few hours before getting behind the wheel. Her THC blood levels are now quite low, but she’s impaired because drug concentrations remain high in her brain.

Six states have set limits for THC in drivers’ blood, and nine other states have zero-tolerance laws, making any presence of THC in the driver’s blood illegal.

But unlike alcohol, evidence of cannabis use can linger long after its effects have worn off, particularly if people are regular users or consume a lot in a single episode. Among chronic users, it may not clear out of their systems for weeks. Therefore, unlike blood alcohol concentration, the presence and amount of different cannabis compounds in the blood or urine do not necessarily tell you whether the driver is impaired due to marijuana.

This is why a quick and simple assessment of whether someone is driving under the influence is so difficult. Yet such an assessment is a necessity for any effective roadside sobriety test.

To create a fast and easy-to-use test, there are a few questions about marijuana that our team at UC San Diego has to answer.

How high is too high to drive?
Ignition key image via www.shutterstock.com.

How much marijuana is too much to drive?

Current blood, breath, saliva and urine tests have been challenged as unreliable in court, though they are used to prove that someone has ingested marijuana.

In California and elsewhere, the primary assessment of impairment is the law enforcement officer’s field sobriety test.

One specific challenge is determining the relationship of dose or potency, and time since consumption, to impairment. While there has been some research in this area, the studies have not comprehensively examined the issues of dose and time course of impairment. The lack of data is one of the big reasons for our work now.

Later this year, we will begin controlled experiments in which participants will smoke varying amounts of cannabis in varying strengths and then operate a driving simulator. We’ll look for impairment effects in the immediate period after exposure and over subsequent hours.

We’ll also investigate the relationship between THC and other cannabinoid levels in blood to different measures, such as saliva or exhaled breath. Roadside blood sampling is impractical, but perhaps there is an easier, reliable indicator of marijuana exposure.

Finally, there is the goal of finding the best way to assess impairment. A driver suspected of being high might be asked to follow with his finger a square moving around on a device’s screen, a test of critical tracking. Or she might perform tablet tests that more validly simulate the demands of driving.

The idea is to determine whether and how these measures – drug intake, biomarkers, objective cognitive performance and driving ability – correlate to produce an evidence-based, broadly applicable assessment standard and tool.

Igor Grant, Professor and Chair of the Department of Psychiatry and Director of the Center for Medicinal Cannabis Research, University of California, San Diego

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

The Future of Electronics is Light

By Arnab Hazari, University of Michigan.

For the past four decades, the electronics industry has been driven by what is called “Moore’s Law,” which is not a law but more an axiom or observation. Effectively, it suggests that electronic devices double in speed and capability about every two years. And indeed, every year tech companies come up with new, faster, smarter and better gadgets.

Specifically, Moore’s Law, as articulated by Intel cofounder Gordon Moore, is that “The number of transistors incorporated in a chip will approximately double every 24 months.” Transistors – tiny electrical switches – are the fundamental units that drive all the electronic gadgets we can think of. As they get smaller, they also get faster and consume less electricity to operate.

In the technology world, one of the biggest questions of the 21st century is: How small can we make transistors? If there is a limit to how tiny they can get, we might reach a point at which we can no longer continue to make smaller, more powerful, more efficient devices. It’s an industry with more than US$200 billion in annual revenue in the U.S. alone. Might it stop growing?

Getting close to the limit

At present, companies like Intel are mass-producing transistors just 14 nanometers across – only about seven times wider than a DNA molecule, which is roughly 2 nanometers in diameter. They’re made of silicon, the second-most abundant element in the Earth’s crust. Silicon’s atomic size is about 0.2 nanometers.

Today’s transistors are about 70 silicon atoms wide, so the possibility of making them even smaller is itself shrinking. We’re getting very close to the limit of how small we can make a transistor.
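The “about 70 atoms” figure follows directly from dividing the quoted transistor width by silicon’s approximate atomic size. A minimal sketch of that arithmetic, using the article’s round numbers rather than precise physical constants:

```python
# Rough scale arithmetic behind the figures above.
# Both values are the approximations quoted in the text.

transistor_width_nm = 14   # current mass-produced feature size, in nanometers
silicon_atom_nm = 0.2      # approximate size of one silicon atom

atoms_across = transistor_width_nm / silicon_atom_nm
print(round(atoms_across))  # -> 70 silicon atoms across one transistor
```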

At present, transistors use electrical signals – electrons moving from one place to another – to communicate. But if we could use light, made up of photons, instead of electricity, we could make transistors even faster. My work, on finding ways to integrate light-based processing with existing chips, is part of that nascent effort.

Putting light inside a chip

A transistor has three parts; think of them as parts of a digital camera. First, information comes in through the lens, analogous to a transistor’s source. Then it travels through a channel, like the path from the image sensor to the wires inside the camera. And lastly, the information is stored on the camera’s memory card – the analogue of a transistor’s “drain,” where the information ultimately ends up.

Light waves can have different frequencies. maxhurtz

Right now, all of that happens by moving electrons around. To substitute light as the medium, we actually need to move photons instead. Subatomic particles like electrons and photons travel in a wave motion, vibrating up and down even as they move in one direction. The length of each wave depends on what it’s traveling through.

In silicon, the most efficient wavelength for photons is 1.3 micrometers. This is very small – a human hair is around 100 micrometers across. But electrons in silicon are even smaller – with wavelengths 50 to 1,000 times shorter than photons.

This means the equipment to handle photons needs to be bigger than the electron-handling devices we have today. So it might seem like it would force us to build larger transistors, rather than smaller ones.
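To put those scales side by side, here is a small sketch converting the article’s figures into comparable units; the 50-to-1,000-fold range for electron wavelengths is taken from the text above, not from measured values:

```python
# Scale comparison using the approximate figures quoted above.

photon_wavelength_um = 1.3   # most efficient photon wavelength in silicon, micrometers
hair_width_um = 100          # typical human hair width, for scale

# Electron wavelengths are quoted as 50 to 1,000 times shorter than the photon's:
electron_longest_nm = photon_wavelength_um / 50 * 1000     # longest electron wavelength, nm
electron_shortest_nm = photon_wavelength_um / 1000 * 1000  # shortest electron wavelength, nm

print(round(hair_width_um / photon_wavelength_um))          # ~77 photon wavelengths per hair width
print(round(electron_shortest_nm, 1), round(electron_longest_nm, 1))  # roughly 1.3 to 26 nm
```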

However, for two reasons, photonics could let us keep chips the same size and deliver more processing power, shrink chips while providing the same power or, potentially, both. First, a photonic chip needs only a few light sources, generating photons that can then be directed around the chip with very small lenses and mirrors.

And second, light is much faster than electrons. On average photons can travel about 20 times faster than electrons in a chip. That means computers that are 20 times faster, a speed increase that would take about 15 years to achieve with current technology.

Scientists have demonstrated progress toward photonic chips in recent years. A key challenge is making sure the new light-based chips can work with all the existing electronic chips. If we’re able to figure out how to do it – or even to use light-based transistors to enhance electronic ones – we could see significant performance improvement.

When can I get a light-based laptop or smartphone?

We still have some way to go before the first consumer device reaches the market, and progress takes time. The transistor’s precursor, the vacuum tube triode, was invented in 1907; vacuum tubes were typically between one and six inches tall (on the order of 100 mm). By 1947, the first solid-state transistor – the ancestor of the type that’s now just 14 nanometers across – was invented, and it was 40 micrometers long (about 3,000 times longer than the current one). And in 1971, the first commercial microprocessor (the powerhouse of any electronic gadget) was 1,000 times bigger than today’s when it was released.
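The size ratios in that historical progression can be sanity-checked with simple unit conversions; a quick sketch using the approximate lengths above:

```python
# Verifying the approximate size ratios in the historical progression above.

tube_height_mm = 100       # typical vacuum tube, early 1900s, in millimeters
transistor_1947_um = 40    # first solid-state transistor, 1947, in micrometers
transistor_now_nm = 14     # current feature size, in nanometers

# 1947 transistor vs. today's 14 nm device:
ratio_1947_to_now = transistor_1947_um * 1000 / transistor_now_nm
print(round(ratio_1947_to_now))   # -> 2857, i.e. roughly 3,000 times longer

# Vacuum tube vs. 1947 transistor:
ratio_tube_to_1947 = tube_height_mm * 1000 / transistor_1947_um
print(round(ratio_tube_to_1947))  # -> 2500 times taller
```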

The vast research efforts and the consequent evolution seen in the electronics industry are only starting in the photonics industry. As a result, current electronics can perform tasks that are far more complex than the best current photonic devices. But as research proceeds, light’s capability will catch up to, and ultimately surpass, electronics’ speeds. However long it takes to get there, the future of photonics is bright.

Arnab Hazari, Ph.D. student in Electrical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.


Climate change is affecting all life on Earth – and that’s not good news for humanity

By Brett Scheffers, University of Florida and James Watson, The University of Queensland.

More than a dozen authors from different universities and nongovernmental organizations around the world have concluded, based on an analysis of hundreds of studies, that almost every aspect of life on Earth has been affected by climate change.

In more scientific parlance, we found in a paper published in Science that genes, species and ecosystems now show clear signs of impact. These responses to climate change include changes in species’ genomes (genetics); their shapes, colors and sizes (morphology); their abundance; and where they live and how they interact with each other (distribution). The influence of climate change can now be detected on the smallest, most cryptic processes all the way up to entire communities and ecosystems.

Some species are already beginning to adapt. The color of some animals, such as butterflies, is changing because dark-colored butterflies heat up faster than light-colored butterflies, which have an edge in warmer temperatures. Salamanders in eastern North America and cold-water fish are shrinking in size because being small is more favorable when it is hot than when it is cold. In fact, there are now dozens of examples globally of cold-loving species contracting and warm-loving species expanding their ranges in response to changes in climate.

All of these changes may seem small, even trivial, but when every species is affected in different ways, these changes add up quickly and entire ecosystems can collapse. This is not theoretical: Scientists have observed that the cold-loving kelp forests of southern Australia, Japan and the northwest coast of the U.S. have not only collapsed from warming, but their re-establishment has been blocked by replacement species better adapted to warmer waters.

Flood of insights from ancient flea eggs

Researchers are using many techniques, including one called resurrection ecology, to understand how species are responding to changes in climate by comparing the past to current traits of species. And a small and seemingly insignificant organism is leading the way.

One hundred years ago, a water flea (genus Daphnia), a small creature the size of a pencil tip, swam in a cold lake of the upper northeastern U.S. looking for a mate. This small female crustacean later laid a dozen or so eggs in hopes of doing what Mother Nature intended – that she reproduce.

Water flea (Daphnia barbata). Photo credit: Joachim Mergeay

Her eggs are unusual in that they have a tough, hardened coat that protects them from lethal conditions such as extreme cold and drought. These eggs have evolved to remain viable for extraordinary periods of time, and so they lie on the bottom of the lake awaiting the perfect conditions to hatch.

Now fast forward a century: A researcher interested in climate change has dug up these eggs, now buried under layers of sediment that accumulated over the years. She takes them to her lab and, amazingly, they hatch, allowing her to show one thing: that individuals from the past differ in form and function from those living in a much hotter world today. There is evidence of responses at every level, from genetics to physiology and up through the community level.

By combining numerous research techniques in the field and in the lab, we now have a definitive look at the breadth of climate change impacts for this animal group. Importantly, this example offers the most comprehensive evidence of how climate change can affect all processes that govern life on Earth.

From genetics to dusty books

The study of water fleas and resurrection ecology is just one of many ways that thousands of geneticists, evolutionary scientists, ecologists and biogeographers around the world are assessing if – and how – species are responding to current climate change.

Other state-of-the-art tools include drills that can sample gases trapped several miles beneath the Antarctic ice sheet to document past climates and sophisticated submarines and hot air balloons that measure the current climate.

Warmer temperatures are already affecting some species in discernible ways. Sea turtles hatching on dark sands, for instance, are more likely to be female because of the higher temperatures.
levork/flickr, CC BY-SA

Researchers are also using modern genetic sampling to understand how climate change is influencing the genes of species, while resurrection ecology helps understand changes in physiology. Traditional approaches such as studying museum specimens are effective for documenting changes in species morphology over time.

Some rely on unique geological and physical features of the landscape to assess climate change responses. For example, dark sand beaches are hotter than light sand beaches because black color absorbs large amounts of solar radiation. This means that sea turtles breeding on dark sand beaches are more likely to be female because of a process called temperature-dependent sex determination. So with higher temperatures, climate change will have an overall feminizing effect on sea turtles worldwide.

Wiping the dust off of many historical natural history volumes from the forefathers and foremothers of natural history, who first documented species distributions in the late 1800s and early 1900s, also provides invaluable insights by comparing historical species distributions to present-day distributions.

For example, Joseph Grinnell’s extensive field surveys in early 1900s California led to the study of how the range of birds there shifted based on elevation. In mountains around the world, there is overwhelming evidence that all forms of life, such as mammals, birds, butterflies and trees, are moving up towards cooler elevations as the climate warms.

How this spills over onto humanity

So what lessons can be taken from a climate-stricken nature and why should we care?

This global response occurred with just a 1 degree Celsius increase in temperature since preindustrial times. Yet the most sensible forecasts suggest we will see an additional 2-3 degrees Celsius of warming over the next 50 to 100 years unless greenhouse gas emissions are rapidly cut.

All of this spells big trouble for humans because there is now evidence that the same disruptions documented in nature are also occurring in the resources that we rely on such as crops, livestock, timber and fisheries. This is because these systems that humans rely on are governed by the same ecological principles that govern the natural world.

Examples include reduced crop and fruit yields, increased consumption of crops and timber by pests and shifts in the distribution of fisheries. Other potential results include the decline of plant-pollinator networks and pollination services from bees.

Bleached coral, a result of warmer ocean temperatures. Corals provide valuable services to people who rely on healthy fisheries for food.
Oregon State University, CC BY-SA

Further impacts on our health could stem from declines in natural systems such as coral reefs and mangroves, which provide natural defense to storm surges, expanding or new disease vectors and a redistribution of suitable farmland. All of this means an increasingly unpredictable future for humans.

This research has strong implications for global climate change agreements, which aim to keep total warming to 1.5 degrees Celsius. If humanity wants our natural systems to keep delivering the nature-based services we rely on so heavily, now is not the time for nations like the U.S. to step away from global climate change commitments. Indeed, if this research tells us anything, it is that it is absolutely necessary for all nations to step up their efforts.

Humans need to do what nature is trying to do: recognize that change is upon us and adapt our behavior in ways that limit serious, long-term consequences.

Brett Scheffers, Assistant Professor, University of Florida and James Watson, Associate Professor, The University of Queensland

This article was originally published on The Conversation. Read the original article.


Here’s why ‘baby talk’ is good for your baby

By Catherine E. Laing, Duke University.

When we read, it’s very easy for us to tell individual words apart: In written language, spaces are used to separate words from one another. But this is not the case with spoken language – speech is a stream of sound, from which the listener has to separate words to understand what the speaker is saying.

This task isn’t difficult for adults who are familiar with the words of their language. But what about babies, who have almost no linguistic experience? How do they even begin to separate, or “segment,” individual words from the stream of language that they hear all around them all of the time?

As a researcher interested in early language production, I am fascinated by how babies begin acquiring knowledge of their language, and how parents and other caregivers can support them in this task.

Babies first start learning language by listening not to individual words, but to the rhythm and intonation of the speech stream – that is, the changes between high and low pitch, and the rhythm and loudness of syllables in speech. Parents often exaggerate these features of the language when talking with their infants, and this is important for early language learning.

Nevertheless, some may feel that using this exaggerated speech style is condescending, or unrealistic in comparison to adult speech, and as such does not set babies off to a good start.

Is “baby talk” really good for babies?

How babies learn

Even before a baby is born, the process of learning language has already begun. In the third trimester of pregnancy, when the infant’s ears are sufficiently developed, the intonation patterns of the mother’s speech are transmitted through the fluids in the womb.

Babies’ learning starts in the womb itself. brett jordan, CC BY

This is thought to be like listening to someone talking in a swimming pool: It’s difficult to make out the individual sounds, but the rhythm and intonation are clear. This has an important effect on language learning. By the time an infant is born, she already has a preference for her mother’s language. At this stage the infant is able to identify language through its intonation patterns.

For example, French and Russian speakers place emphasis on different parts of a word or sentence, so the rhythm of these two languages sounds different. Even at four days old, babies can use this information to distinguish their own language from an unfamiliar other language.

This means that the newly born infant is ready to start learning the language that surrounds her; she already has an interest in her mother’s language, and as her attention is drawn to this language she begins to learn more about the features and patterns within it.

Using a singsong voice

Intonation is also very important to infants’ language development in the first months of life. Adults tend to speak to babies using a special type of register that we know as “baby talk” or “motherese.” This typically involves a higher pitch than regular speech, with wide, exaggerated intonation changes.

Research has shown that babies prefer listening to this exaggerated “baby talk” style of speech over typical adult-like speech: They pay more attention when a parent’s speech has a higher pitch and a wider pitch range compared to adult-like speech with less exaggerated pitch features.

For example, a mother might say the word “baby” in an exaggerated “singsong” voice, which holds an infant’s attention longer than it would in a monotonal adult-style voice. Words produced in this way also stand out more from the speech stream, making it easier for babies to pick out smaller chunks of language.

Across the vast stream of language that babies hear around them every day, these distinctive pitch features in baby talk help babies to “tune in” to a small part of the input, making language processing a more manageable task.

How infants process speech

Baby talk tends to be spoken at a slower rate, and key words often appear at the end of a phrase. For example, the sentence, “Can you see the doggie?” is preferable to “The doggie is eating a bone”: Babies will learn the word “doggie” more easily when it appears at the end of the phrase.

For the same reasons, words produced in isolation – separated from the rest of the phrase by pauses – are also easier for infants to learn. Research has shown that the first words that infants produce are often those that are heard most frequently in isolation in early development. Babies hear isolated words such as “bye bye” and “mummy” very frequently, and these are often some of the earliest words that they learn to produce.

How do babies learn language?
Dean Wissing, CC BY-SA

When a word is produced separately from running speech, the infant does not have to segment it from a stream of sounds, and so it is easier to determine where the word begins and where it ends.

Furthermore, infants have been found to recognize words more easily when they are produced more slowly than in typical adult speech. This is because when speech is slower, it is easier for infants to pick out the individual words and sounds, which may be produced more clearly than in faster speech. In addition, infants process language much more slowly than adults, and so it is believed that slower speech gives infants more time to process what they hear.

How reduplication helps

Word repetition is also beneficial in infants’ early word learning. Infants’ first words tend to be those which are produced most frequently in caregiver speech, such as “mummy,” “bottle” and “baby.”

Words with reduplication are easier for babies to learn.
Sellers Patton, CC BY

The more often an infant hears a word, the easier it is to segment it from the speech stream. The infant develops a stronger mental representation of frequent words. Eventually she will be more likely to produce frequently heard words with fewer errors.

Furthermore, reduplicated words – that is, words which contain repetition, such as “woof woof” or “quack quack” – are typical of baby talk, and are known to have an advantage for early word learning.

Even newborn infants show stronger brain activation when they hear words that contain reduplication. This suggests that there may be a strong advantage for these words in human language processing. This is supported by evidence from slightly older infants, who have been found to learn reduplicated words more easily than non-reduplicated words.

How ‘baby talk’ helps infants

So, baby talk is not just a way of engaging with an infant on a social level – it has important implications for language learning from the very first moments of a newborn’s life. Features of baby talk present infants with information about their ambient language, and allow them to break the speech stream up into smaller chunks.

While baby talk is not essential to guiding infants’ language learning, the use of pitch modulations, repetition and slower speech all allow infants to process the patterns in their language more easily.

Speaking in such an exaggerated style might not seem conducive to language learning in the longer term, but ample research shows that this speech style actually provides an optimum input for language learning from the very first days of an infant’s life.

Catherine E. Laing, Postdoctoral Associate, Duke University

This article was originally published on The Conversation. Read the original article.


The next frontier in reproductive tourism? Genetic modification

By Rosa Castro, Duke University.

The birth of the first baby born using a technique called mitochondrial replacement, which uses DNA from three people to “correct” an inherited genetic mutation, was announced on Sept. 27.

Mitochondrial replacement or donation allows women who carry mitochondrial diseases to avoid passing them on to their child. These diseases can range from mild to life-threatening. No therapies exist and only a few drugs are available to treat them.

There are no international rules regulating this technique. Just one country, the United Kingdom, explicitly regulates the procedure. It’s a similar situation with other assisted reproductive techniques. Some countries permit these techniques and others don’t.

I study the intended and unintended consequences of regulating, prohibiting or authorizing the use of new technologies. One of these unintended consequences is “medical tourism,” where people travel from their home countries to places where practices such as commercial surrogacy or embryo selection are allowed.

Medical tourism for assisted reproductive technologies raises a host of legal and ethical questions. While new reproductive technologies, like mitochondrial replacement, promise to bring significant benefits, the absence of regulations means that some of these questions, including those related to safety and risks, are unanswered, even as people are starting to use them.

Mitochondria power our cells.
Mitochondrion image via www.shutterstock.com.

How does mitochondrial replacement work?

We each inherit our mitochondria – which provide the energy our cells need to function, along with the tiny fraction of our DNA they contain – only from our mothers. Some of that mitochondrial DNA might be defective, carrying mutations or errors that can lead to mitochondrial disease.

The mother of the baby born using this technique carried one of these diseases. The disease, known as Leigh Syndrome, is a neurological disorder that typically leads to death during childhood. Before having this baby, the couple had two children who died as a result of the disease.

Mitochondrial replacement is done in a lab, as part of in vitro fertilization. It works by “substituting” the defective mitochondria of the mother’s egg with healthy mitochondria obtained from a donor. The child is genetically related to the mother, but has the donor’s mitochondrial DNA.

It involves three germ cells: an egg from the mother, an egg from a healthy donor and the sperm from the father. While the term “three-parent” child is often used in news stories, it is a highly controversial one.

To some, the tiny fraction of DNA contained in the mitochondria provided by a donor is not sufficient to make the donor a “second mother.” The U.K., the only country that has regulated the technique, takes this position. Ultimately, the DNA replaced is a tiny fraction of a person’s genes, and it is unrelated to the characteristics that we associate with genetic kinship.

There is some discussion as to whether mitochondrial replacement is a so-called “germline modification” – a genetic modification that can be inherited. Many countries, including the U.K., have either banned or tightly restricted technologies that could alter germ cells and cause inherited changes affecting future generations. But a great number of countries, including Japan and India, have ambiguous or unenforceable regulations about germline modification.

Mitochondrial replacement results in a germline change, but that change is passed to future generations only if the child is a girl. She would pass the donor’s mitochondrial DNA to her offspring, and in turn her female descendants will pass it to their children. If the child is a boy, he wouldn’t pass the mitochondrial DNA on to his offspring.

Because the mitochondrial modification is only heritable in girls, the U.S. National Academies of Science recently recommended that use of this technique be limited to male embryos, in which the change is not inheritable. The U.K. considered but then rejected this approach.

A thorny ethical and regulatory debate

In the U.S., the FDA claimed jurisdiction to regulate mitochondrial replacement but then halted further discussions. A rider included in the 2016 Congressional Appropriations Act precludes the FDA from considering mitochondrial replacement.

While the technique has been given the green light in the U.K., the nation’s Human Fertilisation and Embryology Authority is gathering more safety-related information before granting the first licenses for mitochondrial replacement to clinics.

Experts have predicted that once the authority starts granting authorizations, people seeking mitochondrial replacement will go to the U.K.

At the moment, with no global standard dictating the use of mitochondrial replacement, couples (and experts willing to use these technologies) are going to countries where the procedure is allowed.

This has happened with other technologies such as embryo selection and commercial surrogacy, with patients traveling abroad to seek out assisted reproduction services or technologies that are either prohibited, unavailable, of lower quality or more expensive in their own countries.

The first documented case of successful mitochondrial replacement involved U.S. physicians assisting a Jordanian couple in Mexico. Further reports of the use of mitochondrial replacement in Ukraine and China have followed.

In this Nov. 3, 2015 photo, a newborn baby is transferred to an ambulance at the Akanksha Clinic, one of the most organized clinics in the surrogacy business, in Anand, India.
Allison Joyce/AP

The increasing trend of medical tourism has been followed by sporadic scandals and waves of tighter regulations in countries such as India, Nepal and Thailand, which have been leading destinations of couples seeking assisted reproduction services.

Intended parents and children born with the help of assisted reproduction outside of their home countries have faced problems related to family ties, citizenship and their relationship with donors – especially with the use of commercial surrogacy.

Mitochondrial replacement and new gene editing technologies add further questions related to the safety and long-term effects of these procedures.

Gene modification complicates reproductive tourism

Mitochondrial replacement and technologies such as gene-editing with the use of CRISPR-CAS9 that create germline modifications are relatively new. Many of the legal and ethical questions they raise have yet to be answered.

What if the children born as a result of these techniques suffer unknown adverse effects? And could these technologies affect the way in which we think about identity, kinship and family ties in general? One technique to replace mutated mitochondria involves the creation of embryos that will later be discarded. How should the use and disposal of embryos be regulated? What about the interests of the egg donors? Should they be paid?

Some of these problems could be avoided through a solid regulatory system in the U.S. and other countries. But as long as patients continue to seek medical treatments in “havens” for ethically dubious or risky procedures, many of these problems will persist.

Regulatory authorities around the world are debating how to better regulate these genetic modification technologies. Governments need to start considering not only the ethical and safety effects of their choices but also how these choices drive medical tourism.

Rosa Castro, Postdoctoral Associate in Science and Society, Duke University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Cassini Getting Set for Dramatic “Ring-Grazing Orbits” of Saturn [Video]

A thrilling ride is about to begin for NASA’s Cassini spacecraft. Engineers have been pumping up the spacecraft’s orbit around Saturn this year to increase its tilt with respect to the planet’s equator and rings. And on Nov. 30, following a gravitational nudge from Saturn’s moon Titan, Cassini will enter the first phase of the mission’s dramatic endgame.

Launched in 1997, Cassini has been touring the Saturn system since arriving there in 2004 for an up-close study of the planet, its rings and moons. During its journey, Cassini has made numerous dramatic discoveries, including a global ocean within Enceladus and liquid methane seas on Titan.

Between Nov. 30 and April 22, Cassini will circle high over and under the poles of Saturn, diving every seven days — a total of 20 times — through the unexplored region at the outer edge of the main rings.

First Phase in Dramatic Endgame for Long-Lived Cassini Spacecraft

“We’re calling this phase of the mission Cassini’s Ring-Grazing Orbits, because we’ll be skimming past the outer edge of the rings,” said Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory, Pasadena, California. “In addition, we have two instruments that can sample particles and gases as we cross the ringplane, so in a sense Cassini is also ‘grazing’ on the rings.”

On many of these passes, Cassini’s instruments will attempt to directly sample ring particles and molecules of faint gases that are found close to the rings. During the first two orbits, the spacecraft will pass directly through an extremely faint ring produced by tiny meteors striking the two small moons Janus and Epimetheus. Ring crossings in March and April will send the spacecraft through the dusty outer reaches of the F ring.

“Even though we’re flying closer to the F ring than we ever have, we’ll still be more than 4,850 miles (7,800 kilometers) distant. There’s very little concern over dust hazard at that range,” said Earl Maize, Cassini project manager at JPL.

The F ring marks the outer boundary of the main ring system; Saturn has several other, much fainter rings that lie farther from the planet. The F ring is complex and constantly changing: Cassini images have shown structures like bright streamers, wispy filaments and dark channels that appear and develop over mere hours. The ring is also quite narrow — only about 500 miles (800 kilometers) wide. At its core is a denser region about 30 miles (50 kilometers) wide.

So Many Sights to See

Cassini’s ring-grazing orbits offer unprecedented opportunities to observe the menagerie of small moons that orbit in or near the edges of the rings, including best-ever looks at the moons Pandora, Atlas, Pan and Daphnis.

Grazing the edges of the rings also will provide some of the closest-ever studies of the outer portions of Saturn’s main rings (the A, B and F rings). Some of Cassini’s views will have a level of detail not seen since the spacecraft glided just above them during its arrival in 2004. The mission will begin imaging the rings in December along their entire width, resolving details smaller than 0.6 mile (1 kilometer) per pixel and building up Cassini’s highest-quality complete scan of the rings’ intricate structure.

The mission will continue investigating small-scale features in the A ring called “propellers,” which reveal the presence of unseen moonlets. Because of their airplane propeller-like shapes, scientists have given some of the more persistent features informal names inspired by famous aviators, including “Earhart.” Observing propellers at high resolution will likely reveal new details about their origin and structure.

And in March, while coasting through Saturn’s shadow, Cassini will observe the rings backlit by the sun, in the hope of catching clouds of dust ejected by meteor impacts.

Preparing for the Finale

During these orbits, Cassini will pass as close as about 56,000 miles (90,000 kilometers) above Saturn’s cloud tops. But even with all their exciting science, these orbits are merely a prelude to the planet-grazing passes that lie ahead. In April 2017, the spacecraft will begin its Grand Finale phase.

After nearly 20 years in space, the mission is drawing near its end because the spacecraft is running low on fuel. The Cassini team carefully designed the finale to conduct an extraordinary science investigation before sending the spacecraft into Saturn to protect its potentially habitable moons.

During its grand finale, Cassini will pass as close as 1,012 miles (1,628 kilometers) above the clouds as it dives repeatedly through the narrow gap between Saturn and its rings, before making its mission-ending plunge into the planet’s atmosphere on Sept. 15. But before the spacecraft can leap over the rings to begin its finale, some preparatory work remains.

To begin with, Cassini is scheduled to perform a brief burn of its main engine during the first super-close approach to the rings on Dec. 4. This maneuver is important for fine-tuning the orbit and setting the correct course to enable the remainder of the mission.

“This will be the 183rd and last currently planned firing of our main engine. Although we could still decide to use the engine again, the plan is to complete the remaining maneuvers using thrusters,” said Maize.

Saturn’s rings were named alphabetically in the order they were discovered. The narrow F ring marks the outer boundary of the main ring system.
Credits: NASA/JPL-Caltech/Space Science Institute

To further prepare, Cassini will observe Saturn’s atmosphere during the ring-grazing phase of the mission to more precisely determine how far it extends above the planet. Scientists have observed Saturn’s outermost atmosphere to expand and contract slightly with the seasons since Cassini’s arrival. Given this variability, the forthcoming data will be important for helping mission engineers determine how close they can safely fly the spacecraft.

Source: NASA.gov news release used in accordance with the NASA Media Guidelines.


Young children are terrible at hiding – psychologists have a new theory why

By Henrike Moll, University of Southern California – Dornsife College of Letters, Arts and Sciences and Allie Khalulyan, University of Southern California – Dornsife College of Letters, Arts and Sciences.

Young children across the globe enjoy playing games of hide and seek. There’s something highly exciting for children about escaping someone else’s glance and making oneself “invisible.”

However, developmental psychologists and parents alike continue to witness that before school age, children are remarkably bad at hiding. Curiously, they often cover only their face or eyes with their hands, leaving the rest of their bodies visibly exposed.

For a long time, this ineffective hiding strategy was interpreted as evidence that young children are hopelessly “egocentric” creatures. Psychologists theorized that preschool children cannot distinguish their own perspective from someone else’s. Conventional wisdom held that, unable to transcend their own viewpoint, children falsely assume that others see the world the same way they themselves do. So psychologists assumed children “hide” by covering their eyes because they conflate their own lack of vision with that of those around them.

But research in cognitive developmental psychology is starting to cast doubt on this notion of childhood egocentrism. We brought young children between the ages of two and four into our Minds in Development Lab at USC so we could investigate this assumption. Our surprising results contradict the idea that children’s poor hiding skills reflect their allegedly egocentric nature.

Who can see whom?

Each child in our study sat down with an adult who covered her own eyes or ears with her hands. We then asked the child whether or not she could see or hear the adult, respectively. Surprisingly, children denied that they could. The same thing happened when the adult covered her own mouth: Now children denied that they could speak to her.

A number of control experiments ruled out that the children were confused or misunderstood what they were being asked. The results were clear: Our young subjects comprehended the questions and knew exactly what was asked of them. Their negative responses reflected their genuine belief that the other person could not be seen, heard, or spoken to when her eyes, ears, or mouth were obstructed. Despite the fact that the person in front of them was in plain view, they flat-out denied being able to perceive her. So what was going on?

It seems like young children consider mutual eye contact a requirement for one person to be able to see another. Their thinking appears to run along the lines of “I can see you only if you can see me, too” and vice versa. Our findings suggest that when a child “hides” by putting a blanket over her head, this strategy is not a result of egocentrism. In fact, children deem this strategy effective when others use it.

Built into their notion of visibility, then, is the idea of bidirectionality: Unless two people make eye contact, it is impossible for one to see the other. Contrary to egocentrism, young children simply insist on mutual recognition and regard.

An expectation of mutual engagement

Children’s demand for reciprocity demonstrates that they are not at all egocentric. Not only can preschoolers imagine the world as seen from another’s point of view; they even apply this capacity in situations where it’s unnecessary or leads to wrong judgments, such as when they are asked to report their own perception. These faulty judgments – saying that others whose eyes are covered cannot be seen – reveal just how much children’s perception of the world is colored by others.

The seemingly irrational way in which children try to hide from others and the negative answers they gave in our experiment show that children feel unable to relate to a person unless the communication flows both ways – not only from me to you but also from you to me, so we can communicate with each other as equals.

We are planning to investigate children’s hiding behavior directly in the lab and test if kids who are bad at hiding show more reciprocity in play and conversation than those who hide more skillfully. We would also like to conduct these experiments with children who show an atypical trajectory in their early development.

Children want to interact with the people around them.
Eye contact image via www.shutterstock.com.

Our findings underscore children’s natural desire and preference for reciprocity and mutual engagement between individuals. Children expect and strive to create situations in which they can be reciprocally involved with others. They want to encounter people who are not only looked at but who can return another’s gaze; people who not only listen but are also heard; and people who are not just spoken to but who can reply and thus enter a mutual dialogue.

At least in this respect, young children understand and treat other human beings in a manner that is not at all egocentric. On the contrary, their insistence on mutual regard is remarkably mature and can be considered inspirational. Adults may want to turn to these preschoolers as role models when it comes to perceiving and relating to other humans. These young children seem exquisitely aware that we all share a common nature as people who are in constant interaction with others.

Henrike Moll, Assistant Professor in Developmental Psychology, University of Southern California – Dornsife College of Letters, Arts and Sciences and Allie Khalulyan, Ph.D. Student in Developmental Psychology, University of Southern California – Dornsife College of Letters, Arts and Sciences

This article was originally published on The Conversation. Read the original article.


Eat Lots of Fiber or Microbes Will Eat Your Colon

It sounds like the plot of a 1950s science fiction movie: normal, helpful bacteria begin to eat their host from within, because they don’t get what they want.

But that’s exactly what happens when microbes inside the digestive system don’t get the natural fiber that they rely on for food.

Starved, they begin to munch on the natural layer of mucus that lines the gut, eroding it to the point where dangerous invading bacteria can infect the colon wall.

For a new study, researchers looked at the impact of fiber deprivation on the guts of specially raised mice. The mice were born and raised with no gut microbes of their own, then received a transplant of 14 bacteria that normally grow in the human gut. The researchers knew the full genetic signature of each strain, so they were able to track its activity over time.

Fiber, fiber, fiber

The findings, published in the journal Cell, have implications for understanding not only the role of fiber in a normal diet, but also the potential of using fiber to counter the effects of digestive tract disorders.

“The lesson we’re learning from studying the interaction of fiber, gut microbes, and the intestinal barrier system is that if you don’t feed them, they can eat you,” says Eric Martens, associate professor of microbiology at the University of Michigan Medical School.

Researchers used the gnotobiotic, or germ-free, mouse facility and advanced genetic techniques to determine which bacteria were present and active under different conditions. They studied the impact of diets with different fiber content—and those with no fiber. They also infected some of the mice with a bacterial strain that does to mice what certain strains of Escherichia coli can do to humans—cause gut infections that lead to irritation, inflammation, diarrhea, and more.

The result: the mucus layer stayed thick, and the infection didn’t take full hold in mice that received a diet that was about 15 percent fiber from minimally processed grains and plants. But when the researchers substituted a diet with no fiber in it, even for a few days, some of the microbes in their guts began to munch on the mucus.

They also tried a diet rich in prebiotic fiber – purified forms of soluble fiber similar to what some processed foods and supplements currently contain. This diet resulted in erosion of the mucus layer similar to that observed with no fiber at all.

The researchers also saw that the mix of bacteria changed depending on what the mice were being fed, even day by day. Some species of bacteria in the transplanted microbiome were more common—meaning they had reproduced more—in low-fiber conditions, others in high-fiber conditions.

And the four bacteria strains that flourished most in low-fiber and no-fiber conditions were the only ones that make enzymes that are capable of breaking down the long molecules called glycoproteins that make up the mucus layer.

In addition to looking at the mix of bacteria based on genetic information, the researchers could see which fiber-digesting enzymes the bacteria were making. They detected more than 1,600 different enzymes capable of degrading carbohydrates – a complexity similar to that of the normal human gut.

Mucus layer

Just like the mix of bacteria, the mix of enzymes changed depending on what the mice were being fed, with even occasional fiber deprivation leading to more production of mucus-degrading enzymes.

Images of the mucus layer, and the “goblet” cells of the colon wall that produce the mucus constantly, showed the layer was thinner the less fiber the mice received. While mucus is constantly being produced and degraded in a normal gut, the change in bacteria activity under the lowest-fiber conditions meant that the pace of eating was faster than the pace of production—almost like an overzealous harvesting of trees outpacing the planting of new ones.

When the researchers infected the mice with Citrobacter rodentium—the E. coli-like bacteria—they observed that these dangerous bacteria flourished more in the guts of mice fed a fiber-free diet. Many of those mice began to show severe signs of illness and lost weight.

When the scientists looked at samples of their gut tissue, they saw not only a much thinner or even patchy mucus layer – they also saw inflammation across a wide area. Mice that had received a fiber-rich diet before being infected also had some inflammation, but across a much smaller area.

“To make it simple, the ‘holes’ created by our microbiota while eroding the mucus serve as wide open doors for pathogenic micro-organisms to invade,” says former postdoctoral fellow Mahesh Desai, now a principal investigator at the Luxembourg Institute of Health.

The researchers will next look at the impact of different prebiotic fiber mixes, and of diets with intermittent natural fiber content over a longer period. They also want to look for biomarkers that could reveal the status of the mucus layer in human guts – such as the abundance of mucus-digesting bacteria strains – and to study the effect of low fiber on chronic diseases such as inflammatory bowel disease.

“While this work was in mice, the take-home message from this work for humans amplifies everything that doctors and nutritionists have been telling us for decades: Eat a lot of fiber from diverse natural sources,” says Martens.

“Your diet directly influences your microbiota, and from there it may influence the status of your gut’s mucus layer and tendency toward disease. But it’s an open question of whether we can cure our cultural lack of fiber with something more purified and easy to ingest than a lot of broccoli.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.


Confirmation bias: A psychological phenomenon that helps explain why pundits got it wrong

By Ray Nickerson, Tufts University.

As post mortems of the 2016 presidential election began to roll in, fingers started pointing to what psychologists call the confirmation bias as one reason many of the polls and pundits were wrong in their predictions of which candidate would end up victorious.

Confirmation bias is usually described as a tendency to notice or search out information that confirms what one already believes, or would like to believe, and to avoid or discount information that’s contrary to one’s beliefs or preferences. It could help explain why many election-watchers got it wrong: in the runup to the election, they saw only what they expected, or wanted, to see.

Psychologists put considerable effort into discovering how and why people sometimes reason in less than totally rational ways. The confirmation bias is one of the better-known of the biases that have been identified and studied over the past few decades. A large body of psychological literature reports how confirmation bias works and how widespread it is.

The role of motivation

Confirmation bias can appear in many forms, but for present purposes, we may divide them into two major types. One is the tendency, when trying to determine whether to believe something is true or false, to look for evidence that it is true while failing to look for evidence that it is false.

Imagine four cards on a table, each showing either a letter or a number on its visible side. Let’s say the cards show A, B, 1 and 2. Suppose you are asked which card or cards you would have to turn over in order to determine whether the following statement is true or false: If a card has A on its visible side, it has 1 on its other side. The correct answer is the card showing A and the one showing 2. But when people are given this task, a large majority choose to turn either the card showing A alone or both the card showing A and the one showing 1. Relatively few see the card showing 2 as relevant, even though finding A on its other side would prove the statement false. One possible explanation for people’s poor performance on this task is that they look for evidence that the statement is true and fail to look for evidence that it is false.
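The logic of the task can be made concrete in a short sketch (illustrative only; the card faces and the `can_falsify` helper are hypothetical stand-ins for the example above). It enumerates, for each visible face, whether any hidden side could falsify the rule “if A is visible, then 1 is on the back” – only such cards are worth turning over:

```python
# Illustrative sketch of the Wason card-selection task described above.
# A card is informative only if some possible hidden side would violate
# the rule "if a card shows A, its other side shows 1".

def can_falsify(visible, hidden_options):
    """Return True if any hidden side would make this card violate the rule."""
    for hidden in hidden_options:
        pair = {visible, hidden}
        # The rule is violated exactly when A is paired with something other than 1.
        if "A" in pair and "1" not in pair:
            return True
    return False

letters, numbers = ["A", "B"], ["1", "2"]
# Each visible face maps to the set of values its hidden side could hold.
cards = {"A": numbers, "B": numbers, "1": letters, "2": letters}

worth_turning = [face for face, backs in cards.items() if can_falsify(face, backs)]
print(worth_turning)  # → ['A', '2']
```

The sketch confirms the point in the text: only A and 2 can possibly reveal a counterexample, yet most people fixate on A and 1 – the confirming cards.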

Another type of confirmation bias is the tendency to seek information that supports one’s existing beliefs or preferences or to interpret data so as to support them, while ignoring or discounting data that argue against them. It may involve what is best described as case building, in which one collects data to lend as much credence as possible to a conclusion one wishes to confirm.

At the risk of oversimplifying, we might call the first type of bias unmotivated, inasmuch as it doesn’t involve the assumption that people are driven to preserve or defend their existing beliefs. The second type of confirmation bias may be described as motivated, because it does involve that assumption. It may go a step further than just focusing on details that support one’s existing beliefs; it may involve intentionally compiling evidence to confirm some claim.

It seems likely that both types played a role in shaping people’s election expectations.

A proper venue for leaving out conflicting evidence.
Clyde Robinson, CC BY

Case building versus unbiased analysis

An example of case building and the motivated type of confirmation bias is clearly seen in the behavior of attorneys arguing a case in court. They present only evidence that they hope will increase the probability of a desired outcome. Unless obligated by law to do so, they don’t volunteer evidence that’s likely to harm their client’s chances of a favorable verdict.

Another example is a formal debate. One debater attempts to convince an audience that a proposition should be accepted, while another attempts to show that it should be rejected. Neither wittingly introduces evidence or ideas that will bolster the adversary’s position.

In these contexts, it is proper for protagonists to behave in this fashion. We generally understand the rules of engagement. Lawyers and debaters are in the business of case building. No one should be surprised if they omit information likely to weaken their own argument. But case building occurs in contexts other than courtrooms and debating halls. And often it masquerades as unbiased data collection and analysis.

Where confirmation bias becomes problematic

One sees the motivated confirmation bias in stark relief in commentary by partisans on controversial events or issues. Television and other media remind us daily that events evoke different responses from commentators depending on the positions they’ve taken on politically or socially significant issues. Politically liberal and conservative commentators often interpret the same event and its implications in diametrically opposite ways.

Anyone who followed the daily news reports and commentaries regarding the election should be keenly aware of this fact and of the importance of political orientation as a determinant of one’s interpretation of events. In this context, the operation of the motivated confirmation bias makes it easy to predict how different commentators will spin the news. It’s often possible to anticipate, before a word is spoken, what specific commentators will have to say regarding particular events.

Here the situation differs from that of the courtroom or the debating hall in one very important way: Partisan commentators attempt to convince their audience that they’re presenting a balanced, factual – that is, unbiased – view. Presumably, most commentators truly believe they are unbiased and are responding to events as any reasonable person would. But the fact that different commentators present such disparate views of the same reality makes it clear that they cannot all be correct.

Reporters in the media center watched a presidential debate, but might have seen something different.
AP Photo/John Locher

Selective attention

Motivated confirmation bias expresses itself in selectivity: selectivity in the data one pays attention to and selectivity with respect to how one processes those data.

When one selectively listens only to radio stations, or watches only TV channels, that express opinions consistent with one’s own, one is demonstrating the motivated confirmation bias. When one interacts only with people of like mind, one is exercising the motivated confirmation bias. When one asks for critiques of one’s opinion on some issue of interest, but is careful to ask only people who are likely to give a positive assessment, one is doing so as well.

This presidential election was undoubtedly the most contentious of any in the memory of most voters, including most pollsters and pundits. Extravagant claims and counterclaims were made. Hurtful things were said. Emotions were much in evidence. Civility was hard to find. Sadly, “fallings out” within families and among friends have been reported.

The atmosphere was one in which the motivated confirmation bias would find fertile soil. There is little doubt that it did just that and little evidence that arguments among partisans changed many minds. That most pollsters and pundits predicted that Clinton would win the election suggests that they were seeing in the data what they had come to expect to see – a Clinton win.

None of this is to suggest that the confirmation bias is unique to people of a particular partisan orientation. It is pervasive. I believe it to be active independently of one’s age, gender, ethnicity, level of intelligence, education, political persuasion or general outlook on life. If you think you’re immune to it, it is very likely that you’ve neglected to consider the evidence that you’re not.

Ray Nickerson, Research Professor of Psychology, Tufts University

This article was originally published on The Conversation. Read the original article.


Understanding the four types of AI, from reactive robots to self-aware beings

By Arend Hintze, Michigan State University.

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play “Jeopardy!” well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the most optimal moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.
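
A purely reactive agent of this kind can be sketched in a few lines. The example below is hypothetical (not Deep Blue’s actual code, and `reactive_policy`, `legal_moves` and `evaluate` are illustrative names): the agent maps the current state directly to an action and keeps no memory between calls.

```python
# A minimal sketch of a purely reactive agent: it sees only the state
# as it is right now and scores each legal move's immediate outcome.
# Nothing from previous calls is stored or consulted.

def reactive_policy(state, legal_moves, evaluate):
    """Return the move whose immediate outcome scores best."""
    return max(legal_moves, key=lambda move: evaluate(state, move))

# Toy usage: a number-line "game" where higher positions score better.
state = 0
moves = [-1, +1, +2]
best = reactive_policy(state, moves, evaluate=lambda s, m: s + m)  # picks +2
```

Calling the function twice with the same state always yields the same move, which is exactly the predictability (and the limitation) described above.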

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
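
One standard way to “narrow the view” of a game-tree search is alpha-beta pruning, which abandons any branch the opponent would never allow. The toy sketch below illustrates the general technique, not Deep Blue’s actual implementation; the game tree and scores are invented.

```python
# Alpha-beta pruning: stop exploring branches that cannot change the
# final choice. `children` and `score` are caller-supplied callbacks.

def alphabeta(node, depth, alpha, beta, maximizing, children, score):
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, score))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent would never allow this line:
                break           # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, score))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy game tree: interior nodes list their children, leaves carry scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
```

Here the search never even looks at leaf "b2": once branch "b" is known to be worth at most 2, it cannot beat branch "a".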

Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.

They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
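
This kind of transient, limited memory can be sketched with a short rolling window of observations. The example is hypothetical (not any real self-driving stack): just enough recent history is kept to estimate another car’s speed, and older entries fall away automatically.

```python
from collections import deque

# "Limited memory": a short rolling window of timestamped positions,
# enough to estimate speed, with old observations discarded rather than
# saved into any long-term library of experience.

class TrackedObject:
    def __init__(self, window=2):
        self.history = deque(maxlen=window)  # old entries expire automatically

    def observe(self, t, position):
        self.history.append((t, position))

    def speed(self):
        if len(self.history) < 2:
            return None  # not enough of the recent past to tell
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

car = TrackedObject()
car.observe(t=0.0, position=0.0)
car.observe(t=1.0, position=15.0)  # meters -> 15 m/s over the window
```

Because the deque has a fixed maximum length, each new observation pushes an old one out: the machine uses the recent past but never accumulates it.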

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

This article was originally published on The Conversation. Read the original article.
