New research may offer support for the idea that schizophrenia is a sensory disorder and that individuals with the condition are impaired in their ability to process stimuli from the outside world.
The findings may also point to a new way to identify the disease at an early stage and before symptoms become acute.
Because one of the hallmarks of the disease is auditory hallucinations, such as hearing voices, researchers have long suspected a link between auditory processing and schizophrenia. The new study provides evidence that the filtering of incoming visual information, and of simple touch inputs, is also severely compromised in individuals with the condition.
“When we think about schizophrenia, the first things that come to mind are the paranoia, the delusions, the disorganized thinking,” says John Foxe, the chair of the University of Rochester Medical Center’s department of neuroscience and senior author of the study. “But there is increasing evidence that there is something fundamentally wrong with the way these patients hear, the way they feel things through their sense of touch, and in the way in which they see the environment.”
As reported in Translational Psychiatry, researchers conducted a series of experiments in which they presented visual and touch stimuli to 15 schizophrenia patients and 15 controls while recording the brain’s response via electrodes placed on the surface of the scalp. What scientists have known for years is that when encountering a series of inputs, such as successive flashes of light, the brain’s initial response is large and strong. However, as the flash is repeated the reaction quickly fades in intensity.
This response reduction is known as sensory “adaptation” and is an essential mechanism that enables the brain to filter out repetitive and irrelevant information. Researchers believe that adaptation allows the brain to free itself up to respond to new events and stimuli that may be more important.
The research team found that adaptation was substantially weaker in the patients with schizophrenia, and this was the case for both repeated visual stimulation and for repeated touch stimulation.
“If you can’t properly filter the information at the basic sensory input stage, then it is not too hard to imagine how the external world could begin to be experienced as bizarre and unreliable,” says Gizely Andrade, of the Albert Einstein College of Medicine and a coauthor of the study. “A fundamental aspect of the way our minds operate is that they can rely on the fact that the external world remains constant. If it doesn’t, then reality itself could become distorted.”
The team is hopeful that this discovery might lead to simple and basic measures of sensory adaptation that could be used to diagnose schizophrenia or identify individuals at risk of developing the condition before the disease has had a chance to fully establish itself.
“A key point with this study is that we find these dramatic differences in patients who are already suffering from full-blown schizophrenia,” says Foxe. “Schizophrenia is a disease that typically strikes during late adolescence or early adulthood, but what we also know is that long before a person has their first major psychotic episode, there are subtle changes occurring that precede the full manifestation of the disease. Our hope is that these new measures can allow us to pick up on these people before they ever become seriously ill.”
Additional coauthors of the study are from the Albert Einstein College of Medicine, and the Dublin Institute of Technology in Ireland. The National Institute of Child Health and Human Development supported the work, which is published in Translational Psychiatry.
Astronomers searching for planets using NASA’s Kepler telescope have found an extraordinary family of four planets whose orbits are so carefully timed that they provide long-term stability for their planetary system.
Each time the innermost planet (Kepler-223b) orbits the system’s star 3 times, the second-closest planet (Kepler-223c) orbits precisely 4 times. Thus, these two planets return to the same positions relative to each other and their host star.
Throughout the Kepler-223 system, the dance is much more elaborate.
“The orbital periods of the four planets of the Kepler-223 system have ratios of exactly 3 to 4, 4 to 6, and 6 to 8,” says Eric Ford, a professor of astronomy and astrophysics at Penn State and a member of the research team.
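The resonance chain can be checked with a few lines of arithmetic. The sketch below uses only the 3:4:6:8 relative periods quoted above (not the planets’ actual orbital periods in days) to show that each adjacent pair reduces to a small whole-number ratio, and that all four planets periodically return to the same relative positions:

```python
from fractions import Fraction
from math import lcm

# Relative orbital periods of the four Kepler-223 planets (3:4:6:8 chain).
periods = {"b": 3, "c": 4, "d": 6, "e": 8}

# Ratio of each adjacent pair, reduced to lowest terms.
names = list(periods)
ratios = [Fraction(periods[names[i]], periods[names[i + 1]])
          for i in range(len(names) - 1)]
print(ratios)  # [Fraction(3, 4), Fraction(2, 3), Fraction(3, 4)]

# Over one full cycle (the least common multiple of the periods),
# every planet completes a whole number of orbits, so all four
# return to the same positions relative to each other and the star.
cycle = lcm(*periods.values())            # 24 time units
orbits = {p: cycle // t for p, t in periods.items()}
print(orbits)  # {'b': 8, 'c': 6, 'd': 4, 'e': 3}
```

This whole-number bookkeeping is exactly what makes the configuration stable: the planets repeatedly meet at the same points in their orbits, so their mutual gravitational tugs reinforce rather than disrupt the pattern.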
All four of the puffy, gaseous planets are far more massive than Earth and orbit extremely close to their host star—closer than Mercury is to our sun. The scientists used data from NASA’s Kepler telescope to measure how much starlight each of the four planets blocks as it passes in front of the star, and to detect slight changes in each planet’s orbit. Combining observations from Kepler and the Keck Observatory, the team was able to infer the planets’ sizes and masses.
Their orbits raise the question of whether the gas giants in our solar system somehow escaped a similar configuration in the distant past.
“Exactly how and where planets form is an outstanding question in planetary science,” says Sean Mills, a graduate student at the University of Chicago and lead author of the study in Nature. “Our work essentially tests a model for planet formation for a type of planet we don’t have in our solar system.”
Because the orbital configuration is so different from the one in our solar system, Mills says, there’s a big debate about how such planets form, how they got there, and why our own system turned out as it did.
The orbital configuration of the solar system seems to have evolved since its birth 4.6 billion years ago. The four known planets of the much older Kepler-223 system, however, have maintained one orbital configuration for far longer.
The Kepler-223 system provides alternative scenarios for how planets form and migrate in a planetary system that is different from our own, says study coauthor Howard Isaacson, a research astronomer at the University of California, Berkeley, and member of the California Planet Search Team.
Thanks to observations of Kepler-223 and other exoplanetary systems, “We now know of systems that are unlike the sun’s solar system, with hot Jupiters, planets closer than Mercury or in between the size of Earth and Neptune, none of which we see in our solar system. Other types of planets are very common,” says Isaacson.
Some stages of planet formation can involve violent processes, but during other stages, planets can evolve from gaseous disks in a smooth, gentle way, which is probably what the sub-Neptune planets of Kepler-223 did, Mills says.
“We think that two planets migrate through this disk, get stuck, and then keep migrating together; find a third planet, get stuck, migrate together; find a fourth planet and get stuck,” Mills adds.
That process differs completely from the one that scientists believe led to the formation of Earth, Mercury, Venus, and Mars, which likely formed in their current orbital locations.
Earth formed from Mars- or moon-sized bodies smacking together, Mills says, a violent and chaotic process. When planets form this way their final orbital periods are not near a resonance.
But scientists suspect that the solar system’s larger, more distant planets of today—Jupiter, Saturn, Uranus, and Neptune—moved around substantially during their formation. They may have been knocked out of resonances that once resembled those of Kepler-223, possibly after interacting with numerous asteroids and small planets (planetesimals).
“These resonances are extremely fragile,” says Daniel Fabrycky, a University of Chicago astronomer and coauthor of the study. “If bodies were flying around and hitting each other, then they would have dislodged the planets from the resonance.”
But Kepler-223’s planets somehow managed to dodge this scattering of cosmic bodies.
Other processes, including tidal forces that flex the planets, also might cause resonance separation.
“Many of the multi-planet systems may start out in a chain of resonances like this, fragile as it is, meaning that those chains usually break on long timescales similar to those inferred for the solar system,” Fabrycky says.
Mills and Fabrycky’s Berkeley collaborators were able to determine the size and mass of the star by making precise measurements of its light using the high-resolution echelle spectrometer (HIRES) on the 10-meter Keck I telescope atop Mauna Kea in Hawaii.
“The spectrum revealed a star very similar in size and mass to the sun but much older—more than six billion years old,” Isaacson says. “You need to know the precise size of the star so you can do the dynamical and stability analysis, which involve estimates of the masses of the planets.”
NASA, the Alfred P. Sloan Foundation, and the Polish National Science Centre funded the study, now published in Nature.
There is considerable debate and disagreement among scientists over what to call a canid inhabiting the northeastern United States. In the course of this creature’s less than 100-year history, it has been variously called coyote, eastern coyote, coydog, Tweed wolf, brush wolf, new wolf, northeastern coyote and now coywolf, with nature documentaries highlighting recent genetic findings.
Recently, Roland Kays penned an interesting article in The Conversation concluding that “coywolf is not a thing,” and that it should not be considered for species status. Interestingly, and perhaps ironically, the beautiful light orangey-red canid in the cover picture of that article looks nothing like a western coyote and has striking observable characteristics of both coyotes and wolves, as well as dogs.
In a recent paper, my colleague Bill Lynn and I suggest that coywolf is the most accurate term for this animal and that it warrants new species status, Canis oriens, which literally means “eastern canid” in Latin. We based this on the fact that coywolves are physically and genetically distinct from their parental species of mainly western coyotes (Canis latrans) and eastern wolves (Canis lycaon). They also have smaller amounts of gray wolf (Canis lupus) and domestic dog (Canis familiaris) genes.
The eastern coyote/coywolf in a nutshell
Before I describe why the coywolf is unique, let’s get a quick snapshot of the animal we are discussing.
The emerging picture of the coywolf is that they have a larger home range than most western coyotes but smaller than wolves, at about 30 square kilometers (about 11 square miles). They also travel long distances daily (10-15 miles, or 16-24 kilometers) and eat a variety of food, including white-tailed deer, medium-sized prey such as rabbits and woodchucks, and small prey such as voles and mice. They are social, often living in families of three to five members.
In short, the coywolf has ecological and physical characteristics that can be seen on a continuum of coyote-like to wolf-like predators, but occupies an ecological niche that is closer to coyotes than wolves.
So why is coywolf a more accurate name?
Some argue that if the coywolf is predominantly coyote, then they should be called coyotes. Let’s analyze this claim.
I have previously found coywolves to be significantly different in body size from both western coyotes and eastern wolves. They are, however, closer to coyotes: eastern wolves are 61-71 percent heavier than same-sex coywolves, while coywolves are only 35-37 percent heavier than western coyotes.
Bill Lynn and I concluded that they are statistically different – both genetically and physically – from their parental species since the coywolf is about 60 percent coyote, 30 percent wolf, and 10 percent dog; thus, nearly 40 percent of this animal is not coyote. That, essentially, is why we recommend that they be classified as a new species, Canis oriens.
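The genetic argument comes down to simple arithmetic. A quick sketch, using only the composition figures reported above, makes the point explicit:

```python
# Genetic composition of the coywolf as reported in the paper.
ancestry = {"coyote": 0.60, "wolf": 0.30, "dog": 0.10}

# Sanity check: the fractions account for the whole genome.
assert abs(sum(ancestry.values()) - 1.0) < 1e-9

# Nearly 40 percent of the animal's ancestry is not coyote.
non_coyote = 1.0 - ancestry["coyote"]
print(f"{non_coyote:.0%}")  # 40%
```

If two-fifths of an animal’s genome traces to species other than the coyote, calling it simply “coyote” obscures a substantial part of its ancestry.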
Kays’ article stated that “coyotes” in the Northeast are mostly (60-84 percent) coyote, with lesser amounts of wolf (8-25 percent) and dog (8-11 percent). However, the values of 84 percent coyote and only 8 percent wolf came from a study (vonHoldt et al. 2011) that subsequent papers have largely discounted because eastern wolves were not adequately sampled in its analysis.
Thus, based on our analysis, the claim that coywolves are predominantly coyote is untrue. While they may be numerically closer in size and genetics to coyotes than wolves, they are clearly statistically divergent from both coyotes and wolves. Taken from a wolf-centric viewpoint, I can see that they seem more coyote-like than wolf-like, but it is important to realize that a large part of their background is not from coyotes.
The term coywolf uses the portmanteau method (i.e., a word formed by combining two other words) of naming, whereby the first word (coyote) of the combined two (coyote-wolf) is the more dominant or robust descriptor of that term. It does not suggest that this animal is equally or more wolf than coyote as has been suggested.
Furthermore, I believe that the terms coyote, eastern coyote and northeastern coyote undervalue the importance of the eastern wolf – the animals that interbred with western coyotes in Canada in the early 20th century to produce the coywolf – in the ancestry of this canid. This naming effectively discounts that, for example, one-third of the population’s mitochondrial DNA (C1 haplotype) is derived from the eastern wolf and another one-third (C9 haplotype) is not found in most nonhybridized western coyote populations but is found in eastern wolves.
Research has confirmed that all canids in the genus Canis can and do mate with other species (or canid types). This includes gray wolves mating with eastern wolves around the Great Lakes area, eastern wolves with gray wolves and western coyotes north and south/west of Algonquin Park in Ontario, respectively. Also, western coyotes mix with eastern wolves and coywolves, especially at the edge of their respective ranges.
Hybridization is a natural process that can be greatly accelerated by human modifications to the environment, like hunting and habitat destruction – two key ingredients that paved the way for the creation of the coywolf.
It’s worth noting that coyote populations in eastern North America continue to change. Indeed, we recently questioned if the generic term “eastern coyote” is even accurate or appropriate considering that colonizing “coyotes” in eastern North America are considerably different from each other.
Southeastern coyotes are more coyote-like than northeastern coyotes/coywolves; coyotes in the mid-Atlantic region show intermediate amounts of wolf intermixing, or introgression; and more typical western-type coyotes in the southeast have little wolf but some domestic dog admixture. Coywolves in the northeast, by comparison, are the most wolf-like.
There is also the possibility that coywolves in the northeast will eventually become genetically swamped by western coyote genes from the south and west. Eastern coyotes from the mid-Atlantic area, which are more coyote-like and less wolf-like, have recently contacted the coywolf in the west part of its range, which could affect the makeup of the populations in the eastern U.S.
Thus, it remains to be seen whether this entity will remain distinct, which could influence future discussions of its taxonomy.
Why does it all matter anyway?
In the long run, does it really matter what we call this animal?
Science, at its best, is self-correcting, and new science often leads one in new directions. As biologists, we are charged with accurately describing natural systems, and for this reason alone it is important that we accurately characterize (and even debate about) the systems that we are studying. The more I investigate the coywolf, the more I realize it is different from other canids, including western coyotes.
Perhaps the most important finding from our recent paper is that new species status, Canis oriens, is warranted for this cool creature. While there may be continued controversy over the simple naming scheme of this canid, the premises in this paper better explain why coywolf is an appropriate term to use moving forward.
Five vegetated reef islands in Solomon Islands have been lost to the sea. These islands ranged in size from one to five hectares and supported dense tropical vegetation that was at least 300 years old. Another island, Nuatambu, home to 25 families, has lost more than half of its habitable area, with 11 houses washed into the sea since 2011.
This is the first scientific evidence, published in Environmental Research Letters, that confirms the numerous anecdotal accounts from across the Pacific of the dramatic impacts of climate change on coastlines and people.
Previous studies of Pacific island shorelines, however, were conducted in areas with rates of sea level rise of 3-5 mm per year – broadly in line with the global average of 3 mm per year.
For the past 20 years, the Solomon Islands have been a hotspot for sea-level rise. Here the sea has risen at almost three times the global average, around 7-10 mm per year since 1993. This higher local rate is partly the result of natural climate variability.
These higher rates are in line with what we can expect across much of the Pacific in the second half of this century as a result of human-induced sea-level rise. Many areas will experience long-term rates of sea-level rise similar to that already experienced in Solomon Islands in all but the very lowest-emission scenarios.
Natural variations and geological movements will be superimposed on these higher rates of global average sea level rise, resulting in periods when local rates of rise will be substantially larger than that recently observed in Solomon Islands. We can therefore see the current conditions in Solomon Islands as an insight into the future impacts of accelerated sea-level rise.
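To put those rates in perspective, a back-of-the-envelope calculation using the figures above shows how much extra water the Solomon Islands coastline has faced over the satellite record:

```python
# Cumulative sea-level rise implied by the rates in the article,
# over the 1993-2015 observation window.
years = 2015 - 1993          # 22 years

global_rise = years * 3      # global average rate: 3 mm/yr
local_low = years * 7        # Solomon Islands lower bound: 7 mm/yr
local_high = years * 10      # Solomon Islands upper bound: 10 mm/yr

print(global_rise)            # 66 mm
print(local_low, local_high)  # 154 220  (mm)
```

Roughly 15-22 cm of local rise versus under 7 cm globally over the same period is the difference that makes the islands an early window into conditions much of the Pacific can expect later this century.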
We studied the coastlines of 33 reef islands using aerial and satellite imagery from 1947-2015. This information was integrated with local traditional knowledge, radiocarbon dating of trees, sea-level records, and wave models.
Waves add to damage
Wave energy appears to play an important role in the dramatic coastal erosion observed in Solomon Islands. Islands exposed to higher wave energy in addition to sea-level rise experienced greatly accelerated loss compared with more sheltered islands.
Twelve islands we studied in a low wave energy area of Solomon Islands experienced little noticeable change in shorelines despite being exposed to similar sea-level rise. However, of the 21 islands exposed to higher wave energy, five completely disappeared and a further six islands eroded substantially.
The human story
These rapid changes to shorelines observed in Solomon Islands have led to the relocation of several coastal communities that have inhabited these areas for generations. These are not planned relocations led by governments or supported by international climate funds, but are ad hoc relocations using their own limited resources.
The customary land tenure (native title) system in Solomon Islands has provided a safety net for these displaced communities. In fact, in some cases entire communities have left coastal villages that were established in the early 1900s by missionaries, and retraced their ancestral movements to resettle old inland village sites used by their forefathers.
In other cases, relocations have been more ad hoc, with individual families resettling small inland hamlets over which they have customary ownership.
In these cases, communities of 100-200 people have fragmented into handfuls of tiny family hamlets. Sirilo Sutaroti, the 94-year-old chief of the Paurata tribe, recently abandoned his village. “The sea has started to come inland, it forced us to move up to the hilltop and rebuild our village there away from the sea,” he told us.
In addition to these village relocations, Taro, the capital of Choiseul Province, is set to become the first provincial capital in the world to relocate residents and services in response to the impact of sea-level rise.
The global effort
Interactions between sea-level rise, waves, and the large range of responses observed in Solomon Islands – from total island loss to relative stability – show the importance of integrating local assessments with traditional knowledge when planning for sea-level rise and climate change.
Linking this rich knowledge and inherent resilience in the people with technical assessments and climate funding is critical to guiding adaptation efforts.
Melchior Mataki who chairs the Solomon Islands’ National Disaster Council, said: “This ultimately calls for support from development partners and international financial mechanisms such as the Green Climate Fund. This support should include nationally driven scientific studies to inform adaptation planning to address the impacts of climate change in Solomon Islands.”
Last month, the Solomon Islands government joined 11 other small Pacific Island nations in signing the Paris climate agreement in New York. There is a sense of optimism among these nations that this signifies a turning point in global efforts.
However, it remains to be seen how the hundreds of billions of dollars promised through global funding models such as the Green Climate Fund can support those most at need in remote communities, like those in Solomon Islands.
A project housed inside a 15-foot-tall geodesic dome allows people to dance with a computer-controlled figure named VAI.
The virtual partner “watches” and improvises its own moves based on prior experiences. When the human responds, the figure reacts again, creating an impromptu dance couple based on artificial intelligence (AI).
The LuminAI project dome was designed and constructed by Jessica Anderson, a digital media master’s student at the Georgia Institute of Technology. The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens.
The dome is lined with custom-made projection panels for mapping.
The surfaces allow people to watch their own shadowy avatar as it struts with a virtual character named VAI, which learns how to dance by paying attention to which moves the current user (and everyone before them) is doing and when. The more moves it sees, the better and deeper the computer’s dance vocabulary. It then uses this vocabulary as a basis for future improvisation.
“Co-creative artificial intelligence, or using AI as a creative collaborator, is rare,” says Brian Magerko, the Georgia Tech digital media associate professor who leads the project. “As computers become more ubiquitous, we must understand how they can co-exist with humans. Part of that is creating things together.”
“This episodic memory is filled with experiences of how people have danced with it in the past,” says Mikhail Jacob, a computer science PhD student and lead developer of the LuminAI technology. “For example, the computer learns to predict that when one person pumps their arms into the air, their partner is likely to do something similar. So on seeing that movement, the avatar might pump its arms sideways at the same pace or use that as the basis for its response.”
The team says this improvisation is one of the most important parts of the project. The avatar recognizes patterns, but doesn’t always react the same way every time. That means that the person must improvise too, which leads to greater creativity all around. All the while, the computer is capturing these new experiences and storing the information to use as a basis for future dance sessions.
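The kind of episodic lookup Jacob describes can be caricatured in a few lines of Python. This is purely illustrative: the article does not describe LuminAI’s actual data structures or algorithms, so every name and behavior below is invented for the sketch.

```python
import random
from collections import defaultdict

class EpisodicMemory:
    """Toy episodic memory for a dance partner (hypothetical sketch)."""

    def __init__(self):
        # Maps an observed move to the list of responses past human
        # partners have answered it with.
        self.episodes = defaultdict(list)

    def observe(self, move, partner_response):
        """Record that one dancer's move was answered with a response."""
        self.episodes[move].append(partner_response)

    def respond(self, move):
        """Improvise: sample from responses seen for this move, if any."""
        seen = self.episodes.get(move)
        return random.choice(seen) if seen else "mirror"  # fallback move

memory = EpisodicMemory()
memory.observe("arm pump", "arm pump sideways")
memory.observe("arm pump", "jump")
print(memory.respond("arm pump"))  # one of the two recorded responses
```

Sampling rather than always picking the most frequent response mirrors the point made above: the avatar recognizes patterns but doesn’t react identically every time, which keeps the human improvising too.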
“Humans aren’t fully in the driver’s seat anymore. The process gives autonomy back to the computer,” says Jacob. “LuminAI forces a person to create something new—potentially something better—with their partner because they’re forced to take their (virtual) partner’s actions into consideration.”
The technology has broader implications than art. As Magerko explains it, these days AI mostly relies on instructions fed to it by humans, and programming a computer with every possible instruction is impossible.
“That’s because humans are so unpredictable,” says Magerko. “Let’s say a computer and a person are going to write a story together about a family conversation at a restaurant. The story could go in a typical fashion or veer wildly into novel territory. The computer won’t do well unless it has been programmed with all of the pieces of knowledge that the story could possibly contain.
“However, if it can learn that knowledge from people and prior experiences, its improvisation can become somewhat consistent and accurate and the AI learning new story content (or dance moves) becomes part of the user experience.”
NASA has released the first global digital model of Mercury’s peaks and valleys, revealing in stunning detail the topography of the innermost planet.
The model, compiled from more than 100,000 images taken by the Messenger probe during more than four years in orbit around Mercury, is represented in an animation showing the planet’s high and low points and everything in between.
The model and other new data pave the way to explore and fully explain the planet’s geologic history, scientists say.
“The wealth of these data … will continue to enable exciting scientific discoveries about Mercury for decades to come,” says Susan Ensor, manager of the Messenger science operations center and a software engineer at the Johns Hopkins University Applied Physics Laboratory.
The model shows that Mercury’s highest elevation is 4.48 kilometers (2.78 miles) above average elevation, at a point just south of the equator in some of the planet’s oldest terrain. The lowest, 5.38 kilometers (3.34 miles) below Mercury’s average, is on the floor of Rachmaninoff basin, an area suspected to host some of the most recent volcanic deposits on the planet.
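Mercury’s total relief follows directly from those two figures; a two-line check (the kilometer-to-mile factor is the standard conversion) shows the full span from highest to lowest point:

```python
# Mercury's elevation extremes from the Messenger digital model,
# measured relative to the planet's average elevation.
highest_km = 4.48    # old terrain just south of the equator
lowest_km = -5.38    # floor of Rachmaninoff basin

relief_km = highest_km - lowest_km
print(round(relief_km, 2))             # 9.86 km of total relief
print(round(relief_km * 0.621371, 1))  # about 6.1 miles
```

For comparison, that span is modest next to Earth’s roughly 20 km range from the Mariana Trench to Everest, consistent with Mercury being a smaller, geologically quieter world.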
Messenger launched in 2004 and in 2011 became the first spacecraft to orbit the planet closest to the sun. After circling the planet for more than four years—a mission three years longer than initially planned—it fell to the surface on April 30, 2015. Johns Hopkins APL built and operated the spacecraft and manages the mission for NASA.
Although Mercury is rocky like Earth, Venus, and Mars, the planet is quite different from Earth in other ways—far smaller, denser, and, because of a lack of atmosphere, far hotter on its sun-facing side and colder on the night side. It also has the oldest surface among the terrestrial planets. Understanding how it is different from Earth is crucial to understanding the formation and evolution of planets in the solar system.
Researchers have also released a new map providing an unprecedented view of the region near Mercury’s north pole.
“Messenger had previously discovered that past volcanic activity buried this portion of the planet beneath extensive lavas, more than a mile deep in some areas and covering a vast area equivalent to approximately 60 percent of the continental United States,” says Nancy Chabot of APL.
Because the region is near Mercury’s north pole, the sun is always low on the horizon there, casting long shadows that can obscure the color of the rocks. The new map is produced from photos by Messenger’s Mercury Dual Imaging System, carefully captured through five different narrow-band color filters when the shadows were relatively short. The map reveals Mercury’s northern volcanic plains in striking color.
“This has become one of my favorite maps of Mercury,” says Chabot, instrument scientist for the imaging system. “Now that it is available, I’m looking forward to it being used to investigate this epic volcanic event that shaped Mercury’s surface.”
We know that changes in our genetic code can be associated with an increased risk for psychiatric illnesses such as schizophrenia and bipolar disorder. But how can a genetic mutation lead to complex psychiatric symptoms such as vivid hallucinations, manic episodes and bizarre delusions?
To find out, researchers are trying to fill in the blanks between the genetic blueprint (genotype) and psychiatric disorder (psychiatric phenotype). Phenotypes are a set of observable characteristics that result when a particular genotype interacts with its environment. The phenotype is the eventual outcome of a specific genotype.
But between genotype and psychiatric phenotype lie many measurable traits that together are called endophenotypes. This is an aspect of genetics that scientists are just starting to understand.
The National Institute of Mental Health has recently begun an initiative to push researchers to study endophenotypes with a program called Research Domain Criteria (RDoC), described as an effort to study basic dimensions of functioning that underlie human behavior.
So what exactly are endophenotypes, and how might they contribute to psychiatric illnesses?
Endophenotypes lie between genes and psychiatric phenotypes
An endophenotype can refer to anything from the size and shape of brain cells, to changes in brain structure, to impairments in working memory. The term can refer to a physical trait or a functional one.
An endophenotype must be associated with a specific psychiatric illness, such as schizophrenia, and it must be heritable. It must also be present even if the illness is not active. Within families, the endophenotype must be more common in ill family members than in healthy family members. Finally, it must be more common among nonaffected relatives of people with the associated illness than among the general population.
Certain endophenotypes are thought to precede behavioral symptoms. For instance, in several conditions, such as schizophrenia and Alzheimer’s disease, changes in brain structure have been found years before the onset of symptoms.
Currently doctors diagnose a psychiatric disorder based on the patient’s symptoms. The underlying neurobiology isn’t usually considered, because we lack the data to really use it.
In the future, endophenotypes might let us detect who is susceptible to psychiatric illness before clinical symptoms develop. That means we could try to combat, or at least appease, the symptoms of the disorder before they start. And knowing how endophenotypes contribute to these disorders could lead to precision medicine treatments.
How do you study endophenotypes?
One way to study the endophenotypes is to focus on a specific genetic alteration that is associated with a psychiatric disorder. This way we can get a sense of what brain changes the genetic change causes.
For instance, I study a genetic disorder called 22q11.2 Deletion Syndrome (also called 22q11DS). The syndrome is caused by a chromosomal deletion that removes up to 60 genes, many of which are linked to brain function. About 30 percent of individuals with 22q11DS will develop schizophrenia (the rate in the U.S. population overall is about one percent).
Studying 22q11DS lets us draw a line from a genetic alteration to an endophenotype, such as decreased neural function, brain structure changes or fewer neurons in certain parts of the brain, and on to a psychiatric phenotype, such as schizophrenia.
Let’s go through some concrete examples of how this can be done.
22q11DS: a model syndrome to study endophenotypes
In one study researchers looked at a group of 70 children and adolescents with 22q11DS, and found deficits in executive function (which encompasses cognitive processes such as motivation, working memory and attention) in patients with 22q11DS.
In fact, researchers were actually able to predict subsequent development of psychotic symptoms in individuals with 22q11DS. This study shows that cognitive endophenotypes may underlie psychiatric phenotypes and demonstrates their predictive power. And, like all endophenotypes, it is invisible to the naked eye, but measurable in the lab.
Another study, using functional magnetic resonance imaging (fMRI), found reduced neural activity in patients with 22q11DS when they performed a working memory task compared to a group of healthy control subjects. What’s more, the magnitude of the decrease correlated with the severity of their psychotic symptoms. This suggests abnormalities in neural activity might underlie symptoms associated with schizophrenia.
Other studies have found an association between psychiatric illnesses such as schizophrenia and abnormalities in the size and shape of different brain regions. For instance, a recent study found that certain parts of the brain were thicker in patients with 22q11DS. What’s more, the degree of thickness was related to psychotic symptoms. Changes in brain structure have also been associated with psychiatric disorders, such as obsessive compulsive disorder.
In order to gain a more in-depth understanding of the underlying physiology in 22q11DS, researchers can engineer mice that model the deletion syndrome by “knocking out” the corresponding genes in the mouse genome.
Researchers have found that mice with 22q11DS had fewer neurons in a part of the brain associated with cognition compared to unaffected mice.
The number of neurons correlated with how well the mice performed on tasks measuring executive function. These results suggest that individuals with psychiatric illnesses might actually have microscopic changes in their brain cells. This is a significant finding, because we can’t study these effects directly in humans.
These are just some examples of how we can experimentally determine endophenotypes that underlie schizophrenia in 22q11DS. And while 22q11DS is a risk factor for schizophrenia, what we learn from studying this syndrome could help us understand the endophenotypes behind other illnesses.
Of course defining endophenotypes for psychiatric illness is just the first step. After that, researchers and scientists need to find ways to use these results to inform diagnosis, treatment and prevention strategies.
Cybersecurity researchers hacked into the leading “smart home” automation system and essentially got the PIN code to a home’s front door.
Their “lock-pick malware app” was one of four attacks that the cybersecurity researchers leveled at an experimental set-up of Samsung’s SmartThings, a top-selling Internet of Things platform for consumers. The work is believed to be the first platform-wide study of a real-world connected home system. The researchers didn’t like what they saw.
“At least today, with the one public IoT software platform we looked at, which has been around for several years, there are significant design vulnerabilities from a security perspective,” says Atul Prakash, professor of computer science and engineering at the University of Michigan. “I would say it’s okay to use as a hobby right now, but I wouldn’t use it where security is paramount.”
Earlence Fernandes, a doctoral student in computer science and engineering who led the study, says that “letting it control your window shades is probably fine.”
“One way to think about it is if you’d hand over control of the connected devices in your home to someone you don’t trust and then imagine the worst they could do with that and consider whether you’re okay with someone having that level of control,” he says.
Regardless of how safe individual devices are or claim to be, new vulnerabilities form when hardware like electronic locks, thermostats, ovens, sprinklers, lights, and motion sensors are networked and set up to be controlled remotely. That’s the convenience these systems offer. And consumers are interested in that.
As a testament to SmartThings’ growing use, its Android companion app that lets you manage your connected home devices remotely has been downloaded more than 100,000 times. SmartThings’ app store, where third-party developers can contribute SmartApps that run in the platform’s cloud and let users customize functions, holds more than 500 apps.
The researchers performed a security analysis of the SmartThings programming framework and, to show the impact of the flaws they found, conducted four successful proof-of-concept attacks.
They demonstrated a SmartApp that eavesdropped on someone setting a new PIN code for a door lock, and then sent that PIN in a text message to a potential hacker. The SmartApp, which they called a “lock-pick malware app,” was disguised as a battery-level monitor, and its code requested only that capability.
As an example, they showed that an existing, highly rated SmartApp could be remotely exploited to virtually make a spare door key by programming an additional PIN into the electronic lock. The exploited SmartApp was not originally designed to program PIN codes into locks.
They showed that one SmartApp could turn off “vacation mode” in a separate app that lets you program the timing of lights, blinds, etc., while you’re away to help secure the home.
They demonstrated that a fire alarm could be made to go off by any SmartApp injecting false messages.
How is all this possible? The security loopholes the researchers uncovered fall into a few categories. One common problem is that the platform grants its SmartApps too much access to devices and to the messages those devices generate. The researchers call this “over-privilege.”
“The access SmartThings grants by default is at a full device level, rather than any narrower,” Prakash says. “As an analogy, say you give someone permission to change the lightbulb in your office, but the person also ends up getting access to your entire office, including the contents of your filing cabinets.”
More than 40 percent of the nearly 500 apps they examined were granted capabilities the developers did not specify in their code. That’s how the researchers could eavesdrop on the setting of lock PIN codes.
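The over-privilege pattern is easy to see in miniature. The sketch below is a hypothetical Python model, not actual SmartThings code or its real API: the app declares one narrow capability, but the platform binds it to the whole device, lock codes included.

```python
# Illustrative model (NOT the SmartThings API) of "over-privilege": an app
# declares a single narrow capability, but the platform grants it every
# capability the physical device exposes.

class Device:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)  # everything the hardware can do

def grant(app_declared, device):
    # Coarse-grained access model: the declared capability is ignored and the
    # app is bound to the full device -- this is the flaw being modeled.
    return device.capabilities

lock = Device("front door lock", {"battery", "lock", "lockCodes"})
declared = {"battery"}              # what the "battery monitor" app asks for
granted = grant(declared, lock)

print(sorted(granted - declared))   # capabilities gained silently
```

A finer-grained model would return only `app_declared & device.capabilities`, which is essentially the narrowing the researchers argue for.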
The researchers also found that it is possible for app developers to deploy an authentication method called OAuth incorrectly. This flaw, in combination with SmartApps being over-privileged, allowed the hackers to program their own PIN code into the lock—to make their own secret spare key.
Finally, the “event subsystem” on the platform is insecure. This is the stream of messages devices generate as they’re programmed and carry out those instructions. The researchers were able to inject erroneous events to trick devices. That’s how they managed the fire alarm and flipped the switch on vacation mode.
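The event-injection flaw can also be sketched abstractly. This is a minimal hypothetical model, not SmartThings code: because the bus never checks who published an event, a malicious app can fabricate a “smoke detected” message and any subscribed alarm handler will fire.

```python
# Toy event bus with no sender authentication (an illustration of the flaw,
# not the actual SmartThings event subsystem).

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # No check of WHO published the event -- any app can inject one.
        for handler in self.handlers.get(event_type, []):
            handler(payload)

alarm_log = []
bus = EventBus()
bus.subscribe("smoke", lambda p: alarm_log.append(f"ALARM: {p}"))

# A malicious app injects a fabricated event and triggers the fire alarm:
bus.publish("smoke", "kitchen (spoofed)")
print(alarm_log)
```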
THE BOTTOM LINE
These results have implications for all smart home systems, and even the broader Internet of Things.
“The bottom line is that it’s not easy to secure these systems,” Prakash says. “There are multiple layers in the software stack and we found vulnerabilities across them, making fixes difficult.”
The researchers told SmartThings about these issues in December 2015, and the company is working on fixes. The researchers rechecked a few weeks ago whether a lock’s PIN code could still be snooped and reprogrammed by a potential hacker, and it still could.
In a statement, SmartThings officials say they’re continuing to explore “long-term, automated, defensive capabilities to address these vulnerabilities.” They’re also analyzing old and new apps in an effort to ensure that appropriate authentication is put in place, among other steps.
Jaeyeon Jung of Microsoft Research also contributed to this work. The researchers will present a paper on the findings on May 24 at the IEEE Symposium on Security and Privacy in San Jose.
The U.S. Fish and Wildlife Service and the U.S. Geological Survey last month delivered a sobering update on the white-nose syndrome (WNS) epidemic in North America. WNS has been confirmed in a little brown bat (Myotis lucifugus) near North Bend, Washington, over 1,300 miles west of the previously identified western edge of the disease front, Nebraska.
The news hit the WNS and bat conservation community hard. For the previous 10 years, WNS has spread in a stepwise manner from state to state in a radial pattern from Albany, New York, which is thought to be where the infections started. The consistency of this spread allowed researchers to model the movement of the pathogen, Pseudogymnoascus destructans, with an anticipated arrival on the Pacific Coast in 2026.
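To see how a steady radial front yields an arrival date, here is a toy linear extrapolation. The rate and distance are assumed round numbers for illustration only, not the parameters of the researchers’ actual model.

```python
# Toy linear extrapolation (illustrative numbers, not the published model):
# if the fungal front advances at a roughly constant radial rate from Albany,
# arrival time at any distance is simply distance / rate.

start_year = 2006          # assumed: first detections near Albany, NY
rate_miles_per_year = 125  # assumed average radial spread rate
pacific_distance = 2500    # assumed straight-line miles, Albany to the Pacific

arrival = start_year + pacific_distance / rate_miles_per_year
print(int(arrival))
```

Under these assumptions the front reaches the Pacific around 2026, matching the modeled forecast; the Washington detection, 1,300 miles beyond the front, is exactly the kind of long-distance jump such a model cannot anticipate.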
Researchers have been developing strategies to control WNS and prevent the massive bat mortalities that have been the hallmark of WNS since 2007. And yet, the disease has spread faster than predicted. Where does this new point of infection leave researchers developing techniques to stall this devastating disease?
Gateway to the west
In order to understand why this is such bad news for bats, one needs to understand how wildlife biologists seek to control the spread of devastating pathogens.
Many of the strategies currently being investigated to minimize the impact of WNS on susceptible bat populations are predicated on the idea that “stop-gap” methods could be employed at geographical choke points to delay the spread of the disease to new populations. That would buy time for scientists to develop permanent solutions, such as vaccinations or “gene silencing” techniques to control the disease.
The arrival of WNS on the West Coast takes this approach off the table in many respects, as it’s already past the geographical bottleneck spots where scientists had hoped to slow it down. But the WNS community has other reasons for concern with this new case.
“Come here often?”
Studies of the fungus from the eastern U.S. have shown the pathogen to be clonal. That is, P. destructans in Georgia is genetically the same as P. destructans in Missouri or New York. This is a good thing for bats: a genetically uniform pathogen cannot readily counter-adapt, giving bats a better chance to develop resistance.
Subsequent evaluation indicates that P. destructans, like most fungi, is likely capable of sexual reproduction in areas where complementary mating types (think male and female, but with numerous potentially compatible “genders”) exist together. Considered alongside the recent finding that P. destructans and WNS are widespread in eastern Asia, this raises the possibility that the West Coast case was introduced by a new route, or that it represents a different strain of the fungus. Significantly, it could be a complementary mating type to the P. destructans in the eastern U.S.
This could be a very bad thing for bats for several reasons. To understand how bad this infection could be to the future of WNS in North America, researchers will need to determine its source and any sexual compatibility with existing isolates.
Tougher than your average spore
The spores (reproductive cells produced by fungi) that result from asexual reproduction are known as conidia. All the current work on inactivating spores to control the spread of WNS is predicated on the sensitivity of these conidia to a given control agent.
Yet the phylum of this fungus, known as Ascomycota, can reproduce in another way – sexually, through a type of spore known as ascospores. In numerous examples in other Ascomycota, it has been shown that ascospores are more resistant to control methods than conidia.
If researchers find that this particular P. destructans fungus is capable of producing ascospores – that is, reproducing sexually rather than asexually via conidia – then current decontamination protocols will need to be revised to address the increased resilience of these sexual spores.
Sexual reproduction in the pathogen also raises the specter of “Red Queen” coevolutionary dynamics. The idea is that in a system with a host (bat) and parasite (P. destructans), coevolution occurs as the disease recurs through numerous generations. If only the host is reproducing sexually (i.e., WNS in North America today) and generating greater variation with each generation, the host will be able to evolve a tolerance to the parasite.
However, in a system where both the host and parasite reproduce sexually, coevolution supports the status quo. So as bats evolve tolerance to one strain of P. destructans another strain resulting from sexual recombination that is capable of causing disease in the new tolerant host will become the dominant strain.
Thus, the analogy of the Red Queen running in place in “Through the Looking-Glass, and What Alice Found There” by Lewis Carroll. Although evolution is occurring (running), everyone is evolving together so the disease paradigm never changes.
This is the possibility the introduction of a complementary mating type presents to WNS in North America. Bats won’t be able to evolve a significant tolerance as the fungus reproduces sexually and rapidly adapts to any resistance the bats develop.
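The asymmetry can be illustrated with a deliberately crude simulation. This is our toy model, not a published one: hosts slowly gain tolerance to whichever parasite strain they face, and a sexually recombining parasite keeps producing novel strains that erase those gains.

```python
import random

# Toy Red Queen simulation (illustrative only). Against a clonal parasite,
# tolerance to the single circulating strain accumulates every generation.
# Against a recombining parasite, novel strains keep appearing, so hosts'
# accumulated tolerance rarely applies to the current strain.

def simulate(parasite_sexual, generations=200, seed=1):
    rng = random.Random(seed)
    tolerance = {}               # host tolerance, tracked per parasite strain
    strain = 0
    for _ in range(generations):
        # Hosts gain a little tolerance to whichever strain they face.
        tolerance[strain] = tolerance.get(strain, 0.0) + 0.01
        if parasite_sexual and rng.random() < 0.2:
            strain += 1          # recombination produces a novel strain
    # How tolerant are hosts to the strain circulating at the end?
    return tolerance.get(strain, 0.0)

print(simulate(parasite_sexual=False))   # high: tolerance accumulates
print(simulate(parasite_sexual=True))    # low: tolerance keeps resetting
```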
Bats everywhere but not a hibernaculum to treat
In addition to the strategic and biological challenges that a Pacific Coast WNS case may introduce, there is also a major logistical challenge that has been looming over the WNS community: where are hibernacula – the shelters where bats hibernate – in the west?
Currently there are no known little brown bat hibernacula in Washington. This doesn’t mean that bat ecologists think little brown bats don’t hibernate in Washington, but rather that no one has been able to find the large hibernacula that are common in the eastern U.S.
Treating bats during the spring, summer and fall when they are widely dispersed on the landscape is impractical. The effort it takes to capture a few individuals is not scalable to an extent that could have a significant impact on WNS-related population declines. That is why most efforts to develop management strategies have been focused on intervention during the winter at known hibernacula where large groups of bats could be treated together with reasonable effort.
If any of the treatments currently under investigation were available today, how could they be used in Washington? Without understanding how these western bat species use the landscape and where they hibernate, there is no way to deliver any future management tool.
Bad, badder, baddest
In many ways this new western case changes the paradigm of WNS.
In the worst-case scenario, a complementary strain has been introduced into North America and will eventually find its way to locations where the East Coast strain exists, facilitating a more recalcitrant and adaptable pathogen.
In the best-case scenario, this case represents a loss of containment within North America, reducing the value of efforts to slow the westward spread of WNS while treatments can be developed and western bat hibernacula can be identified. Either way, the news of a WNS-positive bat in Washington state represents another disaster for bats that are already experiencing unprecedented declines.
In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.
Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”
Turing’s simple, but powerful, thought experiment gives a very general framework for testing many different aspects of the human-machine boundary, of which conversation is but a single example.
On May 18 at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.
Conducting the tests
The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
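One minimal approach to this task is a greedy chain: start at the seed and repeatedly pick the remaining track whose annotated features are closest to the current one, until the set reaches 15 minutes. The track data below is invented for illustration; the competition’s actual library and annotation format will differ.

```python
# Sketch of one possible "Algorhythms" strategy: greedy nearest-neighbor
# chaining over annotated features. Tracks here are hypothetical tuples of
# (name, tempo_bpm, brightness, duration_seconds).

library = [
    ("seed", 124, 0.60, 240),
    ("a",    126, 0.55, 300),
    ("b",    100, 0.90, 260),
    ("c",    128, 0.58, 280),
    ("d",    122, 0.62, 250),
]

def distance(t1, t2):
    # Weighted feature distance: tempo matters more than timbre for mixing.
    return abs(t1[1] - t2[1]) + 50 * abs(t1[2] - t2[2])

def build_set(seed_name, target_seconds=15 * 60):
    remaining = {t[0]: t for t in library}
    current = remaining.pop(seed_name)
    playlist, total = [current[0]], current[3]
    while remaining and total < target_seconds:
        nxt = min(remaining.values(), key=lambda t: distance(current, t))
        remaining.pop(nxt[0])
        playlist.append(nxt[0])
        total += nxt[3]
        current = nxt
    return playlist

print(build_set("seed"))
```

A real entry would also crossfade on the annotated beat locations and vary the feature weights to keep the set from sounding monotonous.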
In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.
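In its most naive form, a seeded generator is just templated filling with random choices, which trivially satisfies the "infinite outputs from one prompt" requirement (if nothing else). The sketch below is our doggerel illustration, not a competitive entry.

```python
import random

# A deliberately naive seeded sonnet-line generator: given a noun phrase,
# fill a randomly chosen template. Random choice means one prompt can yield
# an unbounded stream of distinct (if formulaic) lines.

TEMPLATES = [
    "Shall I compare my {noun} to a summer's day?",
    "O {noun}, thou art more lovely and more strange.",
    "The {noun} doth wait where silent shadows play.",
]

def generate_line(noun_phrase, rng=random):
    return rng.choice(TEMPLATES).format(noun=noun_phrase)

print(generate_line("cheese grater"))
```

Serious entries would presumably replace the templates with learned language models while keeping the same seed-in, text-out interface.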
To perform the test, we will first screen the submissions to eliminate entries that are obviously machine-made. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
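One reasonable reading of "statistically indistinguishable" (our interpretation, not the competition's official scoring rule) is that the judges' accuracy on an entry should not be significantly better than coin-flipping. That can be checked with an exact two-sided binomial test against chance:

```python
from math import comb

# Exact two-sided binomial test vs. p = 0.5: sum the probability of every
# outcome at least as unlikely as the observed number of correct judgments.

def binom_p_value(correct, n):
    observed = comb(n, correct)
    return sum(comb(n, k) for k in range(n + 1)
               if comb(n, k) <= observed) / 2 ** n

# 12 of 20 judges correct is consistent with guessing (entry "passes"):
print(binom_p_value(12, 20) > 0.05)
```

By contrast, 18 of 20 correct would give a p-value far below 0.05, and the entry would be judged distinguishable from human work.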
The competitions are open to any and all comers. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.
Judging the differences
Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man.) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.
It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.
Who is the artist?
Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?
Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?
We’re looking forward to seeing what our programming artists submit. Regardless of their performance on “the test,” their body of work will continue to expand the horizon of creativity and machine-human coevolution.