Researcher David Silver and colleagues designed a computer program capable of beating a top-level Go player – a marvelous technological feat and an important threshold in the development of artificial intelligence, or AI. It underscores once more that humans aren’t at the center of the universe, and that human cognition isn’t the pinnacle of intelligence.
I remember well when IBM’s computer Deep Blue beat chess master Garry Kasparov. While I’d played – and lost to – chess-playing computers myself, the Kasparov defeat solidified my personal belief that artificial intelligence would become reality, probably even in my lifetime. I might one day be able to talk to things similar to my childhood heroes C-3PO and R2-D2. My future house could be controlled by a program like HAL from Kubrick’s “2001” movie.
As a researcher in artificial intelligence, I realize how impressive it is to have a computer beat a top Go player, a much tougher technical challenge than winning at chess. Yet it’s still not a big step toward the type of artificial intelligence used by the thinking machines we see in the movies. For that, we need new approaches to developing AI.
Intelligence is evolved, not engineered
To understand the limitations of the Go milestone, we need to think about what artificial intelligence is – and how the research community makes progress in the field.
Typically, AI is part of the domain of engineering and computer science, a field in which progress is measured not by how much we learn about nature or humans, but by achieving a well-defined goal: if the bridge can carry a 120-ton truck, it succeeds. Beating a human at Go falls into exactly that category.
I take a different approach. When I talk about AI, I typically don’t talk about a well-defined problem. Rather, I describe the AI that I would like to have as “a machine that has cognitive abilities comparable to that of a human.”
Admittedly, that is a very fuzzy goal, but that is the whole point. We can’t engineer what we can’t define, which is why I think the engineering approach to “human level cognition” – that is, writing smart algorithms to solve a particularly well-defined problem – isn’t going to get us where we want to go. But then what is?
The New Horizons mission has discovered yet another strange and interesting feature of Pluto: nitrogen ice glaciers in Pluto’s vast “heart” that carry floating hills of water ice with them. Once again, the distant dwarf planet is astonishing scientists with the quantity and variety of its geological activity.
An article on the NASA website provides the intriguing details of the new finding:
The nitrogen ice glaciers on Pluto appear to carry an intriguing cargo: numerous, isolated hills that may be fragments of water ice from Pluto’s surrounding uplands. These hills individually measure one to several miles or kilometers across, according to images and data from NASA’s New Horizons mission.
The hills, which are in the vast ice plain informally named Sputnik Planum within Pluto’s ‘heart,’ are likely miniature versions of the larger, jumbled mountains on Sputnik Planum’s western border. They are yet another example of Pluto’s fascinating and abundant geological activity.
Because water ice is less dense than nitrogen-dominated ice, scientists believe these water ice hills are floating in a sea of frozen nitrogen and move over time like icebergs in Earth’s Arctic Ocean. The hills are likely fragments of the rugged uplands that have broken away and are being carried by the nitrogen glaciers into Sputnik Planum. ‘Chains’ of the drifting hills are formed along the flow paths of the glaciers. When the hills enter the cellular terrain of central Sputnik Planum, they become subject to the convective motions of the nitrogen ice, and are pushed to the edges of the cells, where the hills cluster in groups reaching up to 12 miles (20 kilometers) across.
At the northern end of the image, the feature informally named Challenger Colles – honoring the crew of the lost space shuttle Challenger – appears to be an especially large accumulation of these hills, measuring 37 by 22 miles (60 by 35 kilometers). This feature is located near the boundary with the uplands, away from the cellular terrain, and may represent a location where hills have been ‘beached’ due to the nitrogen ice being especially shallow.
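The release’s claim that water-ice hills float in frozen nitrogen follows directly from Archimedes’ principle: a floating body sinks until it displaces its own weight of the denser material. A minimal sketch of that arithmetic, using approximate laboratory densities for the two ices (assumed round values, not figures from the NASA release):

```python
# Buoyancy sketch: the fraction of a floating body's volume that sits below
# the fluid surface equals the ratio of its density to the fluid's density.
# Densities are rough values for very cold solid ices (assumptions).
RHO_WATER_ICE = 0.93     # g/cm^3, water ice near Pluto surface temperatures
RHO_NITROGEN_ICE = 1.03  # g/cm^3, solid nitrogen (approximate)

def submerged_fraction(rho_body, rho_fluid):
    """Fraction of a floating body's volume below the fluid surface."""
    return rho_body / rho_fluid

frac = submerged_fraction(RHO_WATER_ICE, RHO_NITROGEN_ICE)
print(f"{frac:.0%} submerged, {1 - frac:.0%} above the surface")
# → 90% submerged, 10% above the surface
```

With these assumed densities, roughly nine-tenths of each hill would ride below the nitrogen surface, much like a terrestrial iceberg in seawater.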
No matter how much we exercise or how many wheatgrass smoothies we slurp, our bodies still age. As the years pass, skin wrinkles, eyesight falters, hearing fades. It’s called senescence—the natural course of aging.
All animals succumb, except a rare and lucky few: The rougheye rockfish, for instance, can live more than 200 years with negligible signs of aging; ocean quahogs can live more than 500 years before dying from disease, accident, or predation.
Such rare, ageless animals are a scientific curiosity, perhaps holding clues to how and why we humans age. And now, scientists have added another critter to the list of forever young: minor workers of the ant Pheidole dentata.
A new study, published in the Proceedings of the Royal Society B, reports that P. dentata minor workers, which live up to 140 days in the laboratory, show no signs of age-related decline before they die. None.
Since ants are a social species, scientists say the new discovery may hold meaning for the social animal we care most about: humans.
“I don’t want to make any claims that the ant brains are just like human brains, because of course they’re very different,” says Ysabel Giraldo, a postdoctoral fellow at the California Institute of Technology and lead author on the paper. “But when we observe social insect behavior, there’s something that is attractive and interesting because we think, well, maybe this parallels something about our own social organization.”
Giraldo earned a PhD in biology from Boston University in 2014, and her thesis research formed the basis for the new paper. “By looking at social insects, maybe we can learn something about how social interactions shape behavior or neurobiology that we can’t learn in a solitary system,” she says.
The idea of studying how ants age began as a conversation between Giraldo and biology professor James Traniello, co-lead author of the study and Giraldo’s thesis advisor, about the work of Robert Friedlander, a physician at Brigham and Women’s Hospital who studies the role of cell death in neurodegenerative disorders like Alzheimer’s and Parkinson’s.
“We were reading some literature on Alzheimer’s and human aging and some of the molecular mechanisms, looking at cell death in the brain,” says Giraldo. “And some of those conversations just got us wondering, ‘What goes on in ant brains?’”
She decided to find out. Her plan: examine hundreds of P. dentata minor workers—the “lab rat” of Traniello’s lab—and look for phenomena similar to those that occur in humans as we age: more cell death in certain areas of the brain, lower levels of certain important neurotransmitters, like dopamine and serotonin, and poorer performance on daily tasks.
Researchers from the Mayo Clinic have made an astounding discovery that not only increases the lifespan of normal mice by up to 35%, but also extends their “healthy lifespan” – the portion of their lives spent at optimal health.
How did they do this? By creating a way to remove so-called “senescent cells,” or cells that have become damaged and have stopped dividing, from the bodies of the mice using a targeted drug. Since the biological systems of mice are very similar to those of humans, it’s plausible that this research will also translate to people.
A fascinating news release on the Science Daily website provides the amazing details:
Researchers at Mayo Clinic have shown that senescent cells — cells that no longer divide and accumulate with age — negatively impact health and shorten lifespan by as much as 35 percent in normal mice. The results, which appear today in Nature, demonstrate that clearance of senescent cells delays tumor formation, preserves tissue and organ function, and extends lifespan without observed adverse effects.
“Cellular senescence is a biological mechanism that functions as an ‘emergency brake’ used by damaged cells to stop dividing,” says Jan van Deursen, Ph.D., Chair of Biochemistry and Molecular Biology at Mayo Clinic, and senior author of the paper. “While halting cell division of these cells is important for cancer prevention, it has been theorized that once the ‘emergency brake’ has been pulled, these cells are no longer necessary.”
The immune system sweeps out the senescent cells on a regular basis, but over time becomes less effective. Senescent cells produce factors that damage adjacent cells and cause chronic inflammation, which is closely associated with frailty and age-related diseases.
Mayo Clinic researchers used a transgene that allowed for the drug-induced elimination of senescent cells from normal mice. Upon administration of a compound called AP20187, removal of senescent cells delayed the formation of tumors and reduced age-related deterioration of several organs. Median lifespan of treated mice was extended by 17 to 35 percent. They also demonstrated a healthier appearance and a reduced amount of inflammation in fat, muscle and kidney tissue.
“Senescent cells that accumulate with aging are largely bad, do bad things to your organs and tissues, and therefore shorten your life but also the healthy phase of your life,” says Dr. van Deursen. “And since you can eliminate the cells without negative side effects, it seems like therapies that will mimic our findings — or our genetic model that we used to eliminate the cells — like drugs or other compounds that can eliminate senescent cells would be useful for therapies against age-related disabilities or diseases or conditions.”
Darren Baker, Ph.D., a molecular biologist at Mayo Clinic and first author on the study, is also optimistic about the potential implications of the study for humans.
“The advantage of targeting senescent cells is that clearance of just 60-70 percent can have significant therapeutic effects,” says Dr. Baker. “If translatable, because senescent cells do not proliferate rapidly, a drug could efficiently and quickly eliminate enough of them to have profound impacts on healthspan and lifespan.”
A team of geochemists from UCLA and other institutions around the world has found fascinating new evidence for how the Earth’s moon was formed, by analyzing moon rocks brought back from the Apollo missions and comparing them to volcanic rocks from Hawaii and Arizona.
Their conclusion: the moon was formed when the early Earth collided head-on with a smaller, still-forming planet called Theia 4.5 billion years ago.
An excellent news release on the EurekAlert website provides the forensic details:
The moon was formed by a violent, head-on collision between the early Earth and a “planetary embryo” called Theia approximately 100 million years after the Earth formed, UCLA geochemists and colleagues report.
Scientists had already known about this high-speed crash, which occurred almost 4.5 billion years ago, but many thought the Earth collided with Theia (pronounced THAY-eh) at an angle of 45 degrees or more — a powerful side-swipe. New evidence reported Jan. 29 in the journal Science substantially strengthens the case for a head-on assault.
The researchers analyzed seven rocks brought to the Earth from the moon by the Apollo 12, 15 and 17 missions, as well as six volcanic rocks from the Earth’s mantle — five from Hawaii and one from Arizona.
The key to reconstructing the giant impact was a chemical signature revealed in the rocks’ oxygen atoms. (Oxygen makes up 90 percent of rocks’ volume and 50 percent of their weight.) More than 99.9 percent of Earth’s oxygen is O-16, so called because each atom contains eight protons and eight neutrons. But there also are small quantities of heavier oxygen isotopes: O-17, which has one extra neutron, and O-18, which has two extra neutrons. Earth, Mars and other planetary bodies in our solar system each has a unique ratio of O-17 to O-16 — each one a distinctive “fingerprint.”
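The “fingerprint” comparison described here boils down to simple ratio arithmetic: geochemists conventionally express how far a sample’s O-17/O-16 ratio deviates from a reference standard in parts per thousand (per mil). A toy sketch of that comparison, with invented round numbers purely for illustration (not the study’s measurements):

```python
# Toy sketch of the isotope-fingerprint comparison: express each sample's
# O-17/O-16 ratio as a deviation from a reference standard, in per mil.
# All ratios below are invented round numbers for illustration only.

def delta_per_mil(sample_ratio, standard_ratio):
    """Deviation of a sample's isotope ratio from a standard, in per mil."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

STANDARD = 3.8e-4          # illustrative O-17/O-16 reference ratio
earth_rock = 3.80001e-4    # invented terrestrial measurement
moon_rock = 3.80001e-4     # invented: indistinguishable from Earth's
mars_rock = 3.8012e-4      # invented: a clearly distinct fingerprint

for name, ratio in [("Earth", earth_rock), ("Moon", moon_rock), ("Mars", mars_rock)]:
    print(f"{name}: {delta_per_mil(ratio, STANDARD):+.3f} per mil")
```

In this toy version, Earth and the Moon produce identical delta values while Mars stands apart – the pattern the study reports, here mocked up with fabricated inputs.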
In 2014, a team of German scientists reported in Science that the moon also has its own unique ratio of oxygen isotopes, different from Earth’s. The new research finds that is not the case.
“We don’t see any difference between the Earth’s and the moon’s oxygen isotopes; they’re indistinguishable,” said Edward Young, lead author of the new study and a UCLA professor of geochemistry and cosmochemistry.
Young’s research team used state-of-the-art technology and techniques to make extraordinarily precise and careful measurements, and verified them with UCLA’s new mass spectrometer.
The fact that oxygen in rocks on the Earth and our moon share chemical signatures was very telling, Young said. Had Earth and Theia collided in a glancing side blow, the vast majority of the moon would have been made mainly of Theia, and the Earth and moon should have different oxygen isotopes. A head-on collision, however, likely would have resulted in similar chemical composition of both Earth and the moon.
“Theia was thoroughly mixed into both the Earth and the moon, and evenly dispersed between them,” Young said. “This explains why we don’t see a different signature of Theia in the moon versus the Earth.”
Theia, which did not survive the collision (except that it now makes up large parts of Earth and the moon), was growing and probably would have become a planet if the crash had not occurred, Young said. Young and some other scientists believe the planet was approximately the same size as the Earth; others believe it was smaller, perhaps more similar in size to Mars.
Another interesting question is whether the collision with Theia removed any water that the early Earth may have contained. After the collision — perhaps tens of millions of years later — small asteroids likely hit the Earth, including ones that may have been rich in water, Young said. Collisions of growing bodies occurred very frequently back then, he said, although Mars avoided large collisions.
A head-on collision was initially proposed in 2012 by Matija Ćuk, now a research scientist with the SETI Institute, and Sarah Stewart, now a professor at UC Davis; and, separately during the same year, by Robin Canup of the Southwest Research Institute.
The World Health Organization says it is likely that Zika virus will spread, as the mosquitoes that carry it are found in almost every country in the Americas.
Zika virus was discovered almost 70 years ago, but wasn’t associated with outbreaks until 2007. So how did this formerly obscure virus wind up causing so much trouble in Brazil and other nations in South America?
Where did Zika come from?
Zika virus was first detected in Zika Forest in Uganda in 1947 in a rhesus monkey, and again in 1948 in the mosquito Aedes africanus, which is the forest relative of Aedes aegypti. Aedes aegypti and Aedes albopictus can both spread Zika. Sexual transmission between people has also been reported.
Zika has a lot in common with dengue and chikungunya, another emergent virus. All three originated from West and central Africa and Southeast Asia, but have recently expanded their range to include much of the tropics and subtropics globally. And they are all spread by the same species of mosquitoes.
Genetic analysis of the virus revealed that the strain in Brazil was most similar to one that had been circulating in the Pacific.
Brazil had been on alert for an introduction of a new virus following the 2014 FIFA World Cup, because the event concentrated people from all over the world. However, no Pacific island nation with Zika transmission had competed at this event, making that event less likely to be the source.
There is another theory that Zika virus may have been introduced following an international canoe event held in Rio de Janeiro in August of 2014, which hosted competitors from various Pacific islands.
Another possible route of introduction was overland from Chile, since that country had detected a case of Zika disease in a returning traveler from Easter Island.
Most people with Zika don’t know they have it
According to research after the Yap Island outbreak, the vast majority of people (80 percent) infected with Zika virus will never know it – they do not develop any symptoms at all. A minority who do become ill tend to have fever, rash, joint pains, red eyes, headache and muscle pain lasting up to a week. No deaths were reported.
In early 2015, Brazilian public health officials sounded the alert that Zika virus had been detected in patients with fevers in northeast Brazil. Then there was a similar uptick in the number of cases of Guillain-Barré in Brazil and El Salvador. And in late 2015 in Brazil, cases of microcephaly started to emerge.
At present, the link between Zika virus infection and microcephaly isn’t confirmed, but the virus has been found in amniotic fluid and brain tissue of a handful of cases.
One way to understand how Zika spread is to use something called the Swiss cheese model. Imagine a stack of Swiss cheese slices. The holes in each slice are a weakness, and throughout the stack, these holes aren’t the same size or the same shape. Problems arise when the holes align.
With any disease outbreak, multiple factors are at play, and each may be necessary but not sufficient on its own to cause it. Applying this model to our mosquito-borne mystery makes it easier to see how many different factors, or layers, coincided to create the current Zika outbreak.
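The “necessary but not sufficient” logic of the Swiss cheese model can be made concrete with a toy simulation: an outbreak occurs only when every layer happens to have a hole at the same time, so the outbreak probability is the product of the per-layer probabilities. The layer names echo the factors discussed below; the probabilities are invented for illustration:

```python
import random

# Toy Swiss cheese model: each defensive "layer" blocks an outbreak unless
# it happens to have a hole; an outbreak needs a hole in every layer at
# once. Per-layer hole probabilities are invented round numbers.
LAYERS = {
    "mosquito-friendly environment": 0.5,
    "vector mosquito present": 0.4,
    "susceptible hosts": 0.6,
    "virus introduced": 0.1,
}

def holes_align(layers, rng):
    """True only if every layer independently has a hole in this trial."""
    return all(rng.random() < p for p in layers.values())

rng = random.Random(42)
trials = 100_000
outbreaks = sum(holes_align(LAYERS, rng) for _ in range(trials))
print(f"outbreak in {outbreaks / trials:.1%} of trials")
# Analytically, the rate is the product 0.5 * 0.4 * 0.6 * 0.1 = 1.2%:
# each factor is necessary, but none is sufficient on its own.
```

Removing any one layer (setting its probability to zero) drives the outbreak rate to zero, which is the public-health point: blocking a single layer is enough to break the alignment.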
A hole through the layers
The first layer is a fertile environment for mosquitoes. That’s something my colleagues and I have studied in the Amazon rain forest. We found that deforestation followed by agriculture and regrowth of low-lying vegetation provided a much more suitable environment for the malaria mosquito carrier than pristine forest.
Increasing urbanization and poverty create a fertile environment for the mosquitoes that spread dengue by creating ample breeding sites. In addition, climate change may raise the temperature and/or humidity in areas that previously have been below the threshold required for the mosquitoes to thrive.
The second layer is the introduction of the mosquito vector. Aedes aegypti and Aedes albopictus have expanded their geographic range in the past few decades. Urbanization, changing climate, air travel and transportation, and waxing and waning control efforts that are at the mercy of economic and political factors have led to these mosquitoes spreading to new areas and coming back in areas where they had previously been eradicated.
For instance, in Latin America, continental mosquito eradication campaigns in the 1950s and 1960s, led by the Pan American Health Organization to battle yellow fever, dramatically shrank the range of Aedes aegypti. Following this success, however, interest in maintaining these mosquito control programs waned, and between 1980 and the 2000s the mosquito made a full comeback.
The third layer, susceptible hosts, is critical as well. For instance, chikungunya virus has a tendency to infect very large portions of a population when it first invades an area. But once it blows through a small island, the virus may vanish because there are very few susceptible hosts remaining.
Since Zika is new to the Americas, there is a large population of susceptible hosts who haven’t previously been exposed. In a large country, Brazil for instance, the virus can continue circulating without running out of susceptible hosts for a long time.
The fourth layer is the introduction of the virus. It can be very difficult to pinpoint exactly when a virus is introduced in a particular setting. However, studies have associated increasing air travel with the spread of certain viruses such as dengue.
When these multiple factors are in alignment, it creates the conditions needed for an outbreak to start.
Putting the layers together
My colleagues and I are studying the role of these “layers” as they relate to the outbreak of yet another mosquito-borne virus, Madariaga virus (formerly known as Central/South American eastern equine encephalitis virus), which has caused numerous cases of encephalitis in the Darien jungle region of Panama.
There, we are examining the association between deforestation, mosquito vector factors, and the susceptibility of migrants compared to indigenous people in the affected area.
In our highly interconnected world which is being subjected to massive ecological change, we can expect ongoing outbreaks of viruses originating in far-flung regions with names we can barely pronounce – yet.
The common bed bug, once considered rare in developed countries, has been proliferating on every continent but Antarctica for the last two decades, making it a growing concern for travelers and others.
With an eye toward eradicating the parasite, which feeds on the blood of humans and other animals, a team of researchers from 36 institutions has successfully mapped the genome of Cimex lectularius to get a better understanding of its genetic makeup.
“There’s an explosion of insect genome sequencing right now,” says Jack Werren, a professor of biology at the University of Rochester and a team member. “But the bed bug is particularly interesting because it’s a human parasite, a major pest, and has a unique biology.”
In his part of the sequencing project, Werren discovered 805 possible instances of genes being transferred from bacteria within the bed bug to the insect’s chromosomes—a process called lateral gene transfer (LGT).
Chromosomes routinely break and are then repaired in organisms. The most common repair mechanism is called homologous recombination, in which similar genetic material is used as a template in piecing the broken chromosome back together. But, periodically, the repairs go badly and foreign DNA is incorporated into the chromosomes – in the case of C. lectularius, that DNA includes genetic material from bacteria.
“Usually, genes that are transferred from other organisms never become functional or are harmful to the host organism,” says Werren. “In those cases, the transferred material is often lost by random processes—such as mutation and deletion—or removed during genetic selection.”
One exception involves the transfer of a patatin-like gene from the Wolbachia bacteria. Patatin genes help organisms to store and cleave starch and lipid molecules. The gene, transferred from the intracellular bacterium Wolbachia to C. lectularius, appears to be functional in the male bed bug, but not the female.
“Because the inserted genes create unique genetic profiles in bed bugs, they have the potential of becoming effective targets for pest control,” says Werren.
A great deal more work needs to be done before any eradication steps can be taken based on these results. While 805 candidate sites for LGT have been identified in the common bed bug, Werren says only six have been confirmed, so far, as actually having received genetic material from bacteria.
Of those 805 candidate sites, 459 have been attributed to the Arsenophonus bacteria, and 87 to Wolbachia, both of which are common bacterial associates of insects.
Researchers are testing a new technology similar to Microsoft’s Kinect for Xbox One that allows orangutans in the Melbourne Zoo to use their bodies to control challenging games and applications.
If the trial is successful, within a few years the orangutans could be playing computer games with zoo visitors.
Zoos around the world, including Melbourne, have for some years been using computer tablets to enrich activities for primates. But while Melbourne’s six orangutans clearly love playing with and watching the tablets, there are serious limitations. The animals tend to smash the devices if they hold them, which means a zookeeper has to hold onto the tablet from behind protective mesh.
Zoo Atlanta in Georgia installed a touch screen into a tree-like structure inside the enclosure. But Sally Sherwen, an animal welfare specialist at Zoos Victoria, wanted to go beyond that and give the orangutans the opportunity to engage with the technology the way they want to—make it a richer experience by allowing more full body movements and also to interact directly with visitors.
She says previous research at the zoo has shown the orangutans themselves are keen to engage. When given the opportunity to be behind a curtained off part of the enclosure or in clear view, the orangutans much preferred being in clear view.
“They enjoyed using the tablet, but we wanted to give them something more, something they can use when they choose to,” Sherwen says.
FUN AND SAFE
The goal was to ensure there was nothing the orangutans could break or use to hurt themselves.
“By having all the technology on the outside and using these emerging technologies that allow for touch detections on projected surfaces, we are able to circumvent the safety issue,” says Marcus Carter, a researcher at the University of Melbourne.
Such safety issues aren’t to be underestimated. Orangutans are three times stronger than humans while sharing 97 percent of our DNA. A key challenge has been projecting successfully through the three panels of bulletproof glass protecting the enclosure. But the team can now project a full body-sized screen that allows the orangutans to “bodily engage” with the projection, whether it be rolling, using it on their bodies, or bringing over physical objects like leaves and bits of tarpaulin.
GAMES BASED ON PERSONALITIES
The team has developed an initial shape-recognition game dubbed Zap that builds off a game the zoo keepers created in which the orangutans are trained to identify a red dot on a wall to secure a treat. But in the computer game, the shape will explode with light when both the orangutan and the human player touch the shapes at the same time. The idea is to encourage collaborative play.
“As interactive game designers we use what we call participatory design. We work with the people who use the products and they provide input and feedback for prototypes,” says PhD candidate Sarah Webber, who is working with Carter. “You can’t have those conversations with animals, so we have to find new ways to include them as participants in the design process.”
The researchers are now working with the zookeepers to tap into the different personalities of the zoo’s orangutans. For example, females Gabby and Kiani appear to be fascinated with handbags and what people pull out of them, while male Santan is intrigued by men with ginger beards.
Malu, also a male, likes extracting nuts and bolts from places he shouldn’t. Indeed, Malu’s penchant for taking things apart – he managed to escape the enclosure for a short time last year – has led his keepers to train him to hand over whatever bolts and nuts he extracts in return for a reward.
“These individual differences are things we can use as inspiration to design something that is complex and motivates them to solve puzzles,” Webber says. “We know apes can successfully use touch screens, but they are very task oriented, so we want to see if we can devise experiences that are inherently engaging to them.”
INSTAGRAM FOR ORANGUTANS
One application is being aimed especially at Kiani, who loves to look at pictures of herself. Dubbed “Orangstagram”, the app allows the orangutans to take pictures of themselves and display them. They would also be able to go through a picture library and choose what they want to look at.
Such an activity could prove to be much more than a game. It opens up the possibility of animal psychologists “reading” the emotional health of orangutans by analyzing what pictures they select.
“If we design an interface they understand, they could use it to communicate things about their welfare,” Carter says.
Such an interface could also eventually become a shared projection space extending inside and outside the enclosure, creating a host of potential new opportunities for interaction between the orangutans and zoo visitors. This could allow orangutans the opportunity to play collaboratively with visitors, who can remain safely outside the enclosure. It would completely change the way animals and visitors interact at the zoo.
In a surprising 6-2 decision, the Supreme Court upheld a controversial energy conservation rule from the Federal Energy Regulatory Commission (FERC), the agency that regulates interstate electricity sales.
The rule was one of those arcane pieces of federal policy so complex that even attorneys arguing for and against had difficulty explaining it. Yet this particular decision by the court is one of the most important in the energy world for many years – not because it upheld a particular FERC rule but because the decision seems to tip the balance of power on electricity policy toward the federal government and away from the states.
The breadth of this decision paves the way for a host of new technologies and business models that seem poised to disrupt the usually staid business of electric utilities and usher in a more technologically advanced power grid. At the same time, the ruling sidestepped a number of thorny questions at the heart of state versus federal control over the power grid.
Getting paid to save energy
The FERC rule allows homes and businesses to get paid for energy conservation when demand on the power grid is very high, a practice known in the electricity business as demand response. Demand response has been around for years, well before the case reached the Supreme Court, and has been credited with keeping power costs down and even with avoiding blackouts.
For example, on hot summer afternoons when the air conditioner load soars, consumers and businesses can sign up for utility programs to turn up thermostats for short periods and, in return, receive a rebate. By arranging to consume less power during those critical times, grid operators can avoid purchasing costly power from very polluting generators.
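The avoided-cost logic above is simple dispatch arithmetic: if paying customers to curtail is cheaper per megawatt-hour than running the marginal peaking plant, the grid operator uses the curtailment first. A back-of-the-envelope sketch with invented prices (these are not actual market figures):

```python
# Back-of-the-envelope sketch of the demand-response logic: cover a peak-hour
# supply shortfall with the cheaper of curtailment payments and peaker output.
# Both prices below are invented round numbers for illustration.
PEAKER_PRICE = 300.0           # $/MWh for the marginal peaking plant
DEMAND_RESPONSE_PRICE = 120.0  # $/MWh paid to customers who curtail

def peak_hour_cost(shortfall_mwh, dr_available_mwh):
    """Cost of covering a shortfall, using demand response first when it
    is cheaper than the marginal generator."""
    dr_used = min(shortfall_mwh, dr_available_mwh)
    if DEMAND_RESPONSE_PRICE >= PEAKER_PRICE:
        dr_used = 0.0  # curtailment only helps when it undercuts the peaker
    generated = shortfall_mwh - dr_used
    return dr_used * DEMAND_RESPONSE_PRICE + generated * PEAKER_PRICE

# Covering a 100 MWh shortfall with 60 MWh of curtailment on offer:
# 60 * $120 + 40 * $300 = $19,200, versus $30,000 from the peaker alone.
print(peak_hour_cost(100, 60))
# → 19200.0
```

The gap between those two totals is the saving that demand-response payments are meant to capture, and it is also why generators complain that the payments eat into their revenue.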
Critics of the practice have complained that payments in the demand response market have been so lucrative as to amount to a major subsidy for electricity users, one that has eroded the profits of power plants to the point where (ironically) the reliability of the grid may eventually be threatened. The decision issued on Monday, and the margin by which FERC’s demand response rules were upheld, came therefore as something of a surprise.
During oral arguments in October last year, the attorneys arguing on behalf of the FERC sometimes struggled to explain the workings of the power grid and the markets that have been created in the wake of electricity deregulation in the 1990s.
A host of awkward analogies, from sports cars to hamburger stands, were used on all sides. At the end of arguments, it seemed that the FERC had won some points and opponents of demand response some others, but ultimately, that confusion had prevailed.
Some months ago, I argued that this case has hugely broad implications for the electricity business, particularly for innovation, that go far beyond demand response. Indeed, the majority opinion, authored by Justice Kagan, seemed at times very sweeping.
During arguments, power generators complained that FERC simply did not have the jurisdiction to set up a market for demand response. The Federal Power Act suggests that the portion of the grid that distributes power to homes and businesses, rather than high-voltage transmission lines that transport power long distances, is the jurisdiction of the states. On this point, the message from the court was pretty clear: FERC has the authority to make the rules for deregulated electricity markets, and it can be as permissive or restrictive as it sees fit in determining who gets to participate in those markets.
As a result, the ruling seems to put the federal government in the driver’s seat over modernizing the power grid, at least in the 70 percent of the U.S. where deregulated regional electricity markets are now the norm and have been for nearly two decades.
Get paid to reduce electricity demand? Use on-site generators to supplement the grid during hot summer days? Allow community solar and energy storage to earn the same market price as natural gas or nuclear power generators? The Supreme Court has now opened the door to all of this. A smarter grid, here we come!
Microgrids and community solar
While demand response has been controversial, it has (alongside the rest of the smart grid) undoubtedly paved the way for a burst of innovative technologies, practices and business models, the likes of which the electricity sector has not seen in many decades.
Scientists at McGill University have achieved a major breakthrough in neuroscience: they have discovered how to make artificial neurons that are indistinguishable from normal human neurons and can be implanted to make new connections in the nervous system.
This is the first time scientists have managed to create new functional connections between neurons.
And apart from the fact that these artificial neurons grow over 60 times faster than neurons naturally do, they are indistinguishable from ones that grow naturally in our bodies.
“It’s really very exciting, because the central nervous system doesn’t regenerate,” says Montserrat Lopez, a postdoctoral fellow at McGill University who spent four years developing, fine-tuning, and testing the new technique. “What we’ve discovered should make it possible to develop new types of surgery and therapies for those with central nervous system damage or diseases.”
A tiny ball and a microscope
Because neurons are only about 1/100th the width of a single strand of hair, it takes some very specialized instruments and a lot of careful manipulation to create healthy neuronal connections that transmit electrical signals the same way naturally grown neurons do.
The researchers used an atomic force microscope to attach a very small polystyrene ball (a few micrometers in size) to a portion of a neuron that acts as the transmitter, which they then stretched, a bit like pulling on a rubber band, to extend and connect with the part of the neuron that acts as a receiver.
“We would never have made this discovery if the people working in the lab hadn’t figured out that you had to avoid any quick or jerky movements when you move the newly made neurons around,” says Peter Grutter, a McGill physics professor and the senior author on the paper published in the Journal of Neuroscience. “Until they found the right way to walk the neurons across the lab, from the microscope to the incubator where the newly made neurons are left to grow for 24 hours, we weren’t having any luck getting them to behave the way we wanted them to.”
An even bigger challenge than getting the neurons to connect in the proper way proved to be getting them to detach from the tool used to create them without destroying them in the process.
Eventually, the researchers figured out how to sever the connection and still preserve the functional neuron by releasing the beads.
Although it is now possible to create new neuronal connections, there is still much work ahead.
“The neurons we were able to create were just under 1 millimeter long, but that’s because we were limited by the size of the dish we used,” says Margaret Magdesian, a neuroscientist who is the first author on the paper and who worked at the Montreal Neurological Institute when the research was conducted. “This technique can potentially create neurons that are several millimeters long, but clearly more studies will need to be done to understand whether and how these micro-manipulated connections differ from natural ones.”