Breakthrough Squishy Motor Can Power Soft Robots Over Rugged Terrain

Scientists have created a robotic vehicle equipped with soft wheels and a flexible motor. It can easily roll over rough terrain and through water.

Future versions might be suitable for search and rescue missions after disasters, deep space and planet exploration, and manipulating objects during magnetic resonance imaging (MRI).

The silicone rubber is nearly 1 million times softer than aluminum.

The most important innovation is the soft motor that provides torque without bending or extending its housing, says Aaron D. Mazzeo, an assistant professor of mechanical and aerospace engineering at Rutgers University.

“The introduction of a wheel and axle assembly in soft robotics should enable vast improvement in the manipulation and mobility of devices. We would very much like to continue developing soft motors for future applications, and develop the science to understand the requirements that improve their performance.”

Vehicle innovations

  • Motor rotation without bending. “It’s actually remarkably simple, but providing torque without bending is something we believe will be advantageous for soft robots going forward,” Mazzeo says.
  • A unique wheel and axle configuration. The soft wheels may allow for passive suspensions in wheeled vehicles.
  • Wheels that use peristalsis—the process people use to push food to the stomach through the esophagus.
  • A consolidated wheel and motor with an integrated “transmission.”
  • Soft, metal-free motors suitable for harsh environments with electromagnetic fields.
  • The ability to handle impacts. The vehicle survived a fall eight times its height.
  • The ability to brake motors and hold them in a fixed position without the need for extra power.

To create the vehicle, engineers used silicone rubber that is nearly 1 million times softer than aluminum. They liken its softness to something between a silicone spatula and a relaxed human calf muscle. The motors were made using 3D-printed molds and soft lithography. A provisional patent has been filed with the US government.

“If you build a robot or vehicle with hard components, you have to have many sophisticated joints so the whole body can handle complex or rocky terrain,” says Xiangyu Gong, lead author of the study that is published in the journal Advanced Materials. “For us, the whole design is very simple, but it works very well because the whole body is soft and can negotiate complex terrain.”


Future possibilities include amphibious vehicles that could traverse rugged lakebeds; search and rescue missions in extreme environments and varied terrains, such as irregular tunnels; shock-absorbing vehicles that could be used as landers equipped with parachutes; and elbow-like systems with limbs on either side.

The Rutgers School of Engineering, the Department of Mechanical and Aerospace Engineering, the Rutgers Research Council, and an A. Walter Tyson Assistant Professorship Award supported the work.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

American Medical Association warns of health and safety problems from ‘white’ LED streetlights

By Richard G. ‘Bugs’ Stevens, University of Connecticut.

The American Medical Association (AMA) has just adopted an official policy statement about street lighting: cool it and dim it.

The statement, adopted unanimously at the AMA’s annual meeting in Chicago on June 14, comes in response to the rise of new LED street lighting sweeping the country. An AMA committee issued guidelines on how communities can choose LED streetlights to “minimize potential harmful human health and environmental effects.”

Municipalities are replacing existing streetlights with efficient and long-lasting LEDs to save money on energy and maintenance. Although the streetlights are delivering these benefits, the AMA’s stance reflects both the importance of properly designing new technologies and the close connection between light and human health.

White light is composed of different colors (red, blue and green), and some LED streetlights have a relatively high proportion of blue light, which can disrupt people’s circadian rhythms. flakepardigm/flickr, CC BY-SA

The AMA’s statement recommends that outdoor lighting at night, particularly street lighting, should have a color temperature of no greater than 3000 Kelvin (K). Color temperature (CT) is a measure of the spectral content of light from a source: how much blue, green, yellow and red it contains. A higher CT rating generally means greater blue content and a whiter-looking light.

A white LED at CT 4000K or 5000K contains a high level of short-wavelength blue light; this has been the choice for a number of cities that have recently retrofitted their street lighting, such as Seattle and New York.

But these installations have been followed by complaints about the harshness of the lights. An extreme example is the city of Davis, California, where residents demanded a complete replacement of the high color temperature LED street lights.

Can communities have more efficient lighting without causing health and safety problems?

Two problems with LED street lighting

An incandescent bulb has a color temperature of 2400K, which means it contains far less blue and far more yellow and red wavelengths. Before electric light, we burned wood and candles at night; this artificial light has a CT of about 1800K, quite yellow/red and almost no blue. What we have now is very different.

The new “white” LED street lighting which is rapidly being retrofitted in cities throughout the country has two problems, according to the AMA. The first is discomfort and glare. Because LED light is so concentrated and has high blue content, it can cause severe glare, resulting in pupillary constriction in the eyes. Blue light scatters more in the human eye than the longer wavelengths of yellow and red, and sufficient levels can damage the retina. This can cause problems seeing clearly for safe driving or walking at night.

You can sense this easily if you look directly into one of the control lights on your new washing machine or other appliance: it is very difficult to do because it hurts. Street lighting can have this same effect, especially if its blue content is high and there is not appropriate shielding.

The other issue addressed by the AMA statement is the impact on human circadian rhythmicity.

Color temperature reliably predicts spectral content of light – that is, how much of each wavelength is present. It’s designed specifically for light that comes off the tungsten filament of an incandescent bulb.

However, the CT rating does not reliably measure color from fluorescent and LED lights.

Another system for measuring light color for these sources is called correlated color temperature (CCT). It adjusts the spectral content of the light source to the color sensitivity of human vision. Using this rating, two different 3000K light sources could have fairly large differences in blue light content.

Therefore, the AMA’s recommendation for CCT below 3000K is not quite enough to be sure that blue light is minimized. The actual spectral irradiance of the LED – the relative amounts of each of the colors produced – should be considered, as well.
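
To make the CCT idea concrete, here is a minimal sketch (not anything from the AMA statement) of how a CCT figure is commonly obtained: a measured spectrum is first reduced to CIE 1931 chromaticity coordinates (x, y), and McCamy’s published cubic approximation then maps those two numbers to a temperature in kelvin. The chromaticity values below are assumed examples rather than measurements of real streetlights; the takeaway is that the whole spectrum collapses into two coordinates and then one number, which is why two lamps with the same CCT can still differ in blue content.

```python
# Illustrative sketch: estimating correlated color temperature (CCT) from
# CIE 1931 chromaticity coordinates using McCamy's cubic approximation.
# The coordinate values below are assumed examples, not measurements of
# any particular streetlight.

def mccamy_cct(x: float, y: float) -> float:
    """Approximate CCT (in kelvin) from CIE 1931 (x, y) chromaticity."""
    n = (x - 0.3320) / (y - 0.1858)   # inverse slope relative to the epicenter
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

# Two assumed light sources: a warm-white and a cool-white LED.
warm_white = (0.4338, 0.4030)   # roughly 3000 K
cool_white = (0.3805, 0.3768)   # roughly 4000 K

for name, (x, y) in [("warm white", warm_white), ("cool white", cool_white)]:
    print(f"{name}: CCT ~ {mccamy_cct(x, y):.0f} K")
```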

The reason lighting matters

The AMA policy statement is particularly timely because the new World Atlas of Artificial Night Sky Brightness just appeared last week, and street lighting is an important component of light pollution. According to the AMA statement, one of the considerations of lighting the night is its impact on human health.

In previous articles for The Conversation, I have described how lighting affects our normal circadian physiology, how this could lead to some serious health consequences and most recently how lighting the night affects sleep.

LEDs (the yellow device) produce a highly concentrated light, which makes glare a problem for LED streetlights since it can hamper vision at night.
razor512/flickr, CC BY

White LED light is estimated to be five times more effective at suppressing melatonin at night than the high-pressure sodium lamps (given the same light output) that have been the mainstay of street lighting for decades. Melatonin suppression is a marker of circadian disruption, which includes disrupted sleep.

Bright electric lighting can also adversely affect wildlife by, for example, disturbing migratory patterns of birds and some aquatic animals which nest on shore.

Street lighting and human health

The AMA has made three recommendations in its new policy statement:

First, the AMA supports a “proper conversion to community based Light Emitting Diode (LED) lighting, which reduces energy consumption and decreases the use of fossil fuels.”

Second, the AMA “encourage[s] minimizing and controlling blue-rich environmental lighting by using the lowest emission of blue light possible to reduce glare.”

Third, the AMA “encourage[s] the use of 3000K or lower lighting for outdoor installations such as roadways. All LED lighting should be properly shielded to minimize glare and detrimental human and environmental effects, and consideration should be given to utilize the ability of LED lighting to be dimmed for off-peak time periods.”

There is almost never a completely satisfactory solution to a complex problem. We must have lighting at night, not only in our homes and businesses, but also outdoors on our streets. The need for energy efficiency is serious, but so too is minimizing human risk from bad lighting, both due to glare and to circadian disruption. LED technology can optimize both when properly designed.

Richard G. ‘Bugs’ Stevens, Professor, School of Medicine, University of Connecticut

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Can we harness bacteria to help clean up future oil spills?

By Nina Dombrowski, University of Texas at Austin and Brett J. Baker, University of Texas at Austin.

In 2010 the Deepwater Horizon oil spill released an estimated 4.2 million barrels of oil into the Gulf of Mexico – the largest offshore spill in U.S. history. The spill caused widespread damage to marine species, fisheries and ecosystems stretching from tidal marshes to the deep ocean floor.

Emergency responders used multiple strategies to remove oil from the Gulf: They skimmed it from the water’s surface, burned it and used chemical dispersants to break it into small droplets. However, experts struggled to account for what had happened to much of the oil. This was an important question, because it was unclear how much of the released oil would break down naturally within a short time. If spilled oil persisted and sank to the ocean floor, scientists expected that it would cause more extensive harm to the environment.

Before the Deepwater Horizon spill, scientists had observed that marine bacteria were very efficient at removing oil from seawater. Therefore, many experts argued that marine microbes would consume large quantities of oil from the BP spill and help the Gulf recover.

In a recent study, we used DNA analysis to confirm that certain kinds of marine bacteria efficiently broke down some of the major chemical components of oil from the spill. We also identified the major genetic pathways these bacteria used for this process, and other genes, which they likely need to thrive in the Gulf.

Altogether, our results suggest that some bacteria can not only tolerate but also break up oil, thereby helping in the cleanup process. By understanding how to support these natural occurring microbes, we may also be able to better manage the aftermath of oil spills.

Finding the oil-eaters

Observations in the Gulf appeared to confirm that microbes broke down a large fraction of the oil released from BP’s damaged well. Before the spill, waters in the Gulf of Mexico contained a highly diverse range of bacteria from several different phyla, or large biological families. Immediately after the spill, these bacterial species became less diverse and one phylum increased substantially in numbers. This indicated that many bacteria were sensitive to high doses of oil, but a few types were able to persist.

We wanted to analyze these observations more closely by posing the following questions: Could we show that these bacteria removed oil from the spill site and thereby helped the environment recover? Could we decipher the genetic code of these bacteria? And finally, could we use this genetic information to understand their metabolisms and lifestyles?

Individual puzzle pieces of DNA making up a bacterial genome. Each color represents an individual genome and each dot depicts one piece of DNA.
To address these questions, we used new technologies that enabled us to sequence the genetic code of the active bacterial community that was present in the Gulf of Mexico’s water column, without having to grow them in the laboratory. This process was challenging because there are millions of bacteria in every drop of seawater. As an analogy, imagine looking through a large box that contains thousands of disassembled jigsaw puzzles, and trying to extract the pieces belonging to each individual puzzle and reassemble it.

We wanted to identify bacteria that could degrade two types of compounds that are the major constituents of crude oil: alkanes and aromatic hydrocarbons. Alkanes are relatively easy to degrade – even sunlight can break them down – and have low toxicity. In contrast, aromatic hydrocarbons are much harder to remove from the environment. They are generally much more harmful to living organisms, and some types cause cancer.

Microscopy image of oil-eating bacteria. Tony Gutierrez, Heriot-Watt University

We successfully identified bacteria that degraded each of these compounds, and were surprised to find that many different bacteria fed on aromatic hydrocarbons, even though these are much harder to break down. Some of these bacteria, such as Colwellia, had already been identified as factors in the degradation of oil from the Deepwater Horizon spill, but we also found many new ones.

This included Neptuniibacter, which had not previously been known as an important oil-degrader during the spill, and Alcanivorax, which had not been thought to be capable of degrading aromatic hydrocarbons. Taken together, our results indicated that many different bacteria may act together as a community to degrade complex oil mixtures.

Neptuniibacter also appears to be able to break down sulfur. This is noteworthy because responders used 1.84 million gallons of dispersants on and under the water’s surface during the Deepwater Horizon cleanup effort. Dispersants are complex chemical mixtures but mostly consist of molecules that contain carbon and sulfur.

Their long-term impacts on the environment are still largely unknown. But some studies suggest that Corexit, the main dispersant used after the Deepwater Horizon spill, can be harmful to humans and marine life. If this proves true, it would be helpful to know whether some marine microbes can break down dispersant as well as oil.

Cleaning an oiled gannet, Theodore, Alabama, June 17, 2010.
Deepwater Horizon Response/Flickr, CC BY-ND

Looking more closely into these microbes’ genomes, we were able to detail the pathways that each appeared to use in order to degrade its preferred hydrocarbon in crude oil. However, no single bacterial genome appeared to possess all the genes required to completely break down the more stable aromatic hydrocarbons alone. This implies that it may require a diverse community of microbes to break down these compounds step by step.

Back into the ocean

Offshore drilling is a risky activity, and we should expect that oil spills will happen again. However, it is reassuring to see that marine ecosystems have the ability to degrade oil pollutants. While human intervention will still be required to clean up most spills, naturally occurring bacteria have the ability to remove large amounts of oil components from seawater, and can be important players in the oil cleanup process.

To maximize their role, we need to better understand how we can support them in what they do best. For example, adding dispersant changed the makeup of microbial communities in the Gulf of Mexico during the spill: the chemicals were toxic to some bacteria but beneficial for others. With a better understanding of how human intervention affects these bacteria, we may be able to support optimal bacteria populations in seawater and reap more benefit from their natural oil-degrading abilities.

Nina Dombrowski, Postdoctoral Fellow, University of Texas at Austin and Brett J. Baker, Assistant Professor of Marine Science, University of Texas at Austin

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

How do food manufacturers pick those dates on their product packaging – and what do they mean?

By Londa Nwadike, Kansas State University.

No one wants to serve spoiled food to their families. Conversely, consumers don’t want to throw food away unnecessarily – but we certainly do. The United States Department of Agriculture estimates Americans toss out the equivalent of US$162 billion in food every year, at the retail and consumer levels. Plenty of that food is discarded while still safe to eat.

Part of these losses are due to consumers being confused about the “use-by” and “best before” dates on food packaging. Most U.S. consumers report checking the date before purchasing or consuming a product, even though we don’t seem to have a very good sense of what the dates are telling us. “Sell by,” “best if used by,” “use by” – they all mean different things. Contrary to popular impression, the current system of food product dating isn’t really designed to help us figure out when something from the fridge has passed the line from edible to inedible.

For now, food companies are not required to use a uniform system to determine which type of date to list on their food product, how to determine the date to list or even if they need to list a date on their product at all. The Food Date Labeling Act of 2016, now before Congress, aims to improve the situation by clearly distinguishing between foods that may be past their peak but still ok to eat and foods that are unsafe to consume.

Aside from the labeling issues, how are these dates even generated? Food producers, particularly small-scale companies just entering the food business, often have a difficult time knowing what dates to put on their items. But manufacturers have a few ways – both art and science – to figure out how long their foods will be safe to eat.

Dates can be about rotating product, not necessarily when it’s safe to eat the food.
MdAgDept, CC BY

Consumer confusion

One study estimated 20 percent of food wasted in U.K. households is due to misinterpretation of date labels. If the same estimate holds in the U.S., the average household of four loses $275-455 per year on needlessly trashed food.

Out of a mistaken concern for food safety, 91 percent of consumers occasionally throw food away based on the “sell by” date – which isn’t really about product safety at all. “Sell by” dates are actually meant to let stores know how to rotate their stock.

A survey conducted by the Food Marketing Institute in 2011 found that among their actions to keep food safe, 37 percent of consumers reported discarding food “every time” it’s past the “use by” date – even though the date only denotes “peak quality” as determined by the manufacturer.

The most we can get from the dates currently listed on food products is a general idea of how long that particular item has been in the marketplace. They don’t tell consumers when the product shifts from being safe to not safe.

Here’s how producers come up with those dates in the first place.

Figuring out when food’s gone foul

A lot of factors determine the usable life of a food product, both in terms of safety and quality. What generally helps foods last longer? Lower moisture content, higher acidity, higher sugar or salt content. Producers can also heat-treat or irradiate foods, use other processing methods or add preservatives such as benzoates to help products maintain their safety and freshness longer.

But no matter the ingredients, additives or treatments, no food lasts forever. Companies need to determine the safe shelf life of a product.

Larger food companies may conduct microbial challenge studies on food products. Researchers add a pathogenic microorganism (one that could make people sick) that’s a concern for that specific product. For example, they could add Listeria monocytogenes to refrigerated packaged deli meats. This bacterium causes listeriosis, a serious infection of particular concern for pregnant women, older adults and young children.

The researchers then store the contaminated food in conditions it’s likely to experience in transportation, in storage, at the store, and in consumers’ homes. They’re thinking about temperature, rough handling and so on.

Every harmful microorganism has a different infective dose, or amount of that organism that would make people sick. After various lengths of storage time, the researchers test the product to determine at what point the level of microorganisms present would likely be too high for safety.

Based on the shelf life determined in a challenge study, the company can then label the product with a “use by” date that would ensure people would consume the product long before it’s no longer safe. Companies usually set the date at least several days earlier than product testing indicated the product will no longer be safe. But there’s no standard for the length of this “safety margin”; it’s set at the manufacturer’s discretion.
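
As a rough illustration of the arithmetic behind a challenge study, the sketch below assumes simple exponential growth of the test organism at refrigerator temperature and solves for the storage time at which an assumed unsafe level would be reached; a manufacturer-chosen safety margin is then subtracted. Every number here is a made-up placeholder, not a real food-safety parameter.

```python
import math

# Purely illustrative numbers -- not real food-safety parameters.
initial_count = 1e2        # assumed starting contamination, CFU per gram
unsafe_level  = 1e5        # assumed level considered too high for safety, CFU/g
doubling_time_days = 2.5   # assumed doubling time at refrigerator temperature

# With exponential growth, count(t) = initial * 2**(t / doubling_time).
# Solve for the time at which the count reaches the unsafe level.
days_to_unsafe = doubling_time_days * math.log2(unsafe_level / initial_count)

safety_margin_days = 4     # manufacturer-chosen buffer (no standard exists)
use_by_days = days_to_unsafe - safety_margin_days

print(f"Estimated days until unsafe: {days_to_unsafe:.1f}")
print(f"'Use by' set at day {use_by_days:.1f} after production")
```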

Do you even know what the manufacturer meant by this date?
Sascha Grant, CC BY-NC-ND

Another option for food companies is to use mathematical modeling tools that have been developed based on the results of numerous earlier challenge studies. The company can enter information such as the specific type of product, moisture content and acidity level, and expected storage temperatures into a “calculator.” Out comes an estimate of the length of time the product should still be safe under those conditions.

Companies may also perform what’s called a static test. They store their product for an extended period of time under typical conditions the product may face in transport, in storage, at the store, and in consumer homes. This time they don’t add any additional microorganisms.

They just sample the product periodically to check it for safety and quality, including physical, chemical, microbiological, and sensory (taste and smell) changes. When the company has established the longest possible time the product could be stored for safety and quality, they will label the product with a date that is quite a bit earlier to be sure it’s consumed long before it is no longer safe or of the best quality.

Companies may also store the product in special storage chambers that control the temperature, oxygen concentration and other factors to speed up its deterioration, so the estimated shelf life can be determined more quickly (this is called accelerated testing). The company then uses formulas to extrapolate the actual shelf life from the accelerated results and the conditions used for testing.
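
One widely used way to do that extrapolation is the Q10 rule, in which deterioration is assumed to speed up by a constant factor for every 10°C increase in temperature. The sketch below shows the idea; the Q10 value, temperatures and accelerated shelf life are all assumptions for illustration, not values from any particular product.

```python
# Illustrative Q10 extrapolation for accelerated shelf-life testing.
# Q10 is the factor by which deterioration speeds up for every 10 degrees C
# increase in temperature. All values below are assumptions for illustration.

q10 = 2.0                            # assumed: deterioration roughly doubles per +10 C
accelerated_temp_c = 45.0            # assumed test-chamber temperature
normal_temp_c = 22.0                 # assumed typical storage temperature
shelf_life_accelerated_days = 30.0   # assumed shelf life observed in the chamber

# Shelf life at normal temperature = accelerated shelf life * Q10^(dT/10)
delta_t = accelerated_temp_c - normal_temp_c
estimated_shelf_life_days = shelf_life_accelerated_days * q10 ** (delta_t / 10.0)

print(f"Estimated shelf life at {normal_temp_c:.0f} C: "
      f"{estimated_shelf_life_days:.0f} days")
```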

Smaller companies may list a date on their product based on the length of shelf life they have estimated their competitors are using, or they may use reference materials or ask food safety experts for advice on the date to list on their product.

Sometimes it’s an obvious call.
Steven Depolo, CC BY

Even the best dates are only guidelines

Consumers themselves hold a big part of food safety in their own hands. They need to handle food safely after they purchase it, including storing foods under sanitary conditions and at the proper temperature. For instance, don’t allow food that should be refrigerated to be above 40℉ for more than two hours.

If a product has a use-by date on the package, consumers should follow that date to determine when to use or freeze it. If it has a “sell-by” or no date on the package, consumers should follow storage time recommendations for foods kept in the refrigerator or freezer and cupboard.

And use your common sense. If something has visible mold, off odors, the can is bulging or other similar signs, this spoilage could indicate the presence of dangerous microorganisms. In such cases, use the “If in doubt, throw it out” rule. Even something that looks and smells normal can potentially be unsafe to eat, no matter what the label says.

Londa Nwadike, Assistant Professor of Food Safety, Extension Food Safety Specialist at University of Missouri, Kansas State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Where Supermassive Black Holes Come From [Video]

Here’s a great, brief video we found that explains how supermassive black holes are formed, or at least one theory of how that happens. As the video explains, these black holes are paradoxically the blackest objects in the universe as well as the brightest, due to the way they are formed.

If you’ve ever wondered what astronomers and cosmologists mean when they are talking about supermassive black holes, wonder no more! This video will explain it all:

Our gratitude to the PHD Comics YouTube channel for creating this awesome video!

Now, Check Out: 

Why Having a ‘Bird Brain’ is Actually Awesome [Video]

The macaw has a brain the size of an unshelled walnut and the macaque monkey has one about the size of a lemon. Nevertheless, the macaw has more neurons in its forebrain—the portion of the brain associated with intelligent behavior—than the macaque.

That is one of the surprising results of the first study to systematically measure the number of neurons in the brains of more than two dozen bird species, ranging in size from the tiny zebra finch to the six-foot-tall emu. The study found that birds consistently have more neurons packed into their small brains than are stuffed into mammalian or even primate brains of the same mass.

“For a long time having a ‘bird brain’ was considered to be a bad thing: Now it turns out that it should be a compliment,” says Suzana Herculano-Houzel, a neuroscientist at Vanderbilt University.

The study provides a straightforward answer to a puzzle that comparative neuroanatomists have been wrestling with for more than a decade: how can birds with their small brains perform complicated cognitive behaviors?

The conundrum was created by a series of studies beginning in the previous decade that directly compared the cognitive abilities of parrots and crows with those of primates. The studies found that the birds could manufacture and use tools, use insight to solve problems, make inferences about cause-effect relationships, recognize themselves in a mirror, and plan for future needs, among other cognitive skills previously considered the exclusive domain of primates.

Scientists were left with a generally unsatisfactory fallback position: Avian brains must simply be wired in a completely different fashion from primate brains. Two years ago, even this hypothesis was knocked down by a detailed study of pigeon brains, which concluded that they are, in fact, organized along quite similar lines to those of primates.

The new study, published in the Proceedings of the National Academy of Sciences, provides a more plausible explanation: Birds can perform these complex behaviors because their forebrains contain a lot more neurons than anyone had previously thought—as many as in mid-sized primates.

Densely packed neurons

“We found that birds, especially songbirds and parrots, have surprisingly large numbers of neurons in their pallium: the part of the brain that corresponds to the cerebral cortex, which supports higher cognition functions such as planning for the future or finding patterns. That explains why they exhibit levels of cognition at least as complex as primates,” Herculano-Houzel says.

That’s possible because the neurons in avian brains are much smaller and more densely packed than those in mammalian brains. Parrot and songbird brains, for example, contain about twice as many neurons as primate brains of the same mass and two to four times as many neurons as equivalent rodent brains.

Figure: comparison of neuron counts in bird brains versus mammal brains.

Not only are neurons packed into the brains of parrots and crows at a much higher density than in primate brains, but the proportion of neurons in the forebrain is also significantly higher.

“In designing brains, nature has two parameters it can play with: the size and number of neurons and the distribution of neurons across different brain centers,” Herculano-Houzel says, “and in birds we find that nature has used both of them.”

Although the relationship between intelligence and neuron count has not yet been firmly established, the scientists argue that avian brains with forebrain neuron counts equal to or greater than those of primates with much larger brains could give birds much higher “cognitive power” per pound than mammals.

One of the important implications of the study, Herculano-Houzel says, is that it demonstrates that there is more than one way to build larger brains.

Previously, neuroanatomists thought that as brains grew larger neurons had to grow bigger as well because they had to connect over longer distances. “But bird brains show that there are other ways to add neurons: keep most neurons small and locally connected and only allow a small percentage to grow large enough to make the longer connections. This keeps the average size of the neurons down.

“Something I love about science is that when you answer one question, it raises a number of new questions.”

Among the questions that this study raises are whether the surprisingly large number of neurons in bird brains comes at a correspondingly large energetic cost, and whether the small neurons in bird brains are a response to selection for small body size due to flight, or possibly the ancestral way of adding neurons to the brain – from which mammals, not birds, may have diverged.

Scientists from Charles University in Prague and the University of Vienna are coauthors of the study.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Featured Photo Credit:  Mathias Appel/Flickr

Now, Check Out:

LIGO Detects More Gravitational Waves

Scientists have observed gravitational waves—ripples in the fabric of spacetime—for the second time, surpassing the expectations of LIGO researchers and clearly demonstrating the increased capabilities of Advanced LIGO.

The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Louisiana and Washington, on December 26, 2015.

Gravitational waves carry information about their origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that these gravitational waves were produced during the final moments of the merger of two black holes—14 and 8 times the mass of the sun—to produce a single, more massive spinning black hole that is 21 times the mass of the sun.

“It is very significant that these black holes were much less massive than those observed in the first detection,” says Gabriela Gonzalez, LIGO Scientific Collaboration (LSC) spokesperson and professor of physics and astronomy at Louisiana State University. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”

During the merger, which occurred approximately 1.4 billion years ago, a quantity of energy roughly equivalent to the mass of the sun was converted into gravitational waves. The detected signal comes from the last 27 orbits of the black holes before their merger. Based on the arrival time of the signals—with the Livingston (Louisiana) detector measuring the waves 1.1 milliseconds before the Hanford (Washington) detector—the position of the source in the sky can be roughly determined.
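
A quick back-of-the-envelope check of those figures: the two black holes total 22 solar masses before the merger and 21 afterward, so roughly one solar mass was radiated away, and E = mc² turns that into joules. The short sketch below simply restates that arithmetic using standard constants.

```python
# Back-of-the-envelope check of the numbers quoted above: two black holes of
# 14 and 8 solar masses merge into one of 21 solar masses, so roughly one
# solar mass of energy is radiated as gravitational waves (E = m * c**2).

SOLAR_MASS_KG = 1.989e30
C_M_PER_S = 2.998e8

mass_before = (14 + 8) * SOLAR_MASS_KG
mass_after = 21 * SOLAR_MASS_KG
radiated_mass = mass_before - mass_after           # ~1 solar mass

energy_joules = radiated_mass * C_M_PER_S**2
print(f"Radiated mass: {radiated_mass / SOLAR_MASS_KG:.1f} solar masses")
print(f"Energy radiated: {energy_joules:.2e} J")   # ~1.8e47 joules
```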

“In the near future, Virgo, the European interferometer, will join a growing network of gravitational wave detectors, which work together with ground-based telescopes that follow-up on the signals,” notes Fulvio Ricci, the Virgo Collaboration spokesperson, a physicist at Istituto Nazionale di Fisica Nucleare (INFN) and professor at Sapienza University of Rome. “The three interferometers together will permit a far better localization in the sky of the signals.”

Two events in 4 months

The first detection of gravitational waves, announced on February 11, 2016, confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity, and marked the beginning of the new field of gravitational-wave astronomy.

The second discovery “has truly put the ‘O’ for Observatory in LIGO,” says Caltech’s Albert Lazzarini, deputy director of the LIGO Laboratory. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future.

“LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”

“We are starting to get a glimpse of the kind of new astrophysical information that can only come from gravitational wave detectors,” says MIT’s David Shoemaker, who led the Advanced LIGO detector construction program.

Both discoveries were made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed.

“With the advent of Advanced LIGO, we anticipated researchers would eventually succeed at detecting unexpected phenomena, but these two detections thus far have surpassed our expectations,” says NSF Director France A. Córdova. “NSF’s 40-year investment in this foundational research is already yielding new information about the nature of the dark universe.”

Advanced LIGO’s next data-taking run will begin this fall. By then, further improvements in detector sensitivity are expected to allow LIGO to reach as much as 1.5 to 2 times more of the volume of the universe. The Virgo detector is expected to join in the latter half of the upcoming observing run.

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1,000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector.

The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. A paper about the discovery is forthcoming in Physical Review Letters.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

Personal beliefs versus scientific innovation: getting past a flat Earth mentality

By Igor Juricevic, Indiana University South Bend.

The history of science is also a history of people resisting new discoveries that conflict with conventional wisdom.

When Galileo promoted Copernicus’ theory that the Earth revolves around the sun – counter to church doctrine about the Earth being the center of the universe – he wound up condemned by the Roman Inquisition in 1633. Charles Darwin’s theory of evolution – that new species develop as a result of natural selection on inherited traits – ran into opposition because it contradicted long-held scientific, political and religious beliefs. Alfred Wegener’s 1912 proposal that Earth’s continents move relative to each other – the theory of continental drift – was rejected for decades, in part because scientists held fast to the traditional theories they’d spent careers developing.

These kinds of examples aren’t only historical, unfortunately. We’re used to hearing about how the general public can be dense about science. You might expect some portion of everyday folks to take their time coming around on truly groundbreaking ideas that run counter to what they’ve always thought.

But scientists, too, hold their own personal beliefs – by definition, based on old ways of thinking – that may be holding back the innovation that’s at the heart of science. And that’s a problem. It’s one thing for an average Joe to resist evolving scientific theories. It’s quite another if a scientist’s preconceived notions hold us back from discovering the new and unknown – whether that’s a cure for Zika or a cutting-edge technology to combat climate change.

Personal beliefs as publication roadblocks

Real scientific progress occurs when laboratory or field research is reported to the public. With luck, the finding is accepted and put into practice, cures are developed, social policies are instituted, educational practices are improved and so on.

This usually occurs through publication of the research in scientific journals. There’s an important step between the lab and publication that laypeople may not know about – the evaluation of the research by other scientists. These other scientists are peers of the researcher, typically working in a closely related area. This middle step is commonly referred to as peer review.

In a perfect world, peer review is supposed to determine if the study is solid, based on the quality of the research. It’s meant to be an unbiased evaluation of whether the findings should be reported via journal publication. This important step prevents sloppy research from reaching the public.

However, in the real world, scientists are human beings and are often biased. They let their own beliefs influence their peer reviews. For example, numerous reports indicate that scientists rate research more favorably if the findings agree with their prior beliefs. Worst of all, these prior beliefs often have nothing to do with science but are simply the scientists’ personal views.

‘But that’s counter to what I thought…’

How is this a problem for scientific innovation? Let’s look at how some personal beliefs could prevent innovative science from reaching the public.

What if she’s on the path to a revolutionary idea?
CIAT, CC BY-SA

“Minorities aren’t good at STEM.” The stereotype that “women are not good at math” is commonly held – and also happens to be incorrect. If a scientist holds this personal belief, then he is likely to judge any research done by women in STEM (Science, Technology, Engineering and Mathematics) more negatively – not because of its quality, but because of his own personal belief.

For instance, some studies have shown that female STEM applicants in academia are judged more harshly than their male counterparts. Because of this gender bias, it may take a female STEM researcher more time and effort before her work reaches the public.

Some racial minorities face similar kinds of bias. For example, one study found that black applicants are less likely to receive research funding from the U.S. National Institutes of Health than equivalently qualified whites. That’s a major roadblock to these researchers advancing their work.

“Comic books are low-brow entertainment for kids.” Here’s an example from my own area of expertise.

Does this look like a legitimate area of inquiry to you? Analysis of comic book images can yield new insights into how we perceive.

Comic book research is a relatively recent area of study. Perhaps because of this, innovative findings in psychology have been discovered by analyzing comic book images.

However, people often believe that comic books are just low-brow entertainment for kids. If a scientist holds this personal belief, then she’s likely to judge any psychology research using comic books more negatively. Because of this, scientists like me who focus on comic books may not be able to publish in the most popular psychology journals. As a result, fewer people will ever see this research.

“The traditional ways are the best ways.” A final example is a personal belief that directly counters scientific innovation. Often, scientists believe that traditional methods and techniques are better than any newly proposed approaches.

The history of psychology supplies one example. Behaviorism was psychology’s dominant school of thought for the first part of the 20th century, relying on observed behavior to provide insights. Its devotees rejected new techniques for studying psychology. During behaviorism’s reign, any talk of internal processes of the mind was considered taboo. One of the pioneers of the subsequent cognitive revolution, George A. Miller, said “using ‘cognitive’ was an act of defiance.” Luckily for us, he was defiant and published one of the most highly cited papers in psychology.

If a scientist believes the way we’ve always done things in the lab is best, then she’ll judge any research done using novel approaches more negatively. Because of this, highly innovative work is rarely published in the best scientific journals and is often recognized only after considerable delay.

We know our planet is round. But are we missing out on other innovative ideas?
Jaya Ramchandani, CC BY

How is this a problem for scientific progress?

Almost by definition, the most important and innovative scientific findings often go against people’s existing beliefs. If research that conforms to personal beliefs is favored, then any research that is based on new ideas runs the risk of being passed over. It takes a leap to imagine a round Earth when everyone’s always believed it to be flat.

When old ideas rule the day, scientific progress stalls. And as our world changes at an ever faster pace, we need innovative thinking to face the coming challenges.

How can scientists stop their personal beliefs from impeding scientific progress? Completely removing personal beliefs from these contexts is impossible. But we can work to change our beliefs so that, instead of hampering scientific progress, they encourage it. Many studies have outlined possible ways to modify beliefs. It’s up to scientists, and indeed society as well, to begin to examine their own beliefs and change them for the better.

After all, we don’t want to delay the next revolutionary idea in climate science, pioneering cure for cancer, or dazzling discovery in astronomy just because we can’t see past our original beliefs.

Igor Juricevic, Assistant Professor of Psychology (Perception and Cognition), Indiana University South Bend

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Graphene isn’t the only Lego in the materials-science toy box

By Peter Byrley, University of California, Riverside.

You may have heard of graphene, a sheet of pure carbon, one atom thick, that’s all the rage in materials-science circles, and getting plenty of media hype as well. Reports have trumpeted graphene as an ultra-thin, super-strong, super-conductive, super-flexible material. You could be excused for thinking it might even save all of humanity from certain doom.

Not exactly. In the current world of nano-electronics, there is a lot more going on than just graphene. One of the materials I work with, molybdenum disulphide (MoS₂), is a one-layer material with interesting properties beyond those of graphene. MoS₂ can absorb five times as much visible light as graphene, making it useful in light detectors and solar cells. In addition, even newer materials like borophene (a one-layer material made of boron atoms projected to be mechanically stronger than graphene) are being proposed and synthesized every day.

Layering two-dimensional materials. Peter Byrley, Author provided

These and other materials yet to be discovered will be used like Lego pieces to build the electronics of the future. By stacking multiple materials in different ways, we can take advantage of different properties in each of them. The new electronics built with these combined structures will be faster, smaller, more environmentally resistant and cheaper than what we have now.

Looking for an energy gap

There is a key reason that graphene will not be the versatile cure-all material that the hype might suggest. You can’t just stack graphene repeatedly to get what you want. The electronic property preventing this is the lack of what is called an “energy gap.” (The more technical term is “band gap.”)

What the energy gap looks like. Peter Byrley

Metals will conduct electricity through them regardless of the environment. However, any other material that is not a metal needs a little boost of energy from the outside to get electrons to move through the band gap and into the conducting state. How much of a boost the material needs is called the energy gap. The energy gap is one of the factors that determines how much total energy needs to be put into your entire electrical device, from either heat or applied electrical voltage, to get it to conduct electricity. You essentially have to put in enough starting energy if you want your device to work.

Some materials have a gap so large that almost no amount of energy can get electrons flowing through them. These materials are called insulators (think glass). Other materials have either an extremely small gap or no gap at all. These materials are called metals (think copper). This is why we use copper (a metal with instant conductivity) for wiring, while we use plastics (an insulator that blocks electricity) as the protective outer coating.

Everything else, with gaps in between these two extremes, is called a semiconductor (think silicon). Semiconductors, at the theoretical temperature of absolute zero, behave as insulators because they have no heat energy to get their electrons into the conducting state. At room temperature, however, heat from the surrounding environment provides just enough energy to get some electrons (hence the term, “semi”-conducting) over the small band gap and into the conducting state ready to conduct electricity.
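
To put rough numbers on that picture, the sketch below compares the thermal energy available at room temperature with representative band gaps, using a simple Boltzmann-style factor exp(-Eg / 2kT) for the relative likelihood of exciting an electron across the gap. This is a deliberately simplified estimate that ignores density-of-states and doping effects, and the insulator gap is an assumed representative value; it is only meant to show why a roughly 1 eV gap conducts a little at room temperature while a much larger gap effectively does not.

```python
import math

# Rough illustration of the band-gap argument above: the chance of thermally
# exciting an electron across a gap scales roughly like exp(-Eg / (2*k*T)).
# This ignores density-of-states details -- it is only meant to show scale.

K_BOLTZMANN_EV = 8.617e-5      # Boltzmann constant in eV/K
T_ROOM = 300.0                 # room temperature, kelvin

materials = {
    "metal (no gap)": 0.0,
    "silicon (semiconductor)": 1.1,     # band gap in eV
    "silica glass (insulator)": 9.0,    # assumed representative value
}

for name, gap_ev in materials.items():
    factor = math.exp(-gap_ev / (2 * K_BOLTZMANN_EV * T_ROOM))
    print(f"{name:28s} relative carrier excitation ~ {factor:.2e}")
```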

Comparing the band gap in metals (left), semiconductors (center) and insulators (right). Peter Byrley

Graphene’s energy gap

Graphene is in fact a semi-metal. It has no energy gap, which means it will always conduct electricity – you can’t turn off its conductivity.

This is a problem because electronic devices use electrical current to communicate. At their most fundamental level, computers communicate by sending 1’s and 0’s – on and off signals. If a computer’s components were made from graphene, the system would always be on, everywhere. It would be unable to perform tasks because its lack of energy gap prevents graphene from ever becoming a zero; the computer would keep reading 1’s all the time. Semiconductors, by contrast, have an energy gap that is small enough to let some electrons conduct electricity but is large enough to have a clear distinction between on and off states.

Imagining using a computer based on graphene.
Woman with computer via shutterstock.com

Finding the right materials

Not all hope is lost, however. Researchers are looking at three main ways to tackle this:

  1. Using new materials similar to graphene that actually have a sufficient energy gap and finding ways to further improve their conductivity.
  2. Altering graphene itself to create this energy gap.
  3. Combining graphene with other materials to optimize their combined properties.

There are many one-layer materials currently being looked at that actually have a sufficient energy gap. One such material, MoS₂, has been studied in recent years as a potential replacement for traditional silicon and also as a light detector and gas sensor.

The only drawback with these other materials is that so far, we have not found one that matches the excellent though always-on conductivity of graphene. The other materials can be turned off, but when on, they are not as good as graphene. MoS₂ itself is estimated to have 1/15th to 1/10th the conductivity of graphene in small devices. Researchers, including me, are now looking at ways to alter these materials to increase their conductivity.

Using graphene as an ingredient

Strangely, an energy gap in graphene can actually be induced through modifications like bending it, turning it into a nanoribbon, inserting foreign chemicals into it or using two layers of graphene. But each of these modifications can reduce the graphene’s conductivity or limit how it can be used.

To avoid specialized setups, we could just combine graphene with other materials. By doing this, we are also combining the properties of the materials in order to reap the best benefits. We could, for example, invent new electronic components that have a material allowing them to be shut off or on (like MoS₂) but have graphene’s great conductivity when turned on. New solar cells will work on this concept.

A combined structure could, for example, be a solar panel made for harsh environments: We could layer a thin, transparent protective material over the top of a very efficient solar-collecting material, which in turn could be on top of a material that is excellent at conducting electricity to a nearby battery. Other middle layers could include materials that are good at selectively detecting gases such as methane or carbon dioxide.

Researchers are now racing to figure out what the best combination is for different applications. Whoever finds the best combination will eventually win numerous rights to patents for improved electronic products.

The truth is, though, we don’t know what our future electronics will look like. New Lego pieces are being invented all the time; the ways we stack or rearrange them are changing constantly, too. All that’s certain is that the insides of electronic devices will look drastically different in the future than they do today.

Peter Byrley, Ph.D. Candidate in Chemical Engineering, University of California, Riverside

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

We’re (not) running out of water — a better way to measure water scarcity

By Kate Brauman, University of Minnesota.

Water crises seem to be everywhere. In Flint, the water might kill us. In Syria, the worst drought in hundreds of years is exacerbating civil war. But plenty of dried-out places aren’t in conflict. For all the hoopla, even California hasn’t run out of water.

There’s a lot of water on the planet. Earth’s total renewable freshwater adds up to about 10 million cubic kilometers. That number is small, less than one percent, compared to all the water in oceans and ice caps, but it’s also large, something like four trillion Olympic-sized swimming pools. Then again, water isn’t available everywhere: across space, there are deserts and swamps; over time, seasons of rain and years of drought.
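
As a quick check on that comparison, the sketch below converts 10 million cubic kilometers into a count of Olympic-sized pools, taking 2,500 cubic meters as a nominal pool volume (an assumption; actual pools vary).

```python
# Quick check of the comparison above: how many Olympic-sized pools would
# hold Earth's renewable freshwater? Pool volume is a nominal assumption.

renewable_freshwater_km3 = 10e6          # ~10 million cubic kilometers
olympic_pool_m3 = 2_500                  # assumed nominal 50m x 25m x 2m pool

freshwater_m3 = renewable_freshwater_km3 * 1e9   # 1 km^3 = 1e9 m^3
pools = freshwater_m3 / olympic_pool_m3
print(f"About {pools:.1e} pools")        # ~4e12, i.e. roughly four trillion
```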

Also, a water crisis isn’t about how much water there is – a desert isn’t water-stressed if no one is using the water; it’s just an arid place. A water shortage happens when we want more water than we have in a specific place at a specific time.

So determining whether a given part of the world is water-stressed is complicated. But it’s also important: we need to manage risk and plan strategically. Is there a good way to measure water availability and, thereby, identify places that could be vulnerable to water shortages?

Because it measures whether we have enough, the ratio of water use to water availability is a good way to quantify water shortage. Working with a group of collaborators, some of whom run a state-of-the-art global water resources model and some of whom work on the ground in water-scarce places, I quantified just how much of our water we’re using on a global basis. It was less straightforward than it sounds.

Water consumption, water availability

We use water for drinking and cleaning and making clothes and cars. Mostly, however, we use water to grow food. Seventy percent of the water we pull from rivers, streams and aquifers, and nearly 90 percent of the water we “use up,” is for irrigation.

How much water we use hinges on what you mean by “use.” Tallying the water we withdraw from rivers, lakes and aquifers makes sense for homes and farms, because that’s how much water runs through our taps or sprinkles onto farm fields.

But an awful lot of that water flows down the drain. So it can be, and probably is, used again. In the U.S., wastewater from most homes flows to treatment plants. After it’s cleaned, it’s released to rivers or lakes that are likely someone else’s water source. My tap water in Minneapolis comes from the Mississippi River, and all the water I flush goes through a wastewater treatment plant and back into the Mississippi River, the drinking water source for cities all the way to New Orleans.

Water-saving products, such as low-flow faucets and appliances, reduce the amount of water that is used on site, most of which is sent back into watersheds. The amount of water a home consumes, through evaporation for instance, remains the same. Kate Brauman, Author provided

With most water “saving” technologies, less water is taken out of a river, but that also means that less water is put back into the river. It makes a difference to your water bill – you had to pump less water! However, your neighbor in the town downstream doesn’t care if that water ran through your tap before it got to her. She cares only about how much total water there is in the stream. If you took out less but also put back less so the total didn’t change, it doesn’t make a difference to her.

So in our analysis, we decided to count all the water that doesn’t flow downstream, called water consumption. Consumed water isn’t gone, but it’s not around for us to use again on this turn of the water cycle.

For example, when a farmer irrigates a field, some of the water evaporates or moves through plants into the atmosphere and is no longer available to be used by a farm downhill. We tallied that water, not the runoff (which might go to that town downstream, or to migrating birds!).

Our model calculated water consumption by people and agriculture all over the world. It turns out that if a lot of water is being consumed in a watershed, meaning that it’s used and can’t be immediately reused, it’s being used for irrigation. But irrigated agriculture is super-concentrated – 75 percent of water consumption by irrigation occurs in just 6 percent of all the watersheds in the world. So in many watersheds, not much water is consumed at all – often it’s fed back into the watershed after it’s used.
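
Here is a minimal sketch of the bookkeeping described above, with made-up numbers: consumption is what is withdrawn minus what flows back to the watershed, and the depletion fraction compares total consumption with the renewable water available in that month.

```python
# Minimal sketch of the accounting described above. Consumption is the water
# that does not flow back downstream; depletion compares consumption with
# renewable availability in a given month. All numbers are made up.

def consumption(withdrawal, return_flow):
    """Water consumed (e.g., evaporated or transpired), in the same units."""
    return withdrawal - return_flow

def depletion_fraction(consumed, renewable_available):
    """Fraction of the month's renewable water that is used up."""
    return consumed / renewable_available

# Hypothetical watershed, one dry-season month (millions of cubic meters):
irrigation = consumption(withdrawal=80.0, return_flow=15.0)   # mostly evaporates
municipal  = consumption(withdrawal=20.0, return_flow=18.0)   # mostly returned

total_consumed = irrigation + municipal
fraction = depletion_fraction(total_consumed, renewable_available=120.0)
print(f"Depletion this month: {fraction:.0%}")   # ~56% of renewable supply
```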

On the other side of the ledger, we had to keep track of how much water is available. Water availability fluctuates, with flood peaks and dry seasons, so we counted up available water each month, not just in average years but during wet and dry years as well. And we counted groundwater as well as surface water from rivers, lakes and wetlands.

In many places, rainfall and snowfall replenish groundwater each year. But in other places, like the High Plains aquifer in the central United States, groundwater reserves were formed long ago and effectively aren’t recharged. This fossil groundwater is a finite resource, so using it is fundamentally unsustainable; for our measure of water shortage, we considered only renewable groundwater and surface water.

Water shortage or water stress?

We analyzed how much of the available renewable water in a watershed we’re using up for over 15,000 watersheds worldwide for each month in wet and in dry years. With those data in hand, my colleagues and I started trying to interpret it. We wanted to identify parts of the world facing water stress all the time, during dry seasons, or only in drought years.

But it turns out that identifying and defining water stress is tough, too. Just because a place is using up a lot of its water – maybe a city pulls most of the water out of a river every summer – that doesn’t necessarily mean it is water-stressed. Culture, governance and infrastructure determine whether a limit on water availability is problematic. And this context influences whether consuming 55 percent of available water is demonstrably worse than using 50 percent, or whether two short months of water shortage is twice as bad as one. Demarcating water scarcity transforms water shortage into a value-laden evaluation of water stress.

An example of a more detailed and localized measure of freshwater scarcity risk that uses data from dry seasons and dry years. Blue areas have the lowest risk because they use less than five percent of their annually renewable water. The darkest areas use more than 100 percent of their renewable freshwater because they tap groundwater that isn’t replenished. Kate Brauman, Author provided

To evaluate whether a watershed is stressed, we considered the common use-to-availability thresholds of 20 percent and 40 percent to define moderate and severe water scarcity. Those levels are most often attributed to Malin Falkenmark, who did groundbreaking work assessing water for people. In doing our research, however, we dug a little deeper and found Waclaw Balcerski. His 1964 study (published in a Hungarian water resources journal) of postwar Europe showed the cost of building water infrastructure increased in countries withdrawing more than 20 percent of their available water. Interesting, but hardly a universal definition of water stress.

A nuanced picture

In the end, we sidestepped definitions of stress and opted to be descriptive. In our study, we decided to report the fraction of renewable water used up by people annually, seasonally, and in dry years.

What does this metric reveal? You’re probably in trouble if you’re using up 100 percent of your water, or even 75 percent, since there’s no room for error in dry years and there’s no water in your river for fish or boats or swimmers. But only local context can illuminate that.

We found that globally, just two percent of watersheds use more than 75 percent of their total renewable water each year. Most of these places depend on fossil groundwater and irrigate heavily; they will run out of water.

More of the places we recognize as water-limited are seasonally depleted (nine percent of watersheds), facing regular periods of water shortage. Twenty-one percent of the world’s watersheds are depleted in dry years; these are the places where it’s easy to believe there’s plenty of water to do what we like, yet people struggle semi-regularly with periods of shortage.

We also found that 68 percent of watersheds have very low depletion; when those watersheds experience water stress, it is due to access, equality and governance.
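
The sketch below is an illustrative classifier that loosely follows the categories reported here. The 75 percent cutoff for annual depletion comes from the description above; exactly how the seasonal and dry-year cases are separated in this sketch is an assumption for illustration, not the study’s precise definition.

```python
# Illustrative classifier loosely following the categories described above.
# The 75 percent threshold for annual depletion comes from the article; how
# the seasonal and dry-year cases are cut off here is an assumption made for
# illustration, not the study's exact definition.

def classify_watershed(annual_avg, seasonal_max, dry_year_max):
    """Each argument is a fraction of renewable water consumed (0.0 - 1.0+)."""
    if annual_avg > 0.75:
        return "annually depleted"
    if seasonal_max > 0.75:
        return "seasonally depleted"      # regular months of shortage
    if dry_year_max > 0.75:
        return "dry-year depleted"        # shortage only in dry years
    return "low depletion"

# Hypothetical watersheds: (average-year, worst-month, worst dry-year) use.
examples = {
    "heavily irrigated basin": (0.90, 1.00, 1.00),
    "monsoon-fed basin":       (0.30, 0.85, 0.95),
    "temperate basin":         (0.10, 0.20, 0.40),
}
for name, fractions in examples.items():
    print(f"{name:25s} -> {classify_watershed(*fractions)}")
```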

To our surprise, we found that no watersheds were moderately depleted, defined as watersheds that in an average year are using up half their water. But it turns out that all of those watersheds are heavily depleted sometimes – they have months when nearly all the water is consumed and months when little is used.

Managing water to meet current and future demand is critical. Biophysical indicators, such as the ones we looked at, can’t tell us where a water shortage is stressful to society or ecosystems, but a good biophysical indicator can help us make useful comparisons, target interventions, evaluate risk and look globally to find management models that might work at home.

Kate Brauman, Lead Scientist, Institute on the Environment, University of Minnesota

This article was originally published on The Conversation. Read the original article.

Now, Check Out: