Saving lives by letting cars talk to each other

By Huei Peng, University of Michigan.

The death earlier this year of a driver using Autopilot in a Tesla sedan, along with news of more crashes involving Teslas operating on Autopilot, has triggered a torrent of concerns about the safety of self-driving cars.

But there is a way to improve safety across a rapidly evolving range of advanced mobility technologies and vehicles – from semi-autonomous driver assist features like Tesla’s Autopilot to a fully autonomous self-driving car like Google’s.

A Tesla Model S on the highway. pasa47, CC BY

The answer is connectivity: wireless communication that connects vehicles to each other, to the surrounding infrastructure, even to bicyclists and pedestrians. While connectivity and automation each provide benefits on their own, combining them promises to transform the movement of people and goods more than either could alone, and to do so safely. The U.S. Department of Transportation may propose requiring all new cars to have vehicle-to-vehicle communication, known as V2V, as early as this fall.

Tesla blamed the fatal crash on the failure of both its Autopilot technology and the driver to see the white tractor-trailer against a bright sky. But the crash – and the death – might have been avoided entirely if the Tesla and the tractor-trailer it hit had been able to talk to each other.

Limitations of vehicles that are not connected

Deploying autonomous vehicles that aren’t connected to each other is a bit like gathering the smartest people in the world but not letting them talk to each other. Connectivity enables smart decisions by individual drivers, by self-driving vehicles and at every level of automation in between.

Despite all the safety advances in recent decades, there are still more than 30,000 traffic deaths every year in the United States, and the number may be on the rise. After years of steady declines, fatalities rose 7.2 percent in 2015 to 35,092, up from 32,744 in 2014, representing the largest percentage increase in nearly 50 years, according to the U.S. DOT.

Most American traffic crashes involve human error. Ragesoss, CC BY-SA

The federal government estimates that 94 percent of all crashes – fatal or not – involve human error. Fully automated, self-driving vehicles are considered perhaps the best way to reduce or eliminate traffic deaths by taking human error out of the equation. The benefits of automation are evident today in vehicles that can steer you back into your lane if you start to drift or brake automatically when another driver cuts you off.

A self-driving vehicle takes automation to a higher level. It acts independently, using sensors such as cameras and radars, along with decision-making software and control features, to “see” its environment and respond, just as a human driver would.

However, onboard sensors, no matter how sophisticated, have limitations. Like humans, they see only what is in their line of sight, and they can be hampered by poor weather conditions.

Connecting cars to each other

Connected vehicles anonymously and securely “talk” to each other and to the surrounding infrastructure via wireless communication similar to Wi-Fi, known as Dedicated Short Range Communications, or DSRC. Vehicles exchange data – including location, speed and direction – 10 times per second through messages that can be securely transmitted at least 1,000 feet in any direction, and through barriers such as heavy snow or fog. Bicycles and pedestrians can be linked using portable devices such as smartphones or tablets, so drivers know they are nearby.
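In rough outline, the kind of message a connected vehicle broadcasts can be sketched in a few lines of Python. The field names and JSON encoding below are purely illustrative – real DSRC messages follow the SAE J2735 standard’s compact binary format, and the pseudonym scheme here is a stand-in for the actual anonymization machinery.

```python
import json
import time
from dataclasses import dataclass, asdict

# Illustrative sketch of a V2V "basic safety message": each vehicle
# broadcasts its location, speed and heading ten times per second.
# Field names are hypothetical, not the actual SAE J2735 wire format.
@dataclass
class BasicSafetyMessage:
    vehicle_id: str      # rotating pseudonym, so messages stay anonymous
    latitude: float
    longitude: float
    speed_mps: float     # meters per second
    heading_deg: float   # 0-360, clockwise from north
    timestamp: float

def make_message(vehicle_id, lat, lon, speed, heading):
    return BasicSafetyMessage(vehicle_id, lat, lon, speed, heading, time.time())

def encode(msg):
    """Serialize for broadcast (real DSRC uses a compact binary encoding)."""
    return json.dumps(asdict(msg)).encode()

msg = make_message("anon-4f2a", 42.2808, -83.7430, 29.1, 87.5)
packet = encode(msg)
decoded = json.loads(packet)
print(decoded["speed_mps"])  # 29.1
```

At ten messages per second, nearby vehicles get a near-continuous picture of each other’s position and motion, even when cameras and radar can’t see through snow or fog.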

The federal government estimates that V2V connectivity could ultimately prevent or reduce the severity of about 80 percent of collisions that don’t involve a driver impaired by drugs or alcohol.

Cars are already connected in many ways. Think satellite-based GPS navigation, in-vehicle Wi-Fi hotspots and smartphone apps that remind you where you parked or remotely unlock your doors. But when it comes to connectivity for safety, there is broad agreement within the auto industry that DSRC-based V2V communication holds the most potential for reducing crashes. After years of testing, the industry is poised to roll out the technology. The next step is putting regulations in place.

Could this congested mess become a connected, communicating system? joiseyshowaa/flickr, CC BY-SA

Perhaps the greatest benefit of connectivity is that it can transform a group of independent vehicles sharing a road into a cohesive traffic system that can exchange critical information about road and traffic conditions in real time. If all vehicles are connected, and a car slips on some ice in blinding snow, vehicles following that car – whether immediately behind or three or four or more vehicles back – will get warnings to slow down. A potential 100-car pileup could become a two-car fender-bender, or be avoided altogether.
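A toy simulation makes the point. Suppose a lead car slips and broadcasts a hazard warning with the roughly 1,000-foot DSRC range mentioned above; the headways and positions below are made-up numbers, chosen only to show how many followers the warning reaches.

```python
# Toy simulation of the pileup scenario above: a lead car hits ice and
# broadcasts a hazard warning; every connected follower within radio
# range receives it in time to slow down. Numbers are illustrative.
RADIO_RANGE_FT = 1000  # DSRC messages carry at least this far

def cars_warned(positions_ft, hazard_pos_ft):
    """Return which trailing cars receive the hazard broadcast."""
    return [p for p in positions_ft if 0 < hazard_pos_ft - p <= RADIO_RANGE_FT]

# Lead car slips at the 2,000-ft mark; followers trail at 150-ft headways.
hazard = 2000
followers = [hazard - 150 * i for i in range(1, 8)]  # 1850, 1700, ...
warned = cars_warned(followers, hazard)
print(len(warned), "of", len(followers), "followers warned")  # 6 of 7
```

In a real system, warned cars would rebroadcast, so the alert hops backward through traffic well beyond the original radio range.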

This technological shift becomes a revolution when connectivity and automation are combined. A self-driving vehicle is like an island on the road, aware only of what is immediately around it. Connectivity empowers a driverless car. It alerts the vehicle to imminent dangers it may not otherwise sense, such as a vehicle about to run a red light, approaching from the other side of a hill or coming around a blind corner. The additional information could be what triggers an automated response that avoids a crash. In that way, connectivity enables more, and potentially better, automation.

More research needed

At the University of Michigan Mobility Transformation Center, we’re working to further the development of connected and automated vehicles.

Advanced mobility vehicle technology is evolving rapidly on many fronts. More work must be done to determine how best to feed data gathered from sensors to in-vehicle warning systems. We need to more fully understand how to fuse information from connectivity and onboard sensors effectively, under a wide variety of driving scenarios. And we must perfect artificial intelligence, the brains behind self-driving cars.
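One simple way to fuse a connectivity report with an onboard-sensor reading is inverse-variance weighting, the building block behind more elaborate filters. This is a hypothetical sketch with made-up numbers, not any lab’s actual fusion pipeline.

```python
# Minimal sketch of fusing an onboard-sensor estimate with a V2V report:
# inverse-variance weighting, the simplest form of sensor fusion.
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Combine two noisy estimates; the less noisy one gets more weight."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Radar ranges the lead car at 52 m (noisy in fog); the lead car's own
# V2V message implies a 50 m gap (GPS-based, a different error profile).
gap, var = fuse(52.0, 4.0, 50.0, 1.0)
print(round(gap, 2))  # 50.4
```

The fused estimate sits closer to the more trustworthy source, and its uncertainty is smaller than either input’s alone – which is exactly why connectivity and onboard sensing are stronger together.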

The benefits of connected and automated vehicles go well beyond safety. They hold the potential to significantly reduce fuel use and carbon emissions through more efficient traffic flow. No more idling at red lights or in rush hour jams for commuters or freight haulers.

Connected self-driving cars also promise to bring safe mobility to those who don’t have cars, don’t want cars or cannot drive due to age or illness. Everything from daily living supplies to health care could be delivered to populations without access to transportation.

Researchers at MTC are also studying possible negative unintended consequences of the new technology and watching for possible privacy violations, cyberattack vulnerabilities or increases in mileage driven. Deeper understanding of both technology and social science issues is the only way to ensure that connected self-driving cars become part of our sustainable future.

Huei Peng, Professor of Mechanical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.


Could This Tiny Robotic Drone Become the Next Rembrandt? [Video]

Computer scientists are programming drones equipped with a payload of ink to paint murals.

It’s no simple feat. Programming the aerial robots to apply each payload of ink accurately and efficiently requires complex algorithms to plan flight paths and adjust for positioning errors. Even very slight air currents can toss the featherweight drones off course.

Stippling drone artist in action. (Credit: McGill University)

The drones, which are small enough to fit in the palm of a hand, are outfitted with a miniature arm that holds a bit of ink-soaked sponge. As they hover near the surface to be painted, internal sensors and a motion capture system help position them to dab dots of ink in just the right places, an artistic technique known as stippling.

Professor Paul Kry at McGill University’s School of Computer Science came up with the idea a few years ago, as a way to do something about the blank hallways and stairwells in the building that houses his lab.

“I thought it would be great to have drones paint portraits of famous computer scientists on them,” he recalls.

He bought a few of the tiny quadcopters online and had a student start on the task as a summer project in 2014, under a Canadian government award for undergraduate research.

Later, master’s students Brendan Galea and Ehsan Kia took the project’s helm, often working at night and into the wee hours of the morning so the drones’ artistic efforts wouldn’t be disturbed by air turbulence from other students coming in and out of the lab.

An article on the project by Kry and the three students won a “best paper” prize in May at an international symposium in Lisbon on computational aesthetics in graphics and imaging.

Eventually, larger drones could be deployed to paint murals on hard-to-reach outdoor surfaces, including curved or irregular facades, Kry says.

“There’s this wonderful mural festival in Montreal, and we have giant surfaces in the city that end up getting amazing artwork on them,” he notes. “If we had a particularly calm day, it would be wonderful to try to do something on a larger scale like that.”

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Chris Chipello, McGill University.


Biohybrid robots built from living tissue start to take shape [Video]

By Victoria Webster, Case Western Reserve University.

Think of a traditional robot and you probably imagine something made from metal and plastic. Such “nuts-and-bolts” robots are made of hard materials. As robots take on more roles beyond the lab, such rigid systems can present safety risks to the people they interact with. For example, if an industrial robot swings into a person, there is the risk of bruises or bone damage.

Researchers are increasingly looking for solutions to make robots softer or more compliant – less like rigid machines, more like animals. With traditional actuators – such as motors – this can mean using air muscles or adding springs in parallel with motors. For example, on a Whegs robot, having a spring between a motor and the wheel leg (Wheg) means that if the robot runs into something (like a person), the spring absorbs some of the energy so the person isn’t hurt. The bumper on a Roomba vacuuming robot is another example; it’s spring-loaded so the Roomba doesn’t damage the things it bumps into.

But there’s a growing area of research that’s taking a different approach. By combining robotics with tissue engineering, we’re starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They’re safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.

Tissue-engineered biobots on titanium molds.
Karaghen Hudson and Sung-Jin Park, CC BY-ND

Building a biobot

Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot – a hybrid between natural and human-made materials.

If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells’ contraction forces will be applied in all directions, making the device inefficient at best.

So to better harness the cells’ power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.
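The payoff of alignment is easy to see numerically: sum the contraction force vectors of many cells. The cell count and per-cell force below are arbitrary illustrative units, not measurements from any real biobot.

```python
import math
import random

# Why micropatterning matters: aligned cells' contraction forces add up,
# while randomly oriented cells largely cancel each other out.
def net_force(angles_rad, force_per_cell=1.0):
    fx = sum(force_per_cell * math.cos(a) for a in angles_rad)
    fy = sum(force_per_cell * math.sin(a) for a in angles_rad)
    return math.hypot(fx, fy)

n_cells = 1000
aligned = net_force([0.0] * n_cells)  # all cells pulling one way
random.seed(42)
scattered = net_force([random.uniform(0, 2 * math.pi) for _ in range(n_cells)])

print(aligned)              # 1000.0
print(scattered < aligned)  # True: most of the force cancels out
```

With random orientations the net force grows only like the square root of the cell count, while aligned cells add linearly – the difference between a twitching mess and a working fin.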

Tissue-engineered soft robotic ray that’s controlled with light.
Karaghen Hudson and Michael Rosnach, CC BY-ND

Biohybrid robots inspired by animals

Beyond a wide array of biohybrid robots, researchers have even created some completely organic robots using natural materials, like the collagen in skin, rather than polymers for the body of the device. Some can crawl or swim when stimulated by an electric field. Some take inspiration from medical tissue engineering techniques and use long rectangular arms (or cantilevers) to pull themselves forward.

Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.

More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light – one side of the ray had cells that would respond to one frequency, the other side’s cells responded to another.

When the researchers shone light on the front of the robot, the cells there contracted and sent electrical signals to the cells further along the manta ray’s body. The contraction would propagate down the robot’s body, moving the device forward. The researchers could make the robot turn to the right or left by varying the frequency of the light they used. If they shone more light of the frequency the cells on one side would respond to, the contractions on that side of the manta ray would be stronger, allowing the researchers to steer the robot’s movement.
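The steering principle resembles a differential drive, and can be caricatured in a couple of lines. The gain and contraction-strength units here are hypothetical, not measurements from the Harvard robot.

```python
# Toy model of light-steered swimming: the turn comes from the difference
# in contraction strength between the two sides, like a differential drive.
def heading_change(left_strength, right_strength, gain_deg=10.0):
    """Stronger left-side contraction turns the robot right, and vice versa."""
    return gain_deg * (left_strength - right_strength)

print(heading_change(1.0, 1.0))  # 0.0  (equal light: swim straight)
print(heading_change(1.5, 1.0))  # 5.0  (more left-side light: veer right)
```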

Toughening up the biobots

While exciting developments have been made in the field of biohybrid robotics, there’s still significant work to be done to get the devices out of the lab. Devices currently have limited lifespans and low force outputs, limiting their speed and ability to complete tasks. Robots made from mammalian or avian cells are very picky about their environmental conditions. For example, the ambient temperature must be near biological body temperature and the cells require regular feeding with nutrient-rich liquid. One possible remedy is to package the devices so that the muscle is protected from the external environment and constantly bathed in nutrients.

The sea slug Aplysia californica. Jeff Gill,  CC BY-ND

Another option is to use more robust cells as actuators. Here at Case Western Reserve University, we’ve recently begun to investigate this possibility by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.

Sea turtle-inspired biohybrid robot, powered by muscle from the sea slug.
Dr. Andrew Horchler, CC BY-ND

We’ve been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices, approximately 1.5 inches long and one inch wide, are large enough to carry a small payload.

A further challenge in developing biobots is that currently the devices lack any sort of on-board control system. Instead, engineers control them via external electrical fields or light. In order to develop completely autonomous biohybrid devices, we’ll need controllers that interface directly with the muscle and provide sensory inputs to the biohybrid robot itself. One possibility is to use neurons or clusters of neurons called ganglia as organic controllers.

That’s another reason we’re excited about using Aplysia in our lab. This sea slug has been a model system for neurobiology research for decades. A great deal is already known about the relationships between its neural system and its muscles – opening the possibility that we could use its neurons as organic controllers that could tell the robot which way to move and help it perform tasks, such as finding toxins or following a light.

While the field is still in its infancy, researchers envision many intriguing applications for biohybrid robots. For example, our tiny devices using slug tissue could be released as swarms into water supplies or the ocean to seek out toxins or leaking pipes. Due to the biocompatibility of the devices, if they break down or are eaten by wildlife these environmental sensors theoretically wouldn’t pose the same threat to the environment traditional nuts-and-bolts robots would.

One day, devices could be fabricated from human cells and used for medical applications. Biobots could provide targeted drug delivery, clean up clots or serve as compliant actuatable stents. By using organic substrates rather than polymers, such stents could be used to strengthen weak blood vessels to prevent aneurysms – and over time the device would be remodeled and integrated into the body. Beyond the small-scale biohybrid robots currently being developed, ongoing research in tissue engineering, such as attempts to grow vascular systems, may open the possibility of growing large-scale robots actuated by muscle.

Victoria Webster, Ph.D. Candidate in Mechanical and Aerospace Engineering, Case Western Reserve University

This article was originally published on The Conversation. Read the original article.


This Amazing Sneaker-Wearing Robot Walks Like a Human [Video]

The most efficient walking humanoid ever created wears size-13 sneakers, report its creators.

While most machines these days hunch at the waist and plod along on flat feet, DURUS strolls like a person. Its legs and chest are elongated and upright. It lands on the heel of its foot, rolls through the step, and pushes off its toe. And it wears shoes as it walks under its own power on a treadmill in the AMBER Lab at the Georgia Institute of Technology.

“Our robot is able to take much longer, faster steps than its flat-footed counterparts because it’s replicating human locomotion,” says Aaron Ames, director of the lab and a professor in the George W. Woodruff School of Mechanical Engineering and School of Electrical and Computer Engineering.

“Multi-contact foot behavior also allows it to be more dynamic, pushing us closer to our goal of allowing the robot to walk outside in the real world.”

As Ames tells it, the traditional approach to creating a robotic walker is similar to an upside-down pendulum. Researchers typically use comparatively simple algorithms to move the top of the machine forward while keeping its feet flat and grounded. As it shuffles along, the waist stays at a constant height, creating the distinctive hunched look. This not only prevents these robots from moving with the dynamic grace present in human walking, but also prevents them from efficiently propelling themselves forward.
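The “upside-down pendulum” idea can be sketched with the linear inverted pendulum model, in which the center of mass stays at a constant height and accelerates away from the foot pivot. This is a toy sketch with an assumed 0.9 m center-of-mass height, not the actual control code of any robot.

```python
# The "upside-down pendulum" model of walking: with the center of mass
# held at constant height z0, it accelerates away from the foot pivot
# according to x'' = (g / z0) * x.
G = 9.81   # gravity, m/s^2
Z0 = 0.9   # assumed constant center-of-mass height, m

def simulate(x0, v0, dt=0.01, steps=50):
    """Forward-Euler integration of the pendulum's horizontal motion."""
    x, v = x0, v0
    for _ in range(steps):
        a = (G / Z0) * x   # the farther from the pivot, the faster it tips
        v += a * dt
        x += v * dt
    return x

# Starting 5 cm ahead of the foot, the mass keeps accelerating forward --
# which is why these walkers must keep stepping to catch themselves.
print(simulate(0.05, 0.0) > 0.05)  # True
```

The model explains the hunched, flat-footed gait: keeping the waist at a constant height makes the math simple, at the cost of the rolling heel-to-toe motion humans use.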

The Georgia Tech humanoid walked with flat feet until about a week ago, although it was powered by fundamentally different algorithms than most robots.

To demonstrate the power of those methods, Ames and his team of student researchers built a pair of metal feet with arched soles. They applied their complex mathematical formulas, but watched DURUS misstep and fall for three days. The team continued to tweak the algorithms and, on the fourth day, the robot got it.

The machine walked dynamically on its new feet, displaying the heel-strike and toe push-off that is a key feature of human walking. The robot is further equipped with springs between its ankles and feet, similar to elastic tendons in people, allowing for a walking gait that stores mechanical energy from a heel strike to be later reclaimed as the foot lifts off the ground.

This natural gait makes DURUS very efficient. Robot locomotion efficiency is universally measured by a “cost of transport”: the power a machine consumes divided by the product of its weight and its walking speed. Ames says the best humanoids score approximately 3.0. DURUS’s cost of transport is 1.4, all while being self-powered – it’s not tethered by a power cord to an external source.
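The cost-of-transport arithmetic is easy to check. In the sketch below, the 80 kg mass and 0.6 m/s speed are hypothetical numbers chosen only to illustrate the formula, not DURUS’s actual specifications.

```python
# The dimensionless "cost of transport": power consumed divided by
# weight (mass * g) times speed. Mass, speed and power here are
# hypothetical, chosen only to show the arithmetic.
G = 9.81  # m/s^2

def cost_of_transport(power_w, mass_kg, speed_mps):
    return power_w / (mass_kg * G * speed_mps)

# A hypothetical 80 kg humanoid walking at 0.6 m/s on 660 W of power:
cot = cost_of_transport(660.0, 80.0, 0.6)
print(round(cot, 2))  # 1.4
```

Because the measure is dimensionless, it lets researchers compare robots (and animals) of very different sizes on an equal footing.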

This new level of efficiency is achieved in no small part through human-like foot behavior. DURUS had earned its new pair of shoes.

“Flat-footed robots demonstrated that walking was possible,” says Ames, “but they’re a starting point, like a propeller-powered airplane. It gets the job done, but it’s not a jet engine. We want to build something better, something that can walk up and down stairs or run across a field.”

He adds these advances have the potential to usher in the next generation of robotic assistive devices like prostheses and exoskeletons that can enable the mobility-impaired to walk with ease.

Graduate student Jake Reher led the student team, and graduate student Eric Ambrose created the shoes. The robotics division of SRI International collaborated on the DURUS design.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Jason Maderer, Georgia Tech.


Moving exoskeletons from sci-fi into medical rehabilitation and therapy

By Rana Soltani-Zarrin, Texas A&M University; Amin Zeiaee, Texas A&M University; and Reza Langari, Texas A&M University.

Chances are, you’ve seen a person using a powered exoskeleton – what you might think of as a sort of bionic suit – but only in the movies. In the 2013 movie “Elysium,” for example, Matt Damon’s character wears an exoskeleton that makes his body stronger and faster than it would otherwise be. Simply described, exoskeletons are externally worn devices that resemble the skeleton of the body part they are attached to and can provide support in many ways.

That technology isn’t just in science fiction; it really exists and has even been commercialized. It supports devices that enhance human strength, assist disabled people and even provide rehabilitation after injuries. Our work focuses on helping stroke patients’ recovery.

Every year, 15 million people worldwide suffer a stroke. More than 85 percent of them survive, but only 10 percent recover completely. The rest must deal with mobility impairment and cognitive disabilities.

Stroke victims can get help relearning skills they have lost or learn new ways of performing tasks to compensate for lost abilities. The most effective rehabilitation is specific to the skills the patient needs, and of sufficient intensity and duration to truly retrain the nerves and muscles involved. However, the number of trained human therapists who can provide this support is limited, while the demand is growing, particularly as populations age.

Physical therapy can require a lot of professional time and contact. Can robots help?

We at the Laboratory for Control, Robotics and Automation (LCRA) at Texas A&M University are working to help solve this problem by developing an intelligent robotic device that can provide therapy services in hospitals and clinics as an enhancement to conventional therapy methods. Our device will be connected to a patient’s upper arm and back during therapy sessions, providing individualized movement assistance to increase strength and flexibility. Such a device benefits therapists by reducing the physical load of their jobs, and patients by providing affordable and widely available therapy opportunities.

Initial development of the exoskeleton was at the Laboratory for Control, Robotics and Automation at Texas A&M University.
Author provided

A growing need

The number of elderly people worldwide is growing, as life expectancies increase. The U.S. Census Bureau estimates that the number of Americans age 65 or over will double by 2050. Research suggests that people in that age group have an increased risk of suffering a stroke. We expect the number of stroke survivors who need rehabilitation services to rise significantly in the near future.

According to the U.S. Bureau of Labor Statistics, the number of occupational therapy and physical therapy jobs is expected to increase 27 percent and 34 percent, respectively, by 2020. Though interest in the field is growing, the American Academy of Physical Medicine and Rehabilitation projects the current physical therapist shortage will increase significantly in the upcoming decades. Efforts to keep rehabilitation at its current service quality could result in a shortage of as many as 26,000 physical therapists by 2020; improving service or updating it to reflect ongoing research will require even more people.

Robots for rehabilitation

While there remain a number of things that only human therapists can do, many rehab exercises are highly repetitive. This is where robotic systems excel: They can perform the same task countless times, with precision and accuracy without fatigue or loss of attention.

Many researchers around the world have developed robotic devices for rehabilitation purposes. These devices are typically designed specifically to work on patients’ paralyzed arms or legs. Many clinical studies confirm the effectiveness of automated therapy; in some cases it is even better than conventional therapy. However, there is still a long way to go.

Challenges of automated therapy

Despite the many benefits robot-based rehabilitation can offer society, few clinics are equipped with such devices. Rehabilitation exoskeletons often require very complicated design and control processes, which usually result in bulky, heavy and expensive devices. In addition, patients may not trust or feel as comfortable with a robot as they would with a human therapist.

These challenges limit the usage of robotic devices to research centers and a few rehabilitation centers. Considering the significant role of exoskeletons in the future of rehabilitation, it is time to address these challenges.

How our robot solves these challenges

Our work is focused on developing a lighter, more compact robotic exoskeleton device that can help stroke patients recover strength and motion in their arms. To this end, we have done detailed analysis of even the simplest device components.

Performing a close analysis of device components.
Author provided

While development is ongoing, we are using new technologies and have adopted the most recent findings of rehabilitation science research to build a device that better prepares patients for activities of daily living. In addition to helping stroke patients, this device can also be used for rehabilitation of other patients with arm disabilities or injuries.

The technical evaluations of the device will be completed on the Texas A&M campus in College Station early next year. Once the safety of the device is assured, we will test it with stroke patients at Hamad Medical Center in Doha, Qatar, by fall 2017.

Looking to the future

Our final goal is to develop home-based exoskeletons. Currently portability, high costs and limitations on the performance of the available systems are the main barriers for using rehab exoskeletons in patients’ homes. Home-based rehabilitation could dramatically improve the intensity and effectiveness of therapy patients receive. Robots could, for example, allow patients to start therapy in the very early stages of recovery, without having to deal with the hassles of frequent and long visits to clinics. In the comfort of their own homes, people could get specific training at the appropriate level of intensity, overseen and monitored by a human therapist over the internet.

Maximizing therapy robots’ ability to help patients depends on deepening the human-robot interaction. This sort of connection is the subject of significant research of late, and not just for patient treatment. In most cases where people are working with robots, though, the human takes the lead role; in therapy, the robot must closely observe the patient and decide when to provide corrective input.

Virtual reality is another technology that has proven to be an effective tool for rehabilitation purposes. Virtual reality devices and the recently developed augmented reality systems can be adapted to use with rehab exoskeletons. Although linking the real and virtual worlds within these systems is a challenging task, an exoskeleton equipped with a high fidelity virtual- or augmented-reality device could offer unique benefits.

These opportunities will be challenging to realize. But if we manage to develop such systems, they could open a world of fantastic opportunities. Imagine automated rehabilitation gyms, with devices specific to different motions of different body parts, available to anyone who needed them. There are even more remarkable possibilities: might wheelchairs one day become unnecessary altogether?

These devices can also help reduce the social isolation many stroke patients experience. With the aid of augmented reality tools, therapy robots can help patients interact with each other, as in a virtual exercise group. This sort of connection can make rehabilitation a pleasant experience in patients’ daily lives, one they look forward to and enjoy, which will also promote their recovery.

This technology could have everyday uses for healthy individuals, too. Perhaps people would one day own an exoskeleton for help with labor-intensive tasks at home or in the garden. Factory workers could work harder and faster, but with less fatigue and risk of injury. The research is really just beginning.

Rana Soltani-Zarrin, Ph.D. Candidate in Mechanical Engineering, Texas A&M University; Amin Zeiaee, Ph.D. Candidate in Mechanical Engineering, Texas A&M University; and Reza Langari, Professor of Mechanical Engineering and Department Head, Engineering Technology and Industrial Distribution, Texas A&M University

This article was originally published on The Conversation. Read the original article.


‘Robo-Locusts’ May Lead to Better Bomb Detecting Mini-Robots

Engineers at Washington University in St. Louis are attaching sensors to locusts so they can monitor how the insects sniff out odors.

What they learn could be the basis for biorobotic sensing systems that detect dangerous chemicals or explosives.


Biological sensing systems are far more complex than their engineered counterparts, including the chemical sensing system responsible for our sense of smell, says Baranidharan Raman, associate professor of biomedical engineering at Washington University in St. Louis.

Raman has been studying how sensory signals are received and processed in the relatively simple brains of locusts. He and his team have found that odors prompt dynamic neural activity in the brain that allows the locust to correctly identify a particular odor, even with other odors present.

In other research, his team also has found that locusts trained to recognize certain odors can do so even when the trained odor was presented in complex situations, such as overlapping with other scents or in different background conditions.

“Why reinvent the wheel? Why not take advantage of the biological solution?” Raman says. “That is the philosophy here. Even the state-of-the-art miniaturized chemical sensing devices have a handful of sensors. On the other hand, if you look at the insect antenna, where their chemical sensors are located, there are several hundreds of thousands of sensors and of a variety of types.”

Locust ‘Tattoos,’ or, How to Steer a Locust

The team intends to monitor neural activity in the insects' brains while they move and explore freely, and to decode the odorants present in their environment.

Such an approach will also require low power electronic components to collect, log and transmit data.

Shantanu Chakrabartty, professor of computer science and engineering, will collaborate with Raman to develop this component of the work.

The team also plans to use locusts as a biorobotic system to collect samples using remote control.

Srikanth Singamaneni, associate professor of materials science, will develop a plasmonic “tattoo” made of a biocompatible silk to apply to the locusts’ wings that will generate mild heat and help to steer locusts to move toward particular locations by remote control.

The tattoos, studded with plasmonic nanostructures, also can collect samples of volatile organic compounds in their proximity, which would allow the researchers to conduct secondary analysis of the chemical makeup of the compounds using more conventional methods.

“The canine olfactory system still remains the state-of-the-art sensing system for many engineering applications, including homeland security and medical diagnosis,” Raman says. “However, the difficulty and the time necessary to train and condition these animals, combined with the lack of robust decoding procedures to extract the relevant chemical sensing information from the biological systems, pose a significant challenge for wider application.

“We expect this work to develop and demonstrate a proof-of-concept, hybrid locust-based, chemical-sensing approach for explosive detection.”

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

Breakthrough Squishy Motor Can Power Soft Robots Over Rugged Terrain

Scientists have created a robotic vehicle equipped with soft wheels and a flexible motor. It can easily roll over rough terrain and through water.

Future versions might be suitable for search and rescue missions after disasters, deep space and planet exploration, and manipulating objects during magnetic resonance imaging (MRI).

The silicone rubber is nearly 1 million times softer than aluminum.

The most important innovation is the soft motor that provides torque without bending or extending its housing, says Aaron D. Mazzeo, assistant professor of mechanical and aerospace engineering.

“The introduction of a wheel and axle assembly in soft robotics should enable vast improvement in the manipulation and mobility of devices. We would very much like to continue developing soft motors for future applications, and develop the science to understand the requirements that improve their performance.”

Vehicle innovations

  • Motor rotation without bending. “It’s actually remarkably simple, but providing torque without bending is something we believe will be advantageous for soft robots going forward,” Mazzeo says.
  • A unique wheel and axle configuration. The soft wheels may allow for passive suspensions in wheeled vehicles.
  • Wheels that use peristalsis—the process people use to push food to the stomach through the esophagus.
  • A consolidated wheel and motor with an integrated “transmission.”
  • Soft, metal-free motors suitable for harsh environments with electromagnetic fields.
  • The ability to handle impacts. The vehicle survived a fall eight times its height.
  • The ability to brake motors and hold them in a fixed position without the need for extra power.

To create the vehicle, engineers used silicone rubber that is nearly 1 million times softer than aluminum. They liken the softness to be somewhere between a silicone spatula and a relaxed human calf muscle. The motors were made using 3D-printed molds and soft lithography. A provisional patent has been filed with the US government.

“If you build a robot or vehicle with hard components, you have to have many sophisticated joints so the whole body can handle complex or rocky terrain,” says Xiangyu Gong, lead author of the study that is published in the journal Advanced Materials. “For us, the whole design is very simple, but it works very well because the whole body is soft and can negotiate complex terrain.”


Future possibilities include amphibious vehicles that could traverse rugged lakebeds; search and rescue missions in extreme environments and varied terrains, such as irregular tunnels; shock-absorbing vehicles that could be used as landers equipped with parachutes; and elbow-like systems with limbs on either side.

The Rutgers School of Engineering, the Department of Mechanical and Aerospace Engineering, the Rutgers Research Council, and an A. Walter Tyson Assistant Professorship Award supported the work.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

This Robot ‘Mermaid’ can Grab Shipwreck Treasures [Video]

A robot called OceanOne with artificial intelligence and haptic feedback systems gives human pilots an unprecedented ability to explore the depths of the oceans.

Oussama Khatib held his breath as he swam through the wreck of La Lune, over 300 feet below the Mediterranean. The flagship of King Louis XIV sank here in 1664, 20 miles off the southern coast of France, and no human had touched the ruins—or the countless treasures and artifacts the ship once carried—in the centuries since.

With guidance from a team of skilled deep-sea archaeologists who had studied the site, Khatib, a professor of computer science at Stanford, spotted a grapefruit-size vase. He hovered precisely over the vase, reached out, felt its contours and weight, and stuck a finger inside to get a good grip. He swam over to a recovery basket, gently laid down the vase, and shut the lid. Then he stood up and high-fived the dozen archaeologists and engineers who had been crowded around him.

This entire time Khatib had been sitting comfortably in a boat, using a set of joysticks to control OceanOne, a humanoid diving robot outfitted with human vision, haptic force feedback and an artificial brain—in essence, a virtual diver.

OceanOne, the “robot mermaid” on a dive. Credit: Frederic Osada, Teddy Seguin/DRASSM

When the vase returned to the boat, Khatib was the first person to touch it in hundreds of years. It was in remarkably good condition, though it showed every day of its time underwater: The surface was covered in ocean detritus, and it smelled like raw oysters. The team members were overjoyed, and when they popped bottles of champagne, they made sure to give their heroic robot a celebratory bath.

The expedition to La Lune was OceanOne’s maiden voyage. Based on its astonishing success, Khatib hopes that the robot will one day take on highly skilled underwater tasks too dangerous for human divers, as well as open up a whole new realm of ocean exploration.

“OceanOne will be your avatar,” Khatib says. “The intent here is to have a human diving virtually, to put the human out of harm’s way. Having a machine that has human characteristics that can project the human diver’s embodiment at depth is going to be amazing.”


The concept for OceanOne was born from the need to study coral reefs deep in the Red Sea, far below the comfortable range of human divers. No existing robotic submarine can dive with the skill and care of a human diver, so OceanOne was conceived and built from the ground up, a successful marriage of robotics, artificial intelligence, and haptic feedback systems.

OceanOne looks something like a robo-mermaid. Roughly five feet long from end to end, its torso features a head with stereoscopic vision that shows the pilot exactly what the robot sees, and two fully articulated arms. The “tail” section houses batteries, computers, and eight multi-directional thrusters.

The body looks nothing like conventional boxy robotic submersibles, but it’s the hands that really set OceanOne apart. Each fully articulated wrist is fitted with force sensors that relay haptic feedback to the pilot’s controls, so the human can feel whether the robot is grasping something firm and heavy, or light and delicate. (Eventually, each finger will be covered with tactile sensors.)

The bot’s brain also reads the data and makes sure that its hands keep a firm grip on objects without damaging them by squeezing too tightly. In addition to exploring shipwrecks, this makes it adept at delicate tasks such as handling coral in reef research and precisely placing underwater sensors.
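One way to picture that grip logic is a simple deadband controller: tighten when the measured force is too low, loosen when it is too high, and hold otherwise. The target, deadband, and step values below are hypothetical; the article does not describe OceanOne's actual control law.

```python
# Hedged sketch of grip-force regulation with a deadband controller.
# All numbers (target force in newtons, deadband, adjustment step) are
# illustrative assumptions, not OceanOne's real parameters.
def grip_command(measured_force, target=5.0, deadband=1.0, step=0.2):
    """Return how much to tighten (+) or loosen (-) the grip so the
    measured force stays near the target without crushing the object."""
    error = target - measured_force
    if abs(error) <= deadband:
        return 0.0  # firm enough, not too tight: hold position
    return step if error > 0 else -step

print(grip_command(2.0))  # grip too loose -> tighten: 0.2
print(grip_command(5.5))  # within deadband -> hold: 0.0
print(grip_command(8.0))  # squeezing too hard -> loosen: -0.2
```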

“You can feel exactly what the robot is doing,” Khatib says. “It’s almost like you are there; with the sense of touch you create a new dimension of perception.”

The pilot can take control at any moment, but most frequently won’t need to lift a finger. Sensors throughout the robot gauge current and turbulence, automatically activating the thrusters to keep the robot in place. And even as the body moves, quick-firing motors adjust the arms to keep its hands steady as it works. Navigation relies on perception of the environment, from both sensors and cameras, and these data run through smart algorithms that help OceanOne avoid collisions. If it senses that its thrusters won’t slow it down quickly enough, it can quickly brace for impact with its arms, an advantage of a humanoid body build.
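The station-keeping and brace-for-impact behaviors described above can be sketched roughly as follows. The proportional-derivative gains, the deceleration limit, and the function names are assumptions for illustration, not OceanOne's published controllers.

```python
# Minimal station-keeping sketch: fire thrusters against measured drift
# (proportional-derivative rule), and brace with the arms when the
# thrusters cannot stop the robot before impact. Gains and limits are
# hypothetical.
def thrust_command(drift, drift_rate, kp=2.0, kd=0.8):
    """Thrust opposing position drift (m) and drift rate (m/s)."""
    return -(kp * drift + kd * drift_rate)

def should_brace(closing_speed, distance, max_decel=1.5):
    """Brace for impact if the stopping distance exceeds the gap."""
    stopping_distance = closing_speed ** 2 / (2 * max_decel)
    return stopping_distance > distance

print(thrust_command(0.5, 0.2))  # pushed off station -> thrust back (about -1.16)
print(should_brace(2.0, 1.0))    # needs ~1.33 m to stop in a 1.0 m gap -> True
print(should_brace(1.0, 1.0))    # needs ~0.33 m -> False
```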


The humanoid form also means that when OceanOne dives alongside actual humans, its pilot can communicate through hand gestures during complex tasks or scientific experiments. Ultimately, though, Khatib designed OceanOne with an eye toward getting human divers out of harm’s way. Every aspect of the robot’s design is meant to allow it to take on tasks that are either dangerous—deep-water mining, oil-rig maintenance, or underwater disaster situations like the Fukushima Daiichi power plant—or simply beyond the physical limits of human divers.

“We connect the human to the robot in a very intuitive and meaningful way. The human can provide intuition and expertise and cognitive abilities to the robot,” Khatib says. “The two bring together an amazing synergy. The human and robot can do things in areas too dangerous for a human, while the human is still there.”

Khatib was forced to showcase this attribute while recovering the vase. As OceanOne swam through the wreck, it wedged itself between two cannons. Firing the thrusters in reverse wouldn’t extricate it, so Khatib took control of the arms, motioned for the bot to perform a sort of pushup, and OceanOne was free.

Next month, OceanOne will return to the Stanford campus, where Khatib and his students will continue iterating on the platform. The prototype robot is a fleet of one, but Khatib hopes to build more units, which would work in concert during a dive.

The expedition to La Lune was made possible in large part thanks to the efforts of Michel L’Hour, the director of underwater archaeology research in France’s Ministry of Culture. Previous remote studies of the shipwreck conducted by L’Hour’s team made it possible for OceanOne to navigate the site. Vincent Creuze of the Universite de Montpellier in France commanded the support underwater vehicle that provided third-person visuals of OceanOne and held its support tether at a safe distance.

In addition to Stanford, Meka Robotics and the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia supported the robot’s development.

Source: Republished from Futurity as a derivative work under the Attribution 4.0 International license.

Featured Photo Credit: Frederic Osada, Teddy Seguin/DRASSM

Now, Check Out:

Why robots need to be able to say ‘No’ [Video]

By Matthias Scheutz, Tufts University.

Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.

Just consider:

  • An elder-care robot tasked by a forgetful owner to wash the “dirty clothes,” even though the clothes had just come out of the washer
  • A preschooler who orders the daycare robot to throw a ball out the window
  • A student commanding her robot tutor to do all the homework instead of doing it herself
  • A household robot instructed by its busy and distracted owner to run the garbage disposal even though spoons and knives are stuck in it.

There are plenty of benign cases where robots receive commands that ideally should not be carried out because they lead to unwanted outcomes. But not all cases will be that innocuous, even if their commands initially appear to be.

Consider a robot car instructed to back up while the dog is sleeping in the driveway behind it, or a kitchen aid robot instructed to lift a knife and walk forward when positioned behind a human chef. The commands are simple, but the outcomes are significantly worse.

How can we humans avoid such harmful results of robot obedience? If driving around the dog were not possible, the car would have to refuse to drive at all. And similarly, if avoiding stabbing the chef were not possible, the robot would have to either stop walking forward or not pick up the knife in the first place.

In either case, it is essential for both autonomous machines to detect the potential harm their actions could cause and to react to it by either attempting to avoid it, or if harm cannot be avoided, by refusing to carry out the human instruction. How do we teach robots when it’s OK to say no?

How can robots know what will happen next?

In our lab, we have started to develop robotic controls that make simple inferences based on human commands. These will determine whether the robot should carry them out as instructed or reject them because they violate an ethical principle the robot is programmed to obey.

A robot that can reject unsafe orders.

Telling robots how and when – and why – to disobey is far easier said than done. Figuring out what harm or problems might result from an action is not simply a matter of looking at direct outcomes. A ball thrown out a window could end up in the yard, with no harm done. But the ball could end up on a busy street, never to be seen again, or even causing a driver to swerve and crash. Context makes all the difference.

It is difficult for today’s robots to determine when it is okay to throw a ball – such as to a child playing catch – and when it’s not – such as out the window or in the garbage. Even harder is if the child is trying to trick the robot, pretending to play a ball game but then ducking, letting the ball disappear through the open window.

Explaining morality and law to robots

Understanding those dangers involves a significant amount of background knowledge (including the prospect that playing ball in front of an open window could send the ball through the window). It requires the robot not only to consider action outcomes by themselves, but also to contemplate the intentions of the humans giving the instructions.

To handle these complications of human instructions – benevolent or not – robots need to be able to explicitly reason through consequences of actions and compare outcomes to established social and moral principles that prescribe what is and is not desirable or legal. As seen above, our robot has a general rule that says, “If you are instructed to perform an action and it is possible that performing the action could cause harm, then you are allowed to not perform it.” Making the relationship between obligations and permissions explicit allows the robot to reason through the possible consequences of an instruction and whether they are acceptable.
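That general rule can be rendered as a toy decision procedure: check whether an instructed action could cause harm in the current context, and if so, exercise the permission to refuse. The action names and harm predicate below are hypothetical stand-ins for the much richer representations of laws and norms the text describes.

```python
# Toy rendering of the rule "if performing the action could cause harm,
# you are allowed to not perform it." The context is a hand-written
# lookup table; a real system would infer harm from world knowledge.
def could_cause_harm(action, context):
    """Crude harm check over a hand-written context."""
    return context.get(action, {}).get("possible_harm", False)

def decide(action, context):
    """Refuse when harm is possible; otherwise carry out the command."""
    if could_cause_harm(action, context):
        return "refuse"  # permitted to not perform the action
    return "perform"

context = {
    "back_up": {"possible_harm": True},      # dog asleep in the driveway
    "wash_clothes": {"possible_harm": False},
}
print(decide("back_up", context))       # refuse
print(decide("wash_clothes", context))  # perform
```

The hard part, as the surrounding text argues, is not this final check but filling in the context: inferring from background knowledge and human intentions whether harm is actually possible.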

In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable. Hence, they will need representations of laws, moral norms and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles.

While our programs are still a long way from what we will need to allow robots to handle the examples above, our current system already proves an essential point: robots must be able to disobey in order to obey.

Matthias Scheutz, Professor of Cognitive and Computer Science, Tufts University

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Jiuguang Wang, CC BY-SA

Now, Check Out:

How drones can improve scientific research in the field

By Elizabeth Basha, University of the Pacific.

Drones – and promises about drones – seem ubiquitous these days. And some of what we associate with drones comes with varying degrees of scariness.

We think of automated planes shooting missiles, drones flying near sensitive nuclear power plants or quadcopters crashing into crowds while filming. If we think about everyday possibilities, we envision toys for children or companies promising deliveries, which sounds like a futuristic version of Hitchcock’s horror film “The Birds.”

However, drones – or, to use the technical term, unmanned aerial vehicles (UAVs) – show promise to help with a large number of societal and environmental problems.

As a researcher in aerial robotics, I’m trying to bring some cutting-edge ideas for using drones closer to reality. Some of these projects aim to keep sensors alive, measure hazardous or remote environments, and deal with scenarios that would be dangerous to humans.

Links to power and data

As our world becomes more filled with sensors – such as on roads and bridges, as well as machines – it will be important to ensure the increasingly distributed monitoring devices have power. Here, drones can help. UAVs can provide wireless recharging to hard-to-access locations such as sensors monitoring bridges or floating sensors on lakes.

Idealized figure representing a number of sensors (in red) monitoring a bridge and the UAV flying to wirelessly recharge the sensors’ batteries. Carrick Detweiler, University of Nebraska-Lincoln, CC BY-ND

Dr. Carrick Detweiler at the University of Nebraska-Lincoln and I have developed a system that allows a UAV to fly to a bridge, identify which battery to charge, and wirelessly recharge it, in a manner similar to those pads on which you can just drop your cellphone.

Over time, the UAV can visit repeatedly, recharging all of the batteries and keeping all the sensors live. That will provide more data to determine when the bridge needs repair. Lack of even one or two key pieces of data can make the rest of the monitoring less helpful, so having functioning, charged sensors is critical to keeping the information flowing.
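A minimal sketch of how such repeat visits might be prioritized: recharge the sensor whose battery is lowest, and skip the trip when everything is above a safe threshold. The sensor ids, battery levels, and threshold below are illustrative; the published system's actual scheduling is not described here.

```python
# Hypothetical visit-ordering sketch: pick the most urgent sensor to
# recharge next, or None if all batteries are comfortably charged.
def next_to_recharge(batteries, threshold=0.3):
    """Return the id of the sensor most urgently needing charge,
    or None if every battery level is at or above the threshold."""
    low = {sid: lvl for sid, lvl in batteries.items() if lvl < threshold}
    if not low:
        return None
    return min(low, key=low.get)  # lowest battery first

batteries = {"bridge-1": 0.72, "bridge-2": 0.11, "bridge-3": 0.25}
print(next_to_recharge(batteries))  # bridge-2
```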

An Ascending Technologies Hummingbird quadcopter, which serves as the base UAV for wireless recharging of sensors, monitoring of crops and measuring of water. Randall Gee, University of the Pacific, CC BY-ND

Our ongoing research also explores how to retrieve measurements from floating sensors, which will allow us to monitor water quality. Similar to working with bridge monitors, the UAV flies over the sensors, collecting data from each one and returning to a base station.

This speeds up data processing, and improves data collection: without the UAV, researchers would have to get in a boat to collect all of the sensors. This is tedious and can be expensive, as the scientists need to drive a boat to a boat ramp, spend all day collecting the data from the sensors, reset the sensors and then analyze the data.

If a sensor has failed in the time since the last visit, the scientist will discover this only when collecting data and will have lost all potential data, creating a hole in the data set and making it more difficult for the scientist to understand that environment. With a UAV, the scientist can relax in her office, send the UAV out for data on a daily basis, quickly identify failed sensors and have the UAV replace those sensors. The likelihood of gathering a good set of data that the scientist can use to learn more about our environment then increases.
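Flagging failed sensors from a daily pass can be sketched as a simple staleness check: any sensor that has not reported within some window is marked for replacement. The timestamp format and window length below are assumptions for illustration.

```python
# Sketch of spotting failed sensors after a collection pass: a sensor
# silent for longer than max_silence seconds is flagged. The data format
# (sensor id -> last report time, in seconds) is assumed.
def failed_sensors(last_report, now, max_silence=86400.0):
    """Return ids of sensors silent for more than max_silence seconds."""
    return sorted(sid for sid, t in last_report.items()
                  if now - t > max_silence)

last_report = {"lake-1": 990_000.0, "lake-2": 850_000.0, "lake-3": 995_000.0}
print(failed_sensors(last_report, now=1_000_000.0))  # ['lake-2']
```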

In addition to supporting monitoring devices, UAVs can take measurements themselves. Research at UNL is using UAVs to measure agricultural crop heights; Arizona State University scholars are gathering remote imagery to study the role of water in the environment; and Swiss researchers are mapping forest trails.

Without UAVs, these tasks are harder. Crop heights would require farmers to visit all of their fields; ecohydrology would need expensive satellite or plane data collection; and forest trail mapping would require regular confirmation from hikers. These are only a few of the many ways that UAVs can help gather hard-to-measure things in hard-to-reach locations.

Disaster response

UAVs can also help respond to disasters. We are exploring how UAVs can monitor rivers to predict floods, an extension of our prior work that only used sensors.

Timely prediction of flooding requires extensive data, something easily obtainable in urban, developed areas. For rural and less developed areas, though, the infrastructure to measure rivers and weather for prediction is often too expensive. UAVs can supplement measurements to easily provide the appropriate information to improve predictions and save lives.

Dr. Detweiler is also looking at how to start controlled burns with UAVs, to help fight wildfires and help with land management. Fire breaks help restrict wildfire movement, but creating them is dangerous to firefighters who are directly in the line of the fire.

A UAV can fly close to the fire and drop small capsules in precise locations. Those capsules self-ignite and start a small controlled burn. Firefighters do not have to get close at all; they just have to identify the location for the UAV.

UAVs can also help with man-made problems. A group at DePaul University uses UAVs to monitor the Dead Sea and reveal archaeological sites that are being looted. Typically, solving this problem would rely on satellite imagery, whose measurements are expensive and infrequent. UAVs provide a cheaper, more frequent option that could allow archaeologists to save these sites.

As promising as UAVs are, though, much of the potential of these systems remains distant. Until the FAA decides how best to manage these systems (especially in the commercial context), UAVs will not fly around freely, especially out of the eyesight of a pilot. In addition, technical challenges remain, including reliable methods for avoiding obstacles and handling changing weather conditions (such as sudden high winds).

Overall, UAVs have great potential for the good and useful. Hopefully, we remember that when the news focuses on the dangerous and frivolous.

Elizabeth Basha, Professor of Electrical and Computer Engineering, University of the Pacific

This article was originally published on The Conversation. Read the original article.

Featured Photo Credit: Beawiharta Beawiharta/Reuters

Now, Check Out: