Why you should dispense with antibacterial soaps

By Sarah Ades, Pennsylvania State University and Kenneth Keiler, Pennsylvania State University.

An FDA ruling on Sept. 2 bans triclosan, triclocarban and 17 other antiseptics from household soaps because they have not been shown to be safe or to provide any benefit.

About 40 percent of soaps use at least one of these chemicals, and the chemicals are also found in toothpaste, baby pacifiers, laundry detergents and clothing. They are also in some lip glosses, deodorants and pet shampoos.

The current FDA action bans antiseptics like triclosan in household soaps only. It does not apply to other products like antiseptic gels designed to be used without water, antibacterial toothpaste or the many fabrics and household utensils in which antibacterials are embedded. Data suggest that the toothpastes are very effective for people suffering from gum disease, although it is not clear if they provide substantial benefits for those who don’t have gingivitis.

The FDA is currently evaluating the use of antibacterials in gels and will rule on how those products should be handled once the data are in.

Although antibacterials are still in products all around us, the current ban is a significant step forward in limiting their use.

As microbiologists who study a range of chemicals and microbes, we will explain why we don’t need to kill all the bacteria. We will also explain how antibacterial soaps may even do harm by contributing to antibiotic-resistant strains of bacteria that can be dangerous.

Bacteria can be good

Bacteria are everywhere in the environment and almost everywhere in our bodies, and that is mostly good.

We rely on bacteria in our guts to provide nutrients and to signal our brains, and some bacteria on our skin help protect us from harmful pathogens.

Bacteria in soil can be bad for you. www.shutterstock.com

Some bacteria present in soil and animal waste can cause infections if they are ingested, however, and washing is important to prevent bacteria from spreading to places where they can cause harm.

Washing properly with soap and water removes these potential pathogens. If you have any questions about hand washing, the Centers for Disease Control and Prevention has a great site where you can learn more.

If soap and water are sufficient to remove potential pathogens, why were antibacterials like triclosan and triclocarban added in the first place?

Triclosan was introduced in 1972. These chemicals were originally used in cleaning solutions, such as those used before and during surgeries, where removing bacteria is critical and exposure for most people is short. Triclosan and triclocarban may be beneficial in these settings, and the FDA ruling does not affect health care or first aid uses of the chemicals.

In the 1990s, manufacturers started to incorporate triclosan and triclocarban in products for the average consumer, and many people were attracted by claims that these products killed more bacteria.

Now antibacterial chemicals can be found in many household products, from baby toys to fabrics to soaps. Laboratory tests show the addition of these chemicals can reduce the number of bacteria in some situations. However, studies in a range of environments, including urban areas in the United States and squatter settlements in Pakistan, have shown that the inclusion of antibacterials in soap does not reduce the spread of infectious disease. Because the goal of washing is human health, these data indicate that antibacterials in consumer soaps do not provide any benefit.

While not all bad, bacteria are promiscuous

What’s the downside to having antibacterials in soap? It is potentially huge, both for those using it and for society as a whole. One concern is whether the antibacterials can directly harm humans.

Triclosan had become so prevalent in household products that in 2003 a nationwide survey of healthy individuals found it in the urine of 75 percent of the 2,517 people tested. Triclosan has also been found in human plasma and breast milk.

Most studies have not shown any direct toxicity from triclosan, but some animal studies indicate that triclosan can disrupt hormone systems. We do not know yet whether triclosan affects hormones in humans.

Another serious concern is the effect of triclosan on antibiotic resistance in bacteria. Bacteria evolve resistance to nearly every threat they face, and triclosan is no exception.

Triclosan isn’t used to treat disease, so why does it matter if some bacteria become resistant? Some of the common mechanisms that bacteria use to evade triclosan also let them evade antibiotics that are needed to treat disease. When triclosan is present in the environment, bacteria that have these resistance mechanisms grow better than bacteria that are still susceptible, so the number of resistant bacteria increases.

Not only are bacteria adaptable, they are also promiscuous. Genes that let them survive antibiotic treatment are often found on pieces of DNA that can be passed from one bacterium to another, spreading resistance.

These mobile pieces of DNA frequently have several different resistance genes, making the bacteria that contain them resistant to many different drugs. Bacteria that are resistant to triclosan are more likely to also be resistant to unrelated antibiotics, suggesting that the prevalence of triclosan can spread multi-drug resistance. As resistance spreads, we will not be able to kill as many pathogens with existing drugs.

Important in some settings

Antibacterial washes are important for surgery. From www.shutterstock.com

Antibiotics were introduced in the 1940s and revolutionized the way we lead our lives. Common infections and minor scrapes that could be fatal became easily treatable. Surgeries that were once unthinkable due to the risk of infection are now routine.

However, bacteria are becoming stronger due to decades of antibiotic use and misuse. New drugs will help, but if we do not protect the antibiotics we have now more people will die from infections that used to be easily treated. Removing triclosan from consumer products will help protect antibiotics and limit the threat of toxicity from extended exposure, without any adverse effect on human health.

The FDA ruling is a welcome first step to cleansing the environment of chemicals that provide little health value to most people but pose significant risk to individuals and to public health. To a large extent, this ruling is a victory of science over advertising.

Sarah Ades, Associate Professor of Biochemistry and Molecular Biology, Pennsylvania State University and Kenneth Keiler, Professor of Biochemistry and Molecular Biology, Pennsylvania State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Saving lives by letting cars talk to each other

By Huei Peng, University of Michigan.

The death of a person earlier this year while driving with Autopilot in a Tesla sedan, along with news of more crashes involving Teslas operating in Autopilot, has triggered a torrent of concerns about the safety of self-driving cars.

But there is a way to improve safety across a rapidly evolving range of advanced mobility technologies and vehicles – from semi-autonomous driver assist features like Tesla’s Autopilot to a fully autonomous self-driving car like Google’s.

A Tesla Model S on the highway. pasa47, CC BY

The answer is connectivity: wireless communication that connects vehicles to each other, to the surrounding infrastructure, even to bicyclists and pedestrians. While connectivity and automation each provide benefits on their own, combining them promises to transform the movement of people and goods more than either could alone, and to do so safely. The U.S. Department of Transportation may propose requiring all new cars to have vehicle-to-vehicle communication, known as V2V, as early as this fall.

Tesla blamed the fatal crash on the failure of both its Autopilot technology and the driver to see the white tractor-trailer against a bright sky. But the crash – and the death – might have been avoided entirely if the Tesla and the tractor-trailer it hit had been able to talk to each other.

Limitations of vehicles that are not connected

Having autonomous vehicles that aren’t connected to each other is a bit like gathering together the smartest people in the world but not letting them talk to each other. Connectivity enables smart decisions by individual drivers, by self-driving vehicles and at every level of automation in between.

Despite all the safety advances in recent decades, there are still more than 30,000 traffic deaths every year in the United States, and the number may be on the rise. After years of steady declines, fatalities rose 7.2 percent in 2015 to 35,092, up from 32,744 in 2014, representing the largest percentage increase in nearly 50 years, according to the U.S. DOT.

Most American traffic crashes involve human error. Ragesoss, CC BY-SA

The federal government estimates that 94 percent of all crashes – fatal or not – involve human error. Fully automated, self-driving vehicles are considered perhaps the best way to reduce or eliminate traffic deaths by taking human error out of the equation. The benefits of automation are evident today in vehicles that can steer you back into your lane if you start to drift or brake automatically when another driver cuts you off.

A self-driving vehicle takes automation to a higher level. It acts independently, using sensors such as cameras and radars, along with decision-making software and control features, to “see” its environment and respond, just as a human driver would.

However, onboard sensors, no matter how sophisticated, have limitations. Like humans, they see only what is in their line of sight, and they can be hampered by poor weather conditions.

Connecting cars to each other

Connected vehicles anonymously and securely “talk” to each other and to the surrounding infrastructure via wireless communication similar to Wi-Fi, known as Dedicated Short Range Communications, or DSRC. Vehicles exchange data – including location, speed and direction – 10 times per second through messages that can be securely transmitted at least 1,000 feet in any direction, and through barriers such as heavy snow or fog. Bicycles and pedestrians can be linked using portable devices such as smartphones or tablets, so drivers know they are nearby.
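To make the idea concrete, here is a minimal Python sketch of the kind of basic safety message a connected vehicle broadcasts ten times per second. The field names, the BasicSafetyMessage class and the broadcast function are illustrative assumptions for this article, not the actual DSRC message format or any vendor’s API.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Illustrative stand-in for the data a V2V message carries."""
    vehicle_id: str      # anonymized, rotating identifier
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # meters per second
    heading_deg: float   # direction of travel, degrees from north

def broadcast(msg: BasicSafetyMessage) -> None:
    # Stand-in for the DSRC radio transmission (range of at least 1,000 feet).
    print(asdict(msg))

if __name__ == "__main__":
    # Vehicles exchange location, speed and direction 10 times per second.
    for _ in range(10):
        broadcast(BasicSafetyMessage("veh-1234", 42.2808, -83.7430,
                                     speed_mps=27.0, heading_deg=90.0))
        time.sleep(0.1)  # 10 Hz update rate
```

In a real deployment the message would be signed and transmitted over the DSRC radio rather than printed, but the small, frequent, position-plus-velocity payload is the core of the idea.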

The federal government estimates that V2V connectivity could ultimately prevent or reduce the severity of about 80 percent of collisions that don’t involve a driver impaired by drugs or alcohol.

Cars are already connected in many ways. Think satellite-based GPS navigation, in-vehicle Wi-Fi hotspots and smartphone apps that remind you where you parked or remotely unlock your doors. But when it comes to connectivity for safety, there is broad agreement within the auto industry that DSRC-based V2V communication holds the most potential for reducing crashes. After years of testing, the industry is poised to roll out the technology. The next step is putting regulations in place.

Could this congested mess become a connected, communicating system? joiseyshowaa/flickr, CC BY-SA

Perhaps the greatest benefit of connectivity is that it can transform a group of independent vehicles sharing a road into a cohesive traffic system that can exchange critical information about road and traffic conditions in real time. If all vehicles are connected, and a car slips on some ice in blinding snow, vehicles following that car – whether immediately behind or three or four or more vehicles back – will get warnings to slow down. A potential 100-car pileup could become a two-car fender-bender, or be avoided altogether.

This technological shift becomes a revolution when connectivity and automation are combined. A self-driving vehicle is like an island on the road, aware only of what is immediately around it. Connectivity empowers a driverless car. It alerts the vehicle to imminent dangers it may not otherwise sense, such as a vehicle about to run a red light, approaching from the other side of a hill or coming around a blind corner. The additional information could be what triggers an automated response that avoids a crash. In that way, connectivity enables more, and potentially better, automation.

More research needed

At the University of Michigan Mobility Transformation Center, we’re working to further the development of connected and automated vehicles.

Advanced mobility vehicle technology is evolving rapidly on many fronts. More work must be done to determine how best to feed data gathered from sensors to in-vehicle warning systems. We need to more fully understand how to fuse information from connectivity and onboard sensors effectively, under a wide variety of driving scenarios. And we must perfect artificial intelligence, the brains behind self-driving cars.

The benefits of connected and automated vehicles go well beyond safety. They hold the potential to significantly reduce fuel use and carbon emissions through more efficient traffic flow. No more idling at red lights or in rush hour jams for commuters or freight haulers.

Connected self-driving cars also promise to bring safe mobility to those who don’t have cars, don’t want cars or cannot drive due to age or illness. Everything from daily living supplies to health care could be delivered to populations without access to transportation.

Researchers at MTC are also studying possible negative unintended consequences of the new technology and watching for possible privacy violations, cyberattack vulnerabilities or increases in mileage driven. Deeper understanding of both technology and social science issues is the only way to ensure that connected self-driving cars become part of our sustainable future.

Huei Peng, Professor of Mechanical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

August Marks Month 16 of Record-Breaking Warm Global Temperatures

According to a new report from the National Oceanic and Atmospheric Administration (NOAA), August 2016 was the 16th consecutive month of record-breaking warm global temperatures, across all seven of the measures that have been used for decades to track the planet’s temperature. These measurements include land, sea and atmospheric temperatures as well as sea ice. Additionally, the global temperature for August was the highest for that month since modern record keeping began in 1880, surpassing August of 2014.

Selected Climate Events & Anomalies for August 2016. Credit: NOAA.

A companion announcement issued simultaneously by the NASA Earth Observatory reports that August 2016 was the warmest August in 136 years of modern record-keeping, according to a monthly analysis of global temperatures by scientists at NASA’s Goddard Institute for Space Studies (GISS).

Although the seasonal temperature cycle typically peaks in July, August 2016 wound up tied with July 2016 for the warmest month ever recorded. August 2016’s temperature was 0.16 degrees Celsius warmer than the previous warmest August (2014). The month also was 0.98 degrees Celsius warmer than the mean August temperature from 1951-1980.

Temperature Visualization: NASA Earth Observatory chart by Joshua Stevens, based on data from the NASA Goddard Institute for Space Studies.

“Monthly rankings, which vary by only a few hundredths of a degree, are inherently fragile,” said GISS Director Gavin Schmidt. “We stress that the long-term trends are the most important for understanding the ongoing changes that are affecting our planet.” Those long-term trends are apparent in the plot of temperature anomalies above.

The record warm August continued a streak of 11 consecutive months (dating to October 2015) that have set new monthly temperature records. The analysis by the GISS team is assembled from publicly available data acquired by about 6,300 meteorological stations around the world, ship- and buoy-based instruments measuring sea surface temperature, and Antarctic research stations. The modern global temperature record begins around 1880 because previous observations didn’t cover enough of the planet.

Sources: News and data releases from NOAA and the NASA Earth Observatory.

Featured Image Credit: NASA Earth Observatory

Now, Check Out:

New Fabric Generates Electricity from Both Motion and Sunshine

A new fabric harvests energy from both sunshine and motion at the same time.

Fabrics that can generate electricity from physical movement have been in the works for a few years, and this is the next step.

Combining two types of electricity generation into one textile paves the way for developing garments that could provide their own source of energy to power devices such as smartphones or GPS.

“This hybrid power textile presents a novel solution to charging devices in the field from something as simple as the wind blowing on a sunny day,” says Zhong Lin Wang, professor in the Georgia Institute of Technology’s School of Materials Science and Engineering.

A bracelet made from fabric woven with special energy-harvesting strands that collect electricity from the sun and motion. (Credit: Georgia Tech)

To make the fabric, Wang’s team used a commercial textile machine to weave together solar cells constructed from lightweight polymer fibers with fiber-based triboelectric nanogenerators.

Triboelectric nanogenerators use a combination of the triboelectric effect and electrostatic induction to generate small amounts of electrical power from mechanical motion such as rotation, sliding or vibration.

Wang envisions that the new fabric, which is 320 micrometers thick and woven together with strands of wool, could be integrated into tents, curtains or wearable garments.

“The fabric is highly flexible, breathable, lightweight, and adaptable to a range of uses,” Wang says.

Fiber-based triboelectric nanogenerators capture the energy created when certain materials become electrically charged after they come into moving contact with a different material. For the sunlight-harvesting part of the fabric, Wang’s team used photoanodes made in a wire-shaped fashion that could be woven together with other fibers.

“The backbone of the textile is made of commonly used polymer materials that are inexpensive to make and environmentally friendly,” Wang says. “The electrodes are also made through a low-cost process, which makes it possible to use large-scale manufacturing.”

In one of their experiments, Wang’s team took a piece of fabric about the size of a sheet of office paper and attached it to a rod like a small colorful flag. By rolling down the windows in a car and letting the flag blow in the wind, the researchers were able to generate significant power from a moving car on a cloudy day. The researchers also measured the output of a 4-by-5-centimeter piece, which charged a 2 mF commercial capacitor to 2 volts in one minute under sunlight and movement.
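As a rough back-of-the-envelope check (my own estimate assuming an ideal capacitor, not a figure reported by the researchers), the energy stored in that test and the corresponding average power are:

\[
E = \tfrac{1}{2} C V^2 = \tfrac{1}{2}\,(2\times10^{-3}\,\mathrm{F})\,(2\,\mathrm{V})^2 = 4\,\mathrm{mJ},
\qquad
P_{\mathrm{avg}} \approx \frac{4\times10^{-3}\,\mathrm{J}}{60\,\mathrm{s}} \approx 67\,\mu\mathrm{W}.
\]

That is a small amount of power, which fits the article’s framing of the fabric as a step toward self-powered garments rather than a finished charger.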

“That indicates it has a decent capability of working even in a harsh environment,” Wang says.

While early tests indicate the fabric can withstand repeated and rigorous use, researchers will be looking into its long-term durability. Next steps also include further optimizing the fabric for industrial uses, including developing proper encapsulation to protect the electrical components from rain and moisture.

The work appears in the journal Nature Energy.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

Overcooling and overheating buildings emits as much carbon as four million cars

By Eric Williams, Rochester Institute of Technology.

Six years ago, Phoenix lay burning in the sun one day. It was 110 degrees Fahrenheit and I was the only person foolish enough to be out walking instead of moving by air-conditioned car. Arriving hot and parched at a bookstore, I opened the doors to be greeted by a blast of arctic air.

The coffee shop in which I sat down felt like it was freezing. Other customers, dressed in light summer wear for Phoenix summers, were shivering. We all chatted about how cold it was, so I went over to the coffee shop manager to see if the thermostat could be changed. He agreed wholeheartedly it was entirely too cold but reported that the temperature was decided and controlled not by the branch, but at the national headquarters.

As many people know, this is an extreme example of a common experience. Americans often find themselves in a store or office that’s too cold in summer or too hot in winter.

Obviously one can’t find a temperature that will please everyone all the time, but if lots of people are dissatisfied, this is a double dose of nonsense: Energy being wasted to make people uncomfortable. This led to the questions that would guide my research: What are the thermostat settings in commercial buildings and why are they set there? How much energy is wasted in making people uncomfortable?

In the end, I was surprised at how big the impact of poor thermal management in buildings is on our country’s energy consumption.

Inefficiencies

Progress on my research questions was on hold until I was situated in a less extreme environment than Phoenix – Rochester, New York – where I started working with Ph.D. candidate Lourdes Gutierrez, who quickly uncovered many interesting things. One is that 42 percent of workers report being dissatisfied with the temperature in their offices, with 14 percent being very dissatisfied. Thus, there is a widespread problem with thermal comfort. Curiously, there is much less information available as to what thermostat settings actually are and how they are decided.

Lourdes also realized thermostat settings should vary by season and location. An office worker in Minnesota, for example, will wear heavier clothes in winter than one in Florida, so the thermostat in Minnesota can be set at a lower temperature.

We went on to analyze the national potential for energy savings from changing thermostat settings, by bumping them up in summer and down in winter by an amount appropriate for the local climate.

The first step was to figure out what winter and summer thermostat settings would ensure comfort for at least 80 percent of occupants in 14 different U.S. cities. Eighty percent satisfaction is a typical compromise used by experts in thermal comfort. One result of our analysis was that in winter the thermostat could be safely set at 68F (20 degrees Celsius) in Minneapolis, while in Miami 72F (22C) is a better choice, since Miami-ites will be dressed more lightly.

Next, we used energy simulation models to calculate the change in energy use with these new thermostat settings, compared with the typical year-round setting of 70F (21C). Not all buildings are set year-round at 70F, but it is considered a typical figure. There are many types of commercial buildings; we decided to focus on office buildings and restaurants as important, but tractable, types.

Our results, recently published in “Sustainable Cities and Society,” showed that the new thermostat settings could reduce energy use in U.S. office buildings and restaurants by 2.5 percent. National savings on utility bills would total US$600 million.

If other types of commercial buildings such as hotels and stores get similar savings as offices and restaurants, revised thermostat settings would reduce national carbon emissions by 0.3 percent. These saved carbon emissions are equivalent to the carbon pollution generated by four million automobiles in a year. This isn’t going to save the world from climate change, but it is a heck of a lot of carbon to be reduced while saving money and making people more comfortable.

Better data and monitoring

Where to go from here? We don’t claim to have the final answer on what thermostat settings should be and how much energy could be saved, as it’s a complicated question and will vary by building.

But we do argue these results highlight the need to rethink thermostat settings in offices, stores, restaurants and other commercial buildings. Managers should investigate what thermostat settings will make their customers and employees comfortable, considering the local climate. Dress code also plays a role: The closer employee clothing fits the outdoor environment, the more energy can be saved from moving thermostat settings closer to ambient.

There are a number of other obvious steps for improving the comfort of people in buildings, while using less energy. Energy auditors can advise building managers as to how much they could save with different thermostat settings. Governments can be more active in collecting data on indoor temperatures and thermostat settings in commercial buildings. And to all you building occupants out there: If you find your office, store or restaurant too cold in summer or warm in winter, let management know about it.

Eric Williams, Associate Professor of Sustainability, Rochester Institute of Technology

This article was originally published on The Conversation. Read the original article.

Next, Check Out:

Memetics and the science of going viral

By Shontavia Johnson, Drake University.

WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO? WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO?

If you’ve ever heard the Baha Men’s 2000 hit “Who Let the Dogs Out,” you probably have also experienced its somewhat-annoying-but-very-catchy hook being stuck in your head for several hours.

The official video for ‘Who Let the Dogs Out’ by the Baha Men.

As you went about your day quietly humming it, perhaps someone else heard you and complained minutes later that you’d gotten the tune stuck in their head. The song’s hook seems to have the ability to jump from one brain to another. And perhaps, to jump from the web browser you are using right now to your brain. In fact, you may be singing the hook to yourself right now.

Something similar happens on the internet when things go viral – seeming to follow no rhyme or reason, people are compelled to like, share, retweet or participate in things online.

Meme world domination: When the leader of the free world impersonates Grumpy Cat. Gary Cameron/Reuters

For example, Grumpy Cat’s photo was shared so many times online that it went on to receive the “single greatest internet achievement of the year” award in 2013. The owner of Grumpy Cat (the cat’s real name is Tardar Sauce) has said she did not know the photo, originally posted to Reddit, would be anything special.

Out of the blue, some social media challenges take off to such an extent that people seem powerless to ignore them. In 2014, more than 3 million people mentioned and shared #IceBucketChallenge videos in less than three weeks. After going viral, the challenge raised more than US$100 million for the ALS Association.

In other instances, however, digital media languishes. Funny cat photos and #XYZChallenges go ignored, unshared and without retweets.

Why, and how, are we compelled to repeat and share certain cultural elements like songs, videos, words and pictures? Is it just the luck of the draw and the pity of internet strangers? The reason may have less to do with random chance and more to do with a controversial field called memetics, which focuses on how little bits of culture spread among us.

As the director of Drake University Law School’s Intellectual Property Law Center, I’ve studied memetics and its relationship to viral media. It’s hard to ignore the connection between memetics and the question of what makes certain media get shared and repeated millions of times. Companies and individuals would be well-served to understand whether there is actually a science to going viral and how to use that science in campaigns.

Idea of ‘memes’ is based on genes

The term “memetics” was first proposed by evolutionary biologist Richard Dawkins in his popular 1976 book “The Selfish Gene.” He offered a theory regarding how cultural information evolves and is transmitted from person to person.

In the way a gene is a discrete packet of hereditary information, the idea is that a meme is a similar packet of cultural information. According to Dawkins, when one person imitates another, a meme is passed to the imitator, similar to the way blue eyes are passed from parents to children through genes.

Like male elk battling for supremacy, memes also fight to be on top.
Jake Bellucci, CC BY-ND

Memetics borrows from the theory of Darwinian evolution. It suggests that memes compete, reproduce and evolve just as genes do. Only the strongest survive. So memes fiercely vie for space and advantages in our brains and behaviors. The ones that succeed through widespread imitation have best evolved for repetition and communication. A meme is not controllable by any one individual – many people can simultaneously serve as hosts for it.

It can be difficult to further explain what might fall under the heading of “meme.” Commonly, however, scientists note a meme may be a phrase, catchy jingle, or behavior. Dawkins hesitated to strictly define the term, but he noted that tunes, ideas, catch-phrases, clothes fashions, and ways of making pots or building arches could all be memes. Memetics suggests that memes have existed for as long as human beings have been on the planet.

One common illustration is the spoked wheel meme. According to philosopher Daniel C. Dennett:

A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.

The first person whose brain transported the spoked wheel meme builds one spoke-wheeled wagon. Others will see this first wagon, replicate the same meme, and continue to build more wagons until there are hundreds, thousands, or millions of wagons with spoked wheels. In the earliest days of human existence, such a meme could replicate quickly in a universe where alternatives were few and far between.

Memetics is about more than just what makes a thing popular. The strongest memes – those that replicate in the most minds – are the ones responsible for creating human culture.

A strong meme is going to go places.
Richard Walker, CC BY

Enter the internet

Today, the internet meme (what most people now just call a meme) is a piece of media that is copied and quickly spread online. One of the first uses of the internet meme idea arose in 1994, when Mike Godwin, an American attorney and internet law expert, used the word “meme” to characterize the rapid spread of ideas online. He had noticed that, in disparate newsgroups and virtual communities, certain posters were labeled as “similar to the Nazis” or “Hitler-like” when they made unpopular comments. Godwin dubbed this the Nazi-comparison meme. It would pop up again and again, in different kinds of discussions with posters from around the world, and Godwin marveled at the meme’s “peculiar resilience.”

More than 20 years later, the word “meme” has become a regular part of our lexicon, and has been used to describe everything from the Ermahgerd Girl to Crying Jordan to Gangnam Style.

In today’s world, any one meme has lots of competition. Americans spend, on average, 11 hours per day interacting with digital media. Australians spend 10 hours per day on internet-connected devices. Latin Americans spend more than 12 hours consuming some sort of media daily.

Around the world, people constantly receive thousands of photos, videos and other messages. Determining which of these items captures the most attention could translate into significant advantages for digital content creators.

Manipulating memes to go viral?

The internet meme and the scientific meme are not identical. The internet meme is typically deliberately altered by human ingenuity, while the scientific meme (invented before the internet became a thing) involves random change and accurate copying. In addition, internet memes are traceable through their presence on social media, while scientific memes are (at least right now) untraceable because they have no physical form or footprint. Dawkins, however, has stated that the internet meme and the scientific meme are clearly related and connected.

What causes one meme to replicate more successfully than another? Some researchers say that memes develop characteristics called “Good Tricks” to provide them with competitive advantages, including:

  1. being genuinely useful to a human host;
  2. being easily imitated by human brains; and
  3. answering questions that the human brain finds of interest.

First, if a meme is genuinely useful to humans, it is more likely to spread. Spoked-wheel wagons will replicate quickly because early humans need to transport lots of freight easily. Second, memes that are easy to copy have a competitive advantage over those that aren’t – a catchy hook like “WHO LET THE DOGS OUT” is easier to replicate than the lines to U2’s “Numb” (called one of the toughest pop songs to understand). Third, memes that answer pressing questions are likely to replicate. Peruse any bookstore aisle and you will find numerous books about finding your purpose, figuring out the meaning of life, or losing weight quickly and effectively – all topics of immense interest to many people.

Memetics suggests that there are real benefits to pairing a strong meme (using Dawkins’ original definition) with digital and other content. If there is a scientific explanation for strong replication, marketing and advertising strategies coupled with strong memes can unlock the share-and-repeat secrets of viral media.

The answer to such secrets may be found in songs like “WHO LET THE DOGS OUT? WHO-WHO-WHO-WHO-WHO?” Are you humming it yet?

Shontavia Johnson, Professor of Intellectual Property Law, Drake University

This article was originally published on The Conversation. Read the original article.

Next, Check Out:

Alligators Are Ancient: New Studies Show They Haven’t Evolved for Over 8 Million Years

While many of today’s top predators are more recent products of evolution, the modern American alligator is a reptile from another time.

New research shows these prehistoric-looking creatures have remained virtually untouched by major evolutionary change for at least 8 million years, and may be up to 6 million years older than previously thought. Besides some sharks and a handful of others, very few living vertebrate species have such a long duration in the fossil record with so little change.

“If we could step back in time 8 million years, you’d basically see the same animal crawling around then as you would see today in the Southeast. Even 30 million years ago, they didn’t look much different,” says Evan Whiting, a former University of Florida undergraduate and the lead author of two studies published during summer 2016 that document the alligator’s evolution—or lack thereof.

“We were surprised to find fossil alligators from this deep in time that actually belong to the living species, rather than an extinct one,” he says.

Alligators and humans

Whiting, now a doctoral student at the University of Minnesota, describes the alligator as a survivor, withstanding sea-level fluctuations and extreme changes in climate that would have caused some less-adaptive animals to rapidly change or go extinct. Whiting also discovered that early American alligators likely shared the Florida coastline with a 25-foot now-extinct giant crocodile.

In modern times, however, he says alligators face a threat that could hinder the scaly reptiles’ ability to thrive like nothing in their past—humans.

Despite their resilience and adaptability, alligators were nearly hunted to extinction in the early 20th century. The Endangered Species Act has significantly improved the number of alligators in the wild, but there are still ongoing encounters between humans and alligators that are not desirable for either species and, in many places, alligator habitats are being destroyed or humans are moving into them, Whiting says.

“The same traits that allowed alligators to remain virtually the same through numerous environmental changes over millions of years can become a bit of a problem when they try to adapt to humans,” Whiting says. “Their adaptive nature is why we have alligators in swimming pools or crawling around golf courses.”

Whiting hopes his research findings serve to inform the public that the alligator was here first, and we should act accordingly by preserving the animal’s wild populations and its environment. By providing a more complete evolutionary history of the alligator, his research provides the groundwork for conserving habitats where alligators have dominated for millions of years.

“If we know from the fossil record that alligators have thrived in certain types of habitats since deep in time, we know which habitats to focus conservation and management efforts on today,” Whiting says.

Giant crocodiles

The researchers began re-thinking the alligator’s evolutionary history after Whiting examined an ancient alligator skull, originally thought to be an extinct species, unearthed in Marion County, Florida, and found it to be virtually identical to the iconic modern species. He compared the ancient skull with dozens of other fossils and modern skeletons to look at the whole genus and trace major changes, or the lack thereof, in alligator morphology.

Whiting also studied the carbon and oxygen compositions of the teeth of both ancient alligators and the 20- to 25-foot extinct crocodile Gavialosuchus americanus that once dominated the Florida coastline and died out about 5 million years ago for unknown reasons. The presence of alligator and Gavialosuchus fossils at several localities in north Florida suggests the two species may have coexisted in places near the coast, he says.

Analysis of the teeth suggests, however, that the giant croc was a marine reptile, which sought its prey in ocean waters, while alligators tended to hunt in freshwater and on land. That doesn’t mean alligators weren’t occasionally eaten by the monster crocs, though.

“Evan’s research shows alligators didn’t evolve in a vacuum with no other crocodilians around,” says coauthor David Steadman, ornithology curator at the Florida Museum of Natural History at the University of Florida. “The gators we see today do not really compete with anything, but millions of years ago it was not only competing with another type of crocodilian, it was competing with a much larger one.”

Steadman says the presence of the ancient crocodile in Florida may have helped keep the alligators in freshwater habitats, though it appears alligators have always been most comfortable in freshwater.

While modern alligators do look prehistoric, study authors say they are not somehow immune to evolution. On the contrary, they are the result of an incredibly ancient evolutionary line. The group they belong to, Crocodylia, has been around for at least 84 million years and has diverse ancestors dating as far back as the Triassic, more than 200 million years ago.

Whiting’s studies were published in the Journal of Herpetology and in Palaeogeography, Palaeoclimatology, Palaeoecology.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license.

Now, Check Out:

How random is your randomness, and why does it matter?

By David Zuckerman, University of Texas at Austin and Eshan Chattopadhyay, Institute for Advanced Study.

Randomness is powerful. Think about a presidential poll: A random sample of just 400 people in the United States can accurately estimate Clinton’s and Trump’s support to within 5 percent (with 95 percent certainty), despite the U.S. population exceeding 300 million. That’s just one of many uses.
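That 5 percent figure follows from the standard margin-of-error formula for a simple random sample (a textbook calculation added here for illustration, not part of the original article):

\[
\text{margin of error} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}}
= 1.96\sqrt{\frac{0.5 \times 0.5}{400}} \approx 0.049,
\]

or roughly 5 percentage points at 95 percent confidence. Notably, the size of the overall population does not appear in the formula, which is why 400 people suffice even for a country of 300 million.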

Randomness is vital for computer security, making possible secure encryption that allows people to communicate secretly even if an adversary sees all coded messages. Surprisingly, it even allows security to be maintained if the adversary also knows the key used to encode the messages.

Often random numbers can be used to speed up algorithms. For example, the fastest way we know to test whether a particular number is prime involves choosing random numbers. That can be helpful in math, computer science and cryptography, among other disciplines.
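The Miller–Rabin test is a classic example of such a randomized primality check. The sketch below is a standard textbook implementation in Python, offered as an illustration rather than anything from the authors’ work:

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test: randomized, with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # the random choice that makes the test fast
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness that n is composite
    return True

print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime
```

Each round uses a fresh random base, and every extra round cuts the chance of being fooled by a composite number by a factor of at least four.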

Random numbers are also crucial to simulating very complex systems. When dealing with the climate or the economy, for example, so many factors interact in so many ways that the equations involve millions of variables. Today’s computers are not powerful enough to handle all these unknowns. Modeling this complexity with random numbers simplifies the calculations, and still results in accurate simulations.

Typing: A source of low-quality randomness. ROLENSFX/YouTube, CC BY-SA

But it turns out some – even most – computer-generated “random” numbers aren’t actually random. They can follow subtle patterns that can be observed over long periods of time, or over many instances of generating random numbers. For example, a simple random number generator could be built by timing the intervals between a user’s keystrokes. But the results would not really be random, because there are correlations and patterns in these timings, especially when looking at a large number of them.

Using this sort of output – numbers that appear at first glance to be unrelated but which really follow a hidden pattern – can weaken polls’ accuracy and communication secrecy, and render those simulations useless. How can we obtain high-quality randomness, and what does this even mean?

Randomness quality

To be most effective, we want numbers that are very close to random. Suppose a pollster wants to pick a random congressional district. As there are 435 districts, each district should have one chance in 435 of being picked. No district should be significantly more or less likely to be chosen.

Low-quality randomness is an even bigger concern for computer security. Hackers often exploit situations where a supposedly random string isn’t all that random, like when an encryption key is generated with keystroke intervals.

Radioactive decay: Unpredictable, but not efficient for generating randomness. Inductiveload

It turns out to be very hard for computers to generate truly random numbers, because computers are just machines that follow fixed instructions. One approach has been to use a physical phenomenon a computer can monitor, such as radioactive decay of a material or atmospheric noise. These are intrinsically unpredictable and therefore hard for a potential attacker to guess. However, these methods are typically too slow to supply enough random numbers for all the needs computers and people have.

There are other, more easily accessible sources of near-randomness, such as those keystroke intervals or monitoring computer processors’ activity. However, these produce random numbers that do follow some patterns, and at best contain only some amount of uncertainty. These are low-quality random sources. They’re not very useful on their own.

What we need is called a randomness extractor: an algorithm that takes as input two (or more) independent, low-quality random sources and outputs a truly random string (or a string extremely close to random).

Constructing a randomness extractor

Mathematically, it is impossible to extract randomness from just one low-quality source. A clever (but by now standard) argument from probability shows that it’s possible to create a two-source extractor algorithm to generate a random number. But that proof doesn’t tell us how to make one, nor guarantee that an efficient algorithm exists.

Until our recent work, the only known efficient two-source extractors required that at least one of the random sources actually had moderately high quality. We recently developed an efficient two-source extractor algorithm that works even if both sources have very low quality.

Our algorithm for the two-source extractor has two parts. The first part uses a cryptographic method called a “nonmalleable extractor” to convert the two independent sources into one series of coin flips. This allows us to reduce the two-source extractor problem to solving a quite different problem.

Suppose a group of people want to collectively make an unbiased random choice, say among two possible choices. The catch is that some unknown subgroup of these people have their heart set on one result or the other, and want to influence the decision to go their way. How can we prevent this from happening, and ensure the ultimate result is as random as possible?

The simplest method is to just flip a coin, right? But then the person who does the flipping will just call out the result he wants. If we have everyone flip a coin, the dishonest players can cheat by waiting until the honest players announce their coin flips.

A middling solution is to let everyone flip a coin, and go with the outcome of a majority of coin flippers. This is effective if the number of cheaters is not too large: among the honest players, the number of heads is likely to differ from the number of tails by a significant amount, and if the number of cheaters is smaller than that difference, they won’t be able to swing the outcome.

Protecting against cheaters

We constructed an algorithm, called a “resilient function,” that tolerates a much larger number of cheaters. It depends on more than just the numbers of heads and tails. A building block of our function is called the “tribes function,” which we can explain as follows.

Suppose there are 44 people involved in collectively flipping a coin, some of whom may be cheaters. To make the collective coin flip close to fair, divide them into 11 subgroups of four people each. Each subgroup will call out “heads” if all of its members flip heads; otherwise it will say “tails.” The tribes function outputs “heads” if any subgroup says “heads;” otherwise it outputs “tails.”
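Here is a minimal Python sketch of that 44-player tribes function as described above (an illustration of the idea, not the authors’ code; the function and variable names are my own):

```python
import random

def tribes(flips, tribe_size=4):
    """Output 'heads' if any tribe of `tribe_size` consecutive players
    flipped all heads; otherwise output 'tails'.

    `flips` is a list of booleans, True meaning that player flipped heads.
    """
    groups = [flips[i:i + tribe_size] for i in range(0, len(flips), tribe_size)]
    return "heads" if any(all(group) for group in groups) else "tails"

# 44 players split into 11 tribes of 4, each flipping a fair coin.
flips = [random.random() < 0.5 for _ in range(44)]
print(tribes(flips))

# With honest players the output is close to fair:
# P(some tribe is all heads) = 1 - (1 - 1/16)**11, which is about 0.51.
```

A single cheater whose tribe-mates include at least one tails cannot change the output, which is the limited resilience described in the next paragraph.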

The tribes function works well if there is just one cheater. This is because if some other member of the cheater’s subgroup flips tails, then the cheater’s coin flip doesn’t affect the outcome. However, it works poorly if there are four cheaters, and if those players all belong to the same subgroup. For then all of them could output “heads,” and force the tribes function to output “heads.”

To handle many cheaters, we build upon work of Miklos Ajtai and Nati Linial and use many different divisions into subgroups. This gives many different tribes functions. We then output “heads” if all these tribe functions output “heads”; otherwise we output “tails.” Even a large number of cheaters is unlikely to be able to control the output, ensuring the result is, in fact, very random.

Our extractor outputs just one almost random bit – “heads” or “tails.” Shortly afterwards Xin Li showed how to use our algorithm to output many bits. While we gave an exponential improvement, other researchers have further improved our work, and we are now very close to optimal.

Our finding is truly just one piece of a many-faceted puzzle. It also advances an important field in the mathematical community, called Ramsey theory, which seeks to find structure even in random-looking objects.

David Zuckerman, Professor of Computer Science, University of Texas at Austin and Eshan Chattopadhyay, Postdoctoral Researcher in Mathematics, Institute for Advanced Study

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

As climate change alters the oceans, what will happen to Dungeness crabs?

By Paul McElhany, National Oceanic and Atmospheric Administration.

Many travelers visit the Pacific Northwest to eat the region’s famous seafood – particularly Dungeness crabs, which are popular in crab cakes or wrestled straight out of the shell. Locals also love catching and eating the feisty creatures. One of my favorite ways to spend an afternoon is fishing for Dungeness crabs from a pier in Puget Sound with my daughter. We both enjoy the anticipation of not knowing what we will discover when we pull up the trap. For us, the mystery is part of the fun.

But for commercial crabbers who bring in one of the most valuable marine harvests on the U.S. West Coast, that uncertainty affects their economic future.

In my day job as a research ecologist with the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center, I study how changes in seawater’s acidity from absorbing carbon dioxide in the air, referred to as ocean acidification, may affect the success of recreational crabbers like me and the fortunes of the crabbing industry.

Contrary to early assumptions that acidification was unlikely to have significant effects on Dungeness crabs, we found in a recent study that the larvae of this species have lower survival when they are reared in the acidified ocean conditions that we expect to see in the near future. Our findings have sobering implications for the long-term future of this US$170 million fishery.

Pike Place Market, Seattle.
jpellgen/Flickr, CC BY-NC-ND

Dissolving shells

Ocean acidification is a global phenomenon that occurs when we burn fossil fuels, pumping carbon dioxide (CO2) into the atmosphere. Some of that CO2 is absorbed by the ocean, causing chemical changes that make ocean water more acidic, which can affect many types of marine life. The acidification taking place now is the most rapid change in ocean chemistry in at least 50 million years.

Many organisms, including numerous species of fish, phytoplankton and jellyfish, do not seem to be greatly affected by these changes. But some species – particularly oysters, corals and other organisms that make hard shells from calcium carbonate in seawater – die at a higher rate as the water in which they are reared becomes more acidic. Acidification reduces the amount of carbonate in the seawater, so these species have to use more energy to produce shells.

If water becomes extremely acidic, their shells can literally dissolve. We have seen this happen in experiments using small free-swimming marine snails called pteropods.

Dungeness crabs make their exoskeleton primarily from chitin, a modified polysaccharide similar to cellulose, that contains only small amounts of calcium carbonate. Initially, scientists predicted that the species would experience relatively limited harm from acidification. However, recent experiments in our lab led by graduate student Jason Miller suggest that Dungeness crabs are also vulnerable.

Crab fishing boats, Half Moon Bay, California.
Steve McFarland/Flickr, CC BY-NC

Fewer crabs, growing more slowly

In these experiments we simulated CO2 conditions that have been observed in today’s ocean and conditions we expect to see in the future as result of continued CO2 emissions. By raising Dungeness crab larvae in this “ocean time machine,” we were able to observe how rising acidification affected their development.

Dungeness crabs’ life cycle starts in autumn, when female crabs each produce up to two million orange eggs, which they attach to their abdomens. The brooding females spend the winter buried up to their eye stalks in sediment on the sea floor with their egg masses tucked safely under a flap of exoskeleton.

In spring the eggs hatch, producing larvae in what is called the zoea stage – about the size of a period in 12-point type. Zoea-stage crab larvae look nothing like adult crabs, and have a completely different lifestyle. Instead of lurking on the bottom and scavenging on shrimp, mussels, small crabs, clams and worms, they drift and swim in the water column eating smaller free-swimming zooplankton.

Dungeness crab larva, zoea stage. Oregon Department of Fish and Wildlife

After molting through five different zoea stages, which all look pretty similar, the larvae reach the megalopae stage when they are about two months old. Next they molt into the benthic juvenile stage, which looks a lot like an adult crab, and settle to the sea floor. The crabs finally reach adulthood about two years after hatching.

Some common pH values. Wikipedia, CC BY

In our experiment, divers collected brooding female Dungeness crabs from the bottom of Puget Sound in Washington state. We reared larvae produced from these females in three different CO2 levels that roughly corresponded to acidification levels now (pH 8.0), levels projected to be relatively common at midcentury (pH 7.5) and levels expected in some locations by the end of the century (pH 7.1). The pH scale measures how acidic or basic (alkaline) a substance is, with lower pH indicating a more acidic condition and a decrease of one unit (i.e., from 8 to 7) representing a tenfold increase in acidity. This means that the ocean today (average pH 8.1) is about 25 percent more acidic than the ocean in pre-industrial times (pH 8.2) and the ocean of the future is expected to be about 100 percent more acidic than today.
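Those percentages follow directly from the definition of pH, since hydrogen ion concentration scales as $10^{-\mathrm{pH}}$ (a standard chemistry calculation shown here for illustration):

\[
\frac{[\mathrm{H^+}]_{\mathrm{pH}=8.1}}{[\mathrm{H^+}]_{\mathrm{pH}=8.2}} = 10^{\,8.2-8.1} = 10^{0.1} \approx 1.26,
\]

that is, roughly 25 percent more acidic, while a drop of a full pH unit means $10^{1} = 10$ times more hydrogen ions, and a drop of about 0.3 units ($10^{0.3} \approx 2$) is what “100 percent more acidic” corresponds to.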

Describing exactly how acidic Puget Sound is now or could be in the future is complicated, because CO2 levels in different parts of the Sound vary widely and there are seasonal shifts. Generally, however, Puget Sound is naturally more acidic than other parts of the ocean because currents bring acidic waters from the deep ocean to the surface there. But shellfishermen are concerned because human-produced CO2 is causing large changes on top of these background levels of variation.

We found that although eggs reared in high-CO2 water hatched at the same rate as those in lower-CO2 water, fewer than half as many of the larvae reared in highly acidic conditions survived for more than 45 days compared to those raised under current conditions (Figure 2). Put another way, the mortality rate in acidified conditions was more than twice as high as in more contemporary CO2 conditions. Crabs raised in more acidic water also developed more slowly, and fewer of them reached the 4th zoeal stage compared to larvae raised in less-acidic water. This slower development rate probably reflected the extra energy that larvae had to expend to grow in a more acidic environment.

Su Kim/NOAA Fisheries, Author provided

We are not entirely sure what these results mean for future populations of Dungeness crabs, but there is reason for concern. Significantly lower larval survival may translate into fewer adult crabs, which will have ripple effects on the fishery and Pacific coastal food webs.

Slower larval growth could lead to a mismatch in the timing of predators and prey. Crab larvae depend on finding abundant prey during certain times of the year, and organisms such as Chinook salmon and herring that prey on crab larvae depend on an abundance of crabs at particular times of the year. Any factor that disrupts the timing of development can have important ecological consequences.

Dungeness crab are found along the Pacific coast from California to Alaska, and over that range they experience wide variations in water temperature, ecological communities and pH. It is possible that individual crabs may be able to tolerate new CO2 conditions during their lives – in other words, to acclimate to the changes. Or if some crabs are just better able to tolerate high-CO2 conditions more easily than others, they may pass on that ability to their offspring, allowing the species to adapt to rising acidification through evolution. Our next studies will examine how Dungeness crabs may acclimate or adapt to increasing acidification.

Today Dungeness crab populations are generally in good condition, and my daughter and I usually come home from our crabbing adventures victorious. It is hard to imagine that this abundant species is at risk in the coming decades, but we need to anticipate how it could be affected by acidification. For Dungeness crabs and many other species, it is essential to understand how human actions today could alter sea life in tomorrow’s oceans.

Jason Miller, a former biologist at NOAA’s Northwest Fisheries Science Center and graduate student at the University of Washington, was lead author of the Dungeness crab larval exposure study on which much of this article is based.

Paul McElhany, Research Ecologist, National Oceanic and Atmospheric Administration

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Ceres asteroid may have an ‘ice volcano’ and other signs of water, NASA mission reveals

By Monica Grady, The Open University.

The arrival of NASA’s Dawn mission at the huge asteroid “1 Ceres” in early 2015 has turned out to have been well worth waiting for. This dwarf planet is the largest body in the asteroid belt between Mars and Jupiter and was the first to be discovered. But, until recently, we have only had information from ground and space-based telescopes, which have given us tantalising glimpses of a dark, possibly water-rich object.

Now the Dawn space probe has sent back a bumper harvest of findings, summarised in six new research papers published in a special issue of the journal Science. We now have a map of Ceres that reveals unusual minerals, a surface peppered with craters, and water in the form of ice and possibly an outer atmosphere of vapour. There’s also enough uncertainty in the results to sow the seeds for future research.

The data provides a global geological map of the asteroid showing that its entire surface appears to be covered in phyllosilicates, an important group of clay minerals. Two specific clays are identified: one that is magnesium-rich, the second an ammonium-rich species. There seems to be little or no pattern to the distribution of the two minerals – they are both almost everywhere.

Dawn over Ceres
NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

This ubiquity is what is important. The minerals could not have been formed in a local event, such as an impact into an ice-filled crater. They must have been produced by planet-wide alteration, presumably implying there must have been volumes of water. It is clear that enormous quantities of liquid water are not present on Ceres now. But the signal of water-ice has been detected in at least one crater.

Because Ceres is relatively warm (its surface temperature ranges between -93℃ and -33℃), water-ice exposed at the surface would rapidly convert into a gas in such a low-pressure environment. So the discovered traces of water-ice suggest some underground ice was recently exposed and that there must be some mechanism to explain how the surface was disturbed in this way. Some researchers think that the answer is cryovolcanism, where subsurface layers of mixed ice and minerals percolate slowly to the surface through cracks and fractures, or move more swiftly following an impact. If the minerals are chlorides, then a low-temperature brine can keep the subsurface layer mobile.

Ice flows

As well as a geological map of Ceres, we also have a picture of Ceres’ global geomorphology (its surface features). This shows that the surface of Ceres is peppered with impact craters, although the craters are not distributed evenly over the surface. Much more interesting are the three distinct types of mineral flow across the landscape, produced by the movement of ice-rich material, landslides or blankets of ejected particles following impact into ice-rich material. The distribution of the flow types varies with latitude – and the researchers think this means different surface layers of the asteroid contain different amounts of ice.

One of the most remarkable results is the detection of a sudden burst of highly energetic electrons over a period of around a week in June 2015, coinciding with a solar proton storm. The researchers think the protons fired out by the sun interacted with particles in Ceres’ weak atmosphere, creating a shock wave that accelerated the electrons. Based on observations by the Hubble Space Telescope, Ceres is believed to have a tenuous exosphere (outer atmosphere) of water vapour. The results from Dawn suggest that this may indeed be the case.

Together, this new set of information shows that Ceres is a world that has been shaped by a series of events, with a strong crust of magnesium- and ammonium-bearing phyllosilicates overlying an interior of briny ice and hydrated minerals. What other hidden secrets will be revealed as research continues on the trove of data from Ceres? Questions still remain about the variety of mineral deposits, the depth of the subsurface ice-rock layer, and, of course, the potential for organic material on the minor planet. The harvest from Ceres so far has been rich and promises to keep us busy for years to come.

Monica Grady, Professor of Planetary and Space Sciences, The Open University

This article was originally published on The Conversation. Read the original article.
