Machine learning and big data know it wasn’t you who just swiped your credit card

Jungwoo Ryoo, Pennsylvania State University

You’re sitting at home minding your own business when you get a call from your credit card’s fraud detection unit asking if you’ve just made a purchase at a department store in your city. It wasn’t you who bought expensive electronics using your credit card – in fact, it’s been in your pocket all afternoon. So how did the bank know to flag this single purchase as most likely fraudulent?

Credit card companies have a vested interest in identifying financial transactions that are illegitimate and criminal in nature. The stakes are high. According to the Federal Reserve Payments Study, Americans used credit cards to pay for 26.2 billion purchases in 2012. The estimated loss due to unauthorized transactions that year was US$6.1 billion. The federal Fair Credit Billing Act limits the maximum liability of a credit card owner to $50 for unauthorized transactions, leaving credit card companies on the hook for the balance. Obviously, fraudulent payments can have a big effect on the companies’ bottom lines. The industry requires any vendors that process credit cards to go through security audits every year. But that doesn’t stop all fraud.

In the banking industry, measuring risk is critical. The overall goal is to figure out what’s fraudulent and what’s not as quickly as possible, before too much financial damage has been done. So how does it all work? And who’s winning in the arms race between the thieves and the financial institutions?

Gathering the troops

From the consumer perspective, fraud detection can seem magical. The process appears instantaneous, with no human beings in sight. This apparently seamless and instant action involves a number of sophisticated technologies in areas ranging from finance and economics to law to information sciences.

Of course, there are some relatively straightforward and simple detection mechanisms that don’t require advanced reasoning. For example, one good indicator of fraud can be an inability to provide the correct zip code affiliated with a credit card when it’s used at an unusual location. But fraudsters are adept at bypassing this kind of routine check – after all, finding out a victim’s zip code could be as simple as doing a Google search.

Traditionally, detecting fraud relied on data analysis techniques that required significant human involvement. An algorithm would flag suspicious cases to be ultimately reviewed by human investigators, who might even call the affected cardholders to ask whether they’d actually made the charges. Nowadays the companies are dealing with such a constant deluge of transactions that they must rely on big data analytics for help. Emerging technologies such as machine learning and cloud computing are stepping up the detection game.

It takes a lot of computing power.
Stefano Petroni, CC BY-NC-ND

Learning what’s legit, what’s shady

Simply put, machine learning refers to self-improving algorithms – predefined processes conforming to specific rules – performed by a computer. A computer starts with a model and then trains it through trial and error. It can then make predictions, such as the risk associated with a financial transaction.

A machine learning algorithm for fraud detection needs to be trained first by being fed the normal transaction data of lots and lots of cardholders. Transaction sequences are an example of this kind of training data. A person may typically pump gas once a week, go grocery shopping every two weeks and so on. The algorithm learns that this is a normal transaction sequence.

After this fine-tuning process, credit card transactions are run through the algorithm, ideally in real time. It then produces a score indicating the probability that a transaction is fraudulent (for instance, 97%). If the fraud detection system is configured to block any transaction whose score is above, say, 95%, this assessment could immediately trigger a card rejection at the point of sale.
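
To make the threshold idea concrete, here is a minimal sketch in Python of the score-then-block logic. Everything in it – the three features, the toy training data, the 95% cutoff – is an assumption for illustration, not a detail of any real issuer’s system:

```python
# Illustrative only: a toy fraud scorer with a configurable rejection threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [amount_usd, hour_of_day, miles_from_home]; label 1 = fraud
X_train = np.array([
    [35.0, 12, 2], [60.0, 18, 5], [20.0, 9, 1],   # typical, legitimate purchases
    [950.0, 3, 1400], [1200.0, 2, 2100],          # atypical, fraudulent purchases
])
y_train = np.array([0, 0, 0, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

REJECT_THRESHOLD = 0.95  # assumed policy: block when P(fraud) exceeds 95%

def score_transaction(amount, hour, miles):
    """Return the model's fraud probability and an accept/reject decision."""
    p_fraud = model.predict_proba([[amount, hour, miles]])[0][1]
    return p_fraud, ("REJECT" if p_fraud > REJECT_THRESHOLD else "ACCEPT")

print(score_transaction(25.0, 13, 3))      # resembles the cardholder's routine
print(score_transaction(1100.0, 3, 1800))  # resembles the fraud pattern
```

A production system would be trained on millions of labeled transactions and far richer features, but the decision at the point of sale reduces to exactly this comparison of a model score against a configured threshold.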

The algorithm considers many factors to qualify a transaction as fraudulent: trustworthiness of the vendor, a cardholder’s purchasing behavior including time and location, IP addresses, etc. The more data points there are, the more accurate the decision becomes.

This process makes just-in-time or real-time fraud detection possible. No person can evaluate thousands of data points simultaneously and make a decision in a split second.

Here’s a typical scenario. When you go to a cashier to check out at the grocery store, you swipe your card. Transaction details such as time stamp, amount, merchant identifier and membership tenure go to the card issuer. These data are fed to the algorithm that’s learned your purchasing patterns. Does this particular transaction fit your behavioral profile, consisting of many historic purchasing scenarios and data points?


I buy gas only during daylight hours.
Christopher, CC BY-NC

The algorithm knows right away if your card is being used at the restaurant you go to every Saturday morning – or at a gas station two time zones away at an odd time such as 3:00 a.m. It also checks if your transaction sequence is out of the ordinary. If the card is suddenly used for cash-advance services twice on the same day when the historic data show no such use, this behavior is going to up the fraud probability score. If the transaction’s fraud score is above a certain threshold, often after a quick human review, the algorithm will communicate with the point-of-sale system and ask it to reject the transaction. Online purchases go through the same process.
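
The sequence check can be sketched the same way. The snippet below asks how surprising today’s count of a given transaction type is, given the cardholder’s historical daily rate; the rates, the categories and the Poisson assumption are all illustrative, not a description of any vendor’s actual model:

```python
# Illustrative sketch: score how unusual today's count of a transaction type is,
# given the cardholder's historical daily rate for that type.
from math import exp, factorial

# Hypothetical per-day rates learned from this cardholder's history.
historical_daily_rate = {"grocery": 0.4, "gas": 0.15, "cash_advance": 0.005}

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def sequence_surprise(category, count_today):
    """Probability of seeing at least count_today events of this category
    in one day, under the cardholder's historical rate."""
    lam = historical_daily_rate.get(category, 0.01)
    return 1.0 - sum(poisson_pmf(k, lam) for k in range(count_today))

# Two cash advances in one day, for someone who almost never uses them:
print(sequence_surprise("cash_advance", 2))  # tiny probability -> raise the fraud score
```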

In this type of system, heavy human intervention is becoming a thing of the past. In fact, it can actually get in the way, since reaction time grows much longer when a human being is too deeply involved in the fraud-detection cycle. However, people can still play a role – either when confirming a fraud or when following up on a rejected transaction. When a card has been denied for multiple transactions, a person can call the cardholder before canceling the card permanently.

Computer detectives, in the cloud

The sheer number of financial transactions to process is overwhelming – truly in the realm of big data. But machine learning thrives on mountains of data: more information actually increases the accuracy of the algorithm, helping to eliminate false positives. These are alarms triggered by suspicious-looking transactions that are actually legitimate (for instance, a card used at an unexpected location). Too many alerts are as bad as none at all.

It takes a lot of computing power to churn through this volume of data. For instance, PayPal processes more than 1.1 petabytes of data for 169 million customer accounts at any given moment. This abundance of data – one petabyte, for instance, is more than 200,000 DVDs’ worth – has a positive influence on the algorithms’ machine learning, but can also be a burden on an organization’s computing infrastructure.
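
The DVD comparison is easy to check (assuming the standard 4.7 GB single-layer disc):

```python
# One petabyte expressed in single-layer DVDs (4.7 GB each).
PETABYTE = 10**15             # bytes
DVD = 4.7 * 10**9             # bytes per single-layer disc
print(round(PETABYTE / DVD))  # about 212,766 -> "more than 200,000 DVDs"
```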

Enter cloud computing. Off-site computing resources can play an important role here: the cloud is scalable, and it isn’t limited by a company’s own computing power.

Fraud detection is an arms race between good guys and bad guys. At the moment, the good guys seem to be gaining ground, thanks to innovations such as chip-and-PIN cards combined with encryption, machine learning, big data and, of course, cloud computing.

Fraudsters will surely continue trying to outwit the good guys and challenge the limits of the fraud detection system. Drastic changes in the payment paradigms themselves are another hurdle. Your phone is now capable of storing credit card information and can be used to make payments wirelessly – introducing new vulnerabilities. Luckily, the current generation of fraud detection technology is largely neutral to the payment system technologies.

The Conversation

Jungwoo Ryoo, Associate Professor of Information Sciences and Technology at the Altoona campus of Pennsylvania State University

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: U.S. Navy

Locavore or vegetarian? What’s the best way to reduce climate impact of food?

Elliott Campbell, University of California, Merced

This year’s Thanksgiving feast falls only a few days before the start of the global climate summit in Paris. Although the connections are not always obvious, the topic of food – and what you choose to eat – has a lot to do with climate change.

Our global agriculture system puts food on the table but it also puts greenhouse gases (GHG) in the air, which represent a huge portion of global emissions. GHG emissions come directly from farms such as methane from cows and nitrous oxide from fertilized fields, while other emissions come from the industries that support agriculture, such as fertilizer factories that consume fossil fuels.

Still other emissions come from natural lands, which have massive stocks of natural carbon stored in plants and soils. When natural lands are cleared to make room for more food production, the carbon in those natural pools is emitted to the atmosphere as carbon dioxide.

Adding all these emissions together makes agriculture responsible for between roughly one-fifth and one-third of all global anthropogenic, or manmade, greenhouse gas emissions.

How can these emissions be reduced? My own research through the University of California Global Food Initiative has focused on evaluating a wide range of factors from biofuels to local food systems.

Undoubtedly, broad emissions reductions must come from political action and industry commitments. Nevertheless, an enlightened consumer can also help achieve meaningful reductions in GHG emissions, particularly in the case of food. The trick is understanding what food means for your personal carbon footprint and how to shrink that footprint effectively.

On par with electricity

Zooming in from the global picture on emissions to a single home reveals how important our personal food choices are for climate change. You can use carbon footprint calculators, such as the University of California CoolClimate Tool, to get an idea of how important food is in relation to choices we make about commuting, air travel, home energy use, and consumption of other goods and services.

For the average U.S. household, food consumption is responsible for about the same GHG emissions as home electricity consumption.

That’s a significant portion of an individual’s GHG footprint but it could be seen as a blessing in disguise. While you may be stuck with your home or your vehicle for some time and their associated GHG emissions, food is something we purchase with great frequency. And every trip to the grocery store or farmer’s market is another opportunity for an action that has a significant and lasting impact on our climate.

Making concrete decisions, though, is not always straightforward. Many consumers are faced with a perplexing array of options, from organic to conventional foods, supermarkets to farmers markets, and genetically modified organisms to more traditional varieties.

And in truth, the carbon footprint of many food options is disputed in the scientific literature. Despite the need for more research, there appears to be a very clear advantage for individuals who choose a more plant-based diet. A meat-intensive diet has more than twice the emissions of a vegan diet. Reducing the quantity of meat (particularly red meat) and dairy on the table can go a long way toward reducing the carbon footprint of your food.

Food miles and water recycling

Local food systems are popularly thought to reduce GHG emissions through decreased food transport, or “food miles.” But in many cases food miles turn out to be a relatively small piece of the overall GHG emissions from food.

For example, a broad analysis of the US food supply suggests that food miles may be responsible for less than 10% of the GHG emissions associated with food. This general trend suggests that where you get your food from is much less important than first-order issues, such as shifting to a more plant-based diet.

A little-appreciated way of reducing the carbon footprint of food is to recycle nearby water rather than pump it long distances. The Pajaro Valley Water Management Agency (PVWMA) Water Resources Center in California sanitizes wastewater for direct use or blending with ground (well) water.
US Department of Agriculture, CC BY

Where, then, does this leave a rapidly emerging local food movement?

For starters, there are some cases where food miles have greater importance. For example, food miles can play a big part in the carbon footprint of foods when airplanes or refrigeration are required during transport.

There is, however, untapped potential for locally produced food to deliver carbon savings around water and fertilizers.

When water is pumped long distances, it can add to food’s carbon footprint. Re-use of purified urban wastewater for irrigating crops represents one strategy for addressing this challenge but is only economically and environmentally feasible when food production is in close proximity to cities.

Using fossil fuels to produce fertilizers, such as ammonia, can also be a big piece of the carbon footprint of food. Nutrients in reclaimed wastewater and urban compost may provide a low-carbon alternative to fossil fuel-based fertilizers. But similar to water re-use, reusing nutrients is most easily done when there is a short distance between food production and consumption.

To be sure, buying local food doesn’t imply that food or nutrient recycling has happened. But developing local food systems could certainly be a first step towards exploring how to close the water and nutrient loop.

The Conversation

Elliott Campbell, Associate Professor, Environmental Engineering, University of California, Merced

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: Alpha, CC BY-SA

Humans Invent Tools and Then Here’s the Unique Thing We Do Next

Many animals exhibit learned behaviors, but humans are unique in their capacity to build on existing knowledge to make new innovations.

Understanding the patterns of how new generations of tools emerged in prehistoric societies, however, has long puzzled scientists.

Observations from the archaeological record indicate that cultural traits can accumulate exponentially over time. In contrast, changes in tools often appear in punctuated, incremental bursts. Long, seemingly static periods are interspersed with “cultural explosions,” periods of sudden cultural accumulation.

The reason for these sudden changes is still up for debate, but researchers have attributed this pattern to external events, such as a change in environment, the evolution of new cognitive capacity,  or even the evolution of a culture.

Now scientists have developed a virtual way of testing these hypotheses.

In a paper published in the Proceedings of the National Academy of Sciences, scientists introduce a computer model of cultural evolution that reproduces all of these patterns in the archaeological record.

The researchers started from the premise that a pattern of punctuated bursts of creativity can be a feature of cultural evolution itself, as opposed to a cultural response to an external change.

In this model, some human innovations require large leaps of spontaneous insight, but other innovations can be created by drawing parallels with existing technology.

One invention can inspire the need for a new companion tool, or existing technologies can be combined to make a new tool. These different processes of innovation occur at different rates, and the relationships between these processes determine whether the accumulation of tools occurs in a stepwise pattern.

First, the researchers simulate a population of a certain size. Then they allow new “large leaps” in knowledge to occur at a certain rate per person. Once an individual in the simulated population has invented something really new—the researchers assigned mathematical values to things such as shovels and boats—the model simulates whether other innovations that are dependent on this new idea might be invented quickly thereafter.
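
The published model is more elaborate, but its core mechanism can be sketched in a few lines of Python. All the numbers below – population size, leap rate, niches opened per leap, fill rate – are invented for illustration; the point is only that rare independent leaps plus faster dependent inventions produce long plateaus punctuated by bursts:

```python
# A minimal sketch of the punctuated-innovation dynamic described above
# (not the authors' published model): rare "large leaps" occur at a small
# per-person rate, and each leap opens niches for cheaper dependent tools.
import random

random.seed(42)

POPULATION = 500
LEAP_RATE = 1e-5      # assumed per-person, per-step chance of a big insight
NICHES_PER_LEAP = 5   # assumed dependent inventions each leap makes possible
FILL_RATE = 0.15      # assumed per-step chance an open niche gets filled

tools, open_niches, history = 0, 0, []
for step in range(2000):
    # Large leaps: each individual has a small independent chance of one.
    leaps = sum(1 for _ in range(POPULATION) if random.random() < LEAP_RATE)
    tools += leaps
    open_niches += leaps * NICHES_PER_LEAP
    # Dependent inventions: open niches are filled comparatively quickly.
    filled = sum(1 for _ in range(open_niches) if random.random() < FILL_RATE)
    tools += filled
    open_niches -= filled
    history.append(tools)

# Sampled every 200 steps, the tool count shows plateaus broken by bursts.
print(history[::200])
```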

CULTURAL EXPLOSION

A significant difference between the new model proposed and previous efforts is that the scientists don’t assume that all human innovations are created the same way.

“It was insightful to realize that tools can create ‘ecological niches’ for other tools to fill,” says Oren Kolodny, a postdoctoral fellow working with Marcus Feldman, a professor of biology at Stanford University.

“Once you invent something like a raft, it paves the way for the invention of a paddle that’ll allow you to manipulate it, tools that will help you mend it, and eventually also new technologies for offshore fishing or transport of things.”

The researchers’ model also considers ways that tools are lost to a people, by incorporating two properties intrinsic to cultural evolution: the distribution of knowledge in different pockets of a population and the impact of environmental change.

“In general, humans inherit genetic traits directly from our parents,” says postdoctoral fellow Nicole Creanza. “In contrast, cultural traits—tools, beliefs, and behaviors that are transmitted by learning—can be learned not only from parents but also from teachers and peers.”

In the model, certain knowledge can be concentrated in a subset of a population, such as medicine men and medicine women. This concentration of knowledge subsequently leads to increased susceptibility to the loss of this knowledge.

Tools can also be lost following an environmental change, or if a population migrates to a new environment. A fluctuating environment can lead to large-scale cultural losses when tools are specialized for use in a particular environment. Similarly, a rapidly changing environment can select for “generalist” tools that are useful under a wider range of conditions.

Over time, the researchers believe the model will help reveal the drivers of these observed shifts in tool repertoire, which are otherwise nearly impossible to suss out from the archaeological record alone.

“We don’t completely understand the sudden bursts of cultural accumulation in the archaeological record, but researchers have proposed that an environmental change or a shift in cognitive capacity could spur a ‘cultural explosion,’” Creanza says.

“Our model demonstrates that these ‘explosions’ could also be a feature of cultural evolution itself, as long as some innovations are dependent on others.”

The Stanford University Center for Computational, Evolutionary and Human Genomics and the John Templeton Foundation supported the work.

 

Republished from Futurity.org under the Creative Commons Attribution 4.0 International license, with a new headline and inline article links removed. Original article posted on Futurity.

Featured Image Credit: MattysFlicks/Flickr

Is Double-dipping a Food Safety Problem or Just a Nasty Habit?

Paul Dawson, Clemson University

What do you do when you are left with half a chip in your hand after dipping? Admit it, you’ve wondered whether it’s OK to double dip the chip.

Maybe you’re the sort who dips their chip only once. Maybe you look around the room before loading your half-eaten chip with a bit more dip, hoping that no one will notice.

If you’ve seen that classic episode of Seinfeld, “The Implant,” where George Costanza double-dips a chip at a wake, maybe you’ve wondered if double-dipping is really like “putting your whole mouth right in the dip!”


‘You double-dipped the chip.’

But is it, really? Can the bacteria in your mouth make it onto the chip then into the dip? Is this habit simply bad manners, or are you actively contaminating communal snacks with your particular germs?

This question intrigued our undergraduate research team at Clemson University, so we designed a series of experiments to find out just what happens when you double-dip. Testing to see if there is bacterial transfer seems straightforward, but there are more subtle questions to be answered. How does the acidity of the dip affect bacteria, and do different dips affect the outcome? Members of the no-double-dipping enforcement squad, prepare to have your worst, most repulsive suspicions confirmed.

Start with a cracker

Presumably some of your mouth’s bacteria transfer to a food when you take a bite. But the question of the day is whether that happens, and if so, how much bacteria makes it from mouth to dip. Students started by comparing bitten versus unbitten crackers, measuring how much bacteria could transfer from the cracker to a cup of water.

We found about 1,000 more bacteria per milliliter of water when bitten crackers were dipped than when unbitten crackers were dipped.

In a second experiment, students tested bitten and unbitten crackers in water solutions with pH levels typical of food dips (pH levels of 4, 5 and 6, which are all toward the more acidic end of the pH scale). They tested for bacteria right after the bitten and unbitten crackers were dipped, then measured the solutions again two hours later. More acidic solutions tended to lower the bacterial numbers over time.

The time had come to turn our attention to real food.

But what about the dip?


This Animal Redefines the Words ‘Extreme,’ ‘Survivor,’ and Possibly Even ‘Evolution’

There’s a microscopic animal, one many people have never even heard of, that holds the world record for extreme survival. It can live through being deep-frozen, down to −458 °F (−272 °C); being boiled, up to 300 °F (149 °C); pressures six times those found at the bottom of the deepest ocean; radiation levels that would almost immediately kill a human; dehydration to less than 3% water in its tiny body; and, most astoundingly, the complete vacuum of space.

This amazing animal is called the tardigrade, often referred to as the “water bear” because they kind of look like little bears due to the claws on the ends of their eight limbs. But if they have any DNA in common with actual bears, it’s because they somehow borrowed it.

CRAZY FACT: STICK A TARDIGRADE IN A −80 °C FREEZER FOR 10 YEARS AND IT STARTS RUNNING AROUND WITHIN 20 MINUTES OF THAWING.

Scientists sequenced the tardigrade genome and were shocked to find that the animals get a huge chunk of their DNA—about 17 percent—from foreign sources.

Previously another microscopic animal called the rotifer was the record-holder for having the most foreign DNA, but it has about half as much as the tardigrade. For comparison, most animals have less than one percent of their genome from foreign DNA.

“We had no idea that an animal genome could be composed of so much foreign DNA,” says study co-author Bob Goldstein, a researcher at the University of North Carolina at Chapel Hill. “We knew many animals acquire foreign genes, but we had no idea that it happens to this degree.”

The work, published in the Proceedings of the National Academy of Sciences, not only raises the question of whether there is a connection between foreign DNA and the ability to survive extreme environments, but further stretches conventional views of how DNA is inherited.

The study shows that tardigrades acquire about 6,000 foreign genes primarily from bacteria, but also from plants, fungi, and Archaea, through a process called horizontal gene transfer—the swapping of genetic material between species as opposed to inheriting DNA exclusively from mom and dad.


“Animals that can survive extreme stresses may be particularly prone to acquiring foreign genes—and bacterial genes might be better able to withstand stresses than animal ones,” says Thomas Boothby, a postdoctoral fellow in Goldstein’s lab and first author of the study. After all, bacteria have survived the Earth’s most extreme environments for billions of years.

The team speculates that the DNA is getting into the genome randomly, but what is kept is what allows tardigrades to survive the harshest of environments: stick a tardigrade in a −80 °C freezer for 10 years and it starts running around within 20 minutes of thawing.

THIS IS WHAT THE TEAM THINKS HAPPENS

When tardigrades are under conditions of extreme stress such as desiccation—or a state of extreme dryness—Boothby and Goldstein believe that the tardigrade’s DNA breaks into tiny pieces.

When the cell rehydrates, the cell’s membrane and nucleus, where the DNA resides, become temporarily leaky, and DNA and other large molecules can pass through easily. Tardigrades not only can repair their own damaged DNA as the cell rehydrates but also stitch in the foreign DNA in the process, creating a mosaic of genes that come from different species.

“We think of the tree of life, with genetic material passing vertically from mom and dad,” says Boothby. “But with horizontal gene transfer becoming more widely accepted and more well known, at least in certain organisms, it is beginning to change the way we think about evolution and inheritance of genetic material and the stability of genomes.

“So instead of thinking of the tree of life, we can think about the web of life and genetic material crossing from branch to branch. So it’s exciting. We are beginning to adjust our understanding of how evolution works.”

Tardigrades appear to live everywhere in the world.


How Fast Can We Transition to a Low-carbon Energy System?

Paul N Edwards, University of Michigan

Starting later this month, the world’s nations will convene in traumatized Paris to hammer out commitments to slow down global climate change. Any long-term solution will require “decarbonizing” the world energy economy – that is, shifting to power sources that use little or no fossil fuel.

How fast can this happen, and what could we do to accelerate this shift?

A look at the history of other infrastructures offers some clues.

Energy infrastructures

Decarbonization is an infrastructure problem, the largest one humanity has ever faced. It involves not only energy production, but also transportation, lighting, heating, cooling, cooking and other basic systems and services. The global fossil fuel infrastructure includes not only oil and gas wells, coal mines, giant oil tankers, pipelines and refineries, but also millions of automobiles, gas stations, tank trucks, storage depots, electric power plants, coal trains, heating systems, stoves and ovens.

The total value of all this infrastructure is on the order of US$10 trillion, or nearly two-thirds of US gross domestic product. Nothing that huge and expensive will be replaced in a year, or even a few years. It will take decades.

Yet there is good news, of a sort, in the fact that all infrastructure eventually wears out. A 2010 study asked: what if the current energy infrastructure were simply allowed to live out its useful life, without being replaced?

The surprising answer: if every worn-out coal-fired power plant were exchanged for solar, wind or hydro, and every dead gas-powered car replaced with an electric one, and so on, we might just stay within our planetary boundaries.

According to the study, using the existing infrastructure until it falls apart would not push us past the 2 degrees Celsius global warming that many scientists see as the upper limit of acceptable climate change.

The problem, of course, is that we aren’t doing this yet. Instead, we’re replacing worn-out systems with more of the same, while drilling, mining and building even more. But that could change.

Take-off to build-out: a 30-100-year timeline

Historians of infrastructure like myself observe a typical pattern. A slower innovation phase is followed by a “take-off” phase, during which new technical systems are rapidly built and adopted across an entire region, until the infrastructure stabilizes at “build-out.”

This temporal pattern is surprisingly similar across all kinds of infrastructure. In the United States, the take-off phase of canals, railroads, telegraph, oil pipelines and paved roadways lasted 30-100 years. The take-off phases of radio, telephone, television and the internet each lasted 30-50 years.

The history of infrastructure suggests that “take-off” in renewable electricity production has already begun and will move very quickly now, especially when and where governments support that goal.

Solar and wind power installations are currently emerging faster than any other electric power source, growing at worldwide annual rates of 50% and 18% respectively from 2009-2014. These sources can piggyback on existing infrastructure, pumping electricity into power grids (though their intermittent power production requires managers to adjust their load-balancing techniques). But wind and solar can also provide power “off-grid” to individual homes, farms and remote locations, giving these sources a unique flexibility.
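
Those growth rates compound quickly. Standard compound-growth arithmetic, using only the percentages quoted above, gives the doubling times:

```python
# Doubling time at a steady annual growth rate r: ln(2) / ln(1 + r) years.
import math

def doubling_time(annual_rate):
    return math.log(2) / math.log(1 + annual_rate)

print(f"solar (50%/yr): doubles every {doubling_time(0.50):.1f} years")
print(f"wind  (18%/yr): doubles every {doubling_time(0.18):.1f} years")
```

At those rates, solar capacity doubles in well under two years and wind in just over four – which is what makes the take-off phase so abrupt once it begins.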

On 140 acres of unused land on Nellis Air Force Base, Nev., 70,000 solar panels are part of a solar photovoltaic array that generates 15 megawatts of solar power for the base. (U.S. Air Force photo/Airman 1st Class Nadine Y. Barclay)

Some countries, notably Germany and China, have made major commitments to renewables.

Germany now gets over 25% of its electric power from renewables, helping to reduce its total carbon output by over 25% relative to 1990. China already produces more solar electricity than any other country, with an installed base of over 30 gigawatts and plans to reach 43 gigawatts by the end of this year. In Australia between 2010 and 2015, solar photovoltaic capacity grew from 130 megawatts to 4.7 gigawatts – an annual growth rate of 96%.

Combined with complementary technologies such as electric cars, efficient LED lighting, and geothermal heating and cooling, this transition could move us closer to carbon neutrality.

Could the 30-100-year timeline for infrastructure development be accelerated? Some indicators suggest that the answer may be “yes.”

First, in the case of electricity, only the power sources need replacement; power grids – the poles, wires and other gear that transport electricity – must be managed differently, but not rebuilt from scratch. Second, less developed countries may take advantage of renewable technologies to “leapfrog” almost entirely over older infrastructures.

Similar things have happened in the recent past. Since 2000, for example, cellular telephone networks have reached most of the developing world – and simultaneously avoided the slow, costly laying of vulnerable landlines, which many such places will now never build outside major cities.

The parallel in energy is powering buildings, farms, informal settlements and other points of need with portable solar panels and small windmills, which can be installed almost anywhere with no need for long-distance power lines. This, too, is already happening all over the developing world.

In the developed world, however, the transition to renewables will likely take considerably longer.

In those regions, not only equipment, but also expertise, education, finance, law, lifestyles and other sociocultural systems both support and rely on fossil-fuel-based energy infrastructure. These, too, must adapt to change.

Some – especially the huge coal, oil, and natural gas industries – stand to lose a lot in such a transition. These historical commitments produce determined political resistance, as we see in the United States today.

Tough problems, including competition from fossil fuels

Energy infrastructure, of course, isn’t the only challenge. Indeed, decarbonization is fraught with enormous technical difficulties.

Insulating older buildings, improving fuel economy, and installing more efficient electrical gear are by far the most cost-effective ways to reduce carbon footprints, but these fail to excite people and can’t be easily flaunted.

Currently and for the foreseeable future, no energy source can be truly “zero carbon,” since fossil-fuel-powered devices are used to mine raw materials and to transport finished products, including renewable power systems such as solar panels or wind turbines.

Electricity is a wonderfully flexible form of energy, but storing it remains a conundrum; today’s best battery technologies require lithium, a relatively rare element. And despite intensive research, batteries remain expensive, heavy, and slow to recharge.

Rare earths – extremely rare elements found in only a few places – are currently critical to wind turbines and other renewable technologies, creating legitimate worries about future supplies.

Finally, in many circumstances, burning oil, coal and natural gas will remain the easiest and least expensive means of providing power.

For example, major transport modes such as transcontinental shipping, air travel and long-distance trucking remain very difficult to convert to renewable power sources. Biofuels offer one possibility for reducing the carbon footprint of these transport systems, but many plants grown as biofuel feedstocks compete with food crops and/or wild lands.

Still, the ultimate goal of providing all the world’s energy needs from renewable sources does appear to be feasible in principle. A major recent study found that those needs could readily be met with only wind, water and solar power, at consumer prices no higher than current energy systems.

Infrastructures as social commitments

Where does all this leave us in the run-up to Paris?

Accelerated decarbonization can’t be achieved by technical innovation alone, because infrastructures aren’t just technological systems. They represent complex webs of mutually reinforcing financial, social and political commitments, each with long histories and entrenched defenders. For this reason, major change will require substantial cultural shifts and political struggle.

On the cultural side, one slogan that could inspire accelerated change may be “energy democracy”: the notion that people can and should produce their own energy, on small scales, at home and elsewhere too.

New construction techniques and the low cost of solar panels have brought “net-zero” homes (which produce as much energy as their inhabitants consume) within the financial reach of ordinary people. These are one component of Germany’s ambitious Energiewende, or the country’s energy transition away from fossil fuels.

In infrastructure history, the take-off phase has often accelerated when new technologies moved out of large corporate and government settings for adoption by individuals and smaller businesses. Electric power in the early 20th century and internet use in the 1990s are cases in point.

In Queensland, Australia, over 20% of homes now generate their own electricity. This example suggests the possibility that a “tipping point” toward a new social norm of rooftop solar has already been reached in some places. In fact, a recent study found that the best indicator of whether a given homeowner adds solar panels to a house is whether a neighbor already had them.

Pieces of a puzzle

Many different policy approaches could help, both to reduce consumption and to increase the share of renewables in the energy mix.

Building codes could be gradually adjusted to require that every rooftop generate energy, and/or ratcheted up to LEED “green building” standards. A gradually increasing carbon tax or cap-and-trade system (already in place in some nations) would spur innovation while reducing fossil fuel consumption and promoting the use of renewables.

In the United States, at least, eliminating the many subsidies that currently flow to fossil fuels may prove politically easier than taxing carbon, yet send a similar price signal.

The Obama administration’s Clean Power Plan to reduce carbon output from coal-fired power plants represents the right kind of policy change. It kicks in gradually to give utility companies time to adjust and still-nascent carbon capture and storage systems time to develop. The EPA estimates that the plan will generate $20 billion in climate change benefits, as well as health benefits of $14-$34 billion, while costing much less than those benefits.

Because greenhouse gases come from many sources, including agriculture, animal husbandry, refrigerants and deforestation (to name just a few), there’s a lot more to decarbonizing the global economy than converting to renewable energy sources.

This article has addressed only one piece of that very large puzzle, but an infrastructure perspective may help us think about those problems as well.

Infrastructure history tells us that decarbonization won’t happen nearly as fast as we might like it to. But it also shows that there are ways to accelerate the change, and that there are tipping-point moments when a lot can happen very fast.

We may be on the brink of such a moment. As the Paris climate negotiations develop, look for inspiration in the many national commitments to push this process forward.

The Conversation

Paul N Edwards, Professor of Information and History, University of Michigan

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: D Sharon Pruitt via wikimedia.org

Face to Face with an Astounding Fish

The Mola mola is an astounding fish that has not really been researched enough, perhaps because of its eccentric looks and habits. Ocean scientists refer to it as the “swimming head,” which is an apt description of a fish that can weigh as much as an adult rhinoceros and is generally seen lounging at the surface of the ocean – but that description overlooks the rather special evolutionary niche this fish occupies.

According to an excellent article on the Scientific American website, these swimming heads – also known as sunfish – are, from an evolutionary standpoint, somewhere in the middle between bony fish and sharks:

Biologists have affectionately described Mola, or ocean sunfish, as a “a swimming head.” And while they seem to just float aimlessly at the surface, scientists are finding that these fish — which occupy a crucial evolutionary link in the fish family— are actually warming up after epic daily treks into deep water.

These fish live off the California coast and around the world in temperate and tropical areas. But many people have never heard of them, let alone seen one.

Mola mola are not endangered and not eaten in the United States. In fact, females can produce up to 300 million eggs, more than any other bony fish. But the hapless fish ends up tangled in fishing nets, as bycatch for more valuable target species. They make up the largest bycatch component (29 percent) in the California drift-gillnet swordfish fishery.

So why does it matter if Mola mola are caught in mesh nets?

Mola are pelagic, which means they live in the open ocean. Like humans, and many other fish, they have a bony internal skeleton. Sharks and rays, however, have a cartilaginous skeleton. According to some scientists, mola could provide a missing link to understanding their open-ocean neighbors, like sharks.

“Sunfish are one of the most advanced bony fish, but they have a lot in common with cartilaginous fish. What they have in common may be adaptive to pelagic life and to study it may lead to solve evolution of pelagic species,” says Itsumi Nakamura, a biologist at the University of Tokyo.

Mola have lost the calcium carbonate that makes their skeleton hard, so it’s more like a shark skeleton, says Christopher Lowe, a professor of marine biology at Cal State Long Beach. Also like sharks, they lack a swim bladder, which helps most bony fish stay afloat. Being lighter means using less energy, which is important when you are searching for hard-to-find, low-calorie dinner items, common for deep-sea eaters, he says.

Nakamura has studied the fish since 2009 and recently revealed mysteries of their strange behavior. Mola are often seen just lounging at the surface, he says. But new research published in the Journal of Animal Ecology by Nakamura and his team has found that mola actually make daily treks to the deep sea, more than 2,600 feet beneath the surface — a place reserved for creatures like giant squid and diving sperm whales.

A researcher from the Shark Lab at Cal State Long Beach films a mola diving. Credit: Cal State Long Beach Shark Lab

Why do they dive so deep? The article continues:

They are eating jellyfish-like creatures, called siphonophores, says Nakamura. And it turns out, after observing the camera from one mola, they might be dining on the most nutritious part of the animals – their sex organs.

“Of course I was surprised, because it is very novel that they eat only calorie rich parts of the jellyfish,” he says.

Relaxing at the surface has another benefit for the sunfish; it’s a trip to the spa. Mola line up at cleaning stations, while smaller fish peck parasites from their body.  For a more thorough cleaning, mola swim to the surface and seagulls jab through their flesh, feasting on parasitic worms.

There’s definitely more to study about these strange fish. For now, additional details can be found in the excellent article on the Scientific American website.

 

Source: ScientificAmerican.com – “Face to Face With the Ugly, Marvelous Mola Mola”

Featured Image Credit: Cal State Long Beach Shark Lab

This is Definitely NOT Your Usual Gold Nugget

Although it looks like a gold nugget that a wildcat prospector would die for, this chunk of gold would blow his mind. It turns out that scientists have created a new type of foam from real gold. It is the lightest gold nugget ever created.

Raffaele Mezzenga, professor of food and soft materials at ETH Zurich, led the team that produced the foam, which is a 3D gold mesh that consists mostly of pores.

“The so-called aerogel is a thousand times lighter than conventional gold alloys. It is lighter than water and almost as light as air,” says Mezzenga.

The new gold form can hardly be differentiated from conventional gold with the naked eye—the aerogel even has a metallic shine. But in contrast to its conventional form, it is soft and malleable by hand. It consists of 98 parts air and only two parts of solid material. Of this solid material, more than four-fifths are gold and less than one-fifth is milk protein fibrils. This corresponds to around 20 carat gold.

Above, foam of amyloid protein filaments without gold (left), with gold microparticles (middle), and gold nanoparticles (right). (Credit: Nyström G et al. Advanced Materials 2015)

HOW TO DRY SUCH A DELICATE MATERIAL

The scientists created the porous material by first heating milk proteins to produce nanometer-fine protein fibers, so-called amyloid fibrils, which they then placed in a solution of gold salt. The protein fibers interlaced themselves into a basic structure along which the gold simultaneously crystallized into small particles. This resulted in a gel-like gold fiber network.

“One of the big challenges was how to dry this fine network without destroying it,” explains Gustav Nyström, postdoctoral researcher in Mezzenga’s group and first author of the study in the journal Advanced Materials. As air-drying could damage the fine gold structure, the scientists opted for a gentle and laborious drying process using carbon dioxide.

This method, in which the gold particles are crystallized directly during manufacture of the aerogel protein structure (and not, for example, added to an existing scaffold), is new. The method’s biggest advantage is that it makes it easy to obtain a homogeneous gold aerogel, perfectly mimicking gold alloys.

The manufacturing technique also offers scientists numerous possibilities to deliberately influence the properties of gold in a simple manner.

“The optical properties of gold depend strongly on the size and shape of the gold particles,” says Nyström. “Therefore we can even change the color of the material. When we change the reaction conditions in order that the gold doesn’t crystallize into microparticles but rather smaller nanoparticles, it results in a dark-red gold.”

WATCHES, SENSORS, AND CATALYSTS

The new material could be used in many of the applications where gold is currently being used, says Mezzenga. The substance’s properties, including its lighter weight, smaller material requirement, and porous structure, have their advantages.

Applications in watches and jewelry are only one possibility. Another application is chemical catalysis: since the highly porous material has a huge surface, chemical reactions that depend on the presence of gold can take place efficiently. The material could also be used in applications where light is absorbed or reflected.

Finally, the scientists have also shown how it becomes possible to manufacture pressure sensors with it. “At normal atmospheric pressure the individual gold particles in the material do not touch, and the gold aerogel does not conduct electricity,” explains Mezzenga. “But when the pressure is increased, the material gets compressed and the particles begin to touch, making the material conductive.”

 

Republished from Futurity.org as a derivative work under the Creative Commons Attribution 4.0 International license. Original article posted to Futurity.

Featured Photo Credit: Gustav Nyström, Raffaele Mezzenga/ETH Zurich


Astronomers See Star Pulled Into Black Hole and What Happened Next Amazed Them

Just about a year ago, astronomers from Ohio State University using an optical telescope in Hawaii discovered a star that was being pulled from its normal path and heading for a supermassive black hole. Because of that exciting find, scientists have now for the first time witnessed a black hole swallow a star and then, well, belch!  When a black hole burps, it quickly ejects a flare of stellar debris moving at nearly light speed, a very rare and dazzling event.

Astrophysicists tracked the star—about the size of our sun—as it shifted from its customary path, slipped into the gravitational pull of a supermassive black hole, and was sucked in, says Sjoert van Velzen, a Hubble fellow at Johns Hopkins University.

“These events are extremely rare,” says van Velzen, lead author of the study published in the journal Science. “It’s the first time we see everything from the stellar destruction followed by the launch of a conical outflow, also called a jet, and we watched it unfold over several months.”

Artist’s conception of a star drawn toward a black hole and destroyed, and the black hole soon thereafter emitting a “jet” of plasma from debris left by the star’s destruction. (Credit: Modified from an original image by Amadeo Bachar)

 

Black holes are areas of space so dense that irresistible gravitational force stops the escape of matter, gas, and even light, rendering them invisible and creating the effect of a void in the fabric of space.

Astrophysicists had predicted that when a black hole is force-fed a large amount of gas, in this case a whole destroyed star, a fast-moving jet of wreckage in the form of plasma (elementary particles in a magnetic field) can escape from near the black hole rim, or “event horizon.” This study suggests this prediction was correct, the scientists say.

“Previous efforts to find evidence for these jets, including my own, were late to the game,” adds van Velzen, who led the analysis and coordinated the efforts of 13 other scientists in the United States, the Netherlands, Great Britain, and Australia.

Supermassive black holes, the largest of black holes, are believed to exist at the center of most massive galaxies. This particular one lies at the lighter end of the supermassive black hole spectrum, at only about a million times the mass of our sun, but still packing the force to gobble a star.


Global Warming ‘Pause’ Was a Myth All Along, Says New Study

Stephan Lewandowsky, University of Bristol

The idea that global warming has “stopped” is a contrarian talking point that dates back to at least 2006. This framing was first created on blogs, then picked up by segments of the media – and it ultimately found entry into the scientific literature itself. There are now numerous peer-reviewed articles that address a presumed recent “pause” or “hiatus” in global warming, including the latest IPCC report.

So did global warming really pause, stop, or enter a hiatus? At least six academic studies have been published in 2015 that argue against the existence of a pause or hiatus, including three that were authored by me and colleagues James Risbey of CSIRO in Hobart, Tasmania, and Naomi Oreskes of Harvard University.

Our most recent paper has just been published in Nature’s open-access journal Scientific Reports and provides further evidence against the pause.

Pause not backed up by data

First, we analysed the research literature on global temperature variation over the recent period. This turns out to be crucial because research on the pause has addressed – and often conflated – several distinct questions: some asked whether there is a pause or hiatus in warming, others asked whether it slowed compared to the long-term trend and yet others have examined whether warming has lagged behind expectations derived from climate models.

These are all distinct questions and involve different data and different statistical hypotheses. Unnecessary confusion has resulted because they were frequently conflated under the blanket labels of pause or hiatus.

New NOAA data released earlier this year confirmed there had been no pause. The author’s latest study used NASA’s GISTEMP data and obtained the same conclusions.
NOAA

To reduce the confusion, we were exclusively concerned with the first question: is there, or has there recently been, a pause or hiatus in warming? It is this question – and only this question – that we answer with a clear and unambiguous “no”.

No one can agree when the pause started

We considered 40 recent peer-reviewed articles on the so-called pause and inferred what the authors considered to be its onset year. There was a spread of about a decade (1993-2003) between the various papers. Thus, rather than being consensually defined, the pause appears to be a diffuse phenomenon whose presumed onset is anywhere during a ten-year window.

Given that the average presumed duration of the pause in the same set of articles is only 13.5 years, this is of concern: it is difficult to see how scientists could be talking about the same phenomenon when they talked about short trends that commenced up to a decade apart.

This concern was amplified in our third point: the pauses in the literature are by no means consistently extreme or unusual when compared with all possible trends. If we take the past three decades, during which temperatures increased by 0.6℃, we would have been “in a pause” between 30% and 40% of the time, using the definitions in the literature.

In other words, academic research on the pause is typically not talking about an actual pause but, at best, about a fluctuation in warming rate that is towards the lower end of the various temperature trends over recent decades.
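
The 30-40% figure comes from comparing short-window trends against the full record. Here is a toy version of that calculation, with an assumed warming rate, noise level, window length and “pause” cutoff chosen purely for illustration (the published analyses use real temperature records and the definitions found in the literature):

```python
# Toy rolling-trend calculation: on a synthetic series that warms steadily
# with year-to-year noise, count short windows whose trend falls below a
# "pause" cutoff. All parameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1985, 2015)
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)  # °C

WINDOW = 10           # assumed "pause" window length, years
PAUSE_CUTOFF = 0.005  # assumed trend (°C/yr) below which a window looks paused

trends = [np.polyfit(years[i:i + WINDOW], temps[i:i + WINDOW], 1)[0]
          for i in range(years.size - WINDOW + 1)]
paused = sum(t < PAUSE_CUTOFF for t in trends) / len(trends)
print(f"fraction of {WINDOW}-year windows that look 'paused': {paused:.0%}")
```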

How the pause became a meme

If there has been no pause, why then did the recent period attract so much research attention?

One reason is a matter of semantics. Many academic studies addressed not the absence of warming but a presumed discrepancy between climate models and observations. Those articles were scientifically valuable (we even wrote one ourselves), but we do not believe that those articles should have been framed in the language of a pause: the relationship between models (what was expected to happen) and observations (what actually happened) is a completely different issue from the question about whether or not global warming has paused.

A second reason is that the incessant challenge of climate science by highly vocal contrarians and Merchants of Doubt may have amplified scientists’ natural tendency to be reticent over reporting the most dramatic risks they are concerned about.

We explored the possible underlying mechanisms for this in an article earlier this year, which suggested climate denial had seeped into the scientific community. Scientists have unwittingly been influenced by a linguistic frame that originated outside the scientific community and by accepting the word pause they have subtly reframed their own research.

Research directed towards the pause has clearly yielded interesting insights into medium-term climate variability. My colleagues and I do not fault that research at all. Except that the research was not about a (non-existent) pause – it was about a routine fluctuation in warming rate. With 2015 being virtually certain to be another hottest year on record, this routine fluctuation has likely already come to an end.

The Conversation

Stephan Lewandowsky, Chair of Cognitive Psychology, University of Bristol

This article was originally published on The Conversation. Read the original article.