Beyond invisibility: engineering light with metamaterials

Thomas Vandervelde, Tufts University

Since ancient times, people have experimented with light, cherishing shiny metals like gold and cutting gemstones to enhance their sparkle. Today we are far more advanced in how we work with this ubiquitous form of energy.

Starting with 19th-century experimentation, we began to explore controlling how light interacts with matter.

Combining multiple materials in complex structures let us use light in new ways. We crafted lenses and mirrors to make telescopes to peer out into the universe, and microscopes to explore the world of the small.

Today this work continues, on a much more detailed level. My own research into what are called “metamaterials” explores how we can construct materials in ways that do amazing – and previously impossible – things.

We can build metamaterials to respond in particular ways to certain frequencies of light. For example, we can create a smart filter for infrared cameras that allows the user to easily determine whether the white powder in an envelope is baking soda or anthrax, determine whether a skin melanoma is benign or malignant, and find the sewer pipe in your basement without breaking through the concrete. These are just a few applications for one device; metamaterials in general are far more powerful.

Working with light

What scientists call “light” is not just what we can see, but all electromagnetic radiation – from low-frequency radio waves to high-frequency X-rays.

Normally, light moves through a material more slowly than it does through a vacuum. For example, visible light travels through glass about 33 percent slower than it does through air. A material’s fundamental resistance to the transmission of light at a particular frequency is called its “index of refraction.” While this number changes with the light’s frequency, it starts at 1 – the index of refraction for a vacuum – and goes up. The higher the index, the slower the light moves, and the more its path bends. This can be seen when looking at a straw in a cup of water (see below) and is the basis of how we make lenses for eyeglasses, telescopes and other optics.

Scientists have long wondered if they could make a material with a negative index of refraction at any given frequency. That would mean, for example, that light would bend in the opposite direction when entering the material, allowing for new types of lenses to be made. Nothing in nature fits into this category. The properties of such a material – were it to exist – were predicted by Victor Veselago in 1967.
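To see what “bending in the opposite direction” means in numbers, here is a minimal Python sketch (an illustration added for this article, not taken from Veselago’s work) that applies Snell’s law, n1·sin(θ1) = n2·sin(θ2), first for ordinary glass and then for a hypothetical negative-index material; the function name and index values are simply chosen for the example.

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Angle of the refracted ray (degrees) when light crosses from a medium
    with index n1 into one with index n2, from Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    theta1 = math.radians(incidence_deg)
    sin_theta2 = n1 * math.sin(theta1) / n2
    return math.degrees(math.asin(sin_theta2))

# Air (n ~ 1.0) into ordinary glass (n ~ 1.5): the ray bends toward the normal.
print(refraction_angle(1.0, 1.5, 45.0))    # about 28.1 degrees

# Air into a hypothetical negative-index material (n ~ -1.5): the angle comes out
# negative, meaning the ray is bent to the opposite side of the normal.
print(refraction_angle(1.0, -1.5, 45.0))   # about -28.1 degrees
```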

These odd materials have properties that look very strange compared with our everyday experiences. In the picture below, we see two cups of water, each with a straw in it. The picture on the left is what happens normally – the section of the straw in the water appears disconnected from the part of the straw that is in the air. The image is displaced because air and water refract light differently.

The image on the right indicates what the straw would look like if the fluid were a material with a negative index of refraction. Since the light bends in the opposite direction, the image is reversed, creating the observed illusion.

At left: normal refraction. At right: with simulated negative refraction. Water glass with straw (normal) from shutterstock.com

While Veselago could imagine these materials in the late 1960s, he could not conceive of a way to create them. It took an additional 30 years before John Pendry published papers in 1996, 1998 and 1999 describing how to make a composite man-made material, which he called a metamaterial.

An early metamaterial using repeating elements of copper split-rings and copper wires. D. R. Smith et al., Left-handed Metamaterials, in Photonic Crystals and Light Localization, ed. C. M. Soukoulis (Kluwer, Netherlands, 2000)., CC BY-ND

Making metamaterials

This work was followed up experimentally by David R. Smith’s group in 2000, which created a metamaterial using copper split-rings on circuit boards and lengths of copper wires as repeating elements. The picture above shows one such example produced by his group. The size and shape of the split-rings and copper posts determine what frequency of light the metamaterial is tuned to. The combination of these components interacts with the incident light, creating a region with a fully engineered effective index of refraction.

At present, we are only able to construct metamaterials that manage interactions with very specific parts of the electromagnetic spectrum.


The electromagnetic spectrum, showing all types of light, including the narrow band of visible light.
Philip Ronan, CC BY-SA

Smith’s group worked initially in the microwave portion of the spectrum, because longer wavelengths make metamaterial construction easier: multiple copies of the split-rings and pins must fit into the space of a single wavelength of the light. As researchers move to shorter wavelengths, the metamaterial components must become much smaller, which makes them far more challenging to fabricate.
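For a rough sense of the scale involved (the frequencies below are generic examples, not the specific values used in any particular experiment), one free-space wavelength is simply the speed of light divided by the frequency:

```python
# One free-space wavelength at several example frequencies, lambda = c / f.
# Illustrative numbers only.
c = 3.0e8  # speed of light in m/s

for label, freq_hz in [("10 GHz (microwave)", 10e9),
                       ("30 THz (mid-infrared)", 30e12),
                       ("600 THz (visible, green)", 600e12)]:
    wavelength_um = c / freq_hz * 1e6
    print(f"{label}: about {wavelength_um:.1f} micrometers")

# 10 GHz  -> ~30,000 micrometers (3 cm): split-rings can be etched on circuit boards
# 30 THz  -> ~10 micrometers: features require microfabrication
# 600 THz -> ~0.5 micrometers: features push toward nanometer scales
```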

Since the first experiments, multiple research groups have made metamaterials that work in the infrared; some are skirting the fringe of the visible portion of the spectrum. For these short wavelengths, circuit boards, copper wires and pins are far too large. Instead the structures have to be built with micro- and nano-fabrication techniques similar to those used to make computer chips.

Creating ‘invisibility’

Soon after the first metamaterials were fabricated, researchers began engineering applications for which they would be useful. One application that got a lot of press was the creation of an “invisibility cloak.”

Normally, if a microwave radar were aimed at an object, some of the radiation would be absorbed and some would reflect off it. Sensors can detect those disturbances and reconstruct what the object must have looked like. If the object is surrounded by a metamaterial cloak, the radar signal bends around the object, neither absorbed nor reflected – as if the object were never there.

By creating a metamaterial layer on the surface of an object, you can change what happens to the light that hits the object. Why is this important? When you look at a still pool of water, it is not surprising to see your reflection. When you point a flashlight at a pond at night, some of that light beam bounces off onto the trees beyond.

Now imagine you could coat the surface of that pond with a metamaterial that worked for all the visible spectrum. That would remove all reflection – you wouldn’t see your own reflection, nor any light bouncing into the woods.

This type of control is very useful for determining specifically what type of light can enter or exit a material or a device. For example, solar cells could be coated with metamaterials that would admit only specific (e.g., visible) frequencies of light for conversion to electricity, and would reflect all other light to another device that collects the remaining energy as heat.

The future of wave engineering

Engineers are now creating metamaterials with what is called a dynamic response, meaning their properties vary depending on how much electricity is passing through them or what light is aimed at them. For example, a dynamic metamaterial filter might allow passage of light only in the near infrared, until electricity is applied, at which point it lets through only mid-infrared light. This ability to “tune” the responsiveness of metamaterials has great potential for future applications, including uses we can’t yet imagine.

The amazing thing about these possibilities for metamaterials’ interaction with light is that the principle works much more broadly. The same mathematics that predicts the structures needed to produce these effects for light can be applied to the interaction of materials with any type of wave.

A group in Germany has successfully created a thermal cloak, preventing an area from heating by bending the heat flow around it – just as an invisibility cloak bends light. The principle has also been used for sound waves and has even been discussed for seismic vibrations. That opens the potential for making a building “invisible” to earthquakes! We are only beginning to discover how else we might use metamaterials and their underlying principles.

Thomas Vandervelde, Associate Professor of Electrical and Computer Engineering, Tufts University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Scientists Create Brain-Cancer Killing Stem Cells from Skin Cells

For the first time, scientists have turned skin cells into cancer-hunting stem cells that destroy brain tumors known as glioblastoma. The discovery could offer, for the first time in more than 30 years, a new and more effective treatment for the deadly disease.

The technique builds upon the newest version of the Nobel Prize-winning technology from 2007, which allowed researchers to turn skin cells into embryonic-like stem cells. Researchers hailed the possibilities for use in regenerative medicine and drug screening. Now, researchers have found a new use: killing brain cancer.

“Patients desperately need a better standard of care,” says Shawn Hingtgen, assistant professor in the Eshelman School of Pharmacy at the University of North Carolina at Chapel Hill.

The survival rate beyond two years for a patient with a glioblastoma is 30 percent because the cancer is so difficult to treat. Even if a surgeon removes most of the tumor, it’s nearly impossible to get the invasive, cancerous tendrils that spread deeper into the brain, and the remnants inevitably grow back. Most patients die within a year and a half of their diagnosis.

Researchers believe that developing a new personalized treatment for glioblastoma that starts with a patient’s own skin cells could improve those statistics, with a goal of getting rid of the cancerous tendrils, effectively killing the glioblastoma.

For the new study, published in the journal Nature Communications, researchers reprogrammed skin cells known as fibroblasts—which produce collagen and connective tissue—to become induced neural stem cells. Working with mice, the team showed that these neural stem cells have an innate ability to move throughout the brain and home in on and kill any remaining cancer cells. The team also showed that the stem cells could be engineered to produce a tumor-killing protein, adding another blow to the cancer.

Depending on the type of tumor, the researchers increased survival time of the mice by 160 to 220 percent. Next steps will focus on human stem cells and testing more effective anti-cancer drugs that can be loaded into the tumor-seeking neural stem cells.

“Our work represents the newest evolution of the stem-cell technology that won the Nobel Prize in 2012,” Hingtgen says. “We wanted to find out if these induced neural stem cells would home in on cancer cells and whether they could be used to deliver a therapeutic agent. This is the first time this direct reprogramming technology has been used to treat cancer.”

The researchers are also working to improve the staying power of the stem cells within the surgical cavity. They discovered that the stem cells needed a physical matrix to support and organize them so that they would hang around long enough to seek out the cancerous tendrils.

“Without a structure like that, the stem cells wander off too quickly to do any good,” says Hingtgen, who reported these findings in the journal Biomaterials.

For that study, researchers added stem cells to an FDA-approved fibrin sealant commonly used as surgical glue. The physical matrix it creates tripled the retention of stem cells in the surgical cavity, providing further support for the applicability and strength of the technique.


Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.

Featured Photo Credit: Jenny Mealing/flickr, CC BY

Now, Check Out:

The surprising link between postwar suburban development and today’s inner-city lead poisoning

Leif Fredrickson, University of Virginia

The Flint water crisis and the sad story of Freddie Gray’s lead poisoning have catalyzed a broader discussion about lead poisoning in the United States. What are the risks? Who is most vulnerable? Who is responsible?

Lead is an enormous and pervasive threat to public health. Almost any level of exposure causes permanent cognitive problems in children. And there are many sources. Ten million water service lines nationwide contain lead. Some 37 million U.S. homes contain lead-based paint somewhere in the building. Soils in many areas are contaminated with lead that was added to gasoline and emitted from car exhaust.

But the risk is not evenly distributed. Some Americans face a “triple whammy” of increased risk based on poverty, race, and place. Evidence dating back to the 1970s has shown that lead poisoning rates are higher in inner cities and low-income and minority neighborhoods than in white, affluent, and suburban neighborhoods.

And although children’s blood lead levels have fallen significantly in recent decades, these disparities still exist. My dissertation research shows that government-supported suburban development and racial segregation after World War II contributed to lead poisoning by concentrating minority families in substandard urban housing.

An urban epidemic

Humans have used lead for thousands of years in products ranging from ceramic glazes to cosmetics. Exposure increased in the industrial era. Lead piping and paint came into wide use in the 19th century, followed by lead batteries and leaded gasoline in the 1920s.

Health experts knew that lead was toxic, but childhood lead poisoning did not become a sustained public health concern until the second half of the twentieth century, due in part to obstruction from the lead industry. After World War II, child lead poisoning cases spiked in many cities, especially among low-income African-Americans. In Baltimore, child lead poisoning cases rose from an average of 12 per year between 1936 and 1945 to 77 cases in 1951 and 133 cases in 1958.

Lead poisoning cases also increased in Cincinnati and other cities in the 1950s and ’60s. Experts identified a key source: peeling and flaking lead-based paint. The victims were mainly from poor, minority families in deteriorating inner-city neighborhoods.

Lead-based paint was available in the United States until 1978, and was widely used in public housing because of its durability. Thester11/Wikipedia, CC BY

One obvious solution would have been to find better housing – and indeed, during this period millions of Americans were moving from cities to suburbs. But discriminatory government policies effectively excluded minority families from buying homes in suburban neighborhoods, leaving them trapped in cities, where a vicious cycle of deterioration and disinvestment exacerbated lead hazards.

The role of mortgages and highways

Suburbanization and home ownership in America exploded after World War II. Many urban scholars identify federal housing and highway policies as the most important drivers of 20th-century suburbanization.

One key agency, the Federal Housing Administration (FHA), was created during the Great Depression to make homeownership more feasible by offering federal insurance for home mortgages. FHA loans favored new suburban housing, especially from the 1930s to the 1960s. Agency guidelines, such as those for minimum lot size, excluded many inner-city homes, such as Baltimore’s classic row houses. Other FHA guidelines and suggestions for neighborhoods – such as minimum setbacks and street widths – favored new suburban developments.

FHA appraisal standards warned against “older properties” and “adverse influences” on home value, including smoke, odor and traffic congestion. Until the late 1940s the agency considered “inharmonious” racial groups a housing finance risk.

After the Supreme Court declared racial covenants legally unenforceable in 1948, the FHA moderated its policies. But for the next decade it made little effort to curb housing discrimination, with some of its major administrators continuing to defend racial segregation.

Not surprisingly, the vast majority of FHA loans went to single-family, new homes in the suburbs. According to the U.S. Commission on Civil Rights, less than two percent of FHA loans issued from 1947 through 1959 went to African-Americans.


Buyers line up to purchase homes in Levittown, NY, the archetypal postwar suburb, built between 1947 and 1951. Until 1948, contracts for Levittown houses stated that the homes could not be owned or used by non-Caucasians.
Mark Mathosian/Flickr, CC BY-NC-SA

Federal transportation policy also spurred and shaped post-war suburbanization. In 1956 Congress enacted the Interstate Highway Act, which was designed to ease traffic congestion. The act authorized billions of dollars to complete about 42,000 miles of highways, half of which were to go through cities.

The proliferation of interstates and automobiles made downtowns increasingly obsolete and furthered movement to the suburbs. According to one estimate, each highway built through a city reduced the city’s population by eighteen percent.

And suburban automobile commuting contributed directly to urban lead poisoning. Inner city residents absorbed the bulk of lead gas pollution from commuters who converged on cities daily. Lead gas exhaust contaminated soil in city neighborhoods.

White flight and urban blight

As black populations in cities increased, African Americans began moving into formerly all-white neighborhoods. “White flight” followed: panicked white homeowners moved away. Often the cycle was inflamed by “blockbusters,” people who used the threat of integration to get white homeowners to sell for low prices.

Real estate speculators who acquired these cheap properties sold some of them (at inflated prices) to minority buyers. Many used highly exploitative contracts. Black homeowners had to make high interest payments, leaving them with little money for maintenance.

Conditions were even worse for black renters. Slumlords often neglected maintenance and tax payments on their properties. Even when city health codes targeted lead paint, as in New York and Baltimore, landlords milking properties for profit often failed to comply.

Disinvestment in inner city housing became a self-perpetuating cycle. A 1975 study for the U.S. Department of Housing and Urban Development concluded that landlords who had low-income renters and few financing options scrimped on maintenance, furthering housing decline. Eventually landlords abandoned their rentals, which led to further neighborhood disinvestment.

Reinvesting in cities

Cleaning up lead contamination is expensive. One recent study estimates that it would cost US$1.2 billion to $11 billion to eliminate lead risks in one million high-risk homes (old buildings occupied by low-income families with children). But it also calculated that every dollar spent on lead paint clean-up would generate from $17 to $221 in benefits from earnings, tax revenue and reduced health and education costs.
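As a back-of-the-envelope check, dividing the study’s cost range by the one million homes, and multiplying the costs by the benefit ratios, gives a feel for the scale (this is simple arithmetic on the figures quoted above, nothing more):

```python
# Simple arithmetic on the cost-benefit figures quoted above.
homes = 1_000_000
cost_low, cost_high = 1.2e9, 11e9          # total cleanup cost, dollars
benefit_low, benefit_high = 17, 221        # benefit per dollar spent

print(f"Cost per home: ${cost_low / homes:,.0f} to ${cost_high / homes:,.0f}")
# -> $1,200 to $11,000 per home

print(f"Implied total benefit: ${cost_low * benefit_low / 1e9:.1f} billion "
      f"to ${cost_high * benefit_high / 1e9:,.0f} billion")
# -> roughly $20 billion to $2,431 billion, depending on how the two ranges are paired
```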

Government agencies and nonprofits have poured money into lead research, screening, and hazard reduction programs, but more is needed. The largest source, HUD’s Lead Hazards Control Program, has received $110 million annually from 2014 to 2016, only enough to fund lead abatement in about 8,800 homes yearly. Moreover, in the past few years, Congress has sought to cut HUD’s budget even further, by half in 2013 and by a third just in the past year. Fortunately, those proposals were not successful, but even without them, lead hazard reduction funding is woefully inadequate.

Can we find other sources? Since government housing policies have contributed to lead poisoning, perhaps we should tap them to fund cleanup. For example, the home mortgage interest tax deduction subsidizes new homes in the suburbs, and is particularly beneficial to more affluent homeowners.

Reforming the mortgage interest deduction, which costs the federal government $70 billion annually, could generate funding to remediate older rental houses. Some of this money could also be used to expand programs run by federal agencies, local governments and nonprofits that fund multiple improvements in low-income housing, including mold abatement and energy efficiency upgrades.

Another strategy would be to create a mechanism modeled on Property Assessed Clean Energy programs for lead paint removal. PACE programs allow state and local governments or other authorities to fund the upfront costs of energy efficiency upgrades, then attach the costs to the property. Owners pay the costs back over time through assessments added to their property tax bills.
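To illustrate how such an assessment might work (the remediation cost, interest rate and repayment term below are hypothetical, not drawn from any actual PACE program), the repayment is just a standard amortized annual payment added to the tax bill:

```python
# Hypothetical PACE-style assessment: repay an upfront remediation cost through
# a fixed annual charge added to the property tax bill.
def annual_assessment(upfront_cost, annual_rate, years):
    """Standard amortized payment: cost * r / (1 - (1 + r) ** -n)."""
    r = annual_rate
    return upfront_cost * r / (1 - (1 + r) ** -years)

# e.g. $10,000 of lead paint removal repaid over 20 years at 5 percent interest
print(f"${annual_assessment(10_000, 0.05, 20):,.0f} per year")  # about $802
```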

The United States has heavily subsidized suburban home ownership for more than 80 years. This policy helped many Americans, but hurt others, including families still trapped in homes where they are at risk of lead poisoning. Today, as many observers hail a U.S. urban renaissance, the persistence of lead poisoning highlights a continuing need for more investment in housing and health in our inner cities.

Leif Fredrickson, Ph.D. student, Mellon Pre-Doctoral Fellow, University of Virginia

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: Thester11 via Wikimedia Commons, CC BY

Now, Check Out:

Here’s How Scientists Could Detect Gravitational Waves Using Pulsars

Now that the existence of gravitational waves has moved from theory to reality, the confirmation is sending ripples through the astrophysics community, and additional technologies for detecting them are getting more attention. Some are new ideas and others are enhancements to or expansions of existing efforts, like the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project.

NANOGrav scientists assert that gravitational waves can be detected by monitoring a large set of pulsars across the sky. An article on the NASA website explains how they have been exploring this concept since 2007, and why the NANOGrav network needs to be expanded in order to successfully detect the gravitational waves that are “washing over Earth all the time” and the minute shifts in space-time that they produce:

The recent detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) came from two black holes, each about 30 times the mass of our sun, merging into one. Gravitational waves span a wide range of frequencies that require different technologies to detect. A new study from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has shown that low-frequency gravitational waves could soon be detectable by existing radio telescopes.

“Detecting this signal is possible if we are able to monitor a sufficiently large number of pulsars spread across the sky,” said Stephen Taylor, lead author of the paper published this week in The Astrophysical Journal Letters.  He is a postdoctoral researcher at NASA’s Jet Propulsion Laboratory, Pasadena, California. “The smoking gun will be seeing the same pattern of deviations in all of them.” Taylor and colleagues at JPL and the California Institute of Technology in Pasadena have been studying the best way to use pulsars to detect signals from low-frequency gravitational waves. Pulsars are highly magnetized neutron stars, the rapidly rotating cores of stars left behind when a massive star explodes as a supernova.

Einstein’s general theory of relativity predicts that gravitational waves — ripples in spacetime — emanate from accelerating massive objects. Nanohertz gravitational waves are emitted from pairs of supermassive black holes orbiting each other, each of which contains millions to billions of times more mass than those detected by LIGO. These black holes each originated at the center of separate galaxies that collided. They are slowly drawing closer together and will eventually merge to create a single super-sized black hole.

As they orbit each other, the black holes pull on the fabric of space and create a faint signal that travels outward in all directions, like a vibration in a spider’s web. When this vibration passes Earth, it jostles our planet slightly, causing it to shift with respect to distant pulsars. Gravitational waves formed by binary supermassive black holes take months or years to pass Earth and require many years of observations to detect.

“Galaxy mergers are common, and we think there are many galaxies harboring binary supermassive black holes that we should be able to detect,” said Joseph Lazio, one of Taylor’s co-authors, also based at JPL. “Pulsars will allow us to see these massive objects as they slowly spiral closer together.”

Once these gigantic black holes get very close to each other, the gravitational waves are too short to detect using pulsars. Space-based laser interferometers like eLISA, a mission being developed by the European Space Agency with NASA participation, would operate in the frequency band that can detect the signature of supermassive black holes merging. The LISA Pathfinder mission, which includes a stabilizing thruster system managed by JPL, is currently testing technologies necessary for the future eLISA mission.
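The “same pattern of deviations” Taylor describes has a well-known expected shape: for an isotropic background of gravitational waves, the correlation between the timing deviations of any two pulsars depends only on their angular separation on the sky, the so-called Hellings-Downs curve. Here is a minimal sketch of that expected correlation, offered purely as an illustration and not as code from the NANOGrav collaboration:

```python
import math

def hellings_downs(angle_deg):
    """Expected correlation of the timing deviations of two pulsars
    separated by angle_deg on the sky (Hellings & Downs, 1983)."""
    x = (1.0 - math.cos(math.radians(angle_deg))) / 2.0
    if x == 0.0:  # pulsars along the same line of sight
        return 0.5
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

for angle in (0, 30, 60, 90, 120, 180):
    print(f"{angle:3d} deg: {hellings_downs(angle):+.3f}")

# Nearby pulsar pairs should be positively correlated, pairs roughly 90 degrees
# apart weakly anti-correlated, and pairs on opposite sides of the sky positively
# correlated again -- that angular pattern is the "smoking gun" Taylor refers to.
```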

Detecting supermassive black holes is not a simple feat, however – read on to learn why that is and about the approach that the NANOGrav researchers are taking.


One-Year ISS Crew Returning to Earth on March 1: Complete Coverage Schedule

NASA Television will provide complete coverage Tuesday, March 1, as three crew members depart the International Space Station, including NASA astronaut Scott Kelly and cosmonaut Mikhail Kornienko of the Russian space agency Roscosmos – the station’s first one-year crew.

NASA Television coverage will begin at 3:10 p.m. EST on Monday, Feb. 29, when Kelly hands over command of the station to fellow NASA astronaut Tim Kopra. Complete coverage is as follows:

Monday, Feb. 29

  • 3:10 p.m. — Change of command ceremony (Scott Kelly hands over space station command to Tim Kopra)

Tuesday, March 1

  • 4:15 p.m. — Farewell and hatch closure coverage; hatch closure scheduled at 4:40 p.m.
  • 7:45 p.m. — Undocking coverage; undocking scheduled at 8:05 p.m.
  • 10:15 p.m. — Deorbit burn and landing coverage; deorbit burn scheduled at 10:34 p.m., with landing at 11:27 p.m. (10:27 a.m. on March 2, Kazakhstan time)

Wednesday, March 2

  • 1:30 a.m. — Video file of hatch closure, undocking and landing activities
NASA astronaut Scott Kelly and Russian cosmonaut Mikhail Kornienko marked their 300th consecutive day aboard the International Space Station on Jan. 21, 2016. The pair will land March 1 after spending a total of 340 days in space.
Credits: NASA

Twice the duration of a typical mission, Kelly and Kornienko’s station-record 340 days in space afforded researchers a rare opportunity to study the medical, physiological, psychological and performance challenges astronauts face during long-duration spaceflight.

The science driving the one-year mission, critical to informing the agency’s Journey to Mars, began a year before Kelly or Kornienko floated into the space station. Biological samples were collected and assessments were performed in order to establish baselines. Comparison samples were taken throughout their stay in space and will continue for a year or more after their return to Earth. Kelly’s identical twin brother, former NASA astronaut Mark Kelly, participated in parallel twin studies on Earth to provide scientists more bases for comparisons.

ISS Expedition 47 officially begins, under Kopra’s command, when the Soyuz carrying Kelly, Kornienko and Sergey Volkov undocks from the space station. Kopra, Yuri Malenchenko of Roscosmos and Tim Peake of ESA (European Space Agency) will operate the station as a three-person crew until the arrival of three new crew members in two weeks. NASA astronaut Jeff Williams and Roscosmos cosmonauts Alexey Ovchinin and Oleg Skripochka are scheduled to launch from Baikonur, Kazakhstan, on March 18 EST.

For NASA TV streaming video and schedule, visit:  http://www.nasa.gov/nasatv

Article content courtesy of NASA.

Now, Check Out:

How a 3D-Printed Dracula Orchid Helped Scientists Understand How it Tricks Bugs

Using a 3D printer, scientists have unlocked the mystery of how plants called Dracula orchids use mimicry to attract flies and ensure their survival.

The research, done in the last unlogged watershed in western Ecuador, is a win in the field of evolutionary biology and helps provide information that should benefit conservation efforts. The approach could also be applicable to studies of other plant-pollinator systems, researchers say.

A 3D-printed orchid (lower right) is being prepped in the lab. The two orchids in the cup are real. (Credit: Melinda Barnadas)

“Mimicry is one of the best examples of natural selection that we have,” says Barbara “Bitty” Roy, a biologist at the University of Oregon. “How mimicry evolves is a big question in evolutionary biology. In this case, there are about 150 species of these orchids. How are they pollinated? What sorts of connections are there? It’s a case where these orchids plug into an entire endangered system.”

Dracula orchids grow in Central America and the northwestern reaches of the Andes Mountains in South America. The Dracula label literally means “little dragon,” a reference to a face-like feature in the flowers. Some observers instead say they see Count Dracula, or the bat form that appears in vampire depictions in literature and the movies.

“Dracula orchids look and smell like mushrooms,” says Tobias Policha, an adjunct instructor and plant scientist in the Institute of Ecology and Evolution and lead author of the study that is published online in the journal New Phytologist. “We wanted to understand what it is about the flowers that is attractive to these mushroom-visiting flies.”

Continue reading to learn how the researchers used “chimera” orchids to discover how these orchids lure in the flies…


Take a chill pill if you want to avoid the flu this year

Alexander Chaitoff, Case Western Reserve University and Joshua Daniel Niforatos, Case Western Reserve University

Along with snow and frigid temperatures, the winter months also bring coughs, colds and the flu. Lower respiratory tract infections, the ones that cause feelings of chest congestion despite the deepest coughs, are one of the top 10 causes of death in the United States and around the world. In the U.S. the flu alone kills thousands of people each year.

Besides causing poor health, the flu and other respiratory illness also have a huge impact on the economy. A study published in 2007 suggests that flu epidemics account for over US$10 billion per year in direct medical care costs, while lost earnings due to illness account for an additional $16.3 billion per year. And that doesn’t cover run-of-the-mill colds and coughs. The total economic impact of non-influenza viral respiratory tract infections is estimated at another $40 billion per year.

Avoiding the flu or a cold in the winter months can be tough, but there is something you can do in addition to getting the flu shot and washing your hands.

Relax. There’s strong evidence that stress affects the immune system and can make you more susceptible to infections.

Big doses of stress can hurt your immune system

Health psychologist Andrew Baum defined stress as “a negative emotional experience accompanied by predictable biochemical, physiological, and behavioral changes that are directed toward adaptation.” Scientists can actually measure the body’s stress response – the actions the body takes to fight through arduous situations ranging from difficult life events to infections.

In most stress responses, the body produces chemicals called pro-inflammatory cytokines. They activate the immune system, and without them the body would not be able to fight off bacteria, viruses or fungi. Normally the stress response is helpful because it preps your body to deal with whatever challenge is coming. When the danger passes, this response is turned off with help from anti-inflammatory cytokines.

However, if the stress response cannot be turned off, or if there is an imbalance between pro-inflammatory and anti-inflammatory cytokines, the body can be damaged. This extra wear and tear due to the inflammation from a heightened stress response has been termed allostatic load. A high allostatic load has been associated with multiple chronic illnesses, such as cardiovascular disease and diabetes. This partly explains the focus on taking anti-inflammatory supplements to prevent or treat disease.

Short-term stress hurts too

An inappropriate stress response can do more than cause chronic illness down the road. It can also make you more susceptible to acute infections by suppressing the immune system.

For example, when mice are subjected to different environmental stressors, there is an increase in a molecule in their blood called corticosterone, which is known to have immunosuppressive effects on the body. This type of response is mirrored in research on humans. In a study of middle-aged and older women, stress from being instructed to complete a mental math or speech test was associated with higher levels of similar immunosuppressive molecules.


Sugar may be as damaging to the brain as extreme stress or abuse

Jayanthi Maniam, UNSW Australia and Margaret Morris, UNSW Australia

We all know that cola and lemonade aren’t great for our waistline or our dental health, but our new study on rats has shed light on just how much damage sugary drinks can also do to our brain.

The changes we observed to the region of the brain that controls emotional behaviour and cognitive function were more extensive than those caused by extreme early life stress.

It is known that adverse experiences early in life, such as extreme stress or abuse, increase the risk of poor mental health and psychiatric disorders later in life.

The number of traumatic events (accidents; witnessing an injury; bereavement; natural disasters; physical, sexual and emotional abuse; domestic violence and being a victim of crime) a child is exposed to is associated with elevated concentrations of the major stress hormone, cortisol.

There is also evidence that childhood maltreatment is associated with reduced brain volume and that these changes may be linked to anxiety.

What we found

Working with rats, we examined whether the impact of early life stress on the brain was exacerbated by drinking high volumes of sugary drinks after weaning. As females are more likely to experience adverse life events, we studied female Sprague-Dawley rats.

To model early life trauma or abuse, after rats were born half of the litters were exposed to limited nesting material from days two to nine after birth. They then returned to normal bedding until they were weaned. The limited nesting alters maternal behaviour and increases anxiety in the offspring later in life.

Sugar could be more damaging to the brain than trauma. from www.shutterstock.com

At weaning, half the rats were given unlimited access to low-fat chow and water to drink, while their sisters were given chow, water and a 25% sugar solution that they could choose to drink. Animals exposed to early life stress were smaller at weaning, but this difference disappeared over time. Rats consuming sugar in both groups (control and stress) ate more calories over the course of the experiment.

The rats were followed until they were 15 weeks old, and then their brains were examined. As we know that early life stress can impact mental health and function, we examined a part of the brain called the hippocampus, which is important for both memory and stress. Four groups of rats were studied – control (no stress), control rats drinking sugar, rats exposed to stress, and rats exposed to stress who drank sugar.


How Wii Golf Shows that Gaming Can Alter Real-World Skills

Practicing your golf swing using a motion-controlled video game could actually help you compete in the real world, a new study suggests.

Motion controllers require players to use their own bodies to control the movements of the video game’s avatar.

And researchers say the findings go way beyond putting. They say motion-controlled video games, as well as future virtual reality devices, such as Oculus Rift, are turning video games into simulations.

“It seems to us that we’ve crossed an evolutionary line in game history where video games are no longer just video games any more, they’ve become simulators,” says Edward Downs, an associate professor of communication at the University of Minnesota-Duluth. “These games are getting people up and physically rehearsing, or simulating motion, so we were trying to see if gaming goes beyond symbolic rehearsal and physically simulates an action closely enough that it will change or modify someone’s behavior.”

For the study, researchers recruited 161 college students and randomly divided them into three groups: one that would operate the motion-controlled game, one that would operate the symbolically controlled game, and a control group.

Most of the participants had a moderate level of experience with video games and motion-controlled video games. They had only limited knowledge of the Wii game used in the study.

After the video-game groups were finished playing the game, they were asked to putt balls from three different distances: 3 feet, 6 feet, and 9 feet. Their accuracy was then recorded. The control group was sent directly to the putting test after they filled out a questionnaire.

How far did the effect go? Continue reading to learn the fascinating results…


A beginner’s guide to sex differences in the brain

Donna Maney, Emory University

Asking whether there are sex differences in the human brain is a bit like asking whether coffee is good for you – scientists can’t seem to make up their minds about the answer. In 2013, for example, news stories proclaimed differences in the brain so dramatic that men and women “might almost be separate species.” Then in 2015, headlines announced that there are in fact no sex differences in the brain at all. Even as I write this, more findings of differences are coming out.

So which is it? Are there differences between men’s and women’s brains – or not?

What is a sex difference?

To clear up the confusion, we need to consider what the term “sex difference” really means in the scientific literature. To illustrate the concept, I’ve used a web-based tool I helped develop, SexDifference.org, to plot some actual data. The three graphs below show how measurements from a sample of people are distributed along a scale. Women are represented in pink, and men in blue. Most people are close to the average for their sex, so that’s the peak of each “bump.” People on the left or right side of the peak are below or above average, respectively, for their sex.

I’ve added individual data points for three hypothetical study subjects: Sue, Ann and Bob. Not real people, just examples. Their data points are superimposed on the larger data set of hundreds of people.
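For readers who want to reproduce this kind of plot themselves, here is a minimal Python sketch; the means, standard deviations and individual values are invented for illustration and are not the actual data behind SexDifference.org:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented, illustrative parameters: two overlapping normal distributions,
# one per sex, plus three hypothetical individuals like Sue, Ann and Bob.
x = np.linspace(0, 10, 400)

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

plt.plot(x, normal_pdf(x, 4.5, 1.0), color="pink", label="women")
plt.plot(x, normal_pdf(x, 5.5, 1.0), color="blue", label="men")
for name, value in [("Sue", 3.8), ("Ann", 5.2), ("Bob", 6.1)]:
    plt.axvline(value, linestyle="--", linewidth=0.8)
    plt.text(value, 0.42, name, ha="center")
plt.xlabel("measurement (arbitrary units)")
plt.ylabel("relative frequency")
plt.legend()
plt.show()
```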

Before we get into the brain, let’s look at a couple of familiar sex differences outside the brain. Many of us, if asked to describe how men’s bodies differ from women’s, would first mention the sex difference in external genitalia. The graph below depicts the number of nontransgender adults that have a “genital tubercle derivative” (clitoris or penis) of a given size.


Size of human genitalia. Data from Wallen & Lloyd, 2008.
Donna Maney, CC BY-ND

All of the women in this sample, including our hypothetical Sue and Ann, fall within a certain range. All of the men, including Bob, fall into a different range. With relatively rare exceptions, humans can be accurately categorized into sexes based on this measure.
