Sexual assault enters virtual reality

By Katherine Cross, City University of New York.

Although various forms of online sexual harassment have been with us since the dawn of the internet, recent news suggests that it’s moving into another dimension – the third, to be precise. Gropers are now finding a way to target women through the fully immersive headsets of virtual reality.

Helmet and hands: a virtual avatar. QuiVr/Steam

Writer Jordan Belamire recently wrote of her experience of virtual sexual assault. The man’s disembodied hands, in the “QuiVr” virtual reality archery game, simulated constant groping of Belamire’s virtual body – specifically, rubbing at her avatar’s chest – and chased her through the game world, heedless of her cries of “Stop!” over the game’s voice chat.

Some of the response – not least from the game’s developers – was encouraging. But the internet’s id manifested itself in the comments on stories about the incident, heaping imprecations, slander and abuse on Belamire. If we analyze the content of these comments, we gain insight into why these assaults – and online harassment more broadly – occur, and what might be done to stop them.

We’ll have to grapple with some of the most toxic parts of our communities, and find new ways of creating and enforcing social norms in all the virtual worlds we’re creating. As a scholar of online harassment, I know that most fundamentally, we must address the false belief that online harm isn’t real, because the internet itself isn’t real. When human beings are involved and interacting with each other, it’s very real indeed. And in VR, it’s even more so.

Into the pit of online comments

At the bottom of an emotional article written by QuiVr’s developers, apologizing for what happened to Belamire and promising reform, is the following comment: “You weren’t a victim of anything. The VR community has just become a victim of the outrage brigade.”

Another commenter adds: “here’s some advice for you. TURN OFF THE F—ING GAME YOU STUPID B–CH!”

A third writes, “I gotta say, you don’t have a frigging clue what sexual assault is if THIS is what you consider sexual assault.”

Several others, meanwhile, noted that Belamire writes romance novels and suggested she should “be above” the abuse, or claimed that she’s just seeking publicity. “She writes an adult lesbian romance novel and feels harassed by digital gloves,” mocks one commenter. ’Twas ever thus: If a woman evinces any sexual sensibility whatsoever, she must give blanket consent to any and all sexual contact.

What’s virtual, and what’s reality?

But by far the overriding theme of the angry comments is the accusation that Belamire made a mountain out of a molehill because her experience happened online. These were “floating hands” in a “virtual world” that she could easily turn off, or just “take off her headset” to escape from.

These outraged players never seem to ask why men do not worry about encountering handsy people with boundary issues when they play games, or why such people should determine who plays and who doesn’t. Yes, Belamire chose to play the game, but that doesn’t mean she signed up to be sexually assaulted.

These notions illustrate the core mentality of both the abuser and their legions of apologists in the world’s comment sections: What happens online is not real, therefore it’s all okay.

It’s not serious, except when it is

In this abuser-apologist world, people who complain about harassment are at fault themselves, and at times demonized as the actual problem. It’s an inherently contradictory idea: The “games aren’t real” argument doesn’t seem to dissuade angry commenters from taking Belamire’s complaints personally.

“Games are supposed to be a place to mentally get entirely away from this world, these rules, with a character in another one,” laments one commenter, arguing that anti-harassment efforts will interfere with his escapism. “Feminists basically want it to be a crime for men to even approach a woman in the street, and now they want to do the same in virtual reality?” says another.

Often, in a single comment, someone yells at Belamire for complaining about an “unreal” groping and then caterwauls about some forthcoming Orwellian regime in gaming. One commenter actually tells Belamire to turn off the game, right before likening the idea of tracking repeat offenders to the Third Reich. (One wonders why he doesn’t turn off his computer for a while, if her story so offends him.)

Möbius strip: a piece of paper with only one side. David Benbennick, CC BY-SA

He’s articulating a Möbius strip of thought, folding two contradictory notions into a single idea: The offending action wasn’t real and should be ignored, but any remedy would be real enough that we have to worry about the impending Nazification of our games and get very, very angry about it.

Online experiences are real ones

Video games are not just unreal playthings. The mediating interface of a game does not make abusive behavior between two or more real people any less abusive. Slurs are still slurs; unwanted sexual advances are still both unwanted and sexual. The addition of computer graphics, a game controller, or an unfashionable headset does not render human interaction unreal.

This interaction experience is as real as friends sitting on an actual couch together. HyacintheLuynes, CC BY-SA

In VR specifically we confront another contradiction. The entire selling point of VR is its unparalleled simulation of reality. It presents a physical, embodied experience that surrounds you, fills your senses, and is tactile in ways unlike any other video game.

This has been a holy grail of game design since the dawn of the industry: fooling a player’s body into feeling like it’s really in the game world. We should not be surprised if a simulated sexual assault, then, feels real enough in all the ways that matter.

This point was addressed head-on in a discussion about designing safer VR games at the Game Connect Asia Pacific conference in Melbourne in late October. One VR developer, Justine Colla, cofounder of the Alta VR studio, argued that the “visceral” nature of immersion in VR can give abusers more power. “Users retain memories in VR as if they experienced them in real life,” she said.

This, she said, combines with players’ inability to physically push an offender away to ensure that attackers have “all the power with none of the consequences.” Assaults feel real, and the target has no way to fight back.

We cannot have it both ways, touting VR’s realness while casting aspersions on people who complain of abuse in VR. Trying to do so would be laughable if the consequences weren’t so dire. Virtual reality is virtually real.

Game developers respond

Fortunately, QuiVr’s developers are modeling good behavior for the whole industry. They wrote a pointed article that explains why they not only believe Belamire but take personal responsibility for what happened to her. They also explain what steps they’re taking to improve the experience. Foremost among them is a move they call a “power gesture:”

putting your hands together, pulling both triggers, and pulling them apart as if you are creating a force field. No matter how you activate it, the effect is instantaneous and obvious – a ripple of force expands from you, dissolving any nearby player from view, at least from your perspective, and giving you a safety zone of personal space.

This is a bold step in the right direction. It not only provides an instant reprieve for harassment victims but allows them to actually embody their strength through a gesture that feels empowering. It’s an elegant solution, but this one solution may not work for every VR environment. We need something more: a change of mindset.
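The mechanic described above is easy to picture in code. Below is a minimal, hypothetical sketch of a client-side personal-space filter in Python; the `Avatar` class, the function names and the 2-meter bubble radius are illustrative assumptions, not QuiVr’s actual implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Avatar:
    player_id: str
    x: float
    y: float
    z: float

def visible_avatars(me: Avatar, others: list[Avatar],
                    bubble_radius: float = 2.0) -> list[Avatar]:
    """Return only the avatars outside my personal-space bubble.

    Filtering is purely local, mirroring the article's description:
    nearby players dissolve from view for me, from my perspective only.
    """
    def dist(a: Avatar, b: Avatar) -> float:
        return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)
    return [o for o in others if dist(me, o) > bubble_radius]

me = Avatar("me", 0.0, 0.0, 0.0)
near = Avatar("groper", 1.0, 0.0, 0.0)   # inside the bubble: hidden
far = Avatar("archer", 5.0, 0.0, 0.0)    # outside the bubble: still visible
print([a.player_id for a in visible_avatars(me, [near, far])])
```

Because the filter runs on the harassed player’s client alone, it gives an instant reprieve without altering anyone else’s game state, which is one plausible reading of the developers’ design.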

As games are being developed, quality assurance testers often try to “break” the game, finding unexpected ways that inventive players might use game systems the developers did not intend. This ongoing process should also include efforts to identify ways players could harm each other, and developers should deal with those the same way they do other problems in the game’s design. It’s not “just a game” anymore.

Katherine Cross, Ph.D. Student in Sociology, City University of New York

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Series of NASA CubeSat Missions Will Take a Fresh Look at Planet Earth [Video]

Beginning this month, NASA is launching a suite of six next-generation, Earth-observing small satellite missions to demonstrate innovative new approaches for studying our changing planet.

These small satellites range in size from a loaf of bread to a small washing machine and weigh from a few to 400 pounds. Their small size keeps development and launch costs down as they often hitch a ride to space as a “secondary payload” on another mission’s rocket – providing an economical avenue for testing new technologies and conducting science.

“NASA is increasingly using small satellites to tackle important science problems across our mission portfolio,” said Thomas Zurbuchen, associate administrator of NASA’s Science Mission Directorate in Washington. “They also give us the opportunity to test new technological innovations in space and broaden the involvement of students and researchers to get hands-on experience with space systems.”

Small-satellite technology has led to innovations in how scientists approach Earth observations from space. These new missions, five of which are scheduled to launch during the next several months, will debut new methods to measure hurricanes, Earth’s energy budget, aerosols, and weather.

“NASA is expanding small satellite technologies and using low-cost, small satellites, miniaturized instruments, and robust constellations to advance Earth science and provide societal benefit through applications,” said Michael Freilich, director of NASA’s Earth Science Division in Washington.

Scheduled to launch this month, RAVAN, the Radiometer Assessment using Vertically Aligned Nanotubes, is a CubeSat that will demonstrate new technology for detecting slight changes in Earth’s energy budget at the top of the atmosphere – essential measurements for understanding greenhouse gas effects on climate. RAVAN is led by Bill Swartz at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland.

In spring 2017, two CubeSats are scheduled to launch to the International Space Station for a detailed look at clouds. Data from the satellites will help improve scientists’ ability to study and understand clouds and their role in climate and weather.

IceCube, developed by Dong Wu at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, will use a new, miniature, high-frequency microwave radiometer to measure cloud ice. HARP, the Hyper-Angular Rainbow Polarimeter, developed by Vanderlei Martins at the University of Maryland Baltimore County in Baltimore, will measure airborne particles and the distribution of cloud droplet sizes with a new method that looks at a target from multiple perspectives.

In early 2017, MiRaTA – the Microwave Radiometer Technology Acceleration mission – is scheduled to launch into space with the National Oceanic and Atmospheric Administration’s Joint Polar Satellite System-1. MiRaTA packs many of the capabilities of a large weather satellite into a spacecraft the size of a shoebox, according to principal investigator Kerri Cahoy from the Massachusetts Institute of Technology in Cambridge. MiRaTA’s miniature sensors will collect data on temperature, water vapor and cloud ice that can be used in weather forecasting and storm tracking.

The RAVAN, HARP, IceCube, and MiRaTA CubeSat missions are funded and managed by NASA’s Earth Science Technology Office (ESTO) in the Earth Science Division. ESTO supports technologists at NASA centers, industry, and academia to develop and refine new methods for observing Earth from space, from information systems to new components and instruments.

“The affordability and rapid build times of these CubeSat projects allow for more risk to be taken, and the more risk we take now the more capable and reliable the instruments will be in the future,” said Pamela Millar, ESTO flight validation lead. “These small satellites are changing the way we think about making instruments and measurements. The cube has inspired us to think more outside the box.”

NASA’s early investment in these new Earth-observing technologies has matured to produce two robust science missions, the first of which is set to launch in December.

CYGNSS – the Cyclone Global Navigation Satellite System – will be NASA’s first Earth science small satellite constellation. Eight identical satellites will fly in formation to measure wind intensity over the ocean, providing new insights into tropical cyclones. Its novel approach uses reflections from GPS signals off the ocean surface to monitor surface winds and air-sea interactions in rapidly evolving cyclones, hurricanes, and typhoons throughout the tropics. CYGNSS, led by Chris Ruf at the University of Michigan, Ann Arbor, is targeted to launch on Dec. 12 from Cape Canaveral Air Force Station in Florida.

Earlier this year NASA announced the start of a new mission to study the insides of hurricanes with a constellation of 12 CubeSats. TROPICS – the Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats – will use radiometer instruments based on the MiRaTA CubeSat that will make frequent measurements of temperature and water vapor profiles throughout the life cycle of individual storms. William Blackwell at the Massachusetts Institute of Technology Lincoln Laboratory in Lexington leads the mission.

CYGNSS and TROPICS both benefited from early ESTO technology investments. These Earth Venture missions are small, targeted science investigations that complement NASA’s larger Earth research missions. The rapidly developed, cost-constrained Earth Venture projects are competitively selected and funded by NASA’s Earth System Science Pathfinder program within the Earth Science Division.

Small spacecraft and satellites are helping NASA advance scientific and human exploration, reduce the cost of new space missions, and expand access to space. Through technological innovation, small satellites enable entirely new architectures for a wide range of activities in space with the potential for exponential jumps in transformative science.

Source: NASA.gov news release, republished in accordance with the NASA media guidelines and public domain rights.

Now, Check Out:

How Pluto Spray-Paints Charon Red Like a Graffiti Artist

In June 2015, when the cameras on NASA’s approaching New Horizons spacecraft first spotted the large reddish polar region on Pluto’s largest moon, Charon, mission scientists knew two things: they’d never seen anything like it elsewhere in our solar system, and they couldn’t wait to get the story behind it.

Over the past year, after analyzing the images and other data that New Horizons has sent back from its historic July 2015 flight through the Pluto system, the scientists think they’ve solved the mystery. As they detail this week in the international scientific journal Nature, Charon’s polar coloring comes from Pluto itself: methane gas escapes from Pluto’s atmosphere, becomes “trapped” by the moon’s gravity and freezes to the cold, icy surface at Charon’s pole. This is followed by chemical processing by ultraviolet light from the sun, which transforms the methane into heavier hydrocarbons and eventually into reddish organic materials called tholins.

NASA’s New Horizons spacecraft captured this high-resolution, enhanced color view of Pluto’s largest moon, Charon, just before closest approach on July 14, 2015. Scientists have learned that reddish material in the north (top) polar region – informally named Mordor Macula – is chemically processed methane that escaped from Pluto’s atmosphere onto Charon. Credits: NASA/JHUAPL/SwRI.

“Who would have thought that Pluto is a graffiti artist, spray-painting its companion with a reddish stain that covers an area the size of New Mexico?” asked Will Grundy, a New Horizons co-investigator from Lowell Observatory in Flagstaff, Arizona, and lead author of the paper. “Every time we explore, we find surprises. Nature is amazingly inventive in using the basic laws of physics and chemistry to create spectacular landscapes.”

The team combined analyses from detailed Charon images obtained by New Horizons with computer models of how ice evolves on Charon’s poles. Mission scientists had previously speculated that methane from Pluto’s atmosphere was trapped in Charon’s north pole and slowly converted into the reddish material, but had no models to support that theory.

The New Horizons team dug into the data to determine whether conditions on the Texas-sized moon (with a diameter of 753 miles or 1,212 kilometers) could allow the capture and processing of methane gas. The models using Pluto and Charon’s 248-year orbit around the sun show some extreme weather at Charon’s poles, where 100 years of continuous sunlight alternate with another century of continuous darkness. Surface temperatures during these long winters dip to -430 Fahrenheit (-257 Celsius), cold enough to freeze methane gas into a solid.
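As a quick sanity check on the quoted figures, the standard Fahrenheit-to-Celsius formula confirms that -430 degrees Fahrenheit does round to -257 Celsius, only about 16 degrees above absolute zero (the helper function below is just for illustration):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Standard conversion: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

# The article's polar-winter figure: -430 F is about -256.7 C,
# which rounds to -257 C. Absolute zero is -273.15 C.
print(round(fahrenheit_to_celsius(-430)))
```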

“The methane molecules bounce around on Charon’s surface until they either escape back into space or land on the cold pole, where they freeze solid, forming a thin coating of methane ice that lasts until sunlight comes back in the spring,” Grundy said.

The models also suggested that in Charon’s springtime the returning sunlight triggers conversion of the frozen methane back into gas. But while the methane ice quickly sublimates away, the heavier hydrocarbons created from this evaporative process remain on the surface.

Sunlight further irradiates those leftovers into reddish material – called tholins – that has slowly accumulated on Charon’s poles over millions of years. New Horizons’ observations of Charon’s other pole, currently in winter darkness – and seen by New Horizons only by light reflecting from Pluto, or “Pluto-shine” – confirmed that the same activity was occurring at both poles.

“This study solves one of the greatest mysteries we found on Charon, Pluto’s giant moon,” said Alan Stern, New Horizons principal investigator from the Southwest Research Institute, and a study co-author. “And it opens up the possibility that other small planets in the Kuiper Belt with moons may create similar, or even more extensive ‘atmospheric transfer’ features on their moons.”

Source: NASA.gov news release used under public domain rights and the NASA Media Guidelines

Now, Check Out:

Moving toward computing at the speed of thought

By Frances Van Scoy, West Virginia University.

The first computers cost millions of dollars and were locked inside rooms equipped with special electrical circuits and air conditioning. The only people who could use them had been trained to write programs in that specific computer’s language. Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects.

This newly immersive world not only is open to more people to experience; it also allows almost anyone to exercise their own creativity and innovative tendencies. No longer are these capabilities dependent on being a math whiz or a coding expert: Mozilla’s “A-Frame” is making the task of building complex virtual reality models much easier for programmers. And Google’s “Tilt Brush” software allows people to build and edit 3D worlds without any programming skills at all.

My own research hopes to develop the next phase of human-computer interaction. We are monitoring people’s brain activity in real time and recognizing specific thoughts (of “tree” versus “dog” or of a particular pizza topping). It will be yet another step in the historical progression that has brought technology to the masses – and will widen its use even more in the coming years.

Reducing the expertise needed

From those early computers dependent on machine-specific programming languages, the first major improvement allowing more people to use computers was the development of the Fortran programming language. It expanded the range of programmers to scientists and engineers who were comfortable with mathematical expressions. This was the era of punch cards, when programs were created by punching holes in cardstock, and output had no graphics – only keyboard characters.

By the late 1960s mechanical plotters let programmers draw simple pictures by telling a computer to raise or lower a pen, and move it a certain distance horizontally or vertically on a piece of paper. The commands and graphics were simple, but even drawing a basic curve required understanding trigonometry, to specify the very small intervals of horizontal and vertical lines that would look like a curve once finished.
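To see what that trigonometry involved, here is a small illustrative Python sketch of the idea: a circle approximated by many short pen moves, as an early plotter program might have computed them. The function and its parameters are hypothetical stand-ins, not any real plotter’s instruction set.

```python
import math

def circle_moves(radius: float, cx: float, cy: float, steps: int = 72):
    """Yield (x, y) pen positions tracing a circle as short line segments.

    Each segment spans 360/steps degrees of arc; with enough steps the
    polyline of tiny straight moves looks like a smooth curve -- exactly
    the trigonometry early plotter programmers had to work out by hand.
    """
    for i in range(steps + 1):
        angle = 2 * math.pi * i / steps
        yield (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

# 73 positions: pen down at the first point, then 72 short straight moves
# that return the pen to where it started.
points = list(circle_moves(radius=10.0, cx=50.0, cy=50.0))
```

The same scheme generalizes to any parametric curve; only the trigonometric bookkeeping changes, which is why curve plotting demanded real mathematical fluency at the time.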

The 1980s introduced what has become the familiar windows, icons and mouse interface. That gave nonprogrammers a much easier time creating images – so much so that many comic strip authors and artists stopped drawing in ink and began working with computer tablets. Animated films went digital, as programmers developed sophisticated proprietary tools for use by animators.

Simpler tools became commercially available for consumers. In the early 1990s the OpenGL library allowed programmers to build 2D and 3D digital models and add color, movement and interaction to these models.

Inside a CAVE system. Davepape

In recent years, 3D displays have become much smaller and cheaper than the multi-million-dollar CAVE and similar immersive systems of the 1990s. Those systems needed a space 30 feet wide, 30 feet long and 20 feet high to fit their rear-projection systems. Now smartphone holders can provide a personal 3D display for less than US$100.

User interfaces have gotten similarly more powerful. Multitouch pads and touchscreens recognize movements of multiple fingers on a surface, while devices such as the Wii and Kinect recognize movements of arms and legs. A company called Fove has been working to develop a VR headset that will track users’ eyes, and which will, among other capabilities, let people make eye contact with virtual characters.

Planning longer term

My own research is helping to move us toward what might be called “computing at the speed of thought.” Low-cost open-source projects such as OpenBCI allow people to assemble their own neuroheadsets that capture brain activity noninvasively.

Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I’ve thought about in the past few minutes. If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought.

With more sophistication, perhaps a writer could wear an inexpensive neuroheadset, imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer’s mind.

Working toward the future

Once human thought can communicate directly with computers, a new world will open before us. One day, I would like to play games in a virtual world that incorporates social dynamics as in the experimental games “Prom Week” and “Façade” and in the commercial game “Blood & Laurels.”

This type of experience would not be limited to game play. Software platforms such as an enhanced Versu could enable me to write those kinds of games, developing characters in the same virtual environments they’ll inhabit.

Years ago, I envisioned an easily modifiable application that allows me to have stacks of virtual papers hovering around me that I can easily grab and rifle through to find a reference I need for a project. I would love that. I would also really enjoy playing “Quidditch” with other people while we all experience the sensation of flying via head-mounted displays and control our brooms by tilting and twisting our bodies.

An early, single-player virtual reality version of ‘Quidditch.’

Once low-cost motion capture becomes available, I envision new forms of digital story-telling. Imagine a group of friends acting out a story, then matching their bodies and their captured movements to 3D avatars to reenact the tale in a synthetic world. They could use multiple virtual cameras to “film” the action from multiple perspectives, and then construct a video.

This sort of creativity could lead to much more complex projects, all conceived in creators’ minds and made into virtual experiences. Amateur historians without programming skills may one day be able to construct augmented reality systems in which they can superimpose onto views of the real world selected images from historic photos or digital models of buildings that no longer exist. Eventually they could add avatars with whom users can converse. As technology continues to progress and become easier to use, the dioramas built of cardboard, modeling clay and twigs by children 50 years ago could one day become explorable, life-sized virtual spaces.

Frances Van Scoy, Associate Professor of Computer Science and Electrical Engineering, West Virginia University

This article was originally published on The Conversation. Read the original article.

Next, Check Out:

Why the current plan to save the endangered vaquita porpoise won’t work

By Andrew Frederick Johnson, University of California, San Diego.

With fewer than 60 individuals left, the world’s smallest porpoise, the vaquita marina (Phocoena sinus), continues to balance on the edge of extinction. Constant pressure from conservation groups has led to a two-year emergency gillnet ban, which will end in May 2017, and government-led efforts are now pushing fishers to use gear that won’t threaten the vaquita through bycatch.

Despite these steps, in a new study my colleagues and I warn that unless further big changes are made in the Upper Gulf of California, Mexico, we may soon be saying goodbye to this charismatic little animal.

The history of vaquita conservation is long and convoluted. It has been characterized by intermittent top-down management interventions that have often had little more than short-term outlooks. These have perpetuated the decline of the vaquita population, which is now estimated to include fewer than 25 reproductively mature females.

The new Conservation Letters study describes how the gillnet ban now in effect and the introduction of new trawl gear may address the immediate problem of vaquita bycatch. But even taken together, they will likely be yet another short-term – and, most likely, ineffective – attempt to pull the vaquita back from the brink of extinction.

The end of a day’s fishing in the Upper Gulf of California, where conservationists fight with fishers over the declining populations of the vaquita marina porpoise.
Octavio Aburto / ILCP – author provided

Switching gears

Gillnets sit in midwater and are made of fine line, which is difficult to see in the Upper Gulf’s murky waters. Like almost all cetaceans caught as bycatch, vaquita are unable to free themselves once entangled and risk drowning while held underwater.

Trawl gear is an alternative that reduces the risk of bycatch. These heavy gears are towed along the seafloor, catching any animal not quick enough to outswim the mouth of the approaching net. The mouth of the net covers a much smaller area than a gillnet, which reduces the effective catch area that poses a risk to the vaquita. Trawl gear is also noisier and more easily visible, and therefore easier for cetaceans to avoid, than gillnets.

A young fisherman in the Upper Gulf of California, Mexico holds up his prawn catch, caught with gillnets that threaten the vaquita’s survival, yet earn the local fishers a healthy livelihood.
Octavio Aburto / ILCP – author provided

But this alternative is more expensive. After accounting for lower catch rates, higher fuel expenditure and the cost of the switch from gillnets to trawls, we estimated that an annual subsidy of at least US$8.5 million would be needed to compensate fishers in the Upper Gulf for loss of employment and earnings. Long term, the economic losses from the new management interventions could have two side effects: 1) a reliance on subsidies and/or 2) increased illegal fishing activities.

NOAA Fisheries West Coast, CC BY-NC-ND

What’s more, an endangered yet highly prized fish is caught in these waters with gillnets. Swim bladders known as buche from the endangered totoaba (Totoaba macdonaldi) can sell for tens to hundreds of thousands of dollars per kilo, depending on the size of the bladder and demand from the Chinese market. This “aquatic cocaine” complicates the plight of the vaquita because illegal fishing for totoaba poses a risk to the few vaquita that remain.

There are also significant ecological risks to the new management plan. The impacts of trawl gear on seafloor species are significantly greater than those posed by gillnets because the gear is dragged along the sea floor, reducing productivity in many shelf sea ecosystems and negatively affecting community composition and diversity. In just 26 days of gear testing in the Upper Gulf prior to the gillnet ban, 30 percent, or 2,819 square kilometers (1,088 square miles), of the Upper Gulf biosphere reserve’s total area was scoured by the new trawl gears. Longer term, we warn in our study, this could have severely detrimental consequences for the health of the Upper Gulf marine ecosystem.

Trawl tracks after (a) 1 day and (b) 26 days of gear-testing in the Upper Gulf of California, Mexico. Moreno Báez – Author provided

Community involvement

My colleagues and I believe there is little use in pointing the finger of blame at this point, as seems to be the case in many articles discussing the fight for the vaquita. Instead, the vaquita situation urgently needs a new way of thinking, a paradigm shift.

Consistent exclusion of fishers from the design of management plans, typically driven by conservation groups and implemented by the government, has led to polarized opinions and a large divide between what should be a close collaboration between fishers and conservation agencies. Rushed, short-sighted management must be replaced by longer-term goals that involve local communities and address conservation challenges associated with both the vaquita and the totoaba.

Community support of management measures, in particular, seems essential for long-term success in conservation stories. We recommend external investment in the local communities of the Upper Gulf. Specifically, the development of infrastructure, such as road networks to connect fishers to new markets and processing facilities, would improve the current situation by providing new employment opportunities as well as increased returns on ever-dwindling fish catches.

Education is also key. This should include programs to educate fishers about the consequences of unsustainable fishing practices, techniques to help add value to their catches, and alternative livelihoods to fishing such as tourism or potential service industry employment.

Vaquita are found only in the uppermost Gulf of California, Mexico. NOAA Fisheries West Coast, CC BY-NC-ND

At present there are few employment alternatives for fishers in the Upper Gulf. Often, men are recruited into the fishery as young as 15, and the common story of “once a fisher, always a fisher” prevails. We highlight that an investment in education could both help promote marine stewardship, as fishers better understand the longer-term consequences of current fishing practices, and provide the younger generation with the training to build new businesses or follow paths in higher education instead of joining the local fisheries.

As with many of the world’s ecological problems, overcapacity seems to be key. In the case of the Upper Gulf fisheries, too many people are catching too many fish from finite stocks. Continued overexploitation ultimately means communities risk destroying the natural resources they depend on.

To put it simply, communities in the Upper Gulf of California need help to reduce both the number of fishers currently fishing and the number of future fishers entering the fisheries. Help is also needed to promote alternative, nonextractive activities in order to alleviate the impacts that current fisheries practices have on fish stocks, the vaquita and, with the new trawl gear intervention, sea floor habitats.

A fisher in the Upper Gulf of California, Mexico tears up old fish to feed to the pelicans.
Octavio Aburto / ILCP – author provided

Another band-aid

A meeting in late July of this year between Presidents Obama and Peña Nieto concluded with a tentative proposal for a permanent extension of the Upper Gulf’s gillnet ban and a crackdown on the totoaba trade. Although eliminating vaquita bycatch is crucial for the species’ survival, by ignoring economic losses, local livelihoods and new ecological problems related to trawl impacts, the Mexican government may have missed the point again.

With one foot of the vaquita firmly in the grave, now does not seem to be the time to make incomplete decisions regarding the survival of the vaquita, the health of the Upper Gulf of California’s ecosystem and the social well-being of the families that live in this remote area of Mexico.

Andrew Frederick Johnson, Postdoctoral Researcher of Marine Biology at Scripps Institution of Oceanography, University of California, San Diego

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Here’s why daylight saving time isn’t worth the trouble it causes

By Laura Grant, Claremont McKenna College.

Today the sun is shining during my commute home from work. But this weekend, public service announcements will remind us to “fall back,” ending daylight saving time (DST) by setting our clocks an hour earlier on Sunday, Nov. 6. On Nov. 7, many of us will commute home in the dark.

This semiannual ritual shifts our rhythms and temporarily makes us groggy at times when we normally feel alert. Moreover, many Americans are confused about why we spring forward to DST in March and fall back in November, and whether it is worth the trouble.

The practice of resetting clocks is not designed for farmers, whose plows follow the sun regardless of what time clocks say it is. Yet many people continue to believe that farmers benefit, including lawmakers during recent debates over changing California DST laws. Massachusetts is also studying whether to abandon DST.

Changing our clocks does not create extra daylight. DST simply shifts when the sun rises and sets relative to our society’s regular schedule and routines. The key question, then, is how people respond to this enforced shift in natural lighting. Most people have to be at work at a certain time – say, 8:30 a.m. – and if that time comes an hour earlier, they simply get up an hour earlier. The effect on society is another question, and there, the research shows DST is more burden than boon.

No energy savings

Benjamin Franklin was one of the first thinkers to endorse the idea of making better use of daylight. Although he lived well before the invention of light bulbs, Franklin observed that people who slept past sunrise wasted more candles later in the evening. He also whimsically suggested the first policy fixes to encourage energy conservation: firing cannons at dawn as public alarm clocks and fining homeowners who put up window shutters.

To this day, our laws equate daylight saving with energy conservation. However, recent research suggests that DST actually increases energy use.

Poster celebrating enactment of daylight saving time during World War I, 1917. Library of Congress/Wikipedia

This is what I found in a study coauthored with Yale economist Matthew Kotchen. We used a policy change in Indiana to estimate DST effects on electricity consumption. Prior to 2007, most Indiana counties did not observe DST. By comparing households’ electricity demand before and after DST was adopted, month by month, we showed that DST had actually increased residential electricity demand in Indiana by 1 to 4 percent annually.
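A natural way to read this before-and-after comparison is as a difference-in-differences design: the change in demand among counties that newly adopted DST is measured against the change among counties unaffected by the policy. The sketch below illustrates the logic only; the numbers are invented, not the study’s data, and the study’s actual estimation was more detailed.

```python
# Toy difference-in-differences sketch (invented numbers, not the study's data).
# "Treated" counties adopted DST in 2007; "control" counties are unaffected by the change.
demand = {
    # (group, period): average monthly household electricity demand (kWh)
    ("treated", "before"): 900.0,
    ("treated", "after"): 940.0,
    ("control", "before"): 910.0,
    ("control", "after"): 920.0,
}

# Change in each group across the policy date
treated_change = demand[("treated", "after")] - demand[("treated", "before")]
control_change = demand[("control", "after")] - demand[("control", "before")]

# The difference-in-differences estimate attributes the excess change
# in the treated group (beyond the background trend) to DST.
dst_effect = treated_change - control_change
pct_effect = 100 * dst_effect / demand[("treated", "before")]
print(f"Estimated DST effect: {dst_effect:.1f} kWh per household per month ({pct_effect:.1f}%)")
```

Subtracting the control group’s change nets out background trends (weather, fuel prices) that would otherwise be confused with the policy’s effect.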

The largest effects occurred in the summer, when DST aligns our lives with the hottest part of the day, so people tend to use more air conditioning, and late fall, when we wake up in the dark and use more heating with no reduction in lighting needs.

Other studies corroborate these findings. Research in Australia and in the United States shows that DST does not decrease total energy use. However, it does smooth out peaks and valleys in energy demand throughout the day, as people at home use more electricity in the morning and less during the afternoon. Though people still use more electricity, shifting the timing reduces the average costs to deliver energy because not everyone demands it during typical peak usage periods.

Other outcomes are mixed

DST proponents also argue that changing times provides more hours for afternoon recreation and reduces crime rates. But time for recreation is a matter of preference. There is better evidence on crime rates: Fewer muggings and sexual assaults occur during DST months because fewer potential victims are out after dark.

So overall, the net benefits from these three durational effects of crime, recreation and energy use – that is, impacts that last for the duration of the time change – are murky.

Other consequences of DST are ephemeral. I think of them as bookend effects, since they occur at the beginning and end of DST.

When we “spring forward” in March we lose an hour, which comes disproportionately from resting hours rather than wakeful time. Therefore, many problems associated with springing forward stem from sleep deprivation. With less rest people make more mistakes, which appear to cause more traffic accidents and workplace injuries, lower workplace productivity due to cyberloafing and poorer stock market trading.

Even when we gain that hour back in the fall, we must readjust our routines over several days because the sun and our alarm clocks feel out of sync. Some impacts are serious: During bookend weeks, children in higher latitudes go to school in the dark, which increases the risk of pedestrian casualties. Dark commutes are so problematic for pedestrians that New York City is spending US$1.5 million on a related safety campaign. And heart attacks increase after the spring time shift – thought to be due to lack of sleep – but decrease to a lesser extent after the fall shift. Collectively, these bookend effects represent net costs and strong arguments against retaining DST.

Pick your own time zone?

Spurred by many of these arguments, several states are considering unilaterally discontinuing DST. The California State Legislature considered a bill this term that would have asked voters to decide whether or not to remain on Pacific Standard Time year-round (the measure was passed by the State Assembly but rejected by the Senate).

On the East Coast, Massachusetts has commissioned research on the impacts of dropping DST and joining Canada’s Maritime provinces on Atlantic Time, which is one hour ahead of Eastern Standard Time. If this occurred, Massachusetts would be an hour ahead of all of its neighboring states during winter months, and travelers flying from Los Angeles to Boston would cross five time zones.

Countries observing daylight saving time (blue in Northern Hemisphere, orange in Southern Hemisphere). Light gray countries have abandoned DST; dark gray nations have never practiced it.
TimeZonesBoy/Wikipedia, CC BY-SA

These proposals ignore a fundamental fact: Daylight saving time relies on coordination. If one state changes its clocks a week early, neighboring states will be out of sync.

Some states have good reason for diverging from the norm. Notably, Hawaii does not practice DST because it is much closer to the equator than the rest of the nation, so its daylight hours barely change throughout the year. Arizona is the sole contiguous state that abstains from DST, citing its extreme summer temperatures. Although this disparity causes confusion for western travelers, the state’s residents have not changed their clocks for over 40 years.

In my research on DST I have found that everyone has strong opinions about it. Many people welcome the shift to DST as a signal of spring. Others like the coordinated availability of daylight after work. Dissenters, including farmers, curse their loss of quiet morning hours.

When the evidence about costs and benefits is mixed but we need to make coordinated choices, how should we make DST decisions? When the California State Senate opted to stick with DST, one legislator stated, “I like daylight savings. I just like it.” But politicians’ whims are not a good basis for policy choices.

The strongest arguments support not only doing away with the switches but also keeping the nation on daylight saving time year-round. Yet humans adapt. If we abandon the twice-yearly switch, we may eventually slide back into old routines and habits of sleeping in during daylight. Daylight saving time is the coordinated alarm to wake us up a bit earlier in the summer and get us out of work with more sunshine.

Laura Grant, Assistant Professor of Economics, Claremont McKenna College

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Science deconstructs humor: What makes some things funny?

By Alex Borgella, Tufts University.

Think of the most hilarious video you’ve ever seen on the internet. Why is it so funny?

As a researcher who investigates some of the potential side effects of humor, I spend a fair bit of time verifying the funniness of the jokes, photos and videos we present to participants in our studies. Quantifying the perception of humor is paramount in ensuring our findings are valid and reliable. We often rely on pretesting – that is, trying out jokes and other potential stimuli on different samples of people – to give us a sense of whether they might work in our studies.

To make predictions on how our funny materials will be perceived by study subjects, we also turn to a growing body of humor theories that speculate on why and when certain situations are considered funny. From ancient Greece to today, many thinkers from around the world have yearned to understand what makes us laugh. Whether their reasons for studying humor were strategic (like some of Plato’s thoughts on using humor to manipulate people’s political views) or simply inquisitive, their insights have been crucial to the development of humor research today.

Take the following video as an example of a funny stimulus one might use in humor research:

Man vs. Moose in Sweden.

To summarize: A man and his female companion are enjoying a pleasant day observing a moose in one of Sweden’s forests. The woman makes a sudden movement, causing the moose to charge the couple. The man stands his ground, causing the moose to stop in his tracks. After a few feints with a large stick and several caveman-ish grunts by the man, the defeated moose retreats while the man proclaims his victory (with more grunting).

The clip has been viewed on YouTube almost three million times, and the comments make it clear that many folks who watch it are LOLing. But why is this funny?

Superiority theory: Dumb moose

It is the oldest of all humor theories: Philosophers such as Aristotle and Plato alluded to the idea behind the superiority theory thousands of years ago. It suggests that all humor is derived from the misfortunes of others – and therefore, our own relative superiority. Thomas Hobbes also alluded to this theory in his book “Leviathan,” suggesting that humor results in any situation where there’s a sudden realization of how much better we are than our direct competition.

Taking this theory into consideration, it seems like the retreating moose is the butt of the joke in this scenario. Charles Gruner, the late expert on superiority theory, suggested that all humor is derived from competition. In this case, the moose lost that competition.

Relief theory: Nobody died

The relief theory of humor stems from Sigmund Freud’s assertion that laughter lets us relieve tension and release “psychic energy.” In other words, Freud and other relief theorists believe that some buildup of tension is inherent to all humorous scenarios and the perception of humor is directly related to the release of that tension.

Freud used this idea to explain our fascination with taboo topics and why we might find it humorous to acknowledge them. For example, my own line of research deals with humor in interracial interactions and how it can be used to facilitate these commonly tense situations. Many comedians have tackled this topic as well, focusing on how language is used in interracial settings and using it as an example of how relief can be funny.

A comedy clip focused on interracial interactions gets some of its humor from the relief when a tense situation is resolved.

Interestingly, this theory has served as the rationale behind many studies documenting the psychological and physiological benefits of laughter. In both cases, the relief of tension (physiological tension, in the case of laughing) can lead to positive health outcomes overall, including decreased stress, anxiety and even physical pain.

In the case of our moose video: Once the moose charges, the tension builds as the man and the animal face off for an extended period of time. The tension is released when the moose gives up his ground, lowers his ears and eventually scurries away. The video would probably be far less humorous if the tension had been resolved with violence – for instance, the moose trampling the man, or alternatively ending up with a stick in its eye.

Incongruity theory: It’s unexpected

The incongruity theory of humor suggests that we find fundamentally incompatible concepts or unexpected resolutions funny. Basically, we find humor in the incongruity between our expectations and reality.

Resolving incongruity can contribute to the perception of humor as well. This concept is known as the “incongruity-resolution” theory, and primarily refers to written jokes. When identifying what makes a humorous situation funny, this theory can be applied broadly; it can account for the laughs found in many different juxtaposed concepts.

Take the following one-liners as examples:

“I have an Epi-Pen. My friend gave it to me as he was dying. It seemed very important to him that I have it.”

“Remains to be seen if glass coffins become popular.”

The humor in both of these examples relies on incongruous interpretations: In the first, a person has clearly misinterpreted his friend’s dying wish. In the second, the phrase “remains to be seen” is a play on words that takes on two very different meanings depending on how you read the joke.

In the case of our moose video, the incongruity results from the false expectation that the interaction between man and moose would result in some sort of violence. When we see our expectations foiled, it results in the perception of humor.

The safety of being in the audience at a comedy show frees you to let loose.
Mark Schiefelbein/AP

Benign violations theory: It’s bad, but harmless

Incongruity is also a fundamental part of the benign violations theory of humor (BVT), one of the most recently developed explanations. Derived from the linguist Thomas Veatch’s “violation theory,” which describes various ways for incongruity to be funny, BVT attempts to create one global theory to unify all previous theories of humor and account for issues with each.

Broadly, benign violations theory asserts that all humor derives from three necessary conditions:

  1. The presence of some sort of norm violation, be it a moral norm violation (robbing a retirement home), social norm violation (breaking up with a long-term boyfriend via text message) or physical norm violation (purposefully sneezing directly on a child).
  2. A “benign” or “safe” context in which the violation takes place (this can take many forms).
  3. The interpretation of the first two points simultaneously. In other words, one must view, read or otherwise interpret a violation as relatively harmless.

Thus far, researchers studying BVT have demonstrated a few different scenarios in which the perception of a benign violation could take place – for example, when there is weak commitment to the norm being violated.

Take the example of a church raffling off a Hummer SUV. Researchers found this scenario is much less funny to churchgoers (with their strong commitment to the norm that the church is sacred and embodies values of humility and stewardship) than it is to non-churchgoers (with relatively weak norm commitment about the church). While both groups found the concept of the church’s choice of fundraiser disgusting, only the non-churchgoers simultaneously appraised the situation as also amusing. Hence, a benign violation is born.

In the case of our moose video, the violation is clear; there’s a moose about to charge two people, and we’re not sure what exactly is about to go down. The benign part of the situation could be credited to a number of different sources, but it’s likely due to the fact that we’re psychologically (and physically, and temporally) distant from the individuals in the video. They’re far away in Sweden, and we’re comfortably watching their dilemma on a screen.

Homing in on funny

At one point or another, we’ve all wondered why some phrase or occurrence has caused us to erupt with laughter. In many ways, this type of inquiry is what drove me to research the limits and consequences of humor in the first place. People are unique and often find different things amusing. In order to examine the effects of humor, it is our job as researchers to try to select and craft the stimuli we present to affect the widest range of people. The outcomes of good science stem from both the validity and reliability of our stimuli, which is why it’s important to think critically about the reasons why we’re laughing.

The application of this still-growing body of humor research and theory is seen everywhere, influencing everything from political speeches to advertising campaigns. And while “laughter is the best medicine” may be an overstatement (penicillin is probably better, for one), psychologists and medical professionals have started to lend credence to the idea that humor and laughter might have some positive effects for health and happiness. These applications underscore the importance of developing the best understanding of humor we can.

Alex Borgella, Ph.D. Candidate in Psychology, Tufts University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

What causes mind blanks during exams?

By Jared Cooney Horvath, University of Melbourne and Jason M Lodge, University of Melbourne.

It’s a pattern many of us have likely experienced in the past.

You prep for an exam and all the information seems coherent and simple. Then you sit for an exam and suddenly all the information you learned is gone. You struggle to pull something up – anything – but the harder you fight, the further away the information feels. The dreaded mind blank.

So what is going on?

To understand what’s happening during a mind blank, there are three brain regions we have to become familiar with.

The first is the hypothalamus. For all intents and purposes, we can conceive of the hypothalamus as the bridge between your emotions and your physical sensations. In short, this part of the brain has strong connections to the endocrine system, which, in turn, is responsible for the type and amount of hormones flowing throughout your body.

The second is the hippocampus. A subcortical structure, the hippocampus plays an incredibly important role in both the learning and retrieval of facts and concepts. We can conceive of the hippocampus as a sort of memory door through which all information must pass in order to enter and exit the brain.

The third is the prefrontal cortex (PFC). Located behind your eyes, this is the calm, cool, rational part of your brain. All the things that suggest you, as a human being, are in control are largely mediated here: things like working memory (the ability to hold and manipulate information in your mind), impulse control (the ability to dampen unwanted behavioural responses), decision making (the ability to select a proper response between competing possibilities), etc.

Regions of the brain. from www.shutterstock.com

How a mind blank happens

When you are preparing for an exam in a setting that is predictable and relatively low-stakes, you are able to engage in cold cognition. This is the term given to logical and rational thinking processes.

In our particular instance, when you are studying at home, seated in your comfortable bed, listening to your favourite music, the hypothalamus slows down the production and release of key stress hormones (outlined below) while the PFC and hippocampus are confidently chugging along unimpeded.

However, when you enter a somewhat unpredictable and high-stakes exam situation, you enter the realm of hot cognition. This is the term given to non-logical and emotionally driven thinking processes. Hot cognition is typically triggered in response to a clear threat or otherwise highly stressful situation.

So an exam can serve to trigger a cascade of unique thoughts – for instance,

If I fail this exam I may not get into a good university or graduate program. Then I may not get a good job. Then I may perish alone and penniless.

With this type of loaded thinking, it’s no wonder that those taking tests sometimes perceive an exam as a threat.

When a threat is detected, the hypothalamus stimulates the generation of several key stress hormones, including norepinephrine and cortisol.

High levels of norepinephrine enter the PFC and serve to dampen neuronal firing and impair effective communication. This impairment essentially clears out your working memory (whatever you were thinking about is now gone) and stops the rational, logical PFC from influencing other brain regions.

At the same time, high levels of cortisol enter the hippocampus, where they not only disrupt activation patterns but also (with prolonged exposure) kill hippocampal neurons. This impedes the ability to access old memories and skews the perception and storage of new memories.

In short, when an exam is interpreted as a threat and a stress response is triggered, working memory is wiped clean, recall mechanisms are disrupted, and emotionally laden hot cognition driven by the hypothalamus (and other subcortical regions) overrides the normally rational cold cognition driven by the PFC.

Taken together, this process leads to a mind blank, making logical cognitive activity difficult to undertake.

Is there any way to avoid this?

The good news – there are some things you can do to stave off mind blanks.

The first concerns de-stressing. Through concerted practice and application of cognitive-behavioural and/or relaxation techniques aimed at reframing any perceived threat during an exam situation, those taking tests can potentially abate the stress response and re-enter a more rational thinking process.

Another concerns preparation. The reason the armed forces train new recruits in stressful situations that simulate active combat scenarios is to ensure cold cognition during future engagements.

The more a person experiences a particular situation, the less likely he or she is to perceive such a situation as threatening.

So when preparing for an exam, try not to do so in a highly relaxed soothing environment – rather, try to push yourself in ways that will mimic the final testing scenario you are preparing for.

Jared Cooney Horvath, Postdoctoral fellow, University of Melbourne and Jason M Lodge, Senior Lecturer, Melbourne Centre for the Study of Higher Education & ARC-SRI Science of Learning Research Centre, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Novel Perovskite Solar Cell Could Rival Silicon [Video]

A new design for solar cells that uses inexpensive, commonly available materials could rival and even outperform conventional cells made of silicon.

Scientists have used tin and other abundant elements to create novel forms of perovskite—a photovoltaic crystalline material that’s thinner, more flexible, and easier to manufacture than silicon crystals. “Perovskite semiconductors have shown great promise for making high-efficiency solar cells at low cost,” says study coauthor Michael McGehee, professor of materials science and engineering at Stanford University. “We have designed a robust, all-perovskite device that converts sunlight into electricity with an efficiency of 20.3 percent, a rate comparable to silicon solar cells on the market today.”

Cross-section of a new tandem solar cell. The brown upper layer of perovskite captures low-energy lightwaves, and the red perovskite layer captures high-energy waves. (Credit: Scanning electron microscopy image by Rebecca Belisle, Giles Eperon)

Double perovskite stack

The new device consists of two perovskite solar cells stacked in tandem. Each cell is printed on glass, but the same technology could be used to print the cells on plastic.

“The all-perovskite tandem cells we have demonstrated clearly outline a roadmap for thin-film solar cells to deliver over 30 percent efficiency,” says coauthor Henry Snaith, professor of physics at Oxford University. “This is just the beginning.”

Previous studies showed that adding a layer of perovskite can improve the efficiency of silicon solar cells. But a tandem device consisting of two all-perovskite cells would be cheaper and less energy-intensive to build, scientists say.

“A silicon solar panel begins by converting silica rock into silicon crystals through a process that involves temperatures above 3,000 degrees Fahrenheit (1,600 degrees Celsius),” says co-lead author Tomas Leijtens, a postdoctoral scholar at Stanford. “Perovskite cells can be processed in a laboratory from common materials like lead, tin, and bromine, then printed on glass at room temperature.”

A difficult challenge

But building an all-perovskite tandem device has been a difficult challenge. The main problem is creating stable perovskite materials capable of capturing enough energy from the sun to produce a decent voltage.

A typical perovskite cell harvests photons from the visible part of the solar spectrum. Higher-energy photons can cause electrons in the perovskite crystal to jump across an “energy gap” and create an electric current.

A solar cell with a small energy gap can absorb most photons but produces a very low voltage. A cell with a larger energy gap generates a higher voltage, but lower-energy photons pass right through it.

An efficient tandem device would consist of two ideally matched cells, says co-lead author Giles Eperon, an Oxford postdoctoral scholar currently at the University of Washington.

“The cell with the larger energy gap would absorb higher-energy photons and generate an additional voltage,” Eperon says. “The cell with the smaller energy gap can harvest photons that aren’t collected by the first cell and still produce a voltage.”
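The division of labor between the two cells can be sketched numerically. In this toy model (the band-gap values are illustrative round numbers, not the paper’s figures), a photon is absorbed by the large-gap top cell if it carries enough energy, falls through to the small-gap bottom cell if it clears the lower threshold, and otherwise passes through unabsorbed:

```python
# Toy sketch of photon partitioning in a two-cell tandem (illustrative gap values).
LARGE_GAP_EV = 1.8  # top cell: absorbs high-energy photons, generates higher voltage
SMALL_GAP_EV = 1.2  # bottom cell: catches lower-energy photons the top cell passes

def absorb(photon_ev):
    """Route a photon of the given energy (eV) to the cell that absorbs it."""
    if photon_ev >= LARGE_GAP_EV:
        return "top cell"
    if photon_ev >= SMALL_GAP_EV:
        return "bottom cell"
    return "lost"  # below both gaps: passes through unabsorbed

photons = [2.5, 2.0, 1.6, 1.3, 1.0]  # sample photon energies across the spectrum
for e in photons:
    print(f"{e:.1f} eV -> {absorb(e)}")
```

The tradeoff the researchers describe falls out of the two thresholds: raising the top gap increases its voltage but sends more of the spectrum to the bottom cell, while lowering the bottom gap captures more photons at the cost of voltage.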

Stability problem

The smaller gap has proven to be the bigger challenge for scientists. Working together, Eperon and Leijtens used a unique combination of tin, lead, cesium, iodine, and organic materials to create an efficient cell with a small energy gap.

“We developed a novel perovskite that absorbs lower-energy infrared light and delivers a 14.8 percent conversion efficiency,” Eperon says. “We then combined it with a perovskite cell composed of similar materials but with a larger energy gap.”

The result: A tandem device consisting of two perovskite cells with a combined efficiency of 20.3 percent.

“There are thousands of possible compounds for perovskites,” Leijtens says, “but this one works very well, quite a bit better than anything before it.”

One concern with perovskites is stability. Rooftop solar panels made of silicon typically last 25 years or more. But some perovskites degrade quickly when exposed to moisture or light. In previous experiments, perovskites made with tin were found to be particularly unstable.

To assess stability, the research team subjected both experimental cells to temperatures of 212 degrees Fahrenheit (100 degrees Celsius) for four days.

“Crucially, we found that our cells exhibit excellent thermal and atmospheric stability, unprecedented for tin-based perovskites,” the authors write.

“The efficiency of our tandem device is already far in excess of the best tandem solar cells made with other low-cost semiconductors, such as organic small molecules and microcrystalline silicon,” McGehee says. “Those who see the potential realize that these results are amazing.”

The next step is to optimize the composition of the materials to absorb more light and generate an even higher current, Snaith says.

“The versatility of perovskites, the low cost of materials and manufacturing, now coupled with the potential to achieve very high efficiencies, will be transformative to the photovoltaic industry once manufacturability and acceptable stability are also proven,” he says.

Other researchers from Stanford, Oxford, Hasselt University in Belgium, and SunPreme Inc. are coauthors of the study. They report their research in the journal Science.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Mark Shwartz-Stanford.

Now, Check Out:

How the Moon Got its Off-Kilter Orbit

Scientists say there are a couple of problems with the textbook theory of how Earth’s moon formed. One is the moon’s surprisingly Earth-like composition.

Another is that if the moon condensed from a disk of material rotating around Earth’s equator, it should be in orbit over the equator. But the moon’s orbit is tilted 5 degrees off the equator, meaning some more energy must have been put in to move it.

Now researchers are proposing an alternative model.

On July 5, 2016, the moon passed between NOAA’s DSCOVR satellite and Earth. NASA’s EPIC camera aboard DSCOVR snapped these images over a period of about four hours. In this set, the far side of the moon, which is never seen from Earth, passes by. In the backdrop, Earth rotates, starting with Australia and the Pacific and gradually revealing Asia and Africa. (Credit: NASA/NOAA)

The textbook theory of lunar formation goes like this: late in the formation of the solar system came the “giant impact” phase, when hot, planet-sized objects collided with each other. A Mars-sized object grazed what would become Earth, throwing off a mass of material from which the moon condensed. This impact set the angular momentum for the Earth-moon system, and gave the early Earth a five-hour day. Over millennia, the moon has receded from the Earth and the rotation has slowed to our current 24-hour day.

Scientists have figured this out by looking at the moon’s current orbit, working out how rapidly angular momentum of the Earth-moon system has been transferred by the tidal forces between the two bodies, and working backward.

‘One giant impact’

In 2012, Sarah Stewart, professor of earth and planetary sciences at the University of California, Davis, and her former postdoctoral fellow Matija Ćuk (now a scientist at the SETI Institute in Mountain View, California) proposed that some of the angular momentum of the Earth-moon system could have been transferred to the Earth-sun system. That allows for a more energetic collision at the beginning of the process.

In the new model, a high-energy collision left a mass of vaporized and molten material from which the Earth and moon formed. The Earth was set spinning with a two-hour day, its axis pointing toward the sun.

Because the collision could have been more energetic than in the current theory, the material from Earth and the impactor would have mixed together, and both Earth and moon condensed from the same material and therefore have a similar composition.

As angular momentum was dissipated through tidal forces, the moon receded from the Earth until it reached a point called the “Laplace plane transition,” where the forces from the Earth on the moon became less important than gravitational forces from the sun. This caused some of the angular momentum of the Earth-moon system to transfer to the Earth-sun system.

This made no major difference to the Earth’s orbit around the sun, but it did flip Earth upright. At this point, the models built by the team show the moon orbiting Earth at a high angle, or inclination, to the equator.

Over a few tens of millions of years, the moon continued to slowly move away from Earth until it reached a second transition point, the Cassini transition, at which point the inclination of the moon—the angle between the moon’s orbit and Earth’s equator—dropped to about 5 degrees, putting the moon more or less in its current orbit.

The new theory elegantly explains the moon’s orbit and composition based on a single, giant impact at the beginning, says Stewart, senior author of the paper published in the journal Nature. No extra intervening steps are required to nudge things along. “One giant impact sets off the sequence of events.”

NASA supported the research, which included researchers from the University of Maryland and Harvard University.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity. Images from NASA, used under public domain rights.
