Midnight Munchies Mangle Memory

An occasional late-night raid on turkey leftovers might be harmless, but new research with mice suggests that making a habit of it could alter brain physiology.

Eating at times normally reserved for sleep causes a deficiency in the type of learning and memory controlled by the hippocampal area of the brain, according to findings in the journal eLife.

Researchers from the Semel Institute in the David Geffen School of Medicine at University of California, Los Angeles (UCLA) became interested in the cognitive effects of eating at inappropriate hours because it is already known to have an impact on metabolic health, for example leading to a pre-diabetic state.

“We have provided the first evidence that taking regular meals at the wrong time of day has far-reaching effects for learning and memory,” says first author Dawn Loh from the UCLA Laboratory of Circadian and Sleep Medicine.

“Since many people find themselves working or playing during times when they’d normally be asleep, it is important to know that this could dull some of the functions of the brain.”

The researchers stress that their findings have not been confirmed in humans, but highlight the fact that shift workers have been shown to perform less well on cognitive tests.

The current study shows that some learned behaviours are more affected than others. The team tested the ability of mice to recognise a novel object. Mice regularly fed during their sleep-time were significantly less able to recall the object. Long-term memory was also dramatically reduced, demonstrated during a fear conditioning experiment.

Both long-term memory and the ability to recognise a novel object are governed by the hippocampus. The hippocampus plays an important role in our ability to associate senses and emotional experiences with memory and our ability to organise and store new memories.

During an experience, nerve impulses are activated along specific pathways and, if we repeat the experience, the same pathways increase in strength. However, this effect was reduced when food was made available to mice during a six-hour window in the middle of their normal sleep time instead of a six-hour daytime window when the mice were active.

Some genes involved in both the circadian clock and in learning and memory are regulated by a protein called CREB (cAMP response element-binding protein). Reduced CREB activity impairs memory and may play a role in the onset of Alzheimer’s disease. In the mice fed at the wrong time, the total activity of CREB throughout the hippocampus was significantly reduced, with the strongest effects during the day.

However, the master pacemaker of the circadian system, the suprachiasmatic nucleus located in the hypothalamus, is unaffected. This leads to desynchrony between the clocks in the different brain regions (misalignment), which the authors suggest underlies the memory impairment.

“Modern schedules can lead us to eat around the clock, so it is important to understand how the timing of food can impact cognition,” says Professor Christopher Colwell from the Department of Psychiatry and Biobehavioral Sciences at UCLA.

“For the first time, we have shown that simply adjusting the time when food is made available alters the molecular clock in the hippocampus and can alter the cognitive performance of mice.”

Eating at the wrong time also disrupted sleep patterns. The inappropriate feeding schedule resulted in the loss of the normal day/night difference in the amount of sleep although the total time spent asleep over 24 hours was not changed. Sleep became fragmented, with the mice catching up on sleep by grabbing more short naps throughout the day and night.

 

References: The paper ‘Misaligned feeding impairs memories’ can be freely accessed online at http://dx.doi.org/10.7554/eLife.09460

Source: Press release, Eurekalert.org 

The Best Way to See the New Year’s Comet

Did you get a telescope or pair of binoculars under the Christmas tree? If so, you can put them to the test by searching the Eastern sky for a view of a fuzzy comet on or shortly after New Year’s Day.

Comet Catalina, formally known as C/2013 US10, is currently perched in the pre-dawn skies as it returns to the depths of space following a recent visit to the inner part of our solar system. Named for the NASA-funded Catalina Sky Survey at the University of Arizona in Tucson, the comet was discovered on Oct. 31, 2013, and will make its closest approach to Earth on January 17th.

The NEOWISE space telescope spotted Comet C/2013 US10 Catalina speeding by Earth on Aug. 28, 2015. NEOWISE captured the comet as it fizzed with activity caused by the sun’s heat. NEOWISE detected this comet a number of times in 2014 and 2015; five of the exposures are shown here in a combined image depicting the comet’s motion across the sky. Credit: NASA
Comet Catalina is now outbound from the solar system, to continue its long journey into space. The faint object can now be observed with binoculars or a small telescope from the Earth’s Northern Hemisphere. Credits: NASA/JPL

Comet Catalina is a first-time visitor to the inner solar system, having reached perihelion (its closest point to the sun) at a distance of 76 million miles (122 million kilometers) on Nov. 15. As it slingshotted past the sun, the comet reached a velocity of 103,000 miles per hour (166,000 kilometers per hour) – almost three times faster than NASA’s New Horizons spacecraft as it flew past Pluto. Due to its high velocity, the comet is predicted to be on an escape trajectory from the solar system, never to return.

Weather permitting, the eastern pre-dawn sky provides an opportunity to see this faint interloper over the next few weeks. Unfortunately, the waning gibbous Moon will pose a challenge for skywatchers to locate Comet Catalina. At minimum, binoculars are required to view the comet, which will appear as a fuzzy envelope of ice and dust, known as a coma.

On the next page, we’ll show you how to locate Comet Catalina in the early morning sky…


How to Prevent Hangovers Using Science [Video]

Let’s face it, some of you out there in science land are going to be having a few drinks to celebrate the rolling in of 2016, right? So, before you go out and get your party hat on, check out this excellent video that is chock full of great recommendations on how to avoid having a sad face on New Year’s Day:

 

Courtesy of the American Chemical Society’s “Reactions” YouTube Channel

5 Big Questions About the Science of ‘Star Wars’

As Star Wars: The Force Awakens cleans up at the box office, researchers from Georgia Tech are taking a closer look at the science of the films. Here they answer five big questions about the worlds depicted in the movies and what’s possible in reality.

1. IS LIGHT SPEED EVEN POSSIBLE?

Han Solo isn’t a bashful hero. So it’s no surprise that it took him only a few moments after we first met him to brag that his Millennium Falcon was the “fastest ship in the galaxy.” But how fast is fast? Solo said his ship can go .5 past light speed.

Deirdre Shoemaker, associate professor in the Georgia Tech School of Physics, explains in this video how fast light speed really is, why it’s not fast enough, and what needs to happen for something to actually travel 186,000 miles per second:

Next, we delve into the reality of the worlds and aliens that inhabit the Star Wars universe…


SciFi-like Almost-Human Robots are Almost Here

If you haven’t heard of Nanyang Technological University in Singapore, then it’s time to put them on your robotics research radar – a recent news release from NTU reveals some startlingly human-like new robots.

Two robots, with very different functions, were unveiled in their mind-bending press release:

Say hello to Nadine, a “receptionist” at Nanyang Technological University (NTU Singapore). She is friendly, and will greet you back. Next time you meet her, she will remember your name and your previous conversation with her.

She looks almost like a human being, with soft skin and flowing brunette hair. She smiles when greeting you, looks you in the eye when talking, and can also shake hands with you. And she is a humanoid.

Unlike conventional robots, Nadine has her own personality, mood and emotions. She can be happy or sad, depending on the conversation. She also has a good memory, recognising the people she has met and remembering what they said before.

Nadine is the latest social robot developed by scientists at NTU. The doppelganger of its creator, Prof Nadia Thalmann, Nadine is powered by intelligent software similar to Apple’s Siri or Microsoft’s Cortana. Nadine could be a personal assistant in offices and homes in future, and she could serve as a social companion for the young and the elderly.

A humanoid like Nadine is just one of the interfaces where the technology can be applied. It can also be made virtual and appear on a TV or computer screen, and become a low-cost virtual social companion.

With further progress in robotics sparked by technological improvements in silicon chips, sensors and computation, physical social robots such as Nadine are poised to become more visible in offices and homes in future.

The rise of social robots

Prof Thalmann, the director of the Institute for Media Innovation who led the development of Nadine, said these social robots are among NTU’s many exciting new media innovations that companies can leverage for commercialisation.

“Robotics technologies have advanced significantly over the past few decades and are already being used in manufacturing and logistics. As countries worldwide face challenges of an aging population, social robots can be one solution to address the shrinking workforce, become personal companions for children and the elderly at home, and even serve as a platform for healthcare services in future,” explained Prof Thalmann, an expert in virtual humans and a faculty member at NTU’s School of Computer Engineering.

“Over the past four years, our team at NTU have been fostering cross-disciplinary research in social robotics technologies – involving engineering, computer science, linguistics, psychology and other fields – to transform a virtual human, from within a computer, into a physical being that is able to observe and interact with other humans.

“This is somewhat like a real companion that is always with you and conscious of what is happening. So in future, these socially intelligent robots could be like C-3PO, the iconic golden droid from Star Wars, with knowledge of language and etiquette.”

Telepresence robot lets people be in two or more places at once

Nadine’s robot-in-arms, EDGAR, was also put through its paces at NTU’s new media showcase, complete with a rear-projection screen for its face and two highly articulated arms.

EDGAR is a tele-presence robot optimised to project the gestures of its human user. By standing in front of a specialised webcam, a user can control EDGAR remotely from anywhere in the world. The user’s face and expressions will be displayed on the robot’s face in real time, while the robot mimics the person’s upper body movements.

EDGAR can also deliver speeches by autonomously acting out a script. With an integrated webcam, he automatically tracks the people he meets to engage them in conversation, giving them informative and witty replies to their questions.

Such social robots are ideal for use at public venues, such as tourist attractions and shopping centres, as they can offer practical information to visitors.

Led by Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre at NTU, this made-in-Singapore robot represents three years of research and development.

“EDGAR is a real demonstration of how telepresence and social robots can be used for business and education,” added Prof Seet. “Telepresence provides an additional dimension to mobility. The user may project his or her physical presence at one or more locations simultaneously, meaning that geography is no longer an obstacle.

“In future, a renowned educator giving lectures or classes to large groups of people in different locations at the same time could become commonplace. Or you could attend classes or business meetings all over the world using robot proxies, saving time and travel costs.”

Given that some companies have expressed interest in the robot technologies, the next step for these NTU scientists is to look at how they can partner with industry to bring them to the market.

Ready or not, here they come!

 

Source: NTU.edu.sg – “NTU scientists unveil social and telepresence robots” 

Featured Photo Credit: NTU Robotics

Fat-burning fat exists, but might not be the key to weight loss

Desiree Wanders, Georgia State University

When you think about body fat, it’s probably white fat that comes to mind. That’s where our bodies store excess calories, and it’s the stuff you want to get rid of when you are trying to lose weight.

But white fat isn’t the only kind of fat in the body – you also have brown fat and beige, or brite, fat, which can actually burn calories instead of storing them.

Fat that burns calories instead of packing them on the body sounds like the Holy Grail of obesity treatment, and researchers want to find ways to activate or increase these types of fat in our bodies. In fact, the National Institutes of Health (NIH) has put out a call for research to figure out how to do it. But is the potential of brown fat to curb weight all it’s cracked up to be?

So what makes brown and beige fat different from white fat?

You might think that white fat just stores calories, but it actually does much more than that. It insulates the body, protects the internal organs and also produces proteins that regulate food intake, energy expenditure and insulin sensitivity.

Brown fat is rich in mitochondria, which gives it a brown appearance. You may remember from high school science class that mitochondria are the “powerhouses” of the cell because they burn fatty acids and glucose for energy, releasing it as heat. That is why brown fat burns calories instead of storing them, like white fat does. White fat also has mitochondria, but not nearly as many as brown fat does.

Brown adipose tissue seen in a positron emission tomography (PET) scan.
Hellerhoff via Wikimedia Commons, CC BY

Newborn babies have brown fat because it generates heat and helps them maintain body temperature. Rodents also have brown fat for the same reason. Until recently, it was thought that brown fat disappeared over the course of childhood. Now, thanks to advances in imaging technology, we know that adults also possess brown fat.

In humans, brown fat tends to be located around the neck and clavicle, but can also be found in a few other locations around the body. Weight can influence how active a person’s brown fat is: the more a person weighs, the less active their brown fat is at burning fatty acids and glucose.

Beige or brite fat is made up of “brown-like” fat cells present in traditionally white fat deposits. Studies using animal models have shown these beige fat cells can form in white fat deposits under certain treatments, including cold exposure.

Whether these beige fat cells were preexisting white fat cells that turned into beige cells in a process called “transdifferentiation” or they are brand new cells is a point of contention among researchers. Like brown fat cells, beige fat cells appear to have the ability to burn fatty acids and glucose as energy.

Calories in, calories out

The principle behind weight loss or weight gain is called energy balance, which is the difference between energy intake (how many calories you eat) and energy expenditure (how many calories you burn).

Sticking to a low-calorie diet and an exercise-heavy lifestyle to lose excess weight isn’t always easy, so researchers have been looking for other ways to tip the energy balance in favor of expenditure. And some think that increasing the activity or quantity of brown or beige fat in the body might be one way of doing it.

This certainly appears to be the case in rodents. Studies have found that the chemical norepinephrine, cold exposure, diets and various proteins made in the body can all induce “browning” of white fat or activate brown fat to burn more calories in rodents. Most of these treatments also have some effect on energy balance, often increasing energy expenditure and causing weight loss.

Imagine if we could do the same thing in humans and transform the metabolically inert white fat that is weighing so many of us down into metabolically active brown fat that actually burns calories throughout the day. While it sounds like it could be a game changer in the fight against obesity, the research isn’t clear on how much of a difference brown fat might make for people.

For instance, some research has shown that activation of brown fat by cold exposure in humans translates to an increase in energy expenditure equivalent to less than 20 calories per day, which is hardly enough to have the kind of effects on obesity that we all hope for. Other research has estimated that activation of brown fat in adults could burn up to 125 extra calories per day.
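To put those daily figures in perspective, here is a back-of-envelope sketch annualizing the 20- and 125-calorie estimates from the paragraph above. The ~3,500 kcal-per-pound-of-fat conversion is a common rule of thumb assumed here (it is not from the article, and it ignores metabolic adaptation):

```python
# Back-of-envelope: what the reported brown-fat calorie estimates
# could mean over a full year of daily activation.
KCAL_PER_POUND_FAT = 3500  # assumed rule of thumb, not from the article

def pounds_per_year(extra_kcal_per_day):
    """Convert an extra daily calorie burn into pounds of fat per year."""
    return extra_kcal_per_day * 365 / KCAL_PER_POUND_FAT

low = pounds_per_year(20)    # cold-exposure estimate cited in the article
high = pounds_per_year(125)  # upper estimate cited in the article

print(f"~20 kcal/day  -> about {low:.1f} lb/year")
print(f"~125 kcal/day -> about {high:.1f} lb/year")
```

Even the optimistic estimate amounts to a modest fraction of typical daily energy expenditure, which is consistent with the article's caution.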

The reason that activated brown fat makes a relatively small contribution to daily energy expenditure is unknown, though it may be because brown fat is present in the body in minuscule amounts compared to the less metabolically active white fat. For instance, a recent study showed that out of 14 subjects, only five had more than 10 grams of activated brown fat.

And we also wouldn’t want to convert all of our white fat into brown fat, because white fat is actually something our bodies need.

For instance, in rare conditions in which there are no fat deposits, people often have insulin resistance, fatty liver disease and other metabolic complications. This is partially due to the lack of proteins that are produced by the white fat, and also because the excess calories that should be stored in the fat have to be stored in other organs, such as the liver.


Brown fat has more mitochondria than white fat.
Brown and white fat image via www.shutterstock.com

Brown fat might do more than burn calories

Even if the data show that activating brown fat doesn’t seem to burn many extra calories in humans, it could have other health benefits.

Researchers found that transplanting brown fat from donor mice into the abdominal cavity of age- and sex-matched recipient mice reversed high-fat diet-induced insulin resistance, a condition that contributes to Type 2 diabetes in humans.

Other studies have shown that beige and brown fat has beneficial effects on glucose metabolism and insulin sensitivity that appear to be greater than the modest effects on body weight. Brown fat has the ability to clear lipids (fats) and glucose from the blood, resulting in lower concentrations of circulating triglycerides, cholesterol and glucose. This may contribute to the beneficial health effects of brown fat, independent of weight loss.

So future human research may lie in how these fats can positively influence insulin sensitivity, or glucose and lipid metabolism, rather than body weight.

There is much interest in being able to harness the power of brown fat in humans to combat obesity and accompanying metabolic disease, but this research is still in its relative infancy.

To help answer these questions, the NIH has announced grant opportunities to identify conditions that trigger the “browning” of white fat, or increase quantity of brown fat in humans, find ways of testing for brown fat that don’t require needle biopsies, and explore the biological functions of these fats. This push means we should be learning more about this intriguing tissue soon.

The Conversation

Desiree Wanders, Assistant Professor of Nutrition, Georgia State University

This article was originally published on The Conversation. Read the original article.


New Technology Cuts Mining Water Recycling Process from Decades to Hours [Video]

Cleaning up the water left over from mining operations can literally take generations—25 to 50 years on average—leaving billions of gallons of the precious resource locked up and useless.

Now, researchers have figured out how to trim that time dramatically—to just two to three hours. The advance could be a potential boon to mining companies, the environment, and global regions where water is scarce.

“I think the ability to save water is going to be really big, especially when you’re talking about China and other parts of the world,” says Mark Orazem, professor of chemical engineering at the University of Florida.

Mining operations use water for mineral processing, dust suppression, and slurry transport. When they’re finished with it, the water holds particles of mineral byproducts, known in the phosphate mining business as clay effluent.

In the case of phosphate mines that are so common in Florida, the clay effluent has the consistency of milk. “It looks like a solid, but if you throw a stone into it, it’ll splash,” Orazem says.

The water is pumped into enormous settling ponds—some are as large as a mile square with a depth of about 40 feet—where the particles can sink to the bottom. Florida alone is home to more than 150 square miles of such ponds, an area that would cover about half of New York City.
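The article's figures are easy to sanity-check. The sketch below verifies that a single mile-square, 40-foot-deep pond really does hold billions of gallons, and that 150 square miles is roughly half of New York City; the unit conversions and NYC's ~302 sq mi land area are assumptions, not from the article:

```python
# Sanity check on the settling-pond figures quoted in the article.
SQFT_PER_SQMI = 5280 ** 2   # square feet per square mile
GALLONS_PER_CUFT = 7.48     # approximate US gallons per cubic foot

# One large pond: about a square mile in area, roughly 40 feet deep.
pond_gallons = SQFT_PER_SQMI * 40 * GALLONS_PER_CUFT
print(f"One pond holds roughly {pond_gallons / 1e9:.1f} billion gallons")

# Florida's 150+ square miles of ponds vs. NYC's ~302 sq mi of land
# (the NYC figure is an assumed reference value).
print(f"150 sq mi is about {150 / 302:.0%} of NYC's land area")
```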

But it’s a lengthy process because the particles are electrically charged. Like charges repel and opposite charges attract. The particles’ like charge causes them to repel each other, which keeps them suspended in the water instead of sticking together and sinking to the bottom.

WATER IS ‘REUSED, AND REUSED AND REUSED’

That means mining companies can re-use the water only a bit at a time—the part skimmed off the top. Not only is the particle-filled water useless, the land those settling ponds occupy is a valuable asset that could be used for other purposes.

Ideas for speeding up that process go back centuries. In 1807, an early application of the battery invented by Volta in 1800 showed that clay particles moved in response to an electric field. In the 1990s, an electric field was used to separate clay and water in batches, but that concept was deemed uneconomical.

The new design is different because it allows a continuous feed of clay effluent into a separation system. There, upper and lower plates are used as electrodes. An electrical potential difference is applied across the electrodes, creating an electric field, which causes the charged particles to move toward the bottom, where they form a wet solid called a cake. In the cake dewatering zone, the particles can’t move, so the water is forced to the top.

The cake can then be used to fill the holes created by the mining operation, while the water is now clear enough to be reused to process mined phosphate ore.

“Instead of having the water tied up in these clay settling areas, water is sent back through the process and then reused and reused and reused,” Orazem says.

The researchers have created a lab-sized prototype and say the next step is to determine how to scale it up to a point where it can work in a real-world mine.

While the concept was designed for Florida phosphate mines, it could be used anywhere and would be especially useful in arid North Africa. In Morocco and the Western Sahara, with 85 percent of the world’s phosphate reserves, water is especially in short supply.

“Recycling water is going to be critically important,” Orazem says. “So in Florida, it’s an issue. In the desert, it’s going to be a major issue.”

After eight years, NASA’s Dawn probe brings dwarf planet Ceres into closest focus [Videos]

Marc D Rayman, NASA

More than a thousand times farther from Earth than the moon, farther even than the sun, an extraordinary extraterrestrial expedition is taking place. NASA’s Dawn spacecraft is exploring dwarf planet Ceres, which orbits the sun between Mars and Jupiter. The probe has just reached the closest point it ever will, and is now beginning to collect its most detailed pictures and other measurements on this distant orb.

Ceres is a remnant from the dawn of our solar system nearly 4.6 billion years ago. All the data Dawn is now sending back will provide insight into Ceres’ history and geology, including the presence of water, past or present. Scientists believe that by studying Ceres, we can unlock some of the secrets of the epoch in which planets, including our own, formed.

But this mission isn’t only for scientists. Discovering the nature of an uncharted world is a thrill that can be shared by anyone who has ever gazed up at the night sky in wonder, been curious about the universe and Earth’s place in it, or felt the lure of a bold adventure into the unknown.

I happen to fall into all those categories. I fell in love with space at the age of four, and I knew by the fourth grade that I wanted to earn a doctorate in physics. (It was a few more years before I did.) My passion for the exploration of space and the grandeur of scientific discovery and understanding has never wavered. It’s a dream come true for me to be the mission director and chief engineer on Dawn at JPL.

False color video of Ceres from distance of 2,700 miles, courtesy of Dawn.

Ceres before Dawn

Named for the Roman goddess of agriculture and grain, Ceres was the first dwarf planet discovered, in 1801. That’s 129 years before Pluto – and in fact, both were originally considered planets, only later to be designated dwarf planets.

Although Ceres appeared as little more than a fuzzy blob of light amidst the stars, scientists determined that it’s the behemoth of the main asteroid belt between Mars and Jupiter – nearly 600 miles in diameter. Its surface area is more than a third of the area of the continental US. Before Dawn’s arrival, Ceres was the largest object between the sun and Pluto that a spacecraft had not visited.

Since well before Dawn, we’ve had telescopic evidence that Ceres harbors water. While it’s mostly in the form of ice, scientists have good reason to believe an underground ocean once circulated. The question of whether reservoirs still lurk beneath the alien surface remains open. Dawn’s studies of Ceres may even provide hints about how Earth acquired its own supply of that precious liquid billions of years ago.

Dawn en route to Ceres


Dawn launches at dawn on September 27 2007, headed for the asteroid belt.
NASA, CC BY

In 2007, we launched Dawn from Cape Canaveral, and it will never again visit its erstwhile planetary home. In 2011, it became the only spacecraft ever to orbit an object in the main asteroid belt, devoting 14 months to scrutinizing protoplanet Vesta. Dawn showed us this second most massive resident of the belt is more closely related to the terrestrial planets (including Earth) than to the much smaller chunks of rock that are typical of asteroids.

The unique capability to travel to worlds beyond Mars, enter orbit and maneuver extensively and then depart for yet another destination is achieved with advanced ion propulsion. The technology spent much of its history in the domain of sci-fi, including Star Trek and Star Wars. (Darth Vader’s TIE Fighter is named for its twin ion engines.) But what may have seemed only science fiction is science fact. Without its three ion engines (note that Dawn does the TIE Fighters one better), Dawn’s mission wouldn’t be possible.


A gridded ion thruster uses electrical energy to create, accelerate and neutralize positively charged ions to generate thrust.

The ion engines use xenon gas, a chemical cousin of helium and neon. With electrical power from Dawn’s large solar panels, the xenon is given an electrical charge in a process called ionization. The engines use high voltage to accelerate the ions, which are then shot out of the engines at up to 90,000 mph. As the ions leave at this fantastically high speed, the spacecraft is pushed in the opposite direction. Dawn’s ion propulsion system is exceptionally efficient – 10 times as efficient as conventional spacecraft propulsion. It’s comparable to your car getting 250 miles per gallon.
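That efficiency claim can be roughly checked from the exhaust speed alone. Propellant efficiency is conventionally expressed as specific impulse (exhaust velocity divided by standard gravity); the ~450-second figure for a good chemical rocket used below is an assumed typical value, not from the article:

```python
# Rough check of the "10 times as efficient" claim using specific impulse.
MPH_TO_MS = 0.44704   # meters per second in one mile per hour
G0 = 9.80665          # standard gravity, m/s^2

ion_exhaust_ms = 90_000 * MPH_TO_MS  # article's figure for Dawn's ion exhaust
ion_isp = ion_exhaust_ms / G0        # specific impulse in seconds (~4,100 s)
chemical_isp = 450                   # assumed typical high-performance chemical engine

print(f"Ion Isp ≈ {ion_isp:.0f} s, roughly {ion_isp / chemical_isp:.0f}x chemical")
```

The ratio comes out close to ten, in line with the article's comparison.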


Artist’s conception of the Dawn spacecraft arriving at Ceres. The engine’s xenon ions glow with blue light.
NASA/JPL-Caltech, CC BY

Dawn drops into Cerean orbit

Finally, after a journey of more than seven years and three billion miles, our interplanetary ambassador reached Ceres on March 6 2015, and gracefully entered the dwarf planet’s permanent gravitational embrace.

Mission controllers at JPL then piloted the craft to three orbits at successively lower altitudes, so we could first obtain an overview and then gain better and better views of this vast unexplored territory. And Dawn has just performed the penultimate act in its grand celestial choreography. It’s spent the last seven weeks maneuvering to its lowest altitude. Orbiting now about 240 miles above the exotic terrain of rock and ice, Dawn is closer to Ceres than the International Space Station is to Earth.

Dawn brings Ceres into focus

Included in the spacecraft’s suite of sophisticated sensors is a camera that has already taken 10,000 pictures of alien landscapes on Ceres. Following from Ceres’ own name, features Dawn discovers are named for agricultural deities and festivals from around the world.

We see rugged terrain and smooth areas, sometimes with streaks of material that’s flowed across it. There are craters large and small, created by billions of years of assaults in the rough-and-tumble neighborhood of the asteroid belt. We see mountains and valleys, huge fissures in the ground and bright spots that glow with a mysterious luster, reflecting much more sunlight than most of the dark surface.

The most striking of these shining regions, inside the 55-mile-wide Occator Crater (named for the Roman deity of harrowing), is so bright that the Hubble Space Telescope detected a hint of it a decade ago. Dawn’s pictures to date have been more than 200 times sharper than Hubble’s. The images we’re starting to get back now will be even better, revealing 850 times the detail that Hubble had provided.


Dawn took this image in its low-altitude mapping orbit from an approximate distance of 240 miles (385 kilometers) from Ceres on December 10.
NASA/JPL-Caltech/UCLA/MPS/DLR/IDA, CC BY

Dawn has shown us a mountain named Ahuna Mons that towers more than 20,000 feet in an otherwise unremarkable area, comparable to the elevation of North America’s tallest peak, Mt Denali. (Ahuna is a celebration of thanksgiving for the harvest among the Sumis of northeast India.) Bright streaks seem to suggest some unidentified material once flowed down the steep slopes of Ahuna Mons. While scientists have not yet determined what forces and processes shaped this conical mountain, it doesn’t take a geologist to notice its resemblance to terrestrial volcanic cones. Imagine what it might have been like to witness an eruption of some strange combination of water and other chemicals on this cold, distant world.

Beyond photos, Dawn will take a great many other measurements from its new orbital perch before its mission concludes in 2016. It will measure radiation to help scientists determine what types of atoms are present on Ceres. It will use infrared light to identify the minerals on Ceres’ surface. And it will gauge subtle variations in the gravitational field to reveal the interior structure of the dwarf planet.

Once the spacecraft exhausts the small supply of conventional rocket propellant it squirts through thrusters to control its orientation in the zero-gravity, frictionless conditions of spaceflight, it will no longer be able to point its solar arrays at the sun, its antenna at Earth, its sensors at Ceres or its ion engines in the direction needed to travel elsewhere. But the ship will remain in orbit around Ceres as surely as the moon remains in orbit around Earth and Earth remains in orbit around the sun. Its legacy in the history of our efforts to reach out from our humble home to touch the stars is secure. Dawn will become an inert celestial monument to humankind’s creativity, ingenuity, and passion for exploring the cosmos.


This part of Ceres, near the south pole, has such long shadows because, from the perspective of this location, the sun is near the horizon. When Dawn took this image on December 10, the sun was 4 degrees north of the equator. If you were standing this close to Ceres’ south pole, the sun would never get high in the sky during the course of a nine-hour Cerean day.
NASA/JPL-Caltech/UCLA/MPS/DLR/IDA, CC BY

The Conversation

Marc D Rayman, Dawn Chief Engineer and Mission Director at JPL, NASA

This article was originally published on The Conversation. Read the original article.

Featured Image Credit:  NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Underwater Ruins of Greek Harbor are Full of Surprises [Video]

Researchers have made some surprising discoveries while investigating the underwater ruins of Lechaion, ancient Corinth’s partially submerged harbor town.

Lechaion was one of two bustling ports of the ancient city of Corinth. The harbor saw vibrant maritime activity for more than a thousand years, from the 6th century BCE to the 6th century CE. Ships and fleets departed filled with cargoes, colonists, and marines destined for ports all over the Mediterranean and beyond.

Aerial photo of the Western Mole (Credit: K. Xenikakis & S. Gesafidis).

“According to ancient sources, most of the city’s wealth derived from the maritime trade that passed through her two harbors, eventually earning her the nickname ‘Wealthy Corinth’,” says archaeologist Bjørn Lovén from the University of Copenhagen and co-director of the Lechaion Harbor Project (LHP).

The project, a collaboration between the Ephorate of Underwater Antiquities in Greece, the University of Copenhagen, and the Danish Institute at Athens, is exploring the submerged main harbor of ancient Corinth.

The research team has initiated full-scale excavations and a digital and geophysical survey of the seaward side of the harbor using various innovative technologies, including a newly developed 3D parametric sub-bottom profiler. To date they have uncovered two monumental moles constructed of ashlar blocks, along with a smaller mole, two areas of wooden caissons, a breakwater, and an entrance canal that leads into Lechaion’s three inner harbor basins.


How Two Tones Can Sound Different Depending On Where You Were Born

If researcher Elizabeth Petitti played two musical notes from her laptop, some people would hear the notes rise in pitch, while others would hear them fall. Why the difference?

The answer may improve our understanding of how our auditory system develops, and may help speech-language pathologists who work with people who have hearing impairment.

Petitti says the answer comes down to the way our brains perceive two components that make up sound: fundamental frequency and harmonics.

A note’s fundamental frequency is the primary element of sound from which our brains derive pitch—the highness or lowness of a note. Harmonics give a note its timbre, the quality that makes instruments sound distinct from one another.

Many sounds in the world are made up of these tones, whether you strike a key on a keyboard, play a note on a clarinet, or say a letter, says Petitti, who graduated from Boston University’s Sargent College of Health & Rehabilitation Sciences with a master’s in speech-language pathology.

Our brains expect the fundamental and the harmonics to be present in any given note. But when some of this information drops out, “the way you perceive the note can change in surprising ways,” says Petitti’s mentor, Tyler Perrachione, a professor at Sargent and director of the Communication Neuroscience Research Laboratory.

‘PITCH EXISTS ONLY IN OUR MINDS’

Petitti explains that when she removes the fundamental from a tone (using signal processing software), and then plays that note, the listener’s brain automatically supplies the pitch. People’s brains deliver this information in different ways: They either fill in the missing fundamental frequency—similar to the way the brain would compensate for a blind spot in our eye—or they determine the pitch from the harmonics.
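The missing-fundamental effect Petitti describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the study’s actual stimuli: the 220 Hz base frequency and the number of harmonics are assumptions chosen for the example.

```python
import numpy as np

def complex_tone(f0, harmonics, sr=44100, dur=0.5, include_fundamental=True):
    """Synthesize a complex tone from a fundamental frequency and its harmonics."""
    t = np.arange(int(sr * dur)) / sr
    start = 1 if include_fundamental else 2  # dropping harmonic 1 removes the fundamental
    tone = sum(np.sin(2 * np.pi * f0 * n * t) for n in range(start, harmonics + 1))
    return tone / np.max(np.abs(tone))  # normalize to [-1, 1]

# A 220 Hz tone stripped of its fundamental: only 440, 660, 880, 1100, and
# 1320 Hz partials remain, yet most listeners still perceive a 220 Hz pitch.
stripped = complex_tone(220, harmonics=6, include_fundamental=False)

# The spectrum confirms there is no energy at 220 Hz; the even 220 Hz spacing
# between the remaining partials is what the brain can use to infer the pitch.
spectrum = np.abs(np.fft.rfft(stripped))
freqs = np.fft.rfftfreq(len(stripped), 1 / 44100)
peak_freqs = freqs[spectrum > 0.1 * spectrum.max()]
print(peak_freqs.min())  # lowest remaining partial: the 2nd harmonic at 440 Hz
```

Writing `stripped` out to a sound file and listening to it is the quickest way to experience the effect yourself: the tone still “sounds like” 220 Hz even though that frequency is physically absent.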

Here’s where it gets interesting: When two different tones that have been stripped of their fundamentals are played in succession, some listeners hear their pitch rising, and some hear it falling. Who’s right?

“There’s no right answer,” Perrachione says. “Pitch only exists in our minds. It’s a perceptual quality.” So, how exactly do we determine pitch? It turns out the language we speak plays a role.

DOES NATIVE LANGUAGE MATTER?

Petitti and Perrachione theorized that individuals who grew up speaking a tone language like Mandarin would perceive pitch differently than those who grew up speaking a non-tone language like English. In Mandarin, for example, a word often has several meanings, depending on how the speaker employs pitch; mā (with a level tone) means “mother,” while mǎ (which drops, then rises in tone) means “horse.”

To test this theory, Petitti invited 40 native English speakers and 40 native tone-language speakers to participate in a study, which she and Perrachione presented at the International Congress of Phonetic Sciences in August 2015. Each participant listened to 72 pairs of tones stripped of their fundamental frequencies, and then indicated whether the tones were moving up or down.

Petitti and Perrachione found that language does change the way we hear. Individuals who grow up speaking English are more attuned to a note’s harmonics, while the tone-language speakers are more attuned to its fundamental. So, when a note is stripped of that component, they’re more likely to derive pitch by supplying the missing fundamental than by listening to the harmonics still present in the note.

MUSICAL TRAINING AND LEARNING LANGUAGE

These results led Petitti and Perrachione to wonder if the difference in pitch perception is grounded in our earliest language acquisition, or if other experiences can also affect how our brains process sound. For instance, would musicians—who also rely on pitch—perceive sound the same way as tone-language speakers?

When they put the question to the test, Petitti and Perrachione found that neither the age at which musicians began studying nor the number of years they had practiced affected their perception of pitch. To Petitti, this suggests the way we listen is determined by our earliest brain development. While you may begin learning an instrument as early as age three, “you start language learning from birth,” she says. “So your auditory system is influenced by the language you are exposed to from day one.”

It’s not just theoretical. “Big picture: we are interested in how brains change with experience and how our experiences predispose us to certain auditory skills,” Perrachione says. This understanding could “help us better understand the opposite, when things don’t work quite right,” such as when a person has a disorder like amusia (tone deafness).

Petitti underscores the study’s potential clinical impact; in her career as a speech-language pathologist, she intends to work with clients who have hearing impairments, which will involve teaching them to perceive and use pitch. This ability is “crucial when you’re teaching how to ask a question, and how to use pitch to signal the difference between words,” she says—all skills we typically begin to develop early and unconsciously.

Republished as a derivative work from Futurity.org under an Attribution 4.0 International license.

Featured Image Credit:  Thorbjørn Kühl/Flickr
