Here’s How Octopuses See Color Differently than Any Other Animal [Video]

Biologists have puzzled for decades over the paradox of octopus vision. Despite their brilliantly colored skin and ability to rapidly change color to blend into the background, cephalopods like octopuses and squid have eyes with only one type of light receptor—which basically means they see only black and white.

Why would a male risk flashing his bright colors during a mating dance if the female can’t even see him—but a nearby fish can, and quickly gulps him down? And how could these animals match the color of their skin with their surroundings as camouflage if they can’t actually see the colors?

A new study shows that cephalopods may actually be able to see color—just differently from any other animal.

Their secret? An unusual pupil—U-shaped, W-shaped, or dumbbell-shaped—that allows light to enter the eye through the lens from many directions, rather than just straight into the retina.

Chromatic aberration

Humans and other mammals have eyes with round pupils that contract to pinholes to give us sharp vision, with all colors focused on the same spot. But as anyone who’s been to the eye doctor knows, dilated pupils not only make everything blurry, but create colorful fringes around objects—what is known as chromatic aberration.

This is because the transparent lens of the eye—which in humans changes shape to focus light on the retina—acts like a prism and splits white light into its component colors. The larger the pupillary area through which light enters, the more the colors are spread out. The smaller our pupil, the less the chromatic aberration. Camera and telescope lenses similarly suffer from chromatic aberration, which is why photographers stop down their lenses to get the sharpest image with the least color blurring.

Cephalopods, however, evolved wide pupils that accentuate the chromatic aberration and might have the ability to judge color by bringing specific wavelengths to a focus on the retina, much the way animals like chameleons judge distance by using relative focus. They focus these wavelengths by changing the depth of their eyeball, altering the distance between the lens and the retina, and moving the pupil around to change its off-axis location and thus the amount of chromatic blur.
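To get a feel for how such a scheme could work, here is a small numerical sketch. It is a toy model, not the authors’ published simulation: the dispersion constants, focal length, and pupil size are illustrative assumptions. A thin lens’s focal length is made to depend on wavelength, so sliding the retina relative to the lens brings different colors into sharpest focus, and a wider pupil scales up the color-dependent blur.

```python
# Toy illustration of wavelength-dependent focus in a simple eye.
# NOT the published model: the dispersion constants, focal length, and pupil
# size below are made-up, order-of-magnitude assumptions for illustration.

def refractive_index(wavelength_nm, A=1.50, B=8000.0):
    """Cauchy-style dispersion: shorter (bluer) wavelengths are bent more."""
    return A + B / wavelength_nm**2

def focal_length_mm(wavelength_nm, f_ref_mm=10.0, ref_nm=550.0):
    """Thin-lens focal length, scaled so f = f_ref at the reference wavelength.
    From the lensmaker's equation, f is proportional to 1 / (n - 1)."""
    n_ref = refractive_index(ref_nm)
    n = refractive_index(wavelength_nm)
    return f_ref_mm * (n_ref - 1.0) / (n - 1.0)

def blur_diameter_mm(wavelength_nm, retina_distance_mm, pupil_mm):
    """Geometric blur-circle diameter for a distant point source:
    proportional to pupil size and to the relative focus error at the retina."""
    f = focal_length_mm(wavelength_nm)
    return pupil_mm * abs(retina_distance_mm - f) / f

if __name__ == "__main__":
    wavelengths = [450, 550, 650]        # blue, green, red (nm)
    for retina_mm in (9.8, 10.0, 10.2):  # lens-to-retina distance being scanned
        blur = {w: blur_diameter_mm(w, retina_mm, pupil_mm=3.0) for w in wavelengths}
        sharpest = min(blur, key=blur.get)
        print(f"retina at {retina_mm:4.1f} mm -> sharpest ~{sharpest} nm; "
              + ", ".join(f"{w} nm: {b:.3f} mm blur" for w, b in blur.items()))
    # Doubling pupil_mm doubles every blur circle, exaggerating the
    # color-dependent differences the study proposes cephalopods exploit.
```

Scanning the lens-to-retina distance in this toy setup picks out blue, then green, then red as the sharpest wavelength, which is the sense in which accommodation could double as a color probe.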

“We propose that these creatures might exploit a ubiquitous source of image degradation in animal eyes, turning a bug into a feature,” says Alexander Stubbs, a graduate student at the University of California, Berkeley. “While most organisms evolve ways to minimize this effect, the U-shaped pupils of octopus and their squid and cuttlefish relatives actually maximize this imperfection in their visual system while minimizing other sources of image error, blurring their view of the world but in a color-dependent way and opening the possibility for them to obtain color information.”

How U-shaped pupils work

Stubbs came up with the idea that cephalopods could use chromatic aberration to see color after photographing lizards that display with ultraviolet light, and noticing that UV cameras suffer from chromatic aberration. He teamed up with his father, Christopher Stubbs, professor of physics and of astronomy at Harvard University, to develop a computer simulation to model how cephalopod eyes might use this to sense color. Their findings appear in the Proceedings of the National Academy of Sciences.

They concluded that a U-shaped pupil like that of squid and cuttlefish would allow the animals to determine the color based on whether or not it was focused on its retina. The dumbbell-shaped pupils of many octopuses work similarly, since they’re wrapped around the eyeball in a U shape and produce a similar effect when looking down. This may even be the basis of color vision in dolphins, which have U-shaped pupils when contracted, and jumping spiders.

“Their vision is blurry, but the blurriness depends on the color,” Stubbs says. “They would be comparatively bad at resolving white objects, which reflect all wavelengths of light. But they could fairly precisely focus on objects that are purer colors, like yellow or blue, which are common on coral reefs and rocks and algae. It seems they pay a steep price for their pupil shape but may be willing to live with reduced visual acuity to maintain chromatically-dependent blurring, and this might allow color vision in these organisms.”

“We carried out extensive computer modeling of the optical system of these animals, and were surprised at how strongly image contrast depends on color,” says Christopher Stubbs. “It would be a shame if nature didn’t take advantage of this.”

Not enough contrast

Alexander Stubbs extensively surveyed 60 years of studies of color vision in cephalopods, and discovered that, while some biologists had reported an ability to distinguish colors, others reported the opposite.

The negative studies, however, often tested the animal’s ability to see solid colors or edges between two colors of equal brightness, which is hard for this type of eye because, as with a camera, it’s hard to focus on a solid color with no contrast. Cephalopods are best at distinguishing the edges between dark and bright colors, and in fact, their display patterns are typically regions of color separated by black bars.

“We believe we have found an elegant mechanism that could allow these cephalopods to determine the color of their surroundings, despite having a single visual pigment in their retina,” he says. “This is an entirely different scheme than the multi-color visual pigments that are common in humans and many other animals. We hope this study will spur additional behavioral experiments by the cephalopod community.”

 

UC Berkeley’s Museum of Vertebrate Zoology, a Graduate Research Fellowship Program grant to Alexander Stubbs, and Harvard University supported the work. Their work is published in the Proceedings of the National Academy of Sciences.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Robert Sanders, UC Berkeley.

Now, Check Out:

Fish Have Evolved Venom 18 Times: Why That’s Important to Medical Research

Venom has evolved 18 separate times in fresh and saltwater fishes, according to a new paper that catalogs instances of venomous aquatic life.

The ability to produce and inject toxins into another animal is so useful, it has evolved multiple times in creatures ranging from jellyfishes to spiders, shrews to the male platypus.

The paper, published in the journal Integrative and Comparative Biology, also finds:

  • In contrast to squamates like lizards and snakes, very few fishes have evolved venomous fangs or teeth.
  • The predominant function for venom in fishes is defense rather than offense.
  • Venom in freshwater is dominated by catfishes, as opposed to marine environments where it is widespread across many groups.
  • Venom is surprisingly common in deep-sea sharks (30 percent of venomous sharks are deep-sea species) compared to deep-sea bony fishes (just 5 percent of venomous bony fishes live in the deep sea).

“For the first time ever, we looked at the evolution of venom across all fishes,” says lead author William Leo Smith, assistant curator at the University of Kansas Biodiversity Institute. “Nobody had attempted to look across all fishes. Nobody had done sharks or included eels. Nobody had looked at them all and included all fishes in an evolutionary tree at the same time.”

“Venom is a neurotoxin. The average response is incredible pain and swelling.”

Smith and coauthors spent years combing medical reports of people exposed to venom from fishes. Then the team assembled the family trees for those fish, using specimens from natural history museums to trace evidence of venom through closely related species.

“We figured out what a venom gland looks like in a known-venomous animal and what it looks like in all related groups,” says Smith. “For instance, relatives of yellowtail that people eat as sushi were reported as venomous, and we were able to find venom glands in their spines.”

According to Smith, the 18 independent evolutions of venom each pose an opportunity for drug makers to derive therapies for a host of human ailments.

“Fish venoms are often super complicated, big molecules that have big impact,” he says. “Venom can have impacts on blood pressure, cause local necrosis, breakdown of tissue and blood, and hemolytic activity—it prevents clotting to spread venom around prey. Venom is a neurotoxin. The average response is incredible pain and swelling.”

According to Smith, because fishes have to live with their own venom, “there might be helper molecules that protect the fishes themselves and help them survive.” He says these also could have therapeutic value to people.

Smith says that up to 95 percent of venomous fish use their toxins defensively, usually gathering venom within their dorsal spines, where it can be deployed in case the fish is crushed or another fish attempts to swallow it.

Some, however, use venom offensively to debilitate their prey and can sometimes injure people.

“Invasive lionfishes will orient themselves in a strange way and ram themselves at people,” Smith says. “One-jawed eels have lost the upper jaw, but with the lower one they slam prey up into a modified fang. Their venom gland sits right above the brain.”

Smith studies biological traits—like the ability to make venom, fly, or produce light—that have evolved separately in different lines of species. He says venom is distinct from poison because it typically won’t harm an organism ingesting it; venom works when it comes into contact externally, something Smith himself has experienced when working in the pet trade (where most US venom exposure occurs), cleaning his own fish tank, or collecting fishes during fieldwork.

“I’ve never been offensively stung,” Smith says. “The problem is they hide in the rocks in your fish tank, and you move the rocks. Or, when you’re collecting fishes. You’re out there in the water with a mesh bag and a spear, trying to get venomous things, and then a wave hits you and drives the bag of spines into your chest, and you say, ‘Ah, I regret that.’”

The team’s findings were published in the journal Integrative and Comparative Biology.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Brendan Lynch, University of Kansas.

Featured Image Credit: Alexander Vasenin, CC BY-SA 3.0 

Now, Check Out:

Mom’s Unhealthy Diet Can Affect up to Three Generations

Moms who eat high-fat, high-sugar diets may be putting future generations at risk for metabolic problems, even when their offspring eat healthy diets, a new study with mice suggests.

“More than two-thirds of reproductive-age women in the United States are overweight or obese.”

While other studies have linked a woman’s health in pregnancy to her child’s weight later in life, the new research is the first to indicate that even before becoming pregnant, a woman’s obesity can cause genetic abnormalities that subsequently are passed through the female bloodline to at least three generations, increasing the risk of obesity-related conditions such as type 2 diabetes and heart disease.

“Our findings indicate that a mother’s obesity can impair the health of later generations,” says senior author Kelle H. Moley, professor of obstetrics and gynecology at Washington University in St. Louis. “This is particularly important because more than two-thirds of reproductive-age women in the United States are overweight or obese.”

The research shows that a mother’s obesity—and its associated metabolic problems—can be inherited through mitochondrial DNA present in the unfertilized oocyte, or egg. Mitochondria often are referred to as the powerhouses of cells because they supply energy for metabolism and other biochemical processes. These cellular structures have their own sets of genes, inherited only from mothers, not fathers.

“Our data are the first to show that pregnant mouse mothers with metabolic syndrome can transmit dysfunctional mitochondria through the female bloodline to three generations,” Moley says. “Importantly, our study indicates oocytes—or mothers’ eggs—may carry information that programs mitochondrial dysfunction throughout the entire organism.”

For the study, published in the journal Cell Reports, researchers fed mice a high-fat, high-sugar diet composed of about 60 percent fat and 20 percent sugar from six weeks prior to conception until weaning. “This mimics more of the Western diet,” Moley says. “Basically, it’s like eating fast food every day.”

Offspring then were fed a controlled diet of standard rodent chow, which is high in protein and low in fat and sugar. Despite the healthy diet, the pups, grand pups, and great-grand pups developed insulin resistance and other metabolic problems. Researchers found abnormal mitochondria in muscle and skeletal tissue of the mice.

“It’s important to note that in humans, in which the diets of children closely mirror those of their parents, the effects of maternal metabolic syndrome may be greater than in our mouse model,” Moley says.

More research is needed to determine if a consistent diet low in fat and sugar, as well as regular exercise, may reverse genetic metabolic abnormalities.

“In any case, eating nutritiously is critical,” Moley says. “Over the decades, our diets have worsened, in large part due to processed foods and fast foods. We’re seeing the effects in the current obesity crisis. Research, including this study, points to poor maternal nutrition and a predisposition to obesity.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.

Now, Check Out:

Apparently the Oceans are Full of Glowing Fish

Scientists say bioluminescence—the production of light from a living organism—is more widespread among marine fishes than they thought. And it seems they have a variety of ways to glow.

But how bioluminescence evolved in fishes is still a bit of a mystery.

Most people are familiar with bioluminescence in fireflies, but the phenomenon is found throughout the ocean, including in fishes. Indeed, the authors show with genetic analysis that bioluminescence has evolved independently 27 times in 14 major fish clades—groups of fish that come from a common ancestor.

“Bioluminescence is a way of signaling between fishes, the same way that people might dance or wear bright colors at a nightclub,” says W. Leo Smith, assistant curator with the University of Kansas Biodiversity Institute, who coauthored the study in PLOS ONE. He added that some fish also are thought to use bioluminescence as camouflage.

Smith says the huge variety in ways bony fish can deploy bioluminescence—such as leveraging bioluminescent bacteria, channeling light through fiber-optic-like systems or using specialized light-producing organs—underlines the importance of bioluminescence to vertebrate fish in a major swath of the world’s deep seas called the “deep scattering layer.”

“When things evolve independently multiple times, we can infer that the feature is useful,” Smith says. “You have this whole habitat where everything that’s not living at the top or bottom of the ocean or along the edges—nearly every vertebrate living in the open water—around 80 percent of those fish species are bioluminescent.

“So this tells us bioluminescence is almost a requirement for fishes to be successful.”

The most common vertebrate species on the planet lives within this habitat and is bioluminescent.

“The bristlemouth is the most abundant vertebrate on Earth,” Smith says. “Estimates of the size are thousands of trillions of bristlemouth fish in the world’s oceans.”

An evolutionary advantage?

Smith and colleagues Matthew P. Davis of St. Cloud State University and John S. Sparks of the American Museum of Natural History found all fish they examined evolved bioluminescence between the Early Cretaceous, some 150 million years ago, and the Cenozoic Era.

Further, the team shows that once an evolutionary line of fish developed the ability to produce light, it tended soon thereafter to branch into many new species.

“Many fish proliferate species when they evolve this trait—they differentiate, but we don’t know why,” Smith says. “In the ocean, there are no physical barriers to separate groups of deep sea fishes, so why are there so many species of anglerfishes, for example?

“When they start using bioluminescence for species recognition, they diversify into a lot more species.”

To follow this line of inquiry, Smith and his coauthors now are working with a grant from the National Science Foundation to identify specific genes associated with the production of bioluminescence in fish.

In May, Smith and his two colleagues returned from a trip aboard a chartered vessel off the West Coast, where they collected samples of bioluminescent fish for analysis.

“We need fresh specimens for modern genetic approaches,” he says. “We’ll catch fishes and look at their mRNA to see what genes are being expressed. In the groups that produce their own light, we want to get the mRNA from the light organs themselves. With this information we can begin to trace the variation within the system, including the possibility of uncovering how this system evolved.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Brendan Lynch, University of Kansas.

Now, Check Out:

Early-onset Alzheimer’s: should you worry?

By Troy Rohn, Boise State University.

You have forgotten where you put your car keys, or you can’t seem to remember the name of your colleague you saw in the grocery store the other day. You fear the worst, that maybe these are signs of Alzheimer’s disease.

You’re not alone: a recent study that asked Americans age 60 or older which condition they were most afraid of getting found that the number one fear was Alzheimer’s or dementia (35 percent), followed by cancer (23 percent) and stroke (15 percent).

And when we hear of someone like legendary basketball coach Pat Summitt (pictured with President Obama in this article’s featured image) dying on June 28 from early-onset Alzheimer’s at age 64, fears are heightened.

Losing your keys every once in a while does not mean you have Alzheimer’s Disease.
From www.shutterstock.com

Memory loss is normal; Alzheimer’s is not

Alzheimer’s is an irreversible, progressive brain disease that slowly destroys memory and thinking skills, leading to cognitive impairment that severely affects daily living. Often the terms Alzheimer’s and dementia are used interchangeably and although the two are related, they are not the same. Dementia is a general term for the loss of memory or other mental abilities that affect daily life. Alzheimer’s is a cause of dementia, with over 70 percent of all dementia cases occurring as a result of Alzheimer’s.

The majority of Alzheimer’s cases occur in people aged 65 years or older.

Slight memory loss is a normal consequence of aging, and people therefore should not be overly concerned if they lose their keys or forget the name of a neighbor at the grocery store. If these things happen infrequently, there is scant reason to worry. You most likely do not have Alzheimer’s if you simply forgot one time where you parked upon leaving Disneyland or the local mall during the holidays.

How do you know when forgetfulness is part of the normal aging process and when it could be a symptom of Alzheimer’s? Here are 10 early signs and symptoms of Alzheimer’s disease.

A key point to consider is whether these symptoms significantly affect daily living. If so, then Alzheimer’s disease might be the cause.

For every one of these 10 symptoms of Alzheimer’s, there is also a typical age-related change that is not indicative of Alzheimer’s disease. For example, an early symptom of Alzheimer’s is memory loss including forgetting important dates or events and asking for the same information numerous times over. A typical age-related change may be sometimes forgetting names and appointments, but remembering them later.

People frequently ask if they might be afflicted with the disease if a grandparent had Alzheimer’s. The majority of Alzheimer’s cases occur in people aged 65 years or older. These individuals are classified as having what is known as late-onset Alzheimer’s. In late-onset Alzheimer’s, the cause of the disease is unknown (i.e., sporadic), although advancing age and inheriting certain genes may play an important role. Importantly, although there are several known genetic risk factors associated with late-onset Alzheimer’s, inheriting any one of these genes does not guarantee a diagnosis of Alzheimer’s as one advances in age.

Early-onset is rare – but heredity does play an important role

In fact, fewer than 5 percent of the 5 million cases are a direct result of hereditary mutations (the familial form of Alzheimer’s). Inheriting these rare genetic mutations leads to what is known as early-onset Alzheimer’s, which is characterized by an earlier age of onset, often in the 40s and 50s, and is a more aggressive form of the disease that leads to a more rapid decline in memory and cognition.

In general, most neurologists agree that early-onset and late-onset Alzheimer’s are essentially the same disease, apart from the differences in genetic cause and age of onset. The one exception is the prevalence of a condition called myoclonus (muscle twitching and spasm) that is more commonly observed in early-onset Alzheimer’s disease than in late-onset Alzheimer’s disease.

In addition, some studies suggest that people with early-onset Alzheimer’s decline at a faster rate than those with late-onset. Even though, generally speaking, the two forms of Alzheimer’s are medically equivalent, the large burden early-onset places on the family is quite evident. Often these patients are still in the most productive phases of their life, and yet the onset of the disease robs them of brain function at such a young age. These individuals may still be physically fit and active when diagnosed and more often than not still have family and career responsibilities. Therefore, a diagnosis of early-onset may have a greater negative ripple effect on the patient as well as family members.

Although the genes giving rise to early-onset Alzheimer’s are extremely rare, these inherited mutations do run in families worldwide and the study of these mutations has provided critical knowledge to the molecular underpinnings of the disease. These familial forms of Alzheimer’s result from mutations in genes that are typically defined as being autosomal dominant, meaning that you only need to have one parent pass on the gene to their child. If this happens, there is no escape from an eventual Alzheimer’s diagnosis.
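For readers who want the inheritance arithmetic spelled out, the short sketch below simulates that transmission. The 50 percent chance per child is standard Mendelian segregation for a parent carrying one copy of a dominant mutation (a textbook fact, not a finding of this article), full penetrance is assumed to match the “no escape” description above, and the family size is hypothetical.

```python
import random

# Minimal sketch of autosomal dominant transmission, the pattern described
# above for familial early-onset Alzheimer's. The 50 percent chance per child
# is standard Mendelian segregation for a parent carrying one mutant copy;
# full penetrance is assumed here, matching the "no escape" description.
# The family size is hypothetical.

def simulate_children(n_children, seed=0):
    """Each child independently has a 50% chance of inheriting the mutant copy."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(n_children)]

if __name__ == "__main__":
    for i, inherited in enumerate(simulate_children(n_children=4), start=1):
        outcome = "inherited the mutation" if inherited else "did not inherit it"
        print(f"Child {i}: {outcome}")
```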

What scientists have learned from these rare mutations that cause early-onset Alzheimer’s is that in every case the gene mutation leads to the overproduction of a rogue, toxic protein called beta-amyloid. The buildup of beta-amyloid in the brain produces plaques that are one of the hallmarks of the disease. Just as plaques in arteries can harm the heart, plaques in the brain can have dire consequences for brain function.

By studying families with early-onset Alzheimer’s, scientists now realize that the buildup of beta-amyloid can happen decades before the first symptoms of the disease manifest. This gives scientists tremendous hope in terms of a large therapeutic window in which to intervene and stop the beta-amyloid cascade.

Hope is high for a large trial now underway in a 5,000-member family

Indeed, one of the most anticipated clinical trials underway at this moment involves a large Colombian family of over 5,000 members who may carry an early-onset Alzheimer’s gene. Three hundred family members will participate in the trial. Half of the participants who are young, still years away from symptoms, but carry the Alzheimer’s gene will receive a drug that has been shown to decrease the production of beta-amyloid. The other half will take a placebo and will comprise the control group.

Neither patients nor doctors will know who is receiving the active drug, which helps eliminate potential biases. The trial will last five years, and although it will involve only a small percentage of people with early-onset Alzheimer’s, the information it yields could be applied to the millions of people worldwide who will develop the more conventional, late-onset form of Alzheimer’s disease.

Currently there are no effective treatments or cure for Alzheimer’s and the only medications available are palliative in nature. What is critically needed are disease-modifying drugs: those drugs that actually stop the beta-amyloid in its tracks. Devastating as early-onset Alzheimer’s is, there is hope that prevention trials as described above could ultimately lead to effective treatments in the near future for this insidious disease.

Troy Rohn, Professor of Biology, Boise State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Moving beyond pro/con debates over genetically engineered crops

By Pamela Ronald, University of California, Davis.

Since the 1980s biologists have used genetic engineering to express novel traits in crop plants. Over the last 20 years, these crops have been grown on more than one billion acres in the United States and globally. Despite their rapid adoption by farmers, genetically engineered (GE) crops remain controversial among many consumers, who have sometimes found it hard to obtain accurate information.

Last month the U.S. National Academies of Sciences, Engineering, and Medicine released a review of 20 years of data regarding GE crops. The report largely confirms findings from previous National Academies reports and reviews produced by other major scientific organizations around the world, including the World Health Organization and the European Commission.

I direct a laboratory that studies rice, a staple food crop for half the world’s people. Researchers in my lab are identifying genes that control tolerance to environmental stress and resistance to disease. We use genetic engineering and other genetic methods to understand gene function.

I strongly agree with the NAS report that each crop, whether bred conventionally or developed through genetic engineering, should be evaluated on a case-by-case basis. Every crop is different, each trait is different and the needs of each farmer are different too. More progress in crop improvement can be made by using both conventional breeding and genetic engineering than using either approach alone.

Modern cultivated corn was domesticated from teosinte, an ancient grass, over more than 6,000 years through conventional breeding.
Nicole Rager Fuller, National Science Foundation

Convergence between biotech and conventional breeding

New molecular tools are blurring the distinction between genetic improvements made with conventional breeding and those made with modern genetic methods. One example is marker assisted breeding, in which geneticists identify genes or chromosomal regions associated with traits desired by farmers and/or consumers. Researchers then look for particular markers (patterns) in a plant’s DNA that are associated with these genes. Using these genetic markers, they can efficiently identify plants carrying the desired genetic fingerprints and eliminate plants with undesirable genetics.
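As a concrete, deliberately simplified illustration of that selection step, the sketch below filters a handful of hypothetical plant records by their genotype at a single made-up marker; real marker-assisted breeding pipelines work from genotyping data across many markers and far more individuals.

```python
# Toy sketch of marker-assisted selection: keep only plants whose DNA shows
# the marker allele linked to a desired trait (e.g., flooding tolerance).
# The marker name, allele codes, and plant records are all hypothetical.

DESIRED_MARKER = "marker_X"   # hypothetical marker linked to the target gene
DESIRED_ALLELE = "A"          # allele pattern associated with the trait

plants = [
    {"id": "plant-01", "genotype": {"marker_X": "A"}},
    {"id": "plant-02", "genotype": {"marker_X": "a"}},
    {"id": "plant-03", "genotype": {"marker_X": "A"}},
]

def carries_trait_marker(plant):
    """True if the plant's genotype at the marker matches the desired allele."""
    return plant["genotype"].get(DESIRED_MARKER) == DESIRED_ALLELE

selected = [p["id"] for p in plants if carries_trait_marker(p)]
print("Advance to next breeding cycle:", selected)  # -> ['plant-01', 'plant-03']
```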

Ten years ago my collaborators and I isolated a gene, called Sub1, that controls tolerance to flooding. Millions of rice farmers in South and Southeast Asia grow rice in flood-prone regions, so this trait is extremely valuable. Most varieties of rice will die after three days of complete submergence but plants with the Sub1 gene can withstand two weeks of complete submergence. Last year, nearly five million farmers grew Sub1 rice varieties developed by my collaborators at the International Rice Research Institute using marker assisted breeding.

In another example, researchers identified genetic variants that are associated with hornlessness (referred to as “polled”) in cattle – a trait that is common in beef breeds but rare in dairy breeds. Farmers routinely dehorn dairy cattle to protect their handlers and prevent the animals from harming each other. Because this process is painful and frightening for the animals, veterinary experts have called for research into alternative options.

In a study published last month, scientists used genome editing and reproductive cloning to produce dairy cows that carried a naturally occurring mutation for hornlessness. This approach has the potential to improve the welfare of millions of cattle each year.

Reducing chemical insecticides and enhancing yield

In assessing how GE crops affect crop productivity, human health and the environment, the NAS study primarily focused on two traits that have been engineered into plants: resistance to insect pests and tolerance of herbicides.

The study found that farmers who planted crops engineered to contain the insect-resistant trait – based on genes from the bacterium Bacillus thuringiensis, or Bt – generally experienced fewer losses and applied fewer chemical insecticide sprays than farmers who planted non-Bt varieties. It also concluded that farms where Bt crops were planted had more insect biodiversity than farms where growers used broad-spectrum insecticides on conventional crops.

Genetically modified crops currently grown in the United States (IR=insect resistant, HT=herbicide tolerant, DT=drought tolerant, VR=virus resistant).
Colorado State University Extension

The committee found that herbicide-resistant (HR) crops contribute to greater yields because weeds can be controlled more easily. For example, farmers that planted HR canola reaped greater yields and returns, which led to wide adoption of this crop variety.

Another benefit of planting of HR crops is reduced tillage – the process of turning the soil. Before planting, farmers must kill the weeds in their fields. Before the advent of herbicides and HR crops, farmers controlled weeds by tilling. However, tilling causes erosion and runoff, and requires energy to fuel the tractors. Many farmers prefer reduced tillage practices because they enhance sustainable management. With HR crops, farmers can control weeds effectively without tilling.

The committee noted a clear association between the planting of HR crops and reduced-till agricultural practices over the last two decades. However, it is unclear if the adoption of HR crops resulted in decisions by farmers to use conservation tillage, or if farmers who were using conservation tillage adopted HR crops more readily.

In areas where planting of HR crops led to heavy reliance on the herbicide glyphosate, some weeds evolved resistance to the herbicide, making it difficult for farmers to control weeds using this herbicide. The NAS report concluded that sustainable use of Bt and HR crops will require use of integrated pest management strategies.

The report also discusses seven other GE food crops grown in 2015, including apple (Malus domestica), canola (Brassica napus), sugar beet (Beta vulgaris), papaya (Carica papaya), potato, squash (Cucurbita pepo) and eggplant (Solanum melongena).

Papaya is a particularly important example. In the 1950s, papaya ringspot virus wiped out nearly all papaya production on the Hawaiian island of Oahu. As the virus spread to other islands, many farmers feared that it would wipe out the Hawaiian papaya crop.

Papaya infected with ringspot virus. Scot Nelson/Flickr, CC BY-SA

In 1998 Hawaiian plant pathologist Dennis Gonsalves used genetic engineering to splice a small snippet of ringspot virus DNA into the papaya genome. The resulting genetically engineered papaya trees were immune to infection and produced 10-20 fold more fruit than infected crops. Dennis’ pioneering work rescued the papaya industry. Twenty years later, this is still the only method for controlling papaya ringspot virus. Today, despite protests by some consumers, 80 percent of the Hawaiian papaya crop is genetically engineered.

Scientists have also used genetic engineering to combat a pest called the fruit and shoot borer, which preys on eggplant in Asia. Farmers in Bangladesh often spray insecticides every 2-3 days, and sometimes as often as twice daily, to control it. The World Health Organization estimates that some three million cases of pesticide poisoning and more than 250,000 deaths occur worldwide every year.

To reduce chemical sprays on eggplant, scientists at Cornell University and in Bangladesh engineered Bt into the eggplant genome. Bt brinjal (eggplant) was introduced in Bangladesh in 2013. Last year 108 Bangladeshi farmers grew it and were able to drastically reduce insecticide sprays.

Feed the world in an ecologically based manner

Genetically improved crops have benefited many farmers, but it is clear that genetic improvement alone cannot address the wide variety of complex challenges that farmers face. Ecologically based farming approaches as well as infrastructure and appropriate policies are also needed.

Instead of worrying about the genes in our food, we need to focus on ways to help families, farmers and rural communities thrive. We must be sure that everyone can afford the food and we must minimize environmental degradation. I hope that the NAS report can help move the discussions beyond distracting pro/con arguments about GE crops and refocus them on using every appropriate technology to feed the world in an ecologically based manner.

Pamela Ronald, Professor of Plant Pathology, University of California, Davis

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

This Sex-switching Fish Mates for Life

For tiny hermaphrodite fish found in coral reefs off Panama, a lifelong monogamous relationship comes with a bit of give and take.

The pair switch reproductive roles at least 20 times a day.

The strategy allows individual fish to fertilize about as many eggs as it produces, giving the fish a reproductive edge.

“Our study indicates that animals in long-term partnerships are paying attention to whether their partner is contributing to the relationship fairly—something many humans may identify with from their own long-term relationships,” says Mary Hart, adjunct professor of biology at the University of Florida.

The duo motivate one another to contribute eggs to the relationship. If one partner lacks eggs, the other will simply match whatever it produces. The only way for a partner to convince its mate to produce more eggs is to pick up the slack and generate more itself, she says.

Scientists observed the short-lived chalk bass, Serranus tortugarum, for six months—and were surprised that every couple stayed together for the duration.

With only 3 to 5 percent of animals known to live monogamously, this is a rare find—and one of the first for a fish living in a high-density social group, says coauthor Andrew Kratter, an ornithologist with the Florida Museum of Natural History.

“I found it fascinating that fish with a rather unconventional reproductive strategy would end up being the ones who have these long-lasting relationships,” he says. “They live in large social groups with plenty of opportunities to change partners, so you wouldn’t necessarily expect this level of partner fidelity.”

Published in the journal Behavioral Ecology, the new research lays the groundwork for studies that investigate mechanisms that govern partnerships in the wild.

An occasional fling

Scientists have long studied cooperative behavior in animals, like primates that groom each other or vampire bats that regurgitate food for relatives in need of a blood meal. But it has remained a point of debate among scientists whether or not these animals are paying attention to the amount of resources being exchanged. For the chalk bass, matching reproductive chores helps partners succeed, even when there are opportunities to mate with other fish, Hart says.

“We initially expected individuals with partners that were producing less eggs would be more likely to switch partners over time—trading up, so to speak. Instead we found that partners matched egg production and remained in primary partnerships for the long term.”

For their entire adult lives, the fish mating partners come together for two hours each day before dusk in their refuge area, or spawning territory. They chase away other fish and begin with a half-hour foreplay ritual of nipping and hovering around each other, an activity that helps strengthen the partners’ bond. Eventually it becomes apparent which fish is going to take on the female role for the first of many spawning rounds.

Finding a new mate every evening is time-consuming and risky for a fish that only lives for about a year. Having a safe partner may help ensure that individuals get to fertilize a similar number of eggs as they produce, rather than risk ending up with a partner with fewer eggs.

But all of this doesn’t mean the chalk bass is completely opposed to an occasional fling.

If one partner has more eggs than the other, it may share the extra with other couples, an option that, while infrequent, can add stability to the system of simultaneous hermaphroditism paired with monogamy.

Scientists are only beginning to understand how mutually beneficial relationships among animals are maintained, much as humans in general still strive to determine what makes long-term relationships last.

“Not even one of the original pairs that I observed switched mates while its partner was still alive,” Hart says. “That strong matching between partners and the investment into the partnership was surprising.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.

Featured Photo Credit:  Kevin Bryant/Flickr

Now, Check Out: 

Why the eastern coyote should be a separate species: the ‘coywolf’

By Jonathan G. Way, Clark University.

There is considerable debate and disagreement among scientists over what to call a canid inhabiting the northeastern United States. In the course of this creature’s less than 100-year history, it has been variously called coyote, eastern coyote, coydog, Tweed wolf, brush wolf, new wolf, northeastern coyote and now coywolf, with nature documentaries highlighting recent genetic findings.

Recently, Roland Kays penned an interesting article in The Conversation concluding that “coywolf is not a thing,” and that it should not be considered for species status. Interestingly, and perhaps ironically, the beautiful light orangey-red canid in the cover picture of that article looks nothing like a western coyote and has striking observable characteristics of both coyotes and wolves, as well as dogs.

Soon after, my colleague William Lynn (Marsh Institute, Clark University) and I published a meta-analysis in the scientific journal Canid Biology & Conservation that summarized recent studies on this creature and confirmed that what we call “coyotes” in northeastern North America formed from hybridization (the mating of two or more species) between coyotes and wolves in southern Ontario around the turn of the 20th century.

In the paper, we suggest that coywolf is the most accurate term for this animal and that they warrant new species status, Canis oriens, which literally means eastern canid in Latin. We based this on the fact that they are physically and genetically distinct from their parental species of mainly western coyotes (Canis latrans) and eastern wolves (Canis lycaon). They also have smaller amounts of gray wolf (Canis lupus) and domestic dog (Canis familiaris) genes.

The eastern coyote/coywolf in a nutshell

Before I describe why the coywolf is unique, let’s get a quick snapshot of the animal we are discussing.

What we are calling Canis oriens colonized northeastern North America 50-75 years ago and has been described in detail in Gerry Parker’s 1995 book, “Eastern Coyote: The Story of Its Success,” and my 2007 paperback, “Suburban Howls.” This animal averages 13.6-18.2 kg (30-40 lbs), with individual weights exceeding 22.7-25 kg (50-55 lbs).

The emerging picture of the coywolf is that they have a larger home range than most western coyotes but smaller than wolves, at about 30 square kilometers (about 11 square miles). They also travel long distances daily (10-15 miles), eat a variety of food including white-tailed deer, medium-sized prey such as rabbits and woodchucks, and small prey such as voles and mice. They are social, often living in families of three to five members.

Eastern coyotes hunt a wide range of animals, including small rodents but also deer. Two eastern coyotes took down this deer in eastern Canada, according to the photographer. rvewong/flickr, CC BY-SA

In short, the coywolf has ecological and physical characteristics that can be seen on a continuum of coyote-like to wolf-like predators, but occupies an ecological niche that is closer to coyotes than wolves.

So why is coywolf a more accurate name?

Some argue that if the coywolf is predominantly coyote, then they should be called coyotes. Let’s analyze this claim.

I have previously found coywolves to be significantly different in body size from both western coyotes and eastern wolves. However, they are closer to coyotes whereby eastern wolves are 61-71 percent heavier than the same-sex coywolf, while coywolves are 35-37 percent heavier than western coyotes.

Bill Lynn and I concluded that they are statistically different – both genetically and physically – from their parental species since the coywolf is about 60 percent coyote, 30 percent wolf, and 10 percent dog; thus, nearly 40 percent of this animal is not coyote. That, essentially, is why we recommend that they be classified as a new species, Canis oriens.

Kays’ article stated that “coyotes” in the Northeast are mostly (60-84 percent) coyote, with lesser amounts of wolf (8-25 percent) and dog (8-11 percent). However, the values of 84 percent coyote and only 8 percent wolf come from a study (vonHoldt et al. 2011) that has since largely been discounted by subsequent papers because eastern wolves were not adequately sampled in its analysis.

Thus, based on our analysis, the claim that coywolves are predominantly coyote is untrue. While they may be numerically closer in size and genetics to coyotes than wolves, they are clearly statistically divergent from both coyotes and wolves. Taken from a wolf-centric viewpoint, I can see that they seem more coyote-like than wolf-like, but it is important to realize that a large part of their background is not from coyotes.

Eastern coyotes, or coywolves, have ecological and physical characteristics that can fit on a continuum between coyote and wolf. Jonathan Way, Author provided

The term coywolf uses the portmanteau method (i.e., a word formed by combining two other words) of naming, whereby the first word (coyote) of the combined two (coyote-wolf) is the more dominant or robust descriptor of that term. It does not suggest that this animal is equally or more wolf than coyote as has been suggested.

Furthermore, I believe that the terms coyote, eastern coyote and northeastern coyote undervalue the importance of the eastern wolf – the animals that interbred with western coyotes in Canada in the early 20th century to produce the coywolf – in the ancestry of this canid. This naming effectively discounts that, for example, one-third of the population’s mitochondrial DNA (C1 haplotype) is derived from the eastern wolf and another one-third (C9 haplotype) is not found in most nonhybridized western coyote populations but is found in eastern wolves.

Research has confirmed that all canids in the genus Canis can and do mate with other species (or canid types). This includes gray wolves mating with eastern wolves around the Great Lakes area, eastern wolves with gray wolves and western coyotes north and south/west of Algonquin Park in Ontario, respectively. Also, western coyotes mix with eastern wolves and coywolves, especially at the edge of their respective ranges.

Given that the most up-to-date studies have discovered relatively small amounts of dog (~8-10 percent) in the coywolf’s genome, and dogs are closely related to wolves, it seems reasonable to keep ‘coywolf’ rather than ‘coywolfdog’ as this creature’s descriptor.

Benefits of hybridization

Hybridization is a natural process that can be greatly accelerated by human modifications to the environment, like hunting and habitat destruction – two key ingredients that paved the way for the creation of the coywolf.

Education efforts could actually use the hybrid coywolf as a model for science education and a flagship species for dynamic, urbanized ecosystems. While protecting natural habitat is vitally important to maintaining wild wolf populations, this isn’t possible anymore in many regions, such as much of southern New England. In these areas, any canid on the landscape is important – especially a hybrid one with genes from multiple species adapted to its environment.

Coyotes from the Plains intermixed with wolves in Canada about 100 years ago and their descendants have colonized the eastern U.S.
Way (2013) from Canadian Field Naturalist, Author provided

In one word, coywolf quite accurately summarizes the main components of this animal’s background. Other species have far more names. For instance, cougars (Puma concolor) are also called mountain lions, pumas, catamounts and panthers, among dozens of other local names. To use the terms “eastern coyote” (or northeastern coyote) and “coywolf” as synonyms seems highly valid to me.

Is eastern coyote even an accurate term?

It’s worth noting that coyote populations in eastern North America continue to change. Indeed, we recently questioned if the generic term “eastern coyote” is even accurate or appropriate considering that colonizing “coyotes” in eastern North America are considerably different from each other.

Southeastern coyotes are more coyote-like compared to northeastern coyotes/coywolves, and coyotes in the mid-Atlantic region have medium amounts of wolf intermixing, or introgression, compared with more typical western coyotes in the southeast that have little wolf but some domestic dog admixture. Comparatively, coywolves in the northeast are more wolf-like.

There is also the possibility that coywolves in the northeast will eventually become genetically swamped by western coyote genes from the south and west. Eastern coyotes from the mid-Atlantic area, which are more coyote-like and less wolf-like, have recently contacted the coywolf in the west part of its range, which could affect the makeup of the populations in the eastern U.S.

Thus, it remains to be seen whether this entity will remain distinct, which could influence future discussions of its taxonomy.

Why does it all matter anyway?

In the long run, does it really matter what we call this animal?

Science, at its best, is self-correcting, and new science often leads one in new directions. As biologists, we are charged with accurately describing natural systems, and for this reason alone it is important that we accurately characterize (and even debate about) the systems that we are studying. The more I investigate the coywolf, the more I realize it is different than other canids, including western coyotes.

Perhaps the most important finding from our recent paper is that new species status, Canis oriens, is warranted for this cool creature. While there may be continued controversy over the simple naming scheme of this canid, the premises in this paper better explain why coywolf is an appropriate term to use moving forward.

Jonathan G. Way, Research Scientist, Clark University

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: Jonathan Way, www.EasternCoyoteResearch.com, Author provided

Now, Check Out:

 

New research is connecting genetic variations to schizophrenia and other mental illnesses

By Rachel Jonas, University of California, Los Angeles.

We know that changes in our genetic code can be associated with an increased risk for psychiatric illnesses such as schizophrenia and bipolar disorder. But how can a genetic mutation lead to complex psychiatric symptoms such as vivid hallucinations, manic episodes and bizarre delusions?

To find out, researchers are trying to fill in the blanks between the genetic blueprint (genotype) and psychiatric disorder (psychiatric phenotype). Phenotypes are a set of observable characteristics that result when a particular genotype interacts with its environment. The phenotype is the eventual outcome of a specific genotype.

But between genotype and psychiatric phenotype lie many measurable traits that together are called endophenotypes. This is an aspect of genetics that scientists are just starting to understand.

The National Institute of Mental Health has recently begun an initiative to push researchers to study endophenotypes with a program called Research Domain Criterion (RDoC), described as an effort to study basic dimensions of functioning that underlie human behavior.

So what exactly are endophenotypes, and how might they contribute to psychiatric illnesses?

Endophenotypes lie between genes and psychiatric phenotypes

An endophenotype can refer to anything from the size and shape of brain cells, to changes in brain structure, to impairments in working memory. The term can refer to a physical trait or a functional one.

An endophenotype must be associated with a specific psychiatric illness, such as schizophrenia, and it must be heritable. It must also be present even if the illness is not active. Within families, the endophenotype must be more common in ill family members than in healthy family members. But the endophenotype must be more common among nonaffected relatives of people with the associated illness than among the general population.

Certain endophenotypes are thought to precede behavioral symptoms. For instance, in several conditions, such as schizophrenia and Alzheimer’s disease, changes in brain structure have been found years before the onset of symptoms.

Currently doctors diagnose a psychiatric disorder based on the patient’s symptoms. The underlying neurobiology isn’t usually considered, because we lack the data to really use it.

In the future, endophenotypes might let us detect who is susceptible to psychiatric illness before clinical symptoms develop. That means we could try to combat, or at least appease, the symptoms of the disorder before they start. And knowing how endophenotypes contribute to these disorders could lead to precision medicine treatments.

How do you study endophenotypes?

One way to study the endophenotypes is to focus on a specific genetic alteration that is associated with a psychiatric disorder. This way we can get a sense of what brain changes the genetic change causes.

The links leading from genetic alterations to psychiatric illness in 22q11.2 Deletion Syndrome. Rachel Jonas, CC BY

For instance, I study a genetic disorder called 22q11.2 Deletion Syndrome (also called 22q11DS). The syndrome is due to a deletion of up to 60 genes, many of which are linked to brain function. About 30 percent of individuals with 22q11DS will develop schizophrenia (the rate in the U.S. population overall is about one percent).

Studying 22q11DS lets us draw a line from a genetic alteration to an endophenotype, such as decreased neural function, brain structure changes or fewer neurons in certain parts of the brain, and then to a psychiatric phenotype, such as schizophrenia.

Let’s go through some concrete examples of how this can be done.

22q11DS: a model syndrome to study endophenotypes

In one study, researchers looked at a group of 70 children and adolescents with 22q11DS and found deficits in executive function, which encompasses cognitive processes such as motivation, working memory and attention.

In fact, researchers were actually able to predict subsequent development of psychotic symptoms in individuals with 22q11DS. This study shows that cognitive endophenotypes may underlie psychiatric phenotypes and demonstrates their predictive power. And, like all endophenotypes, it is invisible to the naked eye, but measurable in the lab.

Another study, using functional magnetic resonance imaging (fMRI), found reduced neural activity in patients with 22q11DS when they performed a working memory task compared to a group of healthy control subjects. What’s more, the magnitude of the decrease correlated with the severity of their psychotic symptoms. This suggests abnormalities in neural activity might underlie symptoms associated with schizophrenia.

Other studies have found an association between psychiatric illnesses such as schizophrenia and abnormalities in the size and shape of different brain regions. For instance, a recent study found that certain parts of the brain were thicker in patients with 22q11DS. What’s more, the degree of thickness was related to psychotic symptoms. Changes in brain structure have also been associated with psychiatric disorders, such as obsessive compulsive disorder.

Researchers can use mouse models to learn about endophenotypes. Mouse via www.shutterstock.com.

In order to gain a more in-depth understanding of the underlying physiology in 22q11DS, researchers can breed mice with the deletion syndrome by “knocking out” genes in the mouse genome.

Researchers have found that mice with 22q11DS had fewer neurons in a part of the brain associated with cognition compared to unaffected mice.

The number of neurons correlated with how well the mice performed on tasks measuring executive function. These results suggest that individuals with psychiatric illnesses might actually have microscopic changes in their brain cells. This is a significant finding, because we can’t study these effects directly in humans.

These are just some examples of how we can experimentally determine endophenotypes that underlie schizophrenia in 22q11DS. And while 22q11DS is a risk factor for schizophrenia, what we learn from studying this syndrome could help us understand the endophenotypes behind other illnesses.

Of course defining endophenotypes for psychiatric illness is just the first step. After that, researchers and scientists need to find ways to use these results to inform diagnosis, treatment and prevention strategies.

Rachel Jonas, Ph.D. Candidate in Neuroscience, University of California, Los Angeles

This article was originally published on The Conversation. Read the original article.

Featured Image Credit: Therese Vesagas, CC BY

Now, Check Out:

How Monarch Butterflies Make it to Mexico Without a Map

Each fall, monarch butterflies across Canada and the United States turn their colorful wings toward the Rio Grande and migrate more than 2,000 miles to the relative warmth of central Mexico.

The journey, repeated instinctively by generations of monarchs, continues even as their numbers have plummeted due to loss of their sole larval food source—milkweed. Now, scientists think they have cracked the secret of the internal, genetically encoded compass monarchs use to determine the southwest direction they should fly each fall.

“Their compass integrates two pieces of information—the time of day and the sun’s position on the horizon—to find the southerly direction,” says Eli Shlizerman, assistant professor at the University of Washington, who has joint appointments in the applied mathematics and the electrical engineering departments.

While the monarch butterfly’s ability to integrate the time of day and the sun’s location in the sky was known from previous research, scientists have never understood how the monarch’s brain receives and processes this information. For the study, researchers wanted to model how the monarch’s compass is organized within its brain.

“We wanted to understand how the monarch is processing these different types of information to yield this constant behavior—flying southwest each fall,” Shlizerman says.

Monarchs use their large, complex eyes to monitor the sun’s position in the sky. But the sun’s position is not sufficient to determine direction. Each butterfly must also combine that information with the time of day to know where to go. Fortunately, like most animals including humans, monarchs possess an internal clock based on the rhythmic expression of key genes.

This clock maintains a daily pattern of physiology and behavior. In the monarch butterfly, the clock is centered in the antennae, and its information travels via neurons to the brain.
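A crude way to see why the clock is indispensable: the sun’s compass bearing changes through the day, so the angle a southwest-bound butterfly must hold relative to the sun has to change with it. The sketch below is purely illustrative and is not the authors’ circuit model; it pretends the sun’s bearing sweeps linearly from due east at 6 a.m. to due west at 6 p.m., whereas real solar azimuth depends on latitude, season, and time.

```python
# Toy time-compensated sun compass. Illustrative only: the sun's bearing is
# approximated as moving linearly from due east (6:00) to due west (18:00);
# real solar azimuth depends on latitude, season, and time of day.

DESIRED_HEADING = 225.0  # southwest, in compass degrees clockwise from north

def sun_azimuth(hour):
    """Crude linear model: 90 deg (east) at 6:00, 180 deg (south) at 12:00,
    270 deg (west) at 18:00."""
    return 90.0 + 15.0 * (hour - 6.0)

def angle_to_hold_relative_to_sun(hour):
    """Signed difference between the desired southwest heading and the sun's
    bearing, wrapped to the range -180..180. The internal clock is what lets
    the butterfly adjust this angle as the day goes on."""
    return (DESIRED_HEADING - sun_azimuth(hour) + 180.0) % 360.0 - 180.0

if __name__ == "__main__":
    for hour in (8, 12, 16):
        print(f"{hour:02d}:00  sun bearing {sun_azimuth(hour):5.1f} deg, "
              f"hold {angle_to_hold_relative_to_sun(hour):+6.1f} deg from the sun")
```

Same compass goal, three different sun-relative angles: without the time input, the sun’s position alone would send the butterfly in different directions at different hours.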

Biologists have previously studied the rhythmic patterns in monarch antennae that control the internal clock, as well as how their compound eyes decipher the sun’s position in the sky. For the study, published in the journal Cell Reports, researchers recorded signals from antennae nerves in monarchs as they transmitted clock information to the brain as well as light information from the eyes.

Migrating monarch butterflies. Credit: babybluebbw, CC BY-SA 2.0

Shortest route isn’t the best

“We created a model that incorporated this information—how the antennae and eyes send this information to the brain,” Shlizerman says. “Our goal was to model what type of control mechanism would be at work within the brain, and then asked whether our model could guarantee sustained navigation in the southwest direction.”

In their model, two neural mechanisms—one inhibitory and one excitatory—controlled signals from clock genes in the antennae. Their model had a similar system in place to discern the sun’s position based on signals from the eyes. The balance between these control mechanisms would help the monarch brain decipher which direction was southwest.

Based on their model, it also appears that when making course corrections monarchs don’t simply take the shortest turn to get back on route. Their model includes a unique feature—a separation point that would control whether the monarch turned right or left to head in the southwest direction.

“The location of this point in the monarch butterfly’s visual field changes throughout the day,” Shlizerman says. “And our model predicts that the monarch will not cross this point when it makes a course correction to head back southwest.”

Based on their simulations, if a monarch gets off course due to a gust of wind or object in its path, it will turn whichever direction won’t require it to cross the separation point.
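The turn-direction rule itself can be written out in a few lines. The following is a qualitative restatement of the behavior just described, not the authors’ neural model, and the example heading and separation-point position are hypothetical.

```python
# Qualitative sketch of the course-correction rule described above: turn back
# toward southwest by whichever rotation (clockwise or counterclockwise) does
# NOT sweep across the separation point in the visual field. This is an
# illustrative restatement of the idea, not the authors' neural model; the
# example heading and separation-point position are hypothetical.

DESIRED_HEADING = 225.0  # southwest, compass degrees clockwise from north

def wrap360(angle):
    return angle % 360.0

def clockwise_sweep_crosses(start, end, point):
    """True if rotating clockwise from heading `start` to heading `end`
    passes across the heading `point`."""
    sweep = wrap360(end - start)
    offset = wrap360(point - start)
    return 0.0 < offset < sweep

def correction_turn(current_heading, separation_point):
    """Pick the turn direction that reaches southwest without crossing the
    separation point, and report the turn size in degrees."""
    if clockwise_sweep_crosses(current_heading, DESIRED_HEADING, separation_point):
        return "counterclockwise", wrap360(current_heading - DESIRED_HEADING)
    return "clockwise", wrap360(DESIRED_HEADING - current_heading)

if __name__ == "__main__":
    # A gust leaves the butterfly heading east (90 deg); the separation point
    # happens to sit at a bearing of 150 deg at this time of day.
    print(correction_turn(current_heading=90.0, separation_point=150.0))
    # -> ('counterclockwise', 225.0): the longer turn, because the short
    #    135-degree clockwise turn would cross the separation point.
```

In this toy example the butterfly takes the long 225-degree turn rather than the short 135-degree one, consistent with the unusually long course corrections described below.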

Additional studies would need to confirm whether the researchers’ model is consistent with monarch butterfly brain anatomy, physiology, and behavior. So far, aspects of their model, such as the separation point, seem consistent with observed behaviors.

“In experiments with monarchs at different times of the day, you do see occasions where their turns in course corrections are unusually long, slow, or meandering,” Shlizerman says. “These could be cases where they can’t do a shorter turn because it would require crossing the separation point.”

Their model also suggests a simple explanation why monarch butterflies are able to reverse course in the spring and head northeast back to the United States and Canada. The four neural mechanisms that transmit information about the clock and the sun’s position would simply need to reverse direction.

“And when that happens, their compass points northeast instead of southwest,” says Shlizerman. “It’s a simple, robust system to explain how these butterflies—generation after generation—make this remarkable migration.”

Daniel Forger at the University of Michigan and James Phillips-Portillo at the University of Massachusetts are coauthors of the study, which was funded by the National Science Foundation and the Washington Research Fund and is published in the journal Cell Reports.

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity.

Featured Photo Credit: Dwight Sipler via flickr, CC BY 2.0.

Now, Check Out: