A communications satellite known as “Sky Muster” was launched from French Guiana in South America today without incident. Australia’s National Broadband Network (NBN) hailed the launch as a victory, not only because it was successful, but also because the satellite will bring high-speed broadband access to all of Australia.
A very informative article on the BBC’s website quotes a spokeswoman for NBN as saying:
“It’s one of the world’s largest communication satellites and is purpose-built to deliver broadband to Australia – an incredibly vast country,” National Broadband Network (NBN) spokeswoman Frances Kearey was quoted by ABC as saying.
“The NBN satellite service will provide speeds that people in the cities take for granted – opening up new opportunities in education, health, social connectivity and business.”
After extensive testing, Sky Muster is expected to come fully online for consumers in the second half of 2016. Its coverage area will include all of Australia, including remote areas like Norfolk, Christmas, Macquarie and the Cocos islands.
Researchers at the University of Washington in Seattle have discovered that they can connect two people’s brains over the internet using specially designed skull caps. Once connected, study participants played a question-and-answer game in which the person asking the questions identified the correct answer 72% of the time.
A fantastic article on Science Daily’s website provides the details:
University of Washington researchers recently used a direct brain-to-brain connection to enable pairs of participants to play a question-and-answer game by transmitting signals from one brain to the other over the Internet. The experiment, detailed today in PLOS ONE, is thought to be the first to show that two brains can be directly linked to allow one person to accurately guess what’s on another person’s mind.
“This is the most complex brain-to-brain experiment, I think, that’s been done to date in humans,” said lead author Andrea Stocco, an assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences.
“It uses conscious experiences through signals that are experienced visually, and it requires two people to collaborate,” Stocco said.
Here’s how it works: The first participant, or “respondent,” wears a cap connected to an electroencephalography (EEG) machine that records electrical brain activity. The respondent is shown an object (for example, a dog) on a computer screen, and the second participant, or “inquirer,” sees a list of possible objects and associated questions. With the click of a mouse, the inquirer sends a question and the respondent answers “yes” or “no” by focusing on one of two flashing LED lights attached to the monitor, which flash at different frequencies.
A “no” or “yes” answer both send a signal to the inquirer via the Internet and activate a magnetic coil positioned behind the inquirer’s head. But only a “yes” answer generates a response intense enough to stimulate the visual cortex and cause the inquirer to see a flash of light known as a “phosphene.” The phosphene — which might look like a blob, waves or a thin line — is created through a brief disruption in the visual field and tells the inquirer the answer is yes. Through answers to these simple yes or no questions, the inquirer identifies the correct item.
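To make the chain of events concrete, here is a minimal sketch, in Python, of the signalling logic described above. It is only an illustration: every name, frequency and threshold value is a hypothetical stand-in rather than a parameter from the actual study, and the real system runs on EEG hardware and a magnetic coil, not a script.

```python
# Hypothetical sketch of the yes/no signalling chain (illustrative values only).

YES_FREQ_HZ = 13.0          # assumed flicker rate of the "yes" LED
NO_FREQ_HZ = 12.0           # assumed flicker rate of the "no" LED
PHOSPHENE_THRESHOLD = 0.8   # assumed intensity needed to evoke a visible phosphene


def classify_answer(dominant_eeg_freq_hz: float) -> str:
    """Map the dominant frequency in the respondent's EEG to a yes/no answer.

    Attending to one of the two flashing LEDs drives a matching oscillation
    in the signal recorded from the respondent's EEG cap.
    """
    if abs(dominant_eeg_freq_hz - YES_FREQ_HZ) < abs(dominant_eeg_freq_hz - NO_FREQ_HZ):
        return "yes"
    return "no"


def stimulation_intensity(answer: str) -> float:
    """Both answers fire the coil behind the inquirer's head, but only a 'yes'
    is intense enough to stimulate the visual cortex."""
    return 1.0 if answer == "yes" else 0.3


def inquirer_sees_phosphene(intensity: float) -> bool:
    return intensity >= PHOSPHENE_THRESHOLD


# Example round: the respondent attends to the "yes" LED.
answer = classify_answer(dominant_eeg_freq_hz=13.2)
print(answer, inquirer_sees_phosphene(stimulation_intensity(answer)))  # yes True
```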
The experiment was carried out in dark rooms in two UW labs located almost a mile apart and involved five pairs of participants, who played 20 rounds of the question-and-answer game. Each game had eight objects and three questions that would solve the game if answered correctly (three yes-or-no answers are enough to single out one of eight objects, since 2³ = 8). The sessions were a random mixture of 10 real games and 10 control games that were structured the same way.
But how did the researchers make sure that study participants weren’t gaming the system somehow? We answer that question on the next page…
Demographers frequently remind us that the United States is a rapidly aging country. From 2010 to 2040, we expect that the age-65-and-over population will more than double in size, from about 40 to 82 million. More than one in five residents will be in their later years. Reflecting our higher life expectancy, over 55% of this older group will be at least in their mid-70s.
While these numbers result in lively debates on issues such as social security or health care spending, they less often provoke discussion on where our aging population should live and why their residential choices matter.
But this growing share of older Americans will contribute to the proliferation of buildings, neighborhoods and even entire communities occupied predominantly by seniors. It may be difficult to find older and younger populations living side by side in the same places. Is this residential segregation by age a good or a bad thing?
As an environmental gerontologist and social geographer, I have long argued that it is easier, less costly, and more beneficial and enjoyable to grow old in some places than others. The happiness of our elders is at stake. In my recent book, Aging in the Right Place, I conclude that when older people live predominantly with others their own age, there are far more benefits than costs.
Why do seniors tend to live apart from other age groups?
My focus is on the 93% of Americans age 65 and older who live in ordinary homes and apartments, and not in highly age-segregated long-term care options, such as assisted living properties, board and care, continuing care retirement communities or nursing homes. They are predominantly homeowners (about 79%), and mostly occupy older single-family dwellings.
Older Americans don’t move as often as people in other age groups. Typically, only about 2% of older homeowners and 12% of older renters move annually. Strong residential inertia forces are in play. They are understandably reluctant to move from their familiar settings where they have strong emotional attachments and social ties. So they stay put. In the vernacular of academics, they opt to age in place.
Over time, these residential decisions result in what are referred to as “naturally occurring” age-homogeneous neighborhoods and communities. These residential enclaves of old are now found throughout our cities, suburbs and rural counties. In some locales with economies that have changed for the worse, these older concentrations are further explained by the wholesale exit of younger working populations looking for better job prospects elsewhere – leaving the senior population behind.
Even when older people decide to move, they often avoid locating near the young. The Fair Housing Amendments Act of 1988 allows certain housing providers to discriminate against families with children. Consequently, significant numbers of older people can move to these “age-qualified” places that purposely exclude younger residents. The best-known examples are those active adult communities offering golf, tennis and recreational activities catering to the hedonistic lifestyles of older Americans.
Others may opt to move to “age-targeted” subdivisions (many gated) and high-rise condominiums that developers predominantly market to aging consumers who prefer adult neighbors. Close to 25% of age-55-and-older households in the US occupy these types of planned residential settings.
Finally, another smaller group of relocating elders transition to low-rent senior apartment buildings made possible by various federally and state-funded housing programs. They move to seek relief from the intolerably high housing costs of their previous residences.
Is this a bad thing?
Those advocates who bemoan the inadequate social connections between our older and younger generations view these residential concentrations as landscapes of despair.
In their perhaps idyllic worlds, old and young generations should harmoniously live together in the same buildings and neighborhoods. Older people would care for the children and counsel the youth. The younger groups would feel safer, wiser and respectful of the old. The older group would feel fulfilled and useful in their roles of caregivers, confidants and volunteers. In question is whether these enriched social outcomes merely represent idealized visions of our pasts.
A less generous interpretation for why critics oppose these congregations of old is that they make the problems faced by an aging population more visible and thus harder to ignore.
Salty streaks have been discovered on Mars, which could be a sign that salt water seeps to the surface in the summers. Scientists have previously observed dark streaks on the planet’s slopes which are thought to have resulted from seeps of water wetting surface dust. The salts left behind in these streaks as the water dried up are the best evidence for this yet. The discovery is important – not least because it raises the tantalising prospect of a viable habitat for microbial life on Mars.
I have lost track of how many times water has been “discovered” on Mars. In this case, the researchers have detected hydrated salts rather than salty water itself. But the results, published in Nature Geoscience, are an important step to finding actual, liquid water. So how close are we? Let’s take a look at what we know so far and where the new findings fit in.
Ice versus liquid water
Back in the 18th century, William Herschel suggested that Mars’s polar caps, which even a small telescope can detect, were made of ice or snow – but he had no proof. It wasn’t until the 1950s that data from telescopes fitted with spectrometers, which analyse reflected sunlight, was interpreted as showing frozen water (water-ice). However, the first spacecraft to Mars found this difficult to confirm, as the water-ice is, in most places, covered by carbon dioxide ice.
In the 1970s attention turned to the much juicier topic of liquid water on Mars, with the discovery by Mariner 9 of ancient river channels that must have been carved by flowing water. These channel systems were evidently very ancient (billions of years old), so although they showed an abundance of liquid water in the past they had no bearing on the occurrence of water at the present time.
We’re outnumbered by bacteria, viruses, parasites and fungi that can make us ill. And the only thing standing between them and our devastation is our immune system.
The immune system does such a good job most of the time that we only really think about it when things go wrong. But to provide such excellent protection against a whole host of pathogens, our immune system must constantly learn.
Innate immunity rapidly responds to invaders; innate immune cells deal with more than 90% of infections, removing them within hours or days. These cells recognise invaders by looking for broad, shared patterns, such as common molecules on the surface of most bacteria. They might look for lipopolysaccharides (LPS), for instance, a molecule found in many bacterial cell walls.
When the innate response fails to fend off an invasion, the invaders are handled by adaptive immunity. Instead of broad patterns, each adaptive cell sees a very specific pattern. This could be one particular protein on the surface of a virus or bacteria.
But because the adaptive immune system doesn’t know what invaders it may meet, it makes millions of different cells, each created to recognise a different, random pattern. One adaptive cell may recognise only the flu virus, for instance, while another may recognise only a single type of bacteria.
When adaptive immune cells recognise an invader, they replicate so they form an army to kill it. This very specialised process can take a week the first time we’re infected by a new invader. If we’re exposed to a flu virus, for instance, only the small number of adaptive cells that can randomly recognise flu viruses are activated to remove infection, which is why it takes time to fight it off.
After an invader is removed, the adaptive cells that recognised it are kept, as specialised “memory cells”. If we see the same invader again, those cells can respond before we get ill. This is how the adaptive immune system learns.
Researchers are pursuing a new approach to treating glioblastoma, a virulent brain cancer and one of the deadliest forms of the disease. The scientists found that a combination of antidepressants and blood thinners was effective in a mouse model, increasing cancer cell “autophagy,” the process that causes cells to essentially eat themselves.
According to a very informative article from Science Daily, Swiss researcher Douglas Hanahan, of the Swiss Federal Institute of Technology (EPFL) and the lead author on the study, had this to say about the findings:
“It is exciting to envision that combining two relatively inexpensive and non-toxic classes of generic drugs holds promise to make a difference in the treatment of patients with lethal brain cancer.”
Although the results were quite promising, there is of course more research to be done and it’s not yet known whether this combination will work in humans. Also, the combination therapy did not cure the mice’s cancer, but did extend their survival. From the article:
“Importantly, the combination therapy did not cure the mice; rather, it delayed disease progression and modestly extended their lifespan,” Hanahan says. “It seems likely that these drugs will need to be combined with other classes of anticancer drugs to have benefit in treating glioblastoma patients. One can also envision ‘co-clinical trials’ wherein experimental therapeutic trials in the mouse models of glioblastoma are linked to analogous small proof-of-concept trials in GBM patients. Such trials may not be far off.”
If we hear more about future trials, we’ll be sure to keep you posted! For now, you can catch up on all the details in Science Daily’s excellent article.
Scientists are perplexed and mystified by the latest images downloaded from the New Horizons probe’s flyby of Pluto. One of the images reveals a landscape that “…looks more like tree bark or dragon scales than geology,” according to William McKinnon, New Horizons Geology, Geophysics and Imaging (GGI) team deputy lead from Washington University in St. Louis.
Take a look at this astounding image from the fascinating article we found on the NASA.gov site:
McKinnon continues, “This’ll really take time to figure out; maybe it’s some combination of internal tectonic forces and ice sublimation driven by Pluto’s faint sunlight.”
See other amazing images from the latest download in the informative article on NASA.gov.
Source: NASA.gov – “Perplexing Pluto: New ‘Snakeskin’ Image and More from New Horizons”
The Tasmanian government this month released a draft of the revised management plan for the Tasmanian Wilderness World Heritage Area, which proposes rezoning certain areas from “wilderness zones” to “remote recreation zones”.
The changes would enable greater private tourism investment in the World Heritage Area and allow for logging of speciality timbers.
At the centre of the debate is how we define wilderness – and what people can use it for.
For wildlife or people?
“Wilderness quality” is a measure of the extent to which a landscape (or seascape) is remote from, and undisturbed by, modern technological society. High wilderness quality means a landscape is relatively remote from settlement and infrastructure and largely ecologically intact. Wilderness areas are those that meet particular thresholds for these criteria.
The world’s largest wilderness areas include Amazonia, the Congo forests, the Northern Australian tropical savannas, the Llanos wetlands of Venezuela, the Patagonian Steppe, Australian deserts and the Arctic Tundra.
Globally, there are 24 large intact landscapes of at least 10,000 square kilometres (1,000,000 hectares). Wilderness as a scientific concept was developed for land areas, but is also increasingly being applied to the sea.
Legal definitions of wilderness usually include these remote and intact criteria – but the goals range from human-centred to protecting the intrinsic value of wilderness. Intrinsic value recognises that things have value regardless of their worth or utility to human beings, and is recognised in the Convention on Biological Diversity to which Australia is a signatory.
In the NSW Wilderness Act 1987, for instance, one of the three objects of the Act is cast in terms of benefits to the human community: “to promote the education of the public in the appreciation, protection and management of wilderness”. The Act also states that wilderness shall be managed so as “to permit opportunities for solitude and appropriate self-reliant recreation.” Examples of formally declared wilderness areas in New South Wales are the Lost World Wilderness Area and Wollemi National Park.
Intrinsic value is evident in the South Australian Wilderness Protection Act 1992, which sets out to, among other things, preserve wildlife and ecosystems, protect the land and its ecosystems from the effects of modern technology, and restore land to its condition prior to European settlement.
Our understanding of wilderness and its usefulness has changed over the last century as science has revealed its significance for biodiversity conservation and ecosystem services. We have also accepted the ecological and legal realities of Indigenous land stewardship.
The world’s rapidly shrinking areas of high wilderness quality, including formally declared wilderness areas, are largely the customary land of Indigenous peoples, whether or not this is legally recognised.
Significant bio-cultural values, such as Indigenous peoples’ knowledge of biodiversity (recognised in Australia’s federal Environmental Protection and Biodiversity Conservation Act), are dependent on these traditional relationships between people and country.
In many cases around the world, wilderness areas only remain intact because they are under Indigenous stewardship. In Australia, these facts were regrettably ignored in the past and were the source of much loss and harm to Traditional Owners when protected areas were declared without their consent.
Lessons have been learnt, some progress is being made, and the essential role of local and Indigenous communities in the conservation of wilderness areas is now being recognised and reflected in Australian national and state conservation and heritage policy and law.
For example, in 2003 the Northern Territory government agreed to joint management with the Traditional Owners of the Territory’s national parks.
It might sound like a simple thing to build a scale model of the solar system, but in reality, if you start with a marble-sized Earth, it’s actually quite a feat. And it was a feat that two filmmakers, Alex Gorosh and Wylie Overstreet, decided to just go ahead and do.
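Just how much of a feat becomes clear from a quick back-of-the-envelope calculation. The sketch below (in Python, assuming a roughly 1.4 cm marble stands in for Earth; the planetary figures are standard reference values) shows how far apart everything ends up at that scale:

```python
# Back-of-the-envelope solar system scale, assuming a ~1.4 cm marble for Earth.

EARTH_DIAMETER_KM = 12_742
MARBLE_DIAMETER_M = 0.014                                 # assumed marble size

SCALE = (EARTH_DIAMETER_KM * 1000) / MARBLE_DIAMETER_M    # real metres per model metre


def scaled_metres(real_km: float) -> float:
    """Convert a real distance in kilometres to metres in the model."""
    return real_km * 1000 / SCALE


print(f"Sun diameter:          {scaled_metres(1_391_000):.1f} m")            # ~1.5 m ball
print(f"Earth-Sun distance:    {scaled_metres(149_600_000):.0f} m")          # ~160 m away
print(f"Neptune orbit radius:  {scaled_metres(4_495_000_000) / 1000:.1f} km")  # ~5 km out
```

With Earth that small, the model still needs several kilometres of open ground just to contain Neptune’s orbit.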
From a wonderful article we found on the Colossal website, here’s a fantastic video explanation of what they did:
How did the world sound to our ancient human relatives two million years ago?
While we obviously don’t have any sound recordings or written records from anywhere near that long ago, we do have one clue: the fossilized bones from inside their ears. The internal anatomy of the ear influences its hearing abilities.
Using CT scans and careful virtual reconstructions, my international colleagues and I think we’ve demonstrated how our very ancient ancestors heard the world. And this isn’t just an academic enterprise; hearing abilities are closely tied with verbal communication. By figuring out when certain hearing capacities emerged during our evolutionary history, we might be able to shed some light on when spoken language started to evolve. That’s one of the most hotly debated questions in paleoanthropology, since many researchers consider the capacity for spoken language a defining human feature.
Human hearing is unique among primates
We modern human beings have better hearing across a wider range of frequencies than most other primates, including chimpanzees, our closest living relative. Generally, we’re able to hear sounds very well between 1.0 and 6.0 kHz, a range that includes many of the sounds emitted during spoken language. Most of the vowels fall below about 2.0 kHz, while the higher frequencies mainly contain consonants.
Thanks to testing of their hearing in the lab, we know that chimpanzees and most other primates aren’t as sensitive in that same range. Chimpanzee hearing – like that of most other primates that also live in Africa, including baboons – shows a loss in sensitivity between 1.0 and 4.0 kHz. In contrast, human beings maintain good hearing throughout this frequency range.
We’re interested in finding out when this human hearing pattern first emerged during our evolutionary history. In particular, if we could find a similar pattern of good hearing between 1.0 and 6.0 kHz in a fossil human species, then we could make an argument that language was present.
Testing the hearing of a long-gone individual
To study hearing using fossils, we measure a large number of dimensions of the ancient ears – including the length of the ear canal, the size of the ear drum and so on – using virtual reconstructions of the fragile skulls on the computer. Then we input all these data into a computer model.
Published previously in the bioengineering literature, the model predicts how a person hears based on their ear anatomy. It treats the ear as a receiver of a signal, similar to an antenna. The results tell us how efficiently the ear transmits sound energy from the environment to the brain.
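As a rough illustration of that workflow, and only that, here is a hypothetical sketch: the published bioengineering model is far more involved, and every name and value below is an assumption made up for this example.

```python
# Hypothetical sketch of the analysis pipeline: anatomical measurements in,
# predicted sound-power transmission out. The "model" below is a placeholder,
# not the published bioengineering model.

ear_measurements = {                 # example values, not real fossil data
    "ear_canal_length_mm": 25.0,
    "eardrum_area_mm2": 60.0,
    "middle_ear_volume_mm3": 500.0,
}

test_frequencies_khz = [0.5, 1.0, 2.0, 4.0, 6.0]


def predicted_power_transmission(measurements: dict, freq_khz: float) -> float:
    """Placeholder: fraction of sound energy delivered to the inner ear.

    In the real study this prediction comes from a model of the outer and
    middle ear published in the bioengineering literature.
    """
    raise NotImplementedError("stand-in for the published model")


# The analysis then asks over which frequencies transmission stays high, and
# compares that band across humans, chimpanzees and fossil species, e.g.:
# good_band = [f for f in test_frequencies_khz
#              if predicted_power_transmission(ear_measurements, f) > threshold]
```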
We first tested the model on chimpanzee skulls, and got results similar to those of researchers who tested chimpanzee hearing in the lab. Since we know the model accurately predicts how humans hear and how chimpanzees hear, it should provide reliable results for our fossil human ancestors as well.