Static electricity’s tiny sparks

By Sebastian Deffner, University of Maryland, Baltimore County.

Static electricity is a ubiquitous part of everyday life. It’s all around us, sometimes funny and obvious, as when it makes your hair stand on end, sometimes hidden and useful, as when harnessed by the electronics in your cellphone. The dry winter months are high season for an annoying downside of static electricity – electric discharges like tiny lightning zaps whenever you touch door knobs or warm blankets fresh from the clothes dryer.

Static electricity is one of the oldest scientific phenomena people have observed and described. The Greek philosopher Thales of Miletus gave the first account: in his sixth-century B.C. writings, he noted that if amber was rubbed hard enough, small dust particles would start sticking to it. Three hundred years later, Theophrastus followed up on Thales’ experiments by rubbing various kinds of stone and also observed this “power of attraction.” But neither of these natural philosophers found a satisfactory explanation for what they saw.

It took almost 2,000 more years before the English word “electricity” was first coined, based on the Latin “electricus,” meaning “like amber.” Some of the most famous experiments were conducted by Benjamin Franklin in his quest to understand the underlying mechanism of electricity – which is one of the reasons why his face smiles from the US$100 bill. People quickly recognized electricity’s potential usefulness.

The amazing flying boy relies on static electricity to wow the crowd. Frontispiece of Novi profectus in historia electricitatis, post obitum auctoris, by Christian August Hausen (1746)

Of course, in the 18th century people mostly made use of static electricity in magic tricks and other performances. For instance, Stephen Gray‘s “flying boy experiment” became a popular public demonstration: He’d use a Leyden jar to charge up the youth, suspended from silk cords, and then show how he could turn book pages via static electricity, or lift small objects just using the static attraction.

Building on Franklin’s insights – including his realization that electric charge comes in positive and negative flavors, and that total charge is always conserved – we nowadays understand at the atomic level what causes the electrostatic attraction, why it can cause mini lightning bolts and how to harness what can be a nuisance for use in various modern technologies.

What are these tiny sparks?

Static electricity comes down to the force between electric charges. At the atomic scale, negative charges are carried by tiny elementary particles called electrons. Most electrons are neatly packed inside the bulk of matter, whether it be a hard and lifeless stone or the soft, living tissue of your body. However, many electrons also sit right on the surface of any material. Each material holds on to these surface electrons with its own characteristic strength. If two materials rub against each other, electrons can be ripped out of the “weaker” material and end up on the material with the stronger binding force.

This transfer of electrons – the charging behind what we experience as static electricity – happens all the time. Familiar examples are children sliding down a playground slide, feet shuffling along a carpet or someone removing wool gloves in order to shake hands.

But we notice its effects more often in the dry months of winter, when the air has very low humidity. Dry air is an electrical insulator, whereas moist air acts as a weak conductor. Here’s what happens: In dry air, electrons stay trapped on the surface with the stronger binding force. Unlike when the air is moist, they can’t find a way to flow back to the surface they came from, so the distribution of charges never evens out again.

A static electric spark occurs when an object with a surplus of negative electrons comes close to another object with less negative charge – and the surplus of electrons is large enough to make the electrons “jump.” The electrons flow from where they’ve built up – like on you after walking across a wool rug – to the next thing you contact that doesn’t have an excess of electrons – such as a doorknob.

You’ll feel the electrons jump.
Muhammed Ibrahim, CC BY-ND

When electrons have nowhere to go, the charge builds up on surfaces – until it reaches a critical maximum and discharges in the form of a tiny lightning bolt. Give the electrons a place to go – such as your outstretched finger – and you will most certainly feel the zap.

The power of the mini sparks

Though sometimes annoying, the amount of charge involved in static electricity is typically quite small and rather harmless. The voltage can be about 100 times that of a typical power outlet. However, these huge voltages are nothing to worry about, because voltage is just a measure of the difference in electric potential between objects. The “dangerous” quantity is current, which tells how much charge is flowing per second. Since only relatively few electrons are transferred in a static electric discharge, these zaps are pretty harmless.
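To see why, it helps to put rough numbers on a typical zap. The short calculation below uses the standard capacitor energy formula with common textbook ballpark figures for human body capacitance and charging voltage – illustrative assumptions, not measurements from this article:

```python
# Rough estimate of the energy in a static shock versus a wall outlet.
# Assumed textbook values: the human body stores charge roughly like a
# 100-200 picofarad capacitor, and shuffling across a carpet can charge
# it to somewhere around 10,000-25,000 volts.

def discharge_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored on a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts**2

body_capacitance = 150e-12   # 150 pF, a common ballpark figure
spark_voltage = 15_000       # 15 kV, roughly 125x a 120 V outlet

energy = discharge_energy_joules(body_capacitance, spark_voltage)
print(f"Energy in one static zap: about {energy * 1000:.1f} millijoules")
# -> roughly 17 mJ, delivered in a microsecond-scale pulse: startling,
#    but far too little total energy and current to injure a person.
```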

Nevertheless, these little sparks can be fatal to sensitive electronics, such as the hardware components of a computer. Small currents carried by only a few electrons can be enough to accidentally fry them. That’s why workers in the electronics industry have to remain “grounded.” Being grounded just means maintaining a wired connection to the ground, which for the electrons looks like an empty highway “home.” Grounding yourself is easily done by touching a metal component or holding a key in your hand. Metals are very good conductors, so electrons are quite happy to go there.

A more serious threat is an electric discharge in the vicinity of flammable gases. This is why it’s advisable to ground yourself before touching the pump at a gas station; you don’t want a stray spark to ignite stray gasoline fumes. Or you can invest in the kind of anti-static wristband widely used by workers in the electronics industry to safely ground themselves before they work on very sensitive electronic components. These bands prevent static buildup using a conductive ribbon that coils around your wrist.

In settings where a few electrons can do big damage, workers wear anti-static wrist straps.
Wristband image via www.shutterstock.com.

In everyday life, the best way to reduce charge buildup is running a humidifier to raise the amount of moisture in the air. Keeping your skin moist by applying moisturizer can also make a big difference. Dryer sheets prevent charges from building up as your clothes tumble dry by spreading a small amount of fabric softener over the fabric; these positively charged particles balance out loose electrons and neutralize the net charge, so your clothes won’t emerge from the dryer clingy and stuck to one another. You can rub fabric softener on your carpets to prevent charge buildup, too. Last but not least, if you’ve really had it with static electricity, wearing cotton clothes and leather-soled shoes is a better choice than wool clothing and rubber-soled shoes.

Harnessing static electricity

Despite the nuisance and possible dangers of static electricity, it definitely has its benefits.

Many everyday applications of modern technology crucially rely on static electricity. For instance, Xerox machines and photocopiers use electric attraction to “glue” charged toner particles onto paper. Air fresheners not only make the room smell nice, they really do eliminate bad odors: they discharge static electricity onto dust particles, breaking apart the particles that cause the bad smell.

Static electricity can attract and trap charged pollution particles before they’re emitted from factories.
Muhammed Ibrahim, CC BY-ND

Similarly, the smokestacks found in modern factories use charged plates to reduce pollution. As smoke particles move up the stack, they pick up negative charge from a metal grid. Once charged, they are attracted to positively charged plates on the sides of the smokestack. Finally, the collected particles are knocked off the plates into a tray and can be disposed of.
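Engineers usually describe this kind of electrostatic “smoke scrubbing” with the Deutsch-Anderson relation for collection efficiency. The sketch below applies that standard formula to invented, order-of-magnitude numbers – an illustration of the principle, not data from any particular smokestack:

```python
import math

def precipitator_efficiency(drift_velocity_m_s, plate_area_m2, gas_flow_m3_s):
    """Deutsch-Anderson estimate: eta = 1 - exp(-w * A / Q),
    where w is the drift speed of charged particles toward the plates,
    A is the total collecting-plate area and Q is the gas flow rate."""
    return 1.0 - math.exp(-drift_velocity_m_s * plate_area_m2 / gas_flow_m3_s)

# Illustrative numbers only (assumptions for the example):
w = 0.1      # m/s, a plausible drift speed for charged ash particles
A = 5000.0   # m^2 of collecting plates
Q = 100.0    # m^3/s of flue gas

print(f"Fraction of particles captured: {precipitator_efficiency(w, A, Q):.1%}")
# -> about 99.3%: a modest drift speed, applied over a large plate area,
#    removes nearly all of the charged smoke particles.
```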

Static electricity has also found its way into nanotechnology, where it is used, for instance, to pick up single atoms with laser beams. These atoms can then be manipulated for all kinds of purposes, such as various computing applications. Another exciting application in nanotechnology is the control of nanoballoons, which can be switched between an inflated and a collapsed state via static electricity. These molecular machines could one day deliver medication to specific tissues within the body.

Two and a half millennia after its discovery, static electricity is still a curiosity and a nuisance – but it has also proven important to our everyday lives.


This article was coauthored by Muhammed Ibrahim, a system engineer at an environmental software company. He is conducting collaborative research with Dr. Sebastian Deffner on reducing computational errors in quantum memories.

Sebastian Deffner, Assistant Professor of Physics, University of Maryland, Baltimore County

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Sexual assault enters virtual reality

By Katherine Cross, City University of New York.

Although various forms of online sexual harassment have been with us since the dawn of the internet, recent news suggests that it’s moving into another dimension – the third, to be precise. Gropers are now finding a way to target women through the fully immersive headsets of virtual reality.

Helmet and hands: a virtual avatar. QuiVr/Steam

Writer Jordan Belamire recently wrote of her experience of virtual sexual assault. In the virtual reality archery game “QuiVr,” another player’s disembodied hands persistently groped her virtual body – specifically, rubbing at her avatar’s chest – and chased her through the game world, heedless of her cries of “Stop!” over the game’s voice chat.

Some of the response – not least from the game’s developers – was encouraging. But the internet’s id manifested itself in the comments on stories about the incident, heaping imprecations, slander and abuse against Belamire. If we analyze the content of these comments, we gain insight into why these assaults – and online harassment more broadly – are occurring, and what might be done to stop them.

We’ll have to grapple with some of the most toxic parts of our communities, and find new ways of creating and enforcing social norms in all the virtual worlds we’re creating. As a scholar of online harassment, I know that most fundamentally, we must address the false belief that online harm isn’t real, because the internet itself isn’t real. When human beings are involved and interacting with each other, it’s very real indeed. And in VR, it’s even more so.

Into the pit of online comments

At the bottom of an emotional article written by QuiVr’s developers, apologizing for what happened to Belamire and promising reform, is the following comment: “You weren’t a victim of anything. The VR community has just become a victim of the outrage brigade.”

Another commenter adds: “here’s some advice for you. TURN OFF THE F—ING GAME YOU STUPID B–CH!”

A third writes, “I gotta say, you don’t have a frigging clue what sexual assault is if THIS is what you consider sexual assault.”

Several others, meanwhile, noted that Belamire writes romance novels and suggested she should “be above” the abuse, or claimed that she’s just seeking publicity. “She writes an adult lesbian romance novel and feels harassed by digital gloves,” mocks one commenter. ’Twas ever thus: If a woman evinces any sexual sensibility whatsoever, she must have given blanket consent to any and all sexual contact.

What’s virtual, and what’s reality?

But by far the overriding theme of the angry comments was that Belamire had made a mountain out of a molehill because it was an online experience. These were “floating hands” in a “virtual world” that she could easily turn off, or simply “take off her headset” to escape from.

These outraged players never seem to ask why men do not have to worry about encountering handsy people with boundary issues when they play games, or why such people should get to determine who plays and who doesn’t. Yes, Belamire chose to play the game, but that doesn’t mean she signed up to be sexually assaulted.

These notions illustrate the core mentality of both the abuser and their legions of apologists in the world’s comment sections: What happens online is not real, therefore it’s all okay.

It’s not serious, except when it is

In this abuser-apologist world, people who complain about harassment are at fault themselves, and at times demonized as the actual problem. It’s an inherently contradictory idea: The “games aren’t real” argument doesn’t seem to dissuade angry commenters from taking Belamire’s complaints personally.

“Games are supposed to be a place to mentally get entirely away from this world, these rules, with a character in another one,” laments one commenter, arguing that anti-harassment efforts will interfere with his escapism. “Feminists basically want it to be a crime for men to even approach a woman in the street, and now they want to do the same in virtual reality?” says another.

Often, in a single comment, someone yells at Belamire for complaining about an “unreal” groping and then caterwauls about some forthcoming Orwellian regime in gaming. One commenter actually tells Belamire to turn off the game, right before likening the idea of tracking repeat offenders to the Third Reich. (One wonders why he doesn’t turn off his computer for a while, if her story so offends him.)

Mobius strip: a piece of paper with only one side. David Benbennick, CC BY-SA

He’s articulating a Mobius strip of thought, folding two contradictory notions into a single idea: The offending action wasn’t real and should be ignored, but any remedy would be real enough that we have to worry about the impending Nazification of our games and get very, very angry about it.

Online experiences are real ones

Video games are not just unreal playthings. The mediating interface of a game does not make abusive behavior between two or more real people any less abusive. Slurs are still slurs; unwanted sexual advances are still both unwanted and sexual. The addition of computer graphics, a game controller, or an unfashionable headset does not render human interaction unreal.

This interaction experience is as real as friends sitting on an actual couch together. HyacintheLuynes, CC BY-SA

In VR specifically we confront another contradiction. The entire selling point of VR is its unparalleled simulation of reality. It presents a physical, embodied experience that surrounds you, fills your senses, and is tactile in ways unlike any other video game.

This has been a holy grail of game design since the dawn of the industry: fooling a player’s body into feeling like it’s really in the game world. We should not be surprised if a simulated sexual assault, then, feels real enough in all the ways that matter.

This point was addressed head-on in a discussion about designing safer VR games at the Game Connect Asia Pacific conference in Melbourne in late October. One VR developer, Justine Colla, cofounder of the Alta VR studio, argued that the “visceral” nature of immersion in VR can give abusers more power. “Users retain memories in VR as if they experienced them in real life,” she said.

This, she said, combines with players’ inability to physically push away an offender to ensure that attackers have “all the power with none of the consequences.” Assaults feel real, and the target has no way to fight back.

We cannot have it both ways, touting VR’s realness while casting aspersions on people who complain of abuse in VR. Trying to do so would be laughable if the consequences weren’t so dire. Virtual reality is virtually real.

Game developers respond

Fortunately, QuiVr’s developers are modeling good behavior for the whole industry. They wrote a pointed article that explains why they not only believe Belamire but also take personal responsibility for what happened to her. They also explain what steps they’re taking to improve the experience. Foremost among them is a move they call a “power gesture”:

putting your hands together, pulling both triggers, and pulling them apart as if you are creating a force field. No matter how you activate it, the effect is instantaneous and obvious – a ripple of force expands from you, dissolving any nearby player from view, at least from your perspective, and giving you a safety zone of personal space.

This is a bold step in the right direction. It not only provides an instant reprieve for harassment victims but allows them to actually embody their strength through a gesture that feels empowering. It’s an elegant solution, but this one solution may not work for every VR environment. We need something more: a change of mindset.
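For developers wondering how such a feature works mechanically, the core of a personal-space bubble is easy to sketch: once the gesture fires, any avatar inside a radius around you is hidden from your view only. The Python below is a minimal, hypothetical illustration – the class, names and 2-meter radius are invented for the example, not QuiVr’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    x: float
    y: float
    z: float

def players_to_hide(me, others, bubble_radius=2.0):
    """Once the 'power gesture' is activated, return the players whose avatars
    should be hidden from *my* view because they are inside my personal-space
    bubble. The effect is purely local: only my rendering of them changes."""
    hidden = []
    for other in others:
        distance_sq = (other.x - me.x) ** 2 + (other.y - me.y) ** 2 + (other.z - me.z) ** 2
        if distance_sq <= bubble_radius ** 2:
            hidden.append(other)
    return hidden

# Example: one player is crowding me, one is far away.
me = Player("me", 0.0, 0.0, 0.0)
others = [Player("too_close", 0.5, 0.0, 0.3), Player("far_away", 10.0, 0.0, 4.0)]
print([p.name for p in players_to_hide(me, others)])   # -> ['too_close']
```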

As games are being developed, quality assurance testers often try to “break” the game, finding ways that inventive players might unexpectedly use game systems that the developers did not intend. Testers should include in this ongoing process efforts to identify ways players could harm each other. Developers should deal with them the same way they do other problems in the game’s design. It’s not “just a game” anymore.

Katherine Cross, Ph.D. Student in Sociology, City University of New York

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

What wind, currents and geography tell us about how people first settled Oceania

By Alvaro Montenegro, The Ohio State University.

Just look at a map of Remote Oceania – the region of the Pacific that contains Hawaii, New Zealand, Samoa, French Polynesia and Micronesia – and it’s hard not to wonder how people originally settled on these islands. They’re mostly small and located many hundreds to thousands of kilometers away from any large landmass as well as from each other. As our species colonized just about every region of the planet, these islands seem to be the last places our distant ancestors reached.

A comprehensive body of archaeological, linguistic, anthropological and genetic evidence suggests that people started settling there about 3,400 years before present (BP). While we have a relatively clear picture of when many of the major island groups were colonized, there is still considerable debate as to precisely where these settlers originated and the strategies and trajectories they used as they voyaged.

In new experiments, my colleagues and I investigated how environmental variability and Oceania’s geographical setting would have influenced the colonization process. We built computer seafaring simulations and analyzed wind, precipitation and land distribution data over this region of the Pacific. We wanted to understand how seasonal and climate variability in weather and currents might lead to some potential routes being favored over others. How would these factors, including the periodic El Niño and La Niña patterns, affect even the feasibility of different sailing strategies? Did they play a role in the puzzling 2,000-year pause we see in eastward expansion? Could they have provided incentives to migration?

Standing questions about Oceania’s settlement

While the archaeological record contains no concrete information on the sailing capabilities of these early voyagers, their navigational prowess is undeniable. Settlement required trips across thousands of kilometers of open ocean toward very small targets. Traditional Pacific vessels such as double-hulled voyaging canoes and outrigger canoes would be able to make these potentially harrowing journeys, but at this point we have no way of knowing what kind of boat technology those early settlers used.

And colonization occurred in the opposite direction of mean winds and currents, which in this area of the Pacific flow on average from east to west. Scientists think the pioneers came from west to east, with western Melanesia and eastern Maritime Southeast Asia being the most likely source areas. But there’s still considerable debate as to exactly where these settlers came from, where they traveled and how.

Among the many intriguing aspects of the colonization process is the fact that it occurred in two rapid bursts separated by an almost 2,000-year-long hiatus. Starting around 3,400 BP, the region between the source areas and the islands of Samoa and Tonga was mostly occupied over a period of about 300 years. Then there was a pause in expansion; regions farther to the east such as Hawaii, Rapa Nui and Tahiti were only colonized sometime between about 1,100 and 800 BP. New Zealand, to the west of Samoa and Tonga but located far to the south, was occupied during this second expansion period. What might have caused that millennia-long lag?

Contemporary replica of a waʻa kaulua, a Polynesian double-hulled voyaging canoe.
Shihmei Barger 舒詩玫, CC BY-NC-ND

Simulating sailing conditions

The goal of our simulations was to take into account what we know about the real-world sailing conditions these intrepid settlers would have encountered at the time they were setting out. We know the general sailing performance of traditional Polynesian vessels – how fast these boats move given a particular wind speed and direction. We ran the simulation using observed present-day wind and current data – our assumption was that today’s conditions would be very close to those from 3,000 years ago and offer a better representation of variability than paleoclimate models.

The simulations compute how far one of these boats would have traveled daily based on winds and currents. We simulated departures from several different areas and at different times of year.

First we considered what would happen if the boats were sailing downwind; the vessels have no specified destination and are allowed to sail only in the direction in which the wind is blowing. Then we ran directed sailing experiments; in these, the boats are still influenced by currents and winds, but are forced to move a minimum daily distance, no matter the environmental conditions, toward a predetermined target. We still don’t know what type of vessels were used or how the sailors navigated; we just ran the model assuming they had some way to voyage against the wind, whether via sails or paddling.
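To make the two strategies concrete, here is a toy version of a single simulated day of voyaging – with invented numbers and simplified vector arithmetic, not the study’s actual model – showing how the daily position update differs between pure downwind drift and directed sailing with a guaranteed minimum progress toward a target:

```python
# Toy version of one simulated day of voyaging. Positions and velocities are
# (east, north) components in kilometers and kilometers per day.

def downwind_step(position, wind_drift, current_drift):
    """Downwind sailing: the boat simply goes where wind and current push it."""
    x, y = position
    return (x + wind_drift[0] + current_drift[0],
            y + wind_drift[1] + current_drift[1])

def directed_step(position, target, min_progress_km, wind_drift, current_drift):
    """Directed sailing: wind and current still act on the boat, but the crew
    also forces a minimum daily progress toward a chosen target island."""
    x, y = downwind_step(position, wind_drift, current_drift)
    tx, ty = target
    dx, dy = tx - x, ty - y
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > 0:
        step = min(min_progress_km, dist)
        x += step * dx / dist
        y += step * dy / dist
    return (x, y)

# Illustrative day: a 30 km/day westward trade wind, a weak westward current,
# and a crew making 40 km/day toward an island 500 km to the east.
pos = (0.0, 0.0)
print(downwind_step(pos, wind_drift=(-30, 0), current_drift=(-5, 0)))   # drifts west
print(directed_step(pos, target=(500, 0), min_progress_km=40,
                    wind_drift=(-30, 0), current_drift=(-5, 0)))        # nets +5 km east
```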

One goal of our analysis was to describe how variations in winds and precipitation associated with the annual seasons and with the El Niño and La Niña weather patterns could have affected voyaging. We focused on conditions that would have favored or motivated movement from west to east, opposite to the mean winds, but in the general direction of the real migratory flow.

We also used land distribution data to determine “shortest hop” trajectories. These are the routes that would be formed if eastward displacement took place by a sequence of crossings in which each individual crossing always reaches the closest island to the east of the departure island.
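The rule itself is easy to state in code. The sketch below applies it to a handful of made-up island coordinates – the names and positions are invented for illustration, whereas the study used real geographic data and great-circle distances:

```python
import math

# Hypothetical islands as (longitude, latitude) pairs -- illustrative only.
islands = {
    "Start":   (130.0, -5.0),
    "IslandA": (140.0, -8.0),
    "IslandB": (152.0, -6.0),
    "Gateway": (171.0, -14.0),
    "FarEast": (188.0, -15.0),
}

def distance(a, b):
    # Crude planar distance; a real analysis would use great-circle distance.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shortest_hop_path(start_name):
    """From the current island, always hop to the closest island lying to its east."""
    path = [start_name]
    current = start_name
    while True:
        here = islands[current]
        eastward = [n for n, p in islands.items() if p[0] > here[0]]
        if not eastward:
            return path
        current = min(eastward, key=lambda n: distance(here, islands[n]))
        path.append(current)

print(" -> ".join(shortest_hop_path("Start")))
# -> Start -> IslandA -> IslandB -> Gateway -> FarEast
```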

What did the environmental data suggest?

After conducting thousands of voyaging simulations and calculating hundreds of shortest-hop trajectories, patterns started to emerge.

While the annually averaged winds in the region are to the west, there is significant variability, and eastward winds blow quite frequently in some seasons. The occurrence and intensity of these eastward winds increase during El Niño years. So downwind sailing, especially if conducted during particular times of the year (June-November in areas north of the equator and December-February in the Southern Hemisphere), can be an effective way to move eastward. It could be used to reach islands in the region of the first colonization pulse. Trips by downwind sailing become even more feasible under El Niño conditions.

Though many do believe the early settlers were able to sail efficiently against the wind, our simulations suggest that simply following the winds and currents is one way people could conceivably have traveled east in this area. (Moving eastward in the area east of Samoa does require sailing against the wind, though.)

Filled red lines depict all shortest-hop paths with starts from central and southern Philippines, Maluku and Solomon departure areas.
Using seafaring simulations and shortest-hop trajectories to model the prehistoric colonization of Remote Oceania. Montenegro et al., PNAS 2016, doi:10.1073/pnas.1612426113

Our shortest-hop analysis points to two “gateway islands” – eastward expansion into large areas of Oceania would require passage through them. Movement into Micronesia would have to go through Yap. Expansion into eastern Polynesia would mean traveling through Samoa. This idea of gateway islands that would have to be colonized first opens new possibilities for understanding the process of settling Oceania.

As for that 2,000-year-long pause in migration, our simulation provided us with a few ideas about that, too. The area near Samoa is marked by an increase in distance between islands. And no matter what time of year, El Niño or not, you need to move against the wind to travel eastward around Samoa. So it makes sense that the pause in the colonization process was related to the development of technological advances that would allow more efficient against-the-wind sailing.

And finally, we think our analysis suggests some incentives to migration, too. In addition to changes to wind patterns that facilitate movement to the east, the El Niño weather pattern also causes drier conditions over western portions of Micronesia and Polynesia every two to seven years. It’s possible to imagine El Niño leading to tougher conditions, such as crop-damaging drought. El Niño weather could simultaneously have provided a reason to want to strike out for greener pastures and a means for eastward exploration and colonization. On the flip side, changes in winds and precipitation associated with La Niña could have encouraged migration to Hawaii and New Zealand.

Synthesis of results. Filled and dashed arrows refer to crossings that, according to simulations, are viable under downwind and directed sailing, respectively.
Using seafaring simulations and shortest-hop trajectories to model the prehistoric colonization of Remote Oceania. Montenegro et al., PNAS 2016, doi:10.1073/pnas.1612426113

Overall, our results lend weight to several existing theories. El Niño and La Niña have been proposed as potential migration influences before, but we’ve provided a much more detailed view in both space and time of how this could have taken place. Our simulations strengthen the case that a lack of technology caused the pause in migration, and that downwind sailing was a viable strategy for the first colonization pulse around 3,400 BP.

In the future, we hope to create new models – turning to time-series of environmental data instead of the statistical descriptions we used this time – to see if they produce similar results. We also want to develop experiments that would evaluate sailing strategies not in the context of discovery and colonization but of exchange networks. Are the islands along “easier” pathways between distant points also places where the archaeology shows a diverse set of artifacts from different regions? There’s still plenty to figure out about how people originally undertook these amazing voyages of exploration and expansion.

Alvaro Montenegro, Assistant Professor of Geography and Director, Atmospheric Sciences Program, The Ohio State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Why insurance companies control your medical care

By Christy Ford Chapin, University of Maryland, Baltimore County.

It’s that time of year again. Insurance companies that participate in the Affordable Care Act’s state health exchanges are signaling that prices will rise dramatically this fall.

And if insurance costs aren’t enough of a crisis, researchers are highlighting deficiencies in health care quality, such as unnecessary tests and procedures that cause patient harm, medical errors bred by disjointed or fragmented care and disparities in service distribution.

While critics emphasize the ACA’s shortcomings, cost and quality issues have long plagued the U.S. health care system. As my research demonstrates, we have these problems because insurance companies are at the center of the system, where they both finance and manage medical care.

If this system is so flawed, how did we get stuck with it in the first place?

Answer: organized physicians.

As I explain in my book, “Ensuring America’s Health: The Public Creation of the Corporate Health Care System,” from the 1930s through the 1960s, the American Medical Association, the foremost professional organization for physicians, played a leading role in implementing the insurance company model.

What existed before health insurance companies?

Between the 1900s and the 1940s, patients flocked to what were called “prepaid physician groups,” or “prepaid doctor groups.”

Prepaid groups offered inexpensive health care because physicians acted as their own insurers. Patients paid a monthly fee directly to the group rather than to an insurance company. Physicians undermined their own financial position if they either oversupplied services (as they do today) or rationed them. Ordering unnecessary tests and procedures drained the group’s resources and adversely affected physician pay, which was often tied to quarterly profits. But if patients were unhappy with their care, the group stood to lose paying patients.

Harry Truman. Credit: Library of Congress

Unlike today’s medical group practices, prepaid groups were composed of doctors from various specialties. So rather than solely working with other general practitioners, GPs worked with surgeons, obstetricians and ophthalmologists. At the end of each day, the group’s physicians met with one another to consult over tricky cases. Thus, chronically sick patients and individuals with several conditions or difficult-to-diagnose illnesses enjoyed one-stop medical care.

Many health care reformers, including those behind President Truman’s failed 1948 universal care proposal, hoped to develop the medical economy around prepaid groups. Progressives believed that by federally funding prepaid groups, they could efficiently supply the entire population with comprehensive care.

Why did the AMA oppose prepaid doctor groups?

As prepaid doctor groups gained in popularity, the AMA took notice and began organizing to combat them.

AMA leaders were afraid that self-insuring, multi-specialty groups would eventually evolve into health care corporations. They feared that this “corporate medicine” would render physicians mere cogs in a bureaucratic hierarchy.

So AMA officials threatened doctors working for or contemplating joining prepaid groups. Because AMA members occupied influential roles in hospitals and on state licensing boards, practitioners who refused to heed their warnings usually lost their hospital admitting privileges and medical licenses. These actions severely weakened existing prepaid groups and prevented physicians from establishing new ones.

But the AMA also vigorously opposed government involvement in health care. While they had great success defeating prepaid doctor groups, AMA leaders realized that if they continued knocking down private attempts to organize health care, government officials would step in to manage the medical economy. Indeed, throughout the 1930s and 1940s, health care reform was a popular goal for progressive policymakers.

The birth of the insurance company model

In order to build up the private sector as a means for fighting government health care reform, AMA leaders designed the insurance company model.

AMA leaders decided that rather than allowing doctors to insure patients, only insurance companies would be permitted to offer medical coverage.

During the 1930s, insurance companies sold life insurance policies and worked with businesses to provide employee pensions. Insurance company executives had no interest in entering the health care field. But they reluctantly agreed to go along with the AMA plan in order to help physicians defeat nationalized medicine.

AMA officials believed they could keep corporate power separate from medicine by instituting a few rules. First, insurance companies were forbidden from financing multi-specialty physician groups. AMA officials insisted that physicians practice individually or in single-specialty partnerships. Second, the AMA banned the use of set salaries or per-patient fees. They instead required insurance companies to pay doctors for each and every service they supplied (fee-for-service payment). Finally, the AMA prohibited insurance companies from supervising physician work. Physician leaders concluded that these arrangements would protect their earnings and autonomy.

Unfortunately, the insurance company model fragmented care across numerous specialties and encouraged physicians and hospitals to practice without regard for financial resources. With a distant corporation footing the bill, there was little to prevent hospitals and physicians from ordering unessential tests and procedures for insured patients. Many patients with insurance received excessive medical services. Unwarranted surgeries – for example, medically unnecessary appendectomies – became a national crisis by the 1950s, and hospital admission rates increased far beyond what even the most innovative technologies called for.

President Lyndon B. Johnson signs the Medicare Bill. President Harry S. Truman is seated next to him.
LBJ Library

Medicare adopts the insurance company model

From the 1940s on, the nation’s health care system steadily developed around the faulty insurance company model. Though initially uneasy with one another, physicians and insurers worked together to strengthen and spread insurance company arrangements. They did so to demonstrate that the federal government need not interfere in health care. And their gambit worked: Physicians and insurers defeated attempts under Presidents Truman and Eisenhower to reform health care.

When federal politicians finally did intervene in health care with the passage of Medicare in 1965, the insurance company model had been developing for decades. Government agencies simply could not match the private economy’s organizational capabilities. So, grudgingly, the health care reformers and progressive politicians behind Medicare built their program of government-funded health policies for the elderly around the insurance company model. Medicare’s architects also appointed insurance companies to act as program administrators, to operate as intermediaries between the federal government and hospitals and physicians, a role that they have to this day.

Medicare’s adoption of the insurance company model signaled its complete domination of U.S. health care.

Predictably, health care prices skyrocketed. Even before Medicare’s passage, politicians, journalists, and academics had been debating what to do about rising health care costs. Then Medicare brought millions of new elderly – and more sickly – patients into the system. Consequently, from 1966 through 1973, health care spending increased approximately 12 percent each year. Today, U.S. medical care expenditures are the highest in the world, making up 18 percent of the nation’s gross domestic product.

To control prices, insurers have gradually, over the course of many decades, implemented cost containment measures. These measures have required doctors to report their actions to insurers and increasingly seek insurer permission to perform medical services and procedures.

Insurers, once forbidden from supervising physician work, now act as managers, peering over the shoulders of doctors in a vain effort to counteract payment incentives that have created an oversupply of insured care.

Insurance companies play a big role in the ACA.
Jonathan Ernst/Reuters

Insurance companies maintain their position in the ACA

While the flaws of the insurance company model have become more evident, reforming the system has proven extremely difficult. Just look at the Affordable Care Act.

ACA planners attempted to undermine the insurance company model by proposing a public option – government-managed insurance that officials could deck out with generous benefits while subsidizing coverage to hold down policy prices. This strategy would allow the public option to outcompete and eventually destroy existing private-sector coverage. Opponents, including the AMA, viewed it as a step toward a government takeover of health care. Amid the intense political fighting, the public option was dropped, and the ACA was built around the insurance company model.

Thus, since the ACA’s passage, premium prices have continued to climb and deductibles have increased. Insurers have scaled back the number of physicians and hospitals in their networks. At the same time, researchers question health care quality and service disparities.

Looking to the future

Reacting to voters’ frustration with this news, both presidential candidates have called for additional health care reforms. Reforms based on prepaid doctor groups hold the potential for bipartisan support.

Hillary Clinton is calling for a public option, which, if passed, would weaken the power of insurance companies. Clinton could use such a policy to reboot the prepaid group model.

Donald Trump advocates the repeal of the ACA and the sale of insurance across state lines. Republicans, citing fealty to market competition and consumer choice, could also rally around prepaid doctor groups.

With growing patient dissatisfaction and concern among physicians about insurance company dominance, prepaid groups could finally succeed.

Christy Ford Chapin, Visiting Scholar at Johns Hopkins University and Assistant Professor of History, University of Maryland, Baltimore County

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Feds: We can read all your email, and you’ll never know

[Editor’s note: this article is focused more on legal and ethical issues than on science, but we felt our readers would want to know!]

By Clark D. Cunningham, Georgia State University

Fear of hackers reading private emails in cloud-based systems like Microsoft Outlook, Gmail or Yahoo has recently sent regular people and public officials scrambling to delete entire accounts full of messages dating back years. What we don’t expect is our own government to hack our email – but it’s happening. Federal court cases going on right now are revealing that federal officials can read all your email without your knowledge.

As a scholar and lawyer who started researching and writing about the history and meaning of the Fourth Amendment to the Constitution more than 30 years ago, I immediately saw how the FBI versus Apple controversy earlier this year was bringing the founders’ fight for liberty into the 21st century. My study of that legal battle led me to dig into the federal government’s actual practices for getting email from cloud accounts and cellphones, and left me worried that our basic liberties are threatened.

A new type of government search

The federal government is getting access to the contents of entire email accounts by using an ancient procedure – the search warrant – with a new, sinister twist: secret court proceedings.

The earliest search warrants had a very limited purpose – authorizing entry to private premises to find and recover stolen goods. During the era of the American Revolution, British authorities abused this power to conduct dragnet searches of colonial homes and to seize people’s private papers looking for evidence of political resistance.

To prevent the new federal government from engaging in that sort of tyranny, special controls over search warrants were written into the Fourth Amendment to the Constitution. But these constitutional provisions are failing to protect our personal documents if they are stored in the cloud or on our smartphones.

Fortunately, the government’s efforts are finally being made public, thanks to legal battles taken up by Apple, Microsoft and other major companies. But the feds are fighting back, using even more subversive legal tactics.

Searching in secret

To get these warrants in the first place, the feds are using the Electronic Communications Privacy Act, passed in 1986 – long before widespread use of cloud-based email and smartphones. That law allows the government to use a warrant to get electronic communications from the company providing the service – rather than the true owner of the email account, the person who uses it.

And the government then usually asks that the warrant be “sealed,” which means it won’t appear in public court records and will be hidden from you. Even worse, the law lets the government get what is called a “gag order,” a court ruling preventing the company from telling you it got a warrant for your email.

You might never know that the government has been reading all of your email – or you might find out when you get charged with a crime based on your messages.

Microsoft steps up

Much was written about Apple’s successful fight earlier this year to prevent the FBI from forcing the company to break the iPhone’s security system.

But relatively little notice has come to a similar Microsoft effort on behalf of customers that began in April 2016. The company’s suit argued that search warrants delivered to Microsoft for customers’ emails are violating regular people’s constitutional rights. (It also argued that being gagged violates Microsoft’s own First Amendment rights.)

Microsoft’s suit, filed in Seattle, says that over the course of 20 months in 2015 and 2016, it received more than 3,000 gag orders – and that more than two-thirds of the gag orders were effectively permanent, because they did not include end dates. Court documents supporting Microsoft describe thousands more gag orders issued against Google, Yahoo, Twitter and other companies. Remarkably, three former chief federal prosecutors, who collectively had authority for the Seattle region for every year from 1989 to 2009, and the retired head of the FBI’s Seattle office have also joined forces to support Microsoft’s position.

The feds get everything

This search warrant clearly spells out who the government thinks controls email accounts – the provider, not the user. U.S. District Court for the Southern District of New York

It’s very difficult to get a copy of one of these search warrants, thanks to orders sealing files and gagging companies. But in another Microsoft lawsuit against the government, a redacted warrant was made part of the court record. It shows how the government asks for – and receives – the power to look at all of a person’s email.

On the first page of the warrant, the cloud-based email account is clearly treated as “premises” controlled by Microsoft, not by the email account’s owner:

“An application by a federal law enforcement officer or an attorney for the government requests the search of the following … property located in the Western District of Washington, the premises known and described as the email account [REDACTED]@MSN.COM, which is controlled by Microsoft Corporation.”

The Fourth Amendment requires that a search warrant must “particularly describe the things to be seized” and there must be “probable cause” based on sworn testimony that those particular things are evidence of a crime. But this warrant orders Microsoft to turn over “the contents of all e-mails stored in the account, including copies of e-mails sent from the account.” From the day the account was opened to the date of the warrant, everything must be handed over to the feds.

The warrant orders Microsoft to turn over every email in an account – including every sent message. U.S. District Court for the Southern District of New York

Reading all of it

In warrants like this, the government is deliberately not limiting itself to the constitutionally required “particular description” of the messages it’s looking for. To get away with this, it tells judges that incriminating emails can be hard to find – maybe even hidden with misleading names, dates and file attachments – so their computer forensic experts need access to the whole database to work their magic.

If the government were serious about obeying the Constitution, when it asks for an entire email account, it would at least write into the warrant limits on its forensic analysis so that only emails that are evidence of a crime could be viewed. But this Microsoft warrant says an unspecified “variety of techniques may be employed to search the seized emails,” including “email by email review.”

The right to read every email. U.S. District Court for the Southern District of New York

As I explain in a forthcoming paper, there is good reason to suspect this type of warrant is the government’s usual approach, not an exception.

Former federal computer-crimes prosecutor Paul Ohm says almost every federal computer search warrant lacks the required particularity. Another former prosecutor, Orin Kerr, who wrote the first edition of the federal manual on searching computers, agrees: “Everything can be seized. Everything can be searched.” Even some federal judges are calling attention to the problem, putting into print their objections to signing such warrants – but unfortunately most judges seem all too willing to go along.

What happens next

If Microsoft wins, then citizens will have the chance to see these search warrants and challenge the ways they violate the Constitution. But the government has come up with a clever – and sinister – argument for throwing the case out of court before it even gets started.

The government has asked the judge in the case to rule that Microsoft has no legal right to raise the Constitutional rights of its customers. Anticipating this move, the American Civil Liberties Union asked to join the lawsuit, saying it uses Outlook and wants notice if Microsoft were served with a warrant for its email.

The government’s response? The ACLU has no right to sue because it can’t prove that there has been or will be a search warrant for its email. Of course the point of the lawsuit is to protect citizens who can’t prove they are subject to a search warrant because of the secrecy of the whole process. The government’s position is that no one in America has the legal right to challenge the way prosecutors are using this law.

Far from the only risk

The government is taking a similar approach to smartphone data.

For example, in the case of U.S. v. Ravelo, pending in Newark, New Jersey, the government used a search warrant to download the entire contents of a lawyer’s personal cellphone – more than 90,000 items including text messages, emails, contact lists and photos. When the phone’s owner complained to a judge, the government argued it could look at everything (except for privileged lawyer-client communications) before the court even issued a ruling.

The federal prosecutor for New Jersey, Paul Fishman, has gone even further, telling the judge that once the government has cloned the cellphone, it gets to keep the copies it has of all 90,000 items even if the judge rules that the cellphone search violated the Constitution.

Where does this all leave us now? The judge in Ravelo is expected to issue a preliminary ruling on the feds’ arguments sometime in October. The government will file its final brief on its motion to dismiss the Microsoft case on September 23. All Americans should be watching carefully what happens next in these cases – the government may already be watching you without your knowledge.

Clark D. Cunningham, W. Lee Burge Chair in Law & Ethics; Director, National Institute for Teaching Ethics & Professionalism, Georgia State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

How Tiny Primate Virtual Brains Help with Understanding Evolution

Virtual brains reconstructed from ancient, kiwi-sized primate skulls could help resolve one of the most intriguing evolutionary mysteries: how modern primates developed such large brains.

Paleontologists found clues in the remarkably preserved skulls of adapiforms, lemur-like primates that scurried around the tropical forests of Wyoming about 50 million years ago. Thought to be a link between primitive and advanced primates, their fossil skulls are the best evidence available for understanding the neuroanatomy of the earliest ancestors of modern primates.

But there was just one problem—the brain cavities of the fragile skulls contained only rock and dust.

That is, until scientists used CT technology to create the first virtual 3D brain casts of these early primates. The eight virtually reconstructed and dissected brains – the most ever created for a single study – show that an evolutionary burst, including improved vision and more complex neurological function, came before the boost in brain size.

Tiny primate brain virtual "casts"
Top and bottom views, respectively, of the virtual brains of Notharctus tenebrosus (A, B, C, E and F), Adapis parisiensis (G and H), and Smilodectes gracilis (bottom two rows) within transparent renderings of their skulls. (Credit: U. Florida)

“It may be that these early specializations allowed primate brains to expand later in time,” says lead author Arianna Harrington, previously an undergraduate and master’s student at the Florida Museum of Natural History at the University of Florida and now a doctoral student at Duke University. “The idea is that any patterns we find in primate brain evolution could lead to a better understanding of the early evolution that led to the human brain.”

Scientists have long debated whether primates have always had big brains compared to body size, or whether the trait appeared later. The new study’s findings, published in the Journal of Human Evolution, are consistent with previous endocast studies of Australopithecus afarensis, the oldest hominid known, and Victoriapithecus macinnesi, an early Old World monkey, which showed that brain size increase followed brain specialization in early hominids and monkeys.


Adapiforms, which are not directly related to humans, evolved after the earliest primate ancestors, called plesiadapiforms, which lived about 65 million years ago. The scientists created virtual endocasts for three different species of adapiforms: Notharctus tenebrosus and Smilodectes gracilis from the middle Eocene Bridger formation of Wyoming and a late Eocene European specimen named Adapis parisiensis.

Adapiforms’ skulls differ from those of the earlier plesiadapiforms in a few ways, including more forward-facing eyes. The new virtual endocasts allowed scientists to take a closer look at anatomical features, which revealed that while adapiforms placed relatively less emphasis on smell – more like modern primate brains – their relative brain size was not so different from that of plesiadapiforms, says study coauthor Jonathan Bloch, curator of vertebrate paleontology at the Florida Museum.

“While it’s true humans and other modern primates have very large brains, that story started down at the base of our group,” Bloch says. “As our study shows, the earliest primates actually had relatively small brains. So they didn’t start out with large brains and maintain them.”
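A common way to quantify “relatively small” or “relatively large” brains is an encephalization quotient: the measured brain mass divided by the brain mass expected for a mammal of that body mass. The sketch below uses Jerison’s classic expectation for mammals with invented example values – it illustrates the concept only and does not reproduce the endocast volumes measured in the study:

```python
def expected_brain_mass_g(body_mass_g):
    """Jerison's classic allometric expectation for mammals:
    expected brain mass ~= 0.12 * (body mass)^(2/3), both in grams."""
    return 0.12 * body_mass_g ** (2.0 / 3.0)

def encephalization_quotient(brain_mass_g, body_mass_g):
    """EQ > 1 means a bigger brain than expected for the body size; < 1, smaller."""
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Invented example values for illustration only (not the study's measurements):
examples = {
    "small early primate": (8.0, 2000.0),     # 8 g brain, 2 kg body
    "modern macaque":      (90.0, 8000.0),    # 90 g brain, 8 kg body
    "modern human":        (1350.0, 65000.0), # 1.35 kg brain, 65 kg body
}

for name, (brain, body) in examples.items():
    print(f"{name:>20}: EQ = {encephalization_quotient(brain, body):.2f}")
# The small early primate comes out well below 1, the macaque near 2,
# and the human near 7 -- a rough sense of "relative brain size."
```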

Modern primates are visually specialized animals. One of the main differences between the early plesiadapiforms and the adapiforms is that the region of the brain responsible for the sense of smell, the olfactory bulb, is smaller in adapiforms, while the areas of the brain responsible for vision appear expanded, Harrington says.

“It is likely this indicates they’re beginning to rely more on vision than smell,” she says. “Scientists have hypothesized that vision may have helped early primates forage in complex arboreal forest systems.”

Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by

Now, Check Out:

Geomythology: Can geologists relate ancient stories of great floods to real events?

By David R. Montgomery, University of Washington.

Modern people have long wondered about ancient stories of great floods. Do they tell of real events in the distant past, or are they myths rooted in imagination? Most familiar to many of us in the West is the biblical story of Noah’s flood. But cultures around the world have passed down their own tales of devastating natural disasters.

New research recently published in Science by a group of mostly Chinese researchers led by Qinglong Wu reports geological evidence for an event they propose may be behind China’s story of a great flood. This new research delves into the field of geomythology, which relates oral traditions and folklore to natural phenomena like earthquakes, volcanic eruptions and floods.

A view of Jishi Gorge, upstream from the landslide dam researchers say unleashed a great flood in China almost 4,000 years ago. Gray silt deposits are visible dozens of meters above the water.
Wu Qinglong, CC BY-NC

“Great Yu controls the waters”

The story of Emperor Yu, the legendary founder of China’s first dynasty, centers on his ability to drain persistent floodwaters from lowland areas, bringing order to the land. This ancient flood story celebrates the triumph of human ingenuity and labor over the chaotic forces of the natural world. It’s strikingly different from other flood traditions in that its hero didn’t survive a world-destroying flood, but rather pulled off feats of river engineering that brought order to the land and paved the way for lowland agriculture. But was Emperor Yu a real historical person, and if so, what triggered the great flood so central to his story?

Diagram of the hypothesized dam outburst process in the Jishi Gorge. Wu Qinglong, CC BY-ND

In their new analysis, Wu and colleagues build on previous studies of landslides in the Jishi Gorge that dammed the Yellow River where it flows down off the Tibetan Plateau. They marshal geological and archaeological evidence to argue that when a landslide dam failed, a flood ripped down China’s Yellow River around 1920 B.C. They dated lake sediments trapped upstream of the landslide dam and flood sediments deposited downstream at elevations of up to 165 feet above river level. They estimated the landslide dam’s failure sent almost a half million cubic meters of water per second surging down the Yellow River and on across early China. They also note that the timing of this flood coincides with a major archaeological transition from the Neolithic to Bronze Age in the downstream lowlands along the Yellow River.

Detail of hanging scroll of Emperor Yu. Ma Lin

The Science study not only reports evidence of a great flood at the right time and place to be Yu’s flood, but also notes how it coincides with a previously identified shift in the course of the Yellow River to a new outlet across the North China plain. The researchers suggest the flood they identified may have breached the levees on the lowland river and triggered this shift.

And this, in turn, would help explain a unique aspect of the story of Yu’s flood. A large river rerouted to a new course could trigger persistent lowland flooding. A longer route to the sea would impose a gentler slope that would promote deposition of sediment, clogging the channel, and splitting flow into multiple channels – all of which would exacerbate flooding of lowland areas. This sounds like a pretty good setup for the story of Yu’s long labor to drain the floodwaters and channel them to the sea.

Flood stories from cultures around the globe

When I researched the potential geological origins of the world’s flood stories for my book “The Rocks Don’t Lie: A Geologist Investigates Noah’s Flood,” I was impressed with how the geography of seemingly curious details in many local myths was consistent with geological processes that cause disastrous floods in different regions. Even along the Nile, where the annual flood is quite predictable, the lack of flood stories is consistent with how droughts were the real danger in ancient Egypt. There, failure to flood would have been catastrophic.

Around the tsunami-prone Pacific, flood stories tell of disastrous waves that rose from the sea. Early Christian missionaries were perplexed as to why flood traditions from South Pacific islands didn’t mention the Bible’s 40 days and nights of rain, but instead told of great waves that struck without warning. A traditional story from the coast of Chile described how two great snakes competed to see which could make the sea rise more, triggering an earthquake and sending a great wave ashore. Native American stories from coastal communities in the Pacific Northwest tell of great battles between Thunderbird and Whale that shook the ground and sent great waves crashing ashore. These stories sound like prescientific descriptions of a tsunami: an earthquake-triggered wave that can catastrophically inundate shorelines without warning.

Glacial dams can give way unexpectedly, releasing massive amounts of water that had been held back by the ice.
Dominic Alves, CC BY

Other flood stories evoke the failure of ice and debris dams on the margins of glaciers that suddenly release the lakes they held back. A Scandinavian flood story, for example, tells of how Odin and his brothers killed the ice giant Ymir, causing a great flood to burst forth and drown people and animals. It doesn’t take a lot of imagination to see how this might describe the failure of a glacial dam.

While doing fieldwork in Tibet, I learned of a local story about a great guru draining a lake in the valley of the Tsangpo River on the edge of the Tibetan Plateau – after our team had discovered terraces made of lake sediments perched high above the valley floor. Wood fragments we collected from those lake sediments yielded radiocarbon dates of about 1,200 years ago, which corresponds to the time when the guru arrived in the valley and, so the story goes, converted the local populace to Buddhism by defeating the demon of the lake to reveal the fertile lake bottom that the villagers still farm.

The most deadly and disruptive floods would be talked about for years to come. Here Aztecs perform a ritual to appease the angry gods who had flooded their capital.

Don’t expect definitive proof

Of course, attempts to bring science to bear on relating ancient tales to actual events are fraught with speculation. But it is clear that stories of great floods are some of humanity’s oldest. And the global pattern of tsunamis, glacial outburst floods, and catastrophic flooding of lowlands fits rather well with unusual details within many flood stories.

And even though geological evidence put the idea of a global flood to rest almost two centuries ago, there are options for a rational explanation of the biblical flood. One is a catastrophic inundation that oceanographers Bill Ryan and Walter Pitman propose happened when the post-glacial rise in sea level breached the Bosporus and decanted the Mediterranean into a lowland freshwater valley, forming the Black Sea. Or perhaps it could relate to cataclysmic lowland flooding in estuarine Mesopotamia like that which inundated the Irrawaddy Delta in 2008, killing more than 130,000 people.

Does the new study by Wu and his colleagues prove that the great flood they reconstruct was in fact Emperor Yu’s flood? No, but it does make an intriguing case for the possibility. Yet previous researchers studying landslide dams in the Jishi Gorge have concluded that ancient lakes there drained slowly and dated to more than 1,000 years before the dates reported in this latest article. Was there more than one generation of landslide dams and floods? No doubt geologists will continue to argue about the evidence. That is, after all, what we do.

It’s always been part of human nature to be fascinated by and pay attention to the natural world. Great floods and other natural disasters were long seen as the work of angry deities or supernatural entities or powers. But now that we are learning that some stories once viewed as folklore and myth may be rooted in real events, scientists are paying a little more attention to the storytellers of old.

David R. Montgomery, Professor of Earth and Space Sciences, University of Washington

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

How do you know you’re not living in a computer simulation?

By Laura D’Olimpio, University of Notre Dame Australia.

Consider this: right now, you are not where you think you are. In fact, you happen to be the subject of a science experiment being conducted by an evil genius.

Your brain has been expertly removed from your body and is being kept alive in a vat of nutrients that sits on a laboratory bench.

The nerve endings of your brain are connected to a supercomputer that feeds you all the sensations of everyday life. This is why you think you’re living a completely normal life.

Do you still exist? Are you still even “you”? And is the world as you know it a figment of your imagination or an illusion constructed by this evil scientist?

Sounds like a nightmare scenario. But can you say with absolute certainty that it’s not true?

Could you prove to someone that you aren’t actually a brain in a vat?

Deceiving demons

The philosopher Hilary Putnam proposed this famous version of the brain-in-a-vat thought experiment in his 1981 book, Reason, Truth and History, but it is essentially an updated version of the French philosopher René Descartes’ notion of the Evil Genius from his 1641 Meditations on First Philosophy.

While such thought experiments might seem glib – and perhaps a little unsettling – they serve a useful purpose. They are used by philosophers to investigate what beliefs we can hold to be true and, as a result, what kind of knowledge we can have about ourselves and the world around us.

Descartes thought the best way to do this was to start by doubting everything and to build our knowledge from there. Using this sceptical approach, he claimed that only a core of absolute certainty would serve as a reliable foundation for knowledge. He said:

If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.

Descartes believed everyone could engage in this kind of philosophical thinking. In one of his works, he describes a scene where he is sitting in front of a log fire in his wooden cabin, smoking his pipe.

He asks if he can trust that the pipe is in his hands or that his slippers are on his feet. He notes that his senses have deceived him in the past, and anything that has deceived him even once cannot be fully relied upon. Therefore he cannot be sure that his senses are reliable.

Perhaps you’re really just a brain in a vat?
Shutterstock

Down the rabbit hole

It is from Descartes that we get classical sceptical queries favoured by philosophers such as: how can we be sure that we are awake right now and not asleep, dreaming?

To take this challenge to our assumed knowledge further, Descartes imagines there exists an omnipotent, malicious demon that deceives us, leading us to believe we are living our lives when, in fact, reality could be very different to how it appears to us.

I shall suppose that some malicious demon of the utmost power and cunning has employed all his energies in order to deceive me.

The brain-in-a-vat thought experiment and the challenge of scepticism have also been employed in popular culture. Notable contemporary examples include the 1999 film The Matrix and Christopher Nolan’s 2010 film Inception.

By watching a screened version of a thought experiment, the viewer may imaginatively enter into a fictional world and safely explore philosophical ideas.

For example, while watching The Matrix, we identify with the protagonist, Neo (Keanu Reeves), who discovers the “ordinary” world is a computer-simulated reality and his atrophied body is actually suspended in a vat of life-sustaining liquid.

Even if we cannot be absolutely certain that the external world is how it appears to our senses, Descartes commences his second meditation with a small glimmer of hope.

At least we can be sure that we ourselves exist, because every time we doubt that, there must exist an “I” that is doing the doubting. This consolation results in the famous expression cogito ergo sum, or “I think therefore I am”.

So, yes, you may well be a brain in a vat and your experience of the world may be a computer simulation programmed by an evil genius. But, rest assured, at least you’re thinking!

Laura D’Olimpio, Senior Lecturer in Philosophy, University of Notre Dame Australia

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

3D printing: a new threat to gun control and security policy?

By Daniel C. Tirone, Louisiana State University and James Gilley, Louisiana State University.

Following the recent mass shooting in Orlando, and the shootings in Minnesota and Dallas, the sharp political divisions over gun control within the U.S. are once again on display. In June, House Democrats even staged a sit-in to advocate for stronger laws.

There is some evidence that more restrictions can reduce gun violence, but another recent shooting highlighted some limitations of regulation. British Member of Parliament Jo Cox was murdered with a “makeshift gun” despite the United Kingdom’s restrictive gun-control laws.

The threat of self-manufactured firearms is not new, but a critical barrier is collapsing. Until recently, most people didn’t have the skills to make a weapon as capable as commercially available ones. However, recent developments in the field of additive manufacturing, also known as 3D printing, have made home manufacturing simpler than ever before. The prospect of more stringent legislation is also fueling interest in at-home production.

Plans for basic handguns that can be created on consumer-grade 3D printers are readily available online. With more advanced 3D printers and other at-home technologies such as the Ghost Gunner computer-controlled mill, people can even make more complex weapons, including metal handguns and components for semi-automatic rifles.

These technologies pose challenges not only for gun regulation but also for efforts to protect humanity from more powerful weapons. In the words of Bruce Goodwin, associate director at large for national security policy and research at the Lawrence Livermore National Laboratory, “All by itself, additive manufacturing changes everything, including defense matters.”

Policymakers and researchers respond

‘The Liberator,’ a 3D-printed handgun that raised the concern of the U.S. State Department. Justin Pickard/flickr, CC BY-SA

Government officials have recently begun to react to this emerging threat. The U.S. State Department argued that posting online instructions to make a 3D-printed single-shot handgun violated federal laws barring exports of military technology. At the local level, the city of Philadelphia outlawed the possession of 3D-printed guns or their components in 2013.

Those of us in the research community have also been addressing the security implications of additive manufacturing. A 2014 conference of intelligence community and private sector professionals noted that current at-home and small-scale 3D printing technology can’t produce the same quality output as industrial equipment, and doesn’t work with as wide a range of plastics, metals and other materials. Nevertheless, participants recommended a number of policies, such as more rigorous intellectual property laws, to counter the evolving threat of unregulated 3D-printed weapons. These types of policies will become increasingly important as at-home manufacturing of firearms weakens traditional gun control regulations such as those focusing on the buying and selling of weapons.

Expanding the security threat

The danger goes well beyond firearms. Countries seeking to develop nuclear weapons could use additive manufacturing to evade international safeguards against nuclear proliferation. Traditional nuclear weapon control efforts include watching international markets for sales of components needed for manufacturing a nuclear device. Additional measures place restrictions on the types of technology nuclear capable states can export. Additive manufacturing could avoid these efforts by letting countries make the equipment themselves, instead of buying it abroad.

Research into this threat led nonproliferation scholar Grant Christopher to recommend that governments enact export restrictions on certain types of 3D printers. Nuclear policy experts Matthew Kroenig and Tristan Volpe proposed other approaches to limit additive manufacturing’s dangers to nuclear security. One way could be increasing international cooperation to regulate the spread of 3D printing technology.

Beyond regulating the hardware, governments and industry professionals can also work to more effectively secure the files needed to build components for weapons of mass destruction. Arms control analyst Amy Nelson points out that the risk of this kind of data spreading grows as it becomes increasingly digital.

Terrorist groups and other nongovernment forces could also find ways to use 3D printing to make more destructive weapons. We argue that despite these groups’ interest in using weapons of mass destruction, they don’t use them regularly because their homemade devices are inherently unreliable. Additive manufacturing could help these groups produce more effective canisters or other delivery mechanisms, or improve the potency of their chemical and biological ingredients. Such developments would make these weapons more attractive and increase the likelihood of their use in a terror attack.

Where to go from here

The worst threats 3D printing poses to human life and safety are likely some distance in the future. However, the harder policymakers and others work to restrict access to handguns or unconventional weapons, the more attractive 3D printing becomes to those who want to do harm.

Additive manufacturing holds great promise for improvements across many different areas of people’s lives. Scholars and policymakers must work together to ensure we can take advantage of these benefits while guarding against the technology’s inherent dangers.

Daniel C. Tirone, Assistant Professor of Political Science, Louisiana State University and James Gilley, Instructor of International Studies, Louisiana State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Sex and other myths about weight loss

By Tammy Chang, University of Michigan and Angie Wang, University of Michigan.

The estimated annual health care costs related to obesity are over $210 billion, or nearly 21 percent of annual medical spending in the United States. Americans spend $60 billion on weight loss products each year, trying everything from expensive meal replacement products to do-it-yourself programs on the latest cell phone apps. We gather weight loss advice, voluntarily or involuntarily, from news outlets, social media and just about everyone.

Americans have known for 15 years that obesity is an epidemic; the surgeon general declared it so in 2001. Despite intense efforts to prevent and treat obesity, however, studies published June 7 in the Journal of the American Medical Association showed that 35 percent of men, 40 percent of women, and 17 percent of children and adolescents are obese. Even more worrisome, the rates continue to rise among women and adolescents.

In fact, experts predict that this generation of children may be the first in 200 years to have a shorter life expectancy than their parents, likely due to obesity.

So what is our society doing wrong? Clearly, what doctors and policy makers have been doing for the last 15 years to address this epidemic is not working.

Weight loss myths have broad appeal

An article from 2013 in the New England Journal of Medicine (NEJM) identified common myths surrounding obesity from popular media and scientific literature. The authors defined myths as ideas that are commonly held but go against scientific data. Could these myths be keeping us from treating obesity effectively? As family physicians who treat overweight patients every day, we believe they are. Not only can these myths discourage people, they also provide misinformation that can prevent people from reaching their weight loss goals.

You might be surprised to hear some of these myths:

Just switching chips for carrots is not enough.
From www.shutterstock.com

Myth 1: Small changes in your diet or exercise will lead to large, long-term weight changes.

Unfortunately, this is not true. In weight loss, two plus two may only equal three instead of four. Small changes simply do not add up because, physiologically, your body tries to stay at the same weight. This doesn’t mean that making small healthy choices doesn’t matter – the small things you do to stay healthy still count. It just means you are not likely to meet your weight loss goals by taking one less bite. It’s likely going to take bigger changes in your diet and exercise.

Myth 2: Setting realistic goals when you are trying to lose weight is important because otherwise you will feel frustrated and lose less weight.

Patients often come in with ambitious goals for weight loss, and we as family physicians nearly always say: go for it! (within safety and reason). There is no evidence that shooting for the stars leads to frustration. If anything, aiming for a larger goal may lead to better weight-loss outcomes.

Myth 3: Losing a lot of weight fast doesn’t keep weight off as well as losing a few pounds slowly.

Again, studies have shown that losing a larger amount of weight quickly at the beginning (maybe while you are super motivated) is associated with lower weight in the long term. There just isn’t evidence for going “slow and steady” when it comes to weight loss.

Finally, to our favorite one:

Myth 4: Having sex one time burns about as many calories as walking a mile.

Sorry to disappoint, but for an average sexual encounter (lasting 6 minutes!), an average man in his 30s burns just 20 calories. And as the NEJM article further explains, that is only 14 more calories than sitting and watching TV. So if the thought went through your head that sex may be your exercise for the day, you should think again.
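If you want to sanity-check those numbers, a rough sketch using the standard MET (metabolic equivalent) formula – calories per minute ≈ METs × 3.5 × body weight in kilograms ÷ 200 – reproduces them reasonably well. The MET values and the 70-kilogram body weight below are illustrative assumptions, not figures taken from the NEJM paper.

```python
# Rough sanity check of the Myth 4 numbers using the standard MET formula:
# calories per minute = METs * 3.5 * body_mass_kg / 200.
# The MET values and body mass here are illustrative assumptions,
# not figures taken from the NEJM paper.

def calories_burned(mets: float, body_mass_kg: float, minutes: float) -> float:
    """Estimate calories burned for an activity at a given intensity."""
    return mets * 3.5 * body_mass_kg / 200 * minutes

BODY_MASS_KG = 70   # assumed weight of an average man in his 30s
DURATION_MIN = 6    # the article's "average" encounter length

tv = calories_burned(mets=1.0, body_mass_kg=BODY_MASS_KG, minutes=DURATION_MIN)
sex = calories_burned(mets=2.8, body_mass_kg=BODY_MASS_KG, minutes=DURATION_MIN)

print(f"Sitting and watching TV: {tv:.0f} calories")       # about 7
print(f"Sexual activity:         {sex:.0f} calories")       # about 21
print(f"Difference:              {sex - tv:.0f} calories")  # about 13
```

Slightly different MET assumptions will land you a bit above or below the article’s 20-calorie figure, but the order of magnitude – tens of calories, not the roughly 100 calories an average adult burns walking a mile – doesn’t change.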

Myths take hold

As family physicians, we were curious to know whether our own patients in clinic might believe these myths. Maybe in the few short years since the NEJM paper was published, this information had permeated popular media and the record had been set straight. Everyone must know these basic facts about obesity, right?

To figure this out, we conducted a study of over 300 people in the waiting room of our diverse academic family medicine clinic. People who participated in our survey had an average age of 37, were mostly female (76 percent), had at least some college education (76 percent), and were a mix of non-Hispanic black (38 percent) and non-Hispanic white (47 percent).

The vast majority of people we surveyed still believed these myths (Myth 1: 85 percent, Myth 2: 94 percent, Myth 3: 85 percent, Myth 4: 61 percent)! Even more interestingly, there were no differences in what people believed across gender, age, or educational levels. These myths were pervasive.

How can we expect people to lose weight if most do not know the basics of weight loss? We didn’t need to go far before we realized that these myths are still found in popular media. In some cases, physicians themselves may fall victim to these myths.

Of course, healthcare providers should only give evidence-based advice to patients about weight loss in order to optimize their chance of success. Studies have shown that when primary care doctors provide advice on weight loss, patients are more likely to attempt to change their behaviors related to weight. However, even giving better and more advice may not be enough.

The first step is to acknowledge that patients are likely influenced by the myths that are so easily found online and in the advice given by friends and family. This means patients must be particularly savvy consumers of health information and must seek out information from reputable sources. It also means that educating and empowering overweight patients is only one part of the solution. Informing those – friends, family, and also the media – who influence overweight patients is also important if we want to change the trajectory of obesity in the U.S.

If we don’t translate the research on obesity into practice, we cannot expect this problem to improve in our lifetime. We will only have a chance if we use what we know about weight loss and drop these myths.

Tammy Chang, Assistant Professor, Family Medicine, University of Michigan and Angie Wang, Resident, Department of Family Medicine, University of Michigan

This article was originally published on The Conversation. Read the original article.

Now, Check Out: