If your computer is infected with ransomware, your antivirus software probably won’t detect it until it’s too late.
Hackers use the malware to encrypt your computer files and demand money in exchange for freeing those contents. The attacks are on the rise.
In May the FBI issued a warning that the number of attacks has doubled in the past year and is expected to grow even more rapidly this year.
Attacks most often show up in the form of an email that appears to be from someone familiar. The recipient clicks on a link in the email and unknowingly unleashes malware that encrypts his or her data. The next thing to appear is a message demanding the ransom, typically anywhere from a few hundred to a few thousand dollars. Often the ransoms are paid in Bitcoin, a digital currency that defies tracing.
“These attacks are tailored and unique every time they get installed on someone’s system,” says Nolen Scaife, a University of Florida doctoral student. “Antivirus is really good at stopping things it’s seen before … That’s where our solution is better than traditional anti-viruses.”
Scaife is part of the team that has come up with the ransomware solution, which it calls CryptoDrop. It doesn’t keep ransomware out, but rather confronts it once it’s there. CryptoDrop actually lets the malware lock up a few files before clamping down on it.
“If something that’s benign starts to behave maliciously, then what we can do is take action against that based on what we see is happening to your data. So we can stop, for example, all of your pictures from being encrypted,” says Scaife.
“Our system is more of an early-warning system. It doesn’t prevent the ransomware from starting … it prevents the ransomware from completing its task … so you lose only a couple of pictures or a couple of documents rather than everything that’s on your hard drive, and it relieves you of the burden of having to pay the ransom,” adds Scaife.
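The researchers have not published their detector here, but the behavior-based idea can be sketched in a few lines of Python. Encrypted data looks statistically random, so one signal such a monitor can watch for is a sudden jump in the entropy of rewritten user files; the thresholds and helper names below are illustrative assumptions, not CryptoDrop’s actual implementation.

```python
import math

# Illustrative only, not CryptoDrop's code: flag rewrites whose new
# contents suddenly look like random (i.e., encrypted) bytes.
def entropy(data: bytes) -> float:
    total = len(data)
    counts = (data.count(bytes([b])) for b in range(256))
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def rewrite_looks_encrypted(before: bytes, after: bytes, jump: float = 3.0) -> bool:
    return entropy(after) - entropy(before) > jump

plaintext = b"Quarterly report: revenue grew modestly." * 50
scrambled = bytes((b * 149 + 37) % 256 for b in range(2000))  # stand-in for ciphertext
print(rewrite_looks_encrypted(plaintext, scrambled))  # True: this rewrite is suspicious
```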
Scaife and colleagues say early tests of the program have been impressive.
“We ran our detector against several hundred ransomware samples that were live,” Scaife says, “and in those cases it detected 100 percent of those malware samples and it did so after only a median of 10 files were encrypted.”
And CryptoDrop works seamlessly with antivirus software.
“About one-tenth of 1 percent of the files were lost,” says Patrick Traynor, an associate professor in computer and information science and engineering, “but the advantage is that it’s flexible. We don’t have to wait for that anti-virus update. If you have a new version of your ransomware, our system can detect that.”
The team currently has a functioning prototype that works with Windows-based systems and is seeking a partner to commercialize it and make it available publicly. They recently presented their results at the IEEE International Conference on Distributed Computing Systems in Japan.
Scientists have found a way to boost the intensity of light waves on a silicon microchip using the power of sound.
Writing in the journal Nature Photonics, a team led by Peter Rakich describes a new waveguide system that harnesses the ability to precisely control the interaction of light and sound waves. This work solves a long-standing problem of how to utilize this interaction in a robust manner on a silicon chip as the basis for powerful new signal-processing technologies.
The prevalence of silicon chips in today’s technology makes the new system particularly advantageous, the researchers note.
“Silicon is the basis for practically all microchip technologies,” says Rakich, who is an assistant professor of applied physics and physics at Yale University. “The ability to combine both light and sound in silicon permits us to control and process information in new ways that weren’t otherwise possible.”
Rakich says combining the two capabilities “is like giving a UPS driver an amphibious vehicle—you can find a much more efficient route for delivery when traveling by land or water.”
These opportunities have motivated numerous groups around the world to explore such hybrid technologies on a silicon chip. However, progress was stifled because those devices weren’t efficient enough for practical applications.
The Yale group lifted this roadblock using new device designs that prevent light and sound from escaping the circuits.
“Figuring out how to shape this interaction without losing amplification was the real challenge,” says Eric Kittlaus, a graduate student in Rakich’s lab and the study’s first author. “With precise control over the light-sound interaction, we will be able to create devices with immediate practical uses, including new types of lasers.”
The researchers say there are commercial applications for the technology in a number of areas, including fiber-optic communications and signal processing. The system is part of a larger body of research the Rakich lab has conducted for the past five years, focused on designing new microchip technologies for light.
The US Department of Defense’s Defense Advanced Research Projects Agency supported the project.
Source: Republished from Futurity.org as a derivative work under the Attribution 4.0 International license. Original article posted to Futurity by Jim Shelton, Yale University.
A project housed inside a 15-foot-tall geodesic dome allows people to dance with a computer-controlled figure named VAI.
The virtual partner “watches” and improvises its own moves based on prior experiences. When the human responds, the figure reacts again, creating an impromptu dance couple based on artificial intelligence (AI).
The LuminAI project dome was designed and constructed by Jessica Anderson, a digital media master’s student at the Georgia Institute of Technology. The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens.
The dome is lined with custom-made panels for projection mapping.
The surfaces allow people to watch their own shadowy avatar as it struts with a virtual character named VAI, which learns how to dance by paying attention to which moves the current user (and everyone before them) makes, and when. The more moves it sees, the deeper the computer’s dance vocabulary becomes. It then uses this vocabulary as a basis for future improvisation.
“Co-creative artificial intelligence, or using AI as a creative collaborator, is rare,” says Brian Magerko, the Georgia Tech digital media associate professor who leads the project. “As computers become more ubiquitous, we must understand how they can co-exist with humans. Part of that is creating things together.”
“This episodic memory is filled with experiences of how people have danced with it in the past,” says Mikhail Jacob, a computer science PhD student and lead developer of the LuminAI technology. “For example, the computer learns to predict that when one person pumps their arms into the air, their partner is likely to do something similar. So on seeing that movement, the avatar might pump its arms sideways at the same pace or use that as the basis for its response.”
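The team has not released LuminAI’s code, but the episodic-memory behavior Jacob describes can be roughly sketched as follows; the move labels and function names are hypothetical stand-ins for the Kinect skeleton data the real system works with.

```python
import random

# Toy episodic memory: store (observed move, partner response) pairs from
# past sessions, then improvise a response to a newly observed move.
memory = []

def remember(observed: str, response: str) -> None:
    memory.append((observed, response))

def improvise(observed: str) -> str:
    candidates = [resp for obs, resp in memory if obs == observed]
    # Recall a past response if one exists, otherwise fall back to mirroring.
    return random.choice(candidates) if candidates else "mirror " + observed

remember("arm pump up", "arm pump sideways")
print(improvise("arm pump up"))  # likely "arm pump sideways"
```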
The team says this improvisation is one of the most important parts of the project. The avatar recognizes patterns, but doesn’t always react the same way every time. That means that the person must improvise too, which leads to greater creativity all around. All the while, the computer is capturing these new experiences and storing the information to use as a basis for future dance sessions.
“Humans aren’t fully in the driver’s seat anymore. The process gives autonomy back to the computer,” says Jacob. “LuminAI forces a person to create something new—potentially something better—with their partner because they’re forced to take their (virtual) partner’s actions into consideration.”
The technology has implications beyond art. As Magerko explains it, these days AI mostly relies on instructions fed to it by humans, and programming a computer with every possible instruction is impossible.
“That’s because humans are so unpredictable,” says Magerko. “Let’s say a computer and a person are going to write a story together about a family conversation at a restaurant. The story could go in a typical fashion or veer wildly into novel territory. The computer won’t do well unless it has been programmed with all of the pieces of knowledge that the story could possibly contain.
“However, if it can learn that knowledge from people and prior experiences, its improvisation can become somewhat consistent and accurate and the AI learning new story content (or dance moves) becomes part of the user experience.”
Cybersecurity researchers hacked into the leading “smart home” automation system and essentially got the PIN code to a home’s front door.
Their “lock-pick malware app” was one of four attacks that the cybersecurity researchers leveled at an experimental set-up of Samsung’s SmartThings, a top-selling Internet of Things platform for consumers. The work is believed to be the first platform-wide study of a real-world connected home system. The researchers didn’t like what they saw.
“At least today, with the one public IoT software platform we looked at, which has been around for several years, there are significant design vulnerabilities from a security perspective,” says Atul Prakash, professor of computer science and engineering at the University of Michigan. “I would say it’s okay to use as a hobby right now, but I wouldn’t use it where security is paramount.”
Earlence Fernandes, a doctoral student in computer science and engineering who led the study, says that “letting it control your window shades is probably fine.”
“One way to think about it is if you’d hand over control of the connected devices in your home to someone you don’t trust and then imagine the worst they could do with that and consider whether you’re okay with someone having that level of control,” he says.
Regardless of how safe individual devices are or claim to be, new vulnerabilities form when hardware like electronic locks, thermostats, ovens, sprinklers, lights, and motion sensors are networked and set up to be controlled remotely. That’s the convenience these systems offer. And consumers are interested in that.
As a testament to SmartThings’ growing use, its Android companion app that lets you manage your connected home devices remotely has been downloaded more than 100,000 times. SmartThings’ app store, where third-party developers can contribute SmartApps that run in the platform’s cloud and let users customize functions, holds more than 500 apps.
FOUR ‘ATTACKS’
The researchers performed a security analysis of SmartThings’ programming framework and, to show the impact of the flaws they found, conducted four successful proof-of-concept attacks.
They demonstrated a SmartApp that eavesdropped on someone setting a new PIN code for a door lock, and then sent that PIN in a text message to a potential hacker. The SmartApp, which they called a “lock-pick malware app,” was disguised as a battery-level monitor and declared only that capability in its code.
As an example, they showed that an existing, highly rated SmartApp could be remotely exploited to virtually make a spare door key by programming an additional PIN into the electronic lock. The exploited SmartApp was not originally designed to program PIN codes into locks.
They showed that one SmartApp could turn off “vacation mode” in a separate app that lets you program the timing of lights, blinds, etc., while you’re away to help secure the home.
They demonstrated that a fire alarm could be made to go off by any SmartApp injecting false messages.
How is all this possible? The security loopholes the researchers uncovered fall into a few categories. One common problem is that the platform grants its SmartApps too much access to devices and to the messages those devices generate. The researchers call this “over-privilege.”
“The access SmartThings grants by default is at a full device level, rather than any narrower,” Prakash says. “As an analogy, say you give someone permission to change the lightbulb in your office, but the person also ends up getting access to your entire office, including the contents of your filing cabinets.”
More than 40 percent of the nearly 500 apps they examined were granted capabilities the developers did not specify in their code. That’s how the researchers could eavesdrop on the setting of lock PIN codes.
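Real SmartApps are written in Groovy against SmartThings’ own APIs, but the over-privilege problem can be illustrated with a language-neutral toy model in Python: the app declares only a battery-monitoring need, yet the platform hands it the whole device, lock-programming methods included.

```python
# Toy model of over-privilege; not SmartThings' actual API or code.
class DoorLock:
    def battery_level(self) -> int:
        return 87
    def set_code(self, slot: int, pin: str) -> None:   # should need its own permission
        print(f"PIN {pin} programmed into slot {slot}")

def battery_monitor(device: DoorLock) -> None:
    print(device.battery_level())    # the only capability the app declared
    device.set_code(2, "4321")       # but full-device access makes this possible too

battery_monitor(DoorLock())
```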
The researchers also found that it is possible for app developers to deploy an authentication method called OAuth incorrectly. This flaw, in combination with SmartApps being over-privileged, allowed the hackers to program their own PIN code into the lock—to make their own secret spare key.
Finally, the “event subsystem” on the platform is insecure. This is the stream of messages devices generate as they’re programmed and carry out those instructions. The researchers were able to inject erroneous events to trick devices. That’s how they managed the fire alarm and flipped the switch on vacation mode.
THE BOTTOM LINE
These results have implications for all smart home systems, and even the broader Internet of Things.
“The bottom line is that it’s not easy to secure these systems,” Prakash says. “There are multiple layers in the software stack and we found vulnerabilities across them, making fixes difficult.”
The researchers told SmartThings about these issues in December 2015 and the company is working on fixes. The researchers rechecked a few weeks ago if a lock’s PIN code could still be snooped and reprogrammed by a potential hacker, and it still could.
In a statement, SmartThings officials say they’re continuing to explore “long-term, automated, defensive capabilities to address these vulnerabilities.” They’re also analyzing old and new apps in an effort to ensure that appropriate authentication is put in place, among other steps.
Jaeyeon Jung of Microsoft Research also contributed to this work. The researchers will present a paper on the findings on May 24 at the IEEE Symposium on Security and Privacy in San Jose.
Generic – or what is now considered “old school” – phishing attacks typically took the form of the infamous “Nigerian prince” type emails, trying to trick recipients into responding with some personal financial information. “Spearphishing” attacks are similar but far more vicious. They seek to persuade victims to click on a hyperlink or an attachment that usually deploys software (called “malware”) allowing attackers access to the user’s computer or even to an entire corporate network. Sometimes attacks like this also come through text messages, social media messages or infected thumb drives.
The sobering reality is there isn’t much we can do to stop these types of attacks. This is partly because spearphishing involves a practice called social engineering, in which attacks are highly personalized, making it particularly hard for victims to detect the deception. Existing technical defenses, like antivirus software and network security monitoring, are designed to protect against attacks from outside the computer or network. Once attackers gain entry through spearphishing, they assume the role of trusted insiders, legitimate users against whom protective software is useless.
This makes all of us Internet users the sole guardians of our computers and organizational networks – and the weakest links in cyberspace security.
The real target is humans
Stopping spearphishing requires us to build better defenses around people. This, in turn, requires an understanding of why people fall victim to these sorts of attacks. My team’s recent research into the psychology of people who use computers developed a way to understand exactly how spearphishing attacks take advantage of the weaknesses in people’s online behaviors. It’s called the Suspicion, Cognition, Automaticity Model (SCAM).
We built SCAM using simulated spearphishing attacks on people who volunteered to participate in our tests, conducted after securing permission from the university groups that oversee research on human subjects.
We found two primary reasons people are victimized. One factor appears to be that people naturally seek what is called “cognitive efficiency” – maximal information for minimal brain effort. As a result, they take mental shortcuts that are triggered by logos, brand names or even simple phrases such as “Sent from my iPhone” that phishers often include in their messages. People see those triggers – such as their bank’s logo – and assume a message is more likely to be legitimate. As a result, they don’t properly scrutinize those elements of the phisher’s request, such as the typos in the message, its intent, or the message’s header information, that could help reveal the deception.
Compounding this problem are people’s beliefs that online actions are inherently safe. Sensing (wrongly) that they are at low risk causes them to put relatively little effort into closely reviewing the message in the first place.
Our research shows that news coverage that has mostly focused on malware attacks on computers has caused many people to mistakenly believe that mobile operating systems are somehow more secure. Many others wrongly believe that Adobe’s PDF is safer than a Microsoft Word document, thinking that their inability to edit a PDF translates to its inability to be infected with malware. Still others erroneously think Google’s free Wi-Fi, which is available in some popular coffee shops, is inherently more secure than other free Wi-Fi services. Those kinds of misunderstandings make users more cavalier about opening certain file formats, and more careless while using certain devices or networks – all of which significantly enhances their risk of infection.
Habits weaken security
Another often-ignored factor involves the habitual ways people use technology. Many individuals use email, social media and texting so often that they eventually do so largely without thinking. Ask people who drive the same route each day how many stop lights they saw or stopped at along the way and they often cannot recall. Likewise, when media use becomes routine, people become less and less conscious of which emails they opened and what links or attachments they clicked on, ultimately becoming barely aware at all. It can happen to anyone, even the director of the FBI.
When technology use becomes a habit rather than a conscious act, people are more likely to check and even respond to messages while walking, talking or, worse yet, driving. Just as this lack of mindfulness leads to accidents, it also leads to people opening phishing emails and clicking on malicious hyperlinks and attachments without thinking.
Currently, the only real way to prevent spearphishing is to train users, typically by simulating phishing attacks and going over the results afterward, highlighting attack elements a user missed. Some organizations punish employees who repeatedly fail these tests. This method, though, is akin to sending bad drivers out into a hazard-filled roadway, demanding they avoid every obstacle and ticketing them when they don’t. It is much better to actually figure out where their skills are lacking and teach them how to drive properly.
Identifying the problems
That is where our model comes in. It provides a framework for pinpointing why individuals fall victim to different types of cyberattacks. At its most basic level, the model lets companies measure each employee’s susceptibility to spearphishing attacks and identify individuals and workgroups who are most at risk.
When used in conjunction with simulated phishing attack tests, our model lets organizations identify how an employee is likely to fall prey to a cyberattack and determine how to reduce that person’s specific risks. For example, if an individual doesn’t focus on email and checks it while doing other things, he could be taught to change that habit and pay closer attention. If another person wrongly believed she was safe online, she could be taught otherwise. If other people were taking mental shortcuts triggered by logos, the company could help them work to change that behavior.
Finally, our method can help companies pinpoint the “super detectors” – people who consistently detect the deception in simulated attacks. We can identify the specific aspects of their thinking or behaviors that aid them in their detection and urge others to adopt those approaches. For instance, perhaps good detectors examine email messages’ header information, which can reveal the sender’s actual identity. Others earmark certain times of their day to respond to important emails, giving them more time to examine emails in detail. Identifying those and other security-enhancing habits can help develop best-practice guidelines for other employees.
Yes, people are the weakest links in cybersecurity. But they don’t have to be. With smarter, individualized training, we could convert many of these weak links into strong detectors – and in doing so, significantly strengthen cybersecurity.
Virtual reality (VR) appears ready to take the entertainment world by storm in 2016. In addition to the much-hyped Oculus Rift, major corporations such as Facebook, Sony and Samsung are poised to release high-quality VR headsets to the public this year. After years of VR being discussed as the “next big thing,” this may be the year consumers will be able to get their hands on actual products.
It turns out some athletes have already begun exploring the promise of VR. Sports teams – both professional and collegiate – are taking advantage of the unique qualities of VR video to understand games in new and unique ways. Stanford’s STRIVR system, for example, provides services for its teams as well as for Clemson University and several NFL teams.
As a researcher and teacher of new media technology in sports journalism, I have had my opinion on VR changed dramatically over the course of the last year. My initial feeling was that VR was little more than a new fad that would fade, along the same lines as 3D television. But after using the technology and seeing its applications, I have changed my mind completely on it. VR technology is a radical departure from traditional video presentation, and it has myriad applications in both consumer media and in athletic practice.
We are already seeing certain sports take advantage of these applications. At the Mark Cuban Center for Sports Media and Technology at Indiana University, five sports teams actively use VR, including men’s basketball and football. According to Cuban Center videographer Patrick Dhaene, that number is expected to double next year.
VR and sports training
Coaches and players have been using regular two-dimensional video for multiple generations, generally relying on a wide camera angle to capture the entirety of a formation or play. This can make players feel distant from the material they are studying.
With VR, however, the player is able to put on a headset and experience a play from a much closer vantage point – as if they are inside the play as it takes place.
A quarterback wearing a VR headset can take a simulated snap and physically turn his head left or right in real time as the play progresses, helping him learn both the progressions of his wide receivers and the positioning of the defense.
Players can use VR to help memorize plays and formations without having to step onto the field, by repeatedly watching different aspects of looped plays within the VR headset. Coaches enjoy the benefits of players using VR to experience play repetitions, without the potential for injury that comes from being on the practice field.
How it works
The video that athletes and coaches see in a VR environment is constructed differently than normal video. Providing the user with an immersive environment requires different types of lenses and cameras, and computers must aid in production.
The typical VR video consists of footage from multiple cameras, shooting and recording in sync with each other. These cameras are generally fastened to a “rig” that holds the cameras in place. The rig is then anchored to a pedestal, allowing it to remain motionless during filming.
To make a VR film of a football practice focused on the offensive side of the ball, for example, the camera rig is stationed near the quarterback in the backfield. Each play is then run as normal, with the quarterback taking the snap, going through his progressions and making a pass.
Once the practice is over, the real work begins for the video crew.
For each play, a VR producer must assemble the footage from all the cameras into a single 180- or 360-degree visual field, a process known as “stitching.” This is arguably the most important part of the VR process, as improper stitching can render the video unusable.
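Dedicated VR pipelines use purpose-built stitching software, but the basic operation resembles panorama stitching, which can be sketched with OpenCV’s general-purpose stitcher in Python. The file names below are made up, and real 360-degree footage requires specialized tools rather than this shortcut.

```python
import cv2

# Rough sketch of stitching synchronized frames into one wide image.
# File names are placeholders; the frames must share overlapping fields of view.
frames = [cv2.imread(name) for name in ("cam_left.jpg", "cam_center.jpg", "cam_right.jpg")]
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)
if status == 0:  # cv2.Stitcher_OK
    cv2.imwrite("stitched_play.jpg", panorama)
else:
    print("Stitching failed:", status)
```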
After a play is properly stitched, players can view it through a VR headset, allowing them to concentrate on different areas of the play. Quarterbacks can even turn their heads away from the line of scrimmage and watch themselves throwing the ball, in order to evaluate their mechanics.
Does it help?
Academic research on the effectiveness of VR-aided sports training is still in its preliminary stages. Comments from both college and professional athletes who have experienced VR-aided training have been almost uniformly positive. But only now are scientists entering a stage where broader adoption of the technology will allow proper evaluation of the mental and psychological impacts of VR.
There are certain limitations to VR in the sports training environment. Camera rigs and production computers are expensive to purchase and difficult to learn how to use. Rigs generally must stay stationary during filming, because moving rigs tend to produce video that causes motion sickness in users. And the need for a pedestal makes capturing in-game VR footage for instructional purposes difficult.
Furthermore, VR isn’t a panacea for real-world practice. Using VR-aided training is unlikely to lead to consistently perfect throws or defenders who sniff out every play before it starts.
However, the comments from players, coaches and VR specialists show a tremendous amount of potential, allowing players an unprecedented perspective on the game that extends on-field practice time into the film room. As the technology continues to mature, we should expect the teams using it to be operating with a competitive advantage.
Software source code and hardware designs tend to be closely guarded trade secrets. But researchers recently made the full design of one of their microprocessors available as an open-source system.
Luca Benini, a professor at ETH Zurich involved with the project, says making the system open source maximizes the freedom of other developers to use and change the system. “It will now be possible to build open-source hardware from the ground up.
“In many recent examples of open-source hardware, usage is restricted by exclusive marketing rights and non-competition clauses,” adds Benini. “Our system, however, doesn’t have any strings attached when it comes to licensing.”
The arithmetic instructions that the microprocessor can perform are also open source: The scientists made the processor compatible with the open-source RISC-V instruction set developed at the University of California, Berkeley.
PERFECT FOR SMARTWATCHES
The new processor is called PULPino and it is designed for battery-powered devices with extremely low energy consumption (PULP stands for ‘parallel ultra low power’).
These could be chips for small devices, such as smartwatches, sensors for monitoring physiological functions (which can communicate with a heart rate monitor, for instance), or sensors for the Internet of Things.
Benini offers an example from the research currently underway in his lab: “Using the PULPino processor, we are developing a smartwatch equipped with electronics and a micro-camera. It can analyze visual information and use it to determine the user’s whereabouts.
“The idea is that such a smartwatch could one day control something like home electronics.”
Prototype of a smartwatch in Luca Benini’s lab. The prototype is shown with a commercial processor, not the open-source PULPino. (Credit: ETH Zurich/Frank K. Gürkaynak)
You can download the entire source code, test programs, programming environment, and even the bitstream for the popular ZEDboard for free at www.pulp-platform.org/.
The Justice Department has managed to unlock an iPhone 5c used by the gunman Syed Rizwan Farook, who with his wife killed 14 people in San Bernardino, California, last December. The high-profile case has pitted federal law enforcement agencies against Apple, which fought a legal order to work around its passcode security feature to give law enforcement access to the phone’s data. The FBI said it relied on a third party to crack the phone’s encrypted data, raising questions about iPhone security and whether federal agencies should disclose their method.
But what if the device had been running Android? Would the same technical and legal drama have played out?
We are Android users and researchers, and the first thing we did when the FBI-Apple dispute hit popular media was read Android’s Full Disk Encryption documentation.
We attempted to replicate what the FBI had wanted to do on an Android phone and found some useful results. Beyond the fact the Android ecosystem involves more companies, we discovered some technical differences, including a way to remotely update and therefore unlock encryption keys, something the FBI was not able to do for the iPhone 5c on its own.
The easy ways in
Data encryption on smartphones involves a key that the phone creates by combining 1) a user’s unlock code, if any (often a four- to six-digit passcode), and 2) a long, complicated number specific to the individual device being used. Attackers can try to crack either the key directly – which is very hard – or combinations of the passcode and device-specific number, which is hidden and roughly equally difficult to guess.
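A minimal sketch of that combination, in Python, looks something like the following; the salt, cost parameters, and key length are illustrative assumptions rather than Android’s or Apple’s real values, though the figure caption later in this article does name scrypt as the key derivation function used.

```python
import hashlib, os

# Illustrative only: mix the user's passcode with a device-bound secret
# through a key derivation function to produce the disk-encryption key.
passcode = b"123456"              # what the user types
device_secret = os.urandom(32)    # hardware-bound value that never leaves the phone
key = hashlib.scrypt(passcode + device_secret,
                     salt=b"fde-demo-salt",   # placeholder salt, not a real parameter
                     n=2**14, r=8, p=1, dklen=32)
print(key.hex())                  # 256-bit key protecting the stored data
```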
Decoding this strong encryption can be very difficult. But sometimes getting access to encrypted data from a phone doesn’t involve any code-breaking at all. Here’s how:
A custom app could be installed on a target phone to extract information. In March 2011, Google remotely installed a program that cleaned up phones infected by malicious software. It is unclear if Android still allows this.
Many applications use Android’s Backup API. The information that is backed up, and thereby accessible from the backup site directly, depends on which applications are installed on the phone.
Some phones have fingerprint readers, which can be fooled with an image of the phone owner’s fingerprint.
Some people have modified their phones’ operating systems to give them “root” privileges – access to the device’s data beyond what is allowed during normal operations – potentially weakening security.
But if these options are not available, code-breaking is the remaining way in. In what is called a “brute force” attack, a phone can be unlocked by trying every possible encryption key (i.e., all character combinations possible) until the right one is reached and the device (or data) unlocks.
Starting the attack
A very abstract representation of the derivation of the encryption keys on Android. William Enck and Adwait Nadkarni, CC BY-ND
There are two types of brute-force attacks: offline and online. In some ways an offline attack is easier – by copying the data off the device and onto a more powerful computer, specialized software and other techniques can be used to try all different passcode combinations.
But offline attacks can also be much harder, because they require either trying every single possible encryption key, or figuring out the user’s passcode and the device-specific key (the unique ID on Apple, and the hardware-bound key on newer versions of Android).
To try every potential solution to a fairly standard 128-bit AES key means trying all 100 undecillion (10³⁸) potential solutions – enough to take a supercomputer more than a billion billion years.
Guessing the passcode could be relatively quick: for a six-digit PIN with only numbers, that’s just a million options. If letters and special symbols like “$” and “#” are allowed, there would be more options, but still only in the hundreds of billions. However, guessing the device-specific key would likely be just as hard as guessing the encryption key.
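A quick back-of-the-envelope calculation shows the scale; the guess rate assumed below (a trillion keys per second) is our own illustrative figure, not one from the researchers.

```python
# Rough scale of offline attacks, assuming a trillion guesses per second.
guesses_per_second = 1e12
seconds_per_year = 365 * 24 * 3600

aes_keys = 2 ** 128                                        # ~3.4e38 possibilities
print(aes_keys / guesses_per_second / seconds_per_year)    # on the order of 1e19 years

six_digit_pins = 10 ** 6
print(six_digit_pins / guesses_per_second)                 # about a microsecond
```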
Considering an online attack
That leaves the online attack, which happens directly on the phone. With the device-specific key readily available to the operating system, this reduces the task to the much smaller burden of trying only all potential passcodes.
However, the phone itself can be configured to resist online attacks. For example, the phone can insert a time delay between a failed passcode guess and allowing another attempt, or even delete the data after a certain number of failed attempts.
Apple’s iOS has both of these capabilities, automatically introducing increasingly long delays after each failure, and, at a user’s option, wiping the device after 10 passcode failures.
Attacking an Android phone
What happens when one tries to crack into a locked Android phone? Different manufacturers set up their Android devices differently; Nexus phones run Google’s standard Android configuration. We used a Nexus 4 device running stock Android 5.1.1 with full disk encryption enabled.
Android adds 30-second delays after every five failed attempts; snapshot of the 40th attempt. William Enck and Adwait Nadkarni, CC BY-ND
We started with a phone that was already running but had a locked screen. Android allows PINs, passwords and pattern-based locking, in which a user must connect a series of dots in the correct sequence to unlock the phone; we conducted this test with each type. We had manually assigned the actual passcode on the phone, but our unlocking attempts were randomly generated.
After five failed passcode attempts, Android imposed a 30-second delay before allowing another try. Unlike the iPhone, the delays did not get longer with subsequent failures; over 40 attempts, we encountered only a 30-second delay after every five failures. The phone kept count of how many successive attempts had failed, but did not wipe the data. (Android phones from other manufacturers may insert increasing delays similar to iOS.)
These delays impose a significant time penalty on an attacker. Brute-forcing a six-digit PIN (one million combinations) could incur a worst-case delay of just more than 69 days. If the passcode were six characters, even using only lowercase letters, the worst-case delay would be more than 58 years.
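Those figures follow directly from the 30-second delay we observed after every block of five failures, as a few lines of arithmetic confirm (ignoring the time to type each guess):

```python
# Worst-case time to try every passcode when a 30-second delay follows
# each block of five failed attempts.
def worst_case_days(combinations: int, delay: float = 30, per_block: int = 5) -> float:
    return combinations / per_block * delay / 86400

print(worst_case_days(10 ** 6))        # six-digit PIN: just over 69 days
print(worst_case_days(26 ** 6) / 365)  # six lowercase letters: just over 58 years
```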
When we repeated the attack on a phone that had been turned off and was just starting up, we were asked to reboot the device after 10 failed attempts. After 20 failed attempts and two reboots, Android started a countdown of the failed attempts that would trigger a device wipe. We continued our attack, and at the 30th attempt – as warned on the screen and in the Android documentation – the device performed a “factory reset,” wiping all user data.
Just one attempt remaining before the device wipes its data. William Enck and Adwait Nadkarni, CC BY-ND
In contrast to offline attacks, there is a difference between Android and iOS for online brute force attacks. In iOS, both the lock screen and boot process can wipe the user data after a fixed number of failed attempts, but only if the user explicitly enables this. In Android, the boot process always wipes the user data after a fixed number of failed attempts. However, our Nexus 4 device did not allow us to set a limit for lock screen failures. That said, both Android and iOS have options for remote management, which, if enabled, can wipe data after a certain number of failed attempts.
Using special tools
The iPhone 5c in the San Bernardino case is owned by the employer of one of the shooters, and has mobile device management (MDM) software installed that lets the company track it and perform other functions on the phone by remote control. Such an MDM app is usually installed as a “Device Administrator” application on an Android phone, and set up using the “Apple Configurator” tool for iOS.
Our test MDM successfully resets the password. Then, the scrypt key derivation function (KDF) is used to generate the new key encryption key (KEK). William Enck and Adwait Nadkarni, CC BY-ND
We built our own MDM application for our Android phone, and verified that the passcode can be reset without the user’s explicit consent; this also updated the phone’s encryption keys. We could then use the new passcode to unlock the phone from the lock screen and at boot time. (For this attack to work remotely, the phone must be on and have Internet connectivity, and the MDM application must already be programmed to reset the passcode on command from a remote MDM server.)
Figuring out where to get additional help
If an attacker needed help from a phone manufacturer or software company, Android presents a more diverse landscape.
Generally, operating system software is signed with a digital code that proves it is genuine, and which the phone requires before actually installing it. Only the company with the correct digital code can create an update to the operating system software – which might include a “back door” or other entry point for an attacker who had secured the company’s assistance. For any iPhone, that’s Apple. But many companies build and sell Android phones.
Google, the primary developer of the Android operating system, signs the updates for its flagship Nexus devices. Samsung signs for its devices. Cellular carriers (such as AT&T or Verizon) may also sign. And many users install a custom version of Android (such as Cyanogenmod). The company or companies that sign the software would be the ones the FBI needed to persuade – or compel – to write software allowing a way in.
Comparing iOS and Android
Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.
But we found differences for online attacks, based on user and remote management configuration: Android has a more secure default for online attacks at start-up, but our Nexus 4 did not allow the user to set a maximum number of failed attempts from the lock screen (other devices may vary). Devices running iOS have both of these capabilities, but a user must enable them manually in advance.
Android security may also be weakened by remote control software, depending on the software used. Though the FBI was unable to gain access to the iPhone 5c by resetting the password this way, we were successful with a similar attack on our Android device.
Wi-Fi is everywhere—invisibly connecting laptops to printers, allowing smartphones to make calls or stream movies without cell service, and letting online gamers battle it out. That’s the upside.
But there’s a downside, too: Using Wi-Fi consumes a significant amount of energy, draining the batteries of all those connected devices.
Now, computer scientists and electrical engineers have demonstrated that it’s possible to generate Wi-Fi transmissions using 10,000 times less power than conventional methods.
The new Passive Wi-Fi system also consumes 1,000 times less power than existing energy-efficient wireless communication platforms, such as Bluetooth Low Energy and Zigbee.
“We wanted to see if we could achieve Wi-Fi transmissions using almost no power at all,” says coauthor Shyam Gollakota, assistant professor of computer science and engineering at the University of Washington. “That’s basically what Passive Wi-Fi delivers. We can get Wi-Fi for 10,000 times less power than the best thing that’s out there.”
In Passive Wi-Fi, power-intensive functions are handled by a single device plugged into the wall. Passive sensors use almost no energy to communicate with routers, phones and other devices. (Author provided)
Passive Wi-Fi can for the first time transmit Wi-Fi signals at bit rates of up to 11 megabits per second that can be decoded on any of the billions of devices with Wi-Fi connectivity. These speeds are lower than the maximum Wi-Fi speeds but 11 times higher than Bluetooth.
Aside from saving battery life on today’s devices, wireless communication that uses almost no power will help enable an “Internet of Things” reality where household devices and wearable sensors can communicate using Wi-Fi without worrying about power.
To achieve such low-power Wi-Fi transmissions, researchers essentially decoupled the digital and analog operations involved in radio transmissions. In the last 20 years, the digital side of that equation has become extremely energy efficient, but the analog components still consume a lot of power.
ONE PLUGGED-IN DEVICE
The Passive Wi-Fi architecture assigns the analog, power-intensive functions—like producing a signal at a specific frequency—to a single device in the network that is plugged into the wall.
An array of sensors produces Wi-Fi packets of information using very little power by simply reflecting and absorbing that signal using a digital switch. In real-world conditions, researchers found the passive Wi-Fi sensors and a smartphone can communicate even at distances of 100 feet between them.
“All the networking, heavy-lifting, and power-consuming pieces are done by the one plugged-in device,” says coauthor Vamsi Talla, an electrical engineering doctoral student. “The passive devices are only reflecting to generate the Wi-Fi packets, which is a really energy-efficient way to communicate.”
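A toy simulation makes the reflect-or-absorb idea concrete; real Passive Wi-Fi shapes these reflections into standards-compliant 802.11b waveforms, which this simplified on-off sketch does not attempt.

```python
import numpy as np

# Toy backscatter model: a plugged-in device emits a steady carrier; the
# passive sensor sends bits by reflecting (1) or absorbing (0) that carrier.
samples_per_bit = 8
carrier = np.ones(samples_per_bit)
bits = [1, 0, 1, 1, 0]

reflected = np.concatenate([carrier * b for b in bits])   # what a nearby receiver sees

# The receiver recovers the bits from the amplitude envelope.
recovered = [int(chunk.mean() > 0.5)
             for chunk in reflected.reshape(len(bits), samples_per_bit)]
assert recovered == bits
```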
Because the sensors are creating actual Wi-Fi packets, they can communicate with any Wi-Fi enabled device right out of the box.
“Our sensors can talk to any router, smartphone, tablet, or other electronic device with a Wi-Fi chipset,” says coauthor and electrical engineering doctoral student Bryce Kellogg. “The cool thing is that all these devices can decode the Wi-Fi packets we created using reflection so you don’t need specialized equipment.”
The technology could enable entirely new types of communication that haven’t been possible because energy demands have outstripped available power supplies. It could also simplify our data-intensive worlds.
For instance, smart home applications that use sensors to track everything from which doors are open to whether kids have gotten home from school have typically used their own communication platforms because Wi-Fi is so power-hungry.
“Even though so many homes already have Wi-Fi, it hasn’t been the best choice for that,” says coauthor Joshua Smith, associate professor of computer science and engineering and of electrical engineering. “Now that we can achieve Wi-Fi for tens of microwatts of power and can do much better than both Bluetooth and ZigBee, you could now imagine using Wi-Fi for everything.”
The researchers will present a paper describing their results in March at the 13th USENIX Symposium on Networked Systems Design and Implementation. The National Science Foundation, the University of Washington, and Qualcomm funded the work.
When you hear some drummers play, you may wonder “does that guy have three arms?” In this case, which involves a robotic arm programmed for drumming, indeed he does.
Scientists have created a new “smart arm”—a robotic wearable limb that can be attached to the shoulder to let drummers play with three arms.
The two-foot-long “smart arm,” which attaches to the musician’s shoulder, responds to human gestures and the music it hears. When the drummer moves to play the hi-hat cymbal, for example, the robotic arm maneuvers to play the ride cymbal. When the drummer switches to the snare, the mechanical arm shifts to the tom.
Georgia Tech Professor Gil Weinberg oversees the project, which is funded by the National Science Foundation. He says the goal is to push the limits of what humans can do.
“If you augment humans with smart, wearable robotics, they could interact with their environment in a much more sophisticated manner,” says Gil Weinberg, director of the Center for Music Technology at Georgia Tech. “The third arm provides a much richer and more creative experience, allowing the human to play many drums simultaneously with virtuosity and sophistication that are not otherwise possible.”
The robotic arm is smart for a few reasons. First, it knows what to play by listening to the music in the room. It improvises based on the beat and rhythm. For instance, if the musician plays slowly, the arm slows the tempo. If the drummer speeds up, it plays faster.
Another aspect of its intelligence is knowing where it’s located at all times, where the drums are, and the direction and proximity of the human arms. When the robot approaches an instrument, it uses built-in accelerometers to sense the distance and proximity. On-board motors make sure the stick is always parallel to the playing surface, allowing it to rise, lower or twist to ensure solid contact with the drum or cymbal. The arm moves naturally with intuitive gestures because it was programmed using human motion capture technology.
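Its beat-following behavior can be sketched in a few lines; the onset times and function name here are hypothetical, not the project’s actual code, which also handles rhythm, gesture, and positioning.

```python
# Hypothetical tempo follower: estimate beats per minute from the gaps
# between the drummer's recent hits and match the robot's playing speed.
def follow_tempo(onset_times):
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    seconds_per_beat = sum(intervals) / len(intervals)
    return 60.0 / seconds_per_beat

print(follow_tempo([0.0, 0.52, 1.01, 1.49, 2.02]))  # ~119 BPM: the arm speeds up to match
```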