The Future of Electronics is Light

By Arnab Hazari, University of Michigan.

For the past four decades, the electronics industry has been driven by what is called “Moore’s Law,” which is not a law but more an axiom or observation. Effectively, it suggests that electronic devices double in speed and capability about every two years. And indeed, every year tech companies come up with new, faster, smarter and better gadgets.

Specifically, Moore’s Law, as articulated by Intel cofounder Gordon Moore, is that “The number of transistors incorporated in a chip will approximately double every 24 months.” Transistors, tiny electrical switches, are the fundamental unit that drives all the electronic gadgets we can think of. As they get smaller, they also get faster and consume less electricity to operate.
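To see what that doubling rule implies in practice, here is a small back-of-the-envelope calculation. The numbers are illustrative only (the 2,300-transistor starting point is the 1971 Intel 4004), not a projection of any company’s actual roadmap.

```python
# A minimal sketch of Moore's Law as arithmetic: transistor counts double
# roughly every 24 months. The 2,300-transistor baseline (the 1971 Intel 4004)
# is used purely for illustration.
def projected_transistors(start_count: int, years: float, doubling_years: float = 2.0) -> float:
    """Project a transistor count forward assuming a fixed doubling period."""
    return start_count * 2 ** (years / doubling_years)

# After 40 years of steady doubling, 2,300 transistors become ~2.4 billion.
print(f"{projected_transistors(2_300, 40):,.0f}")
```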

In the technology world, one of the biggest questions of the 21st century is: How small can we make transistors? If there is a limit to how tiny they can get, we might reach a point at which we can no longer continue to make smaller, more powerful, more efficient devices. It’s an industry with more than US$200 billion in annual revenue in the U.S. alone. Might it stop growing?

Getting close to the limit

At present, companies like Intel are mass-producing transistors 14 nanometers across – just 14 times wider than DNA molecules. They’re made of silicon, the second-most abundant material on our planet. Silicon’s atomic size is about 0.2 nanometers.

Today’s transistors are about 70 silicon atoms wide, so the possibility of making them even smaller is itself shrinking. We’re getting very close to the limit of how small we can make a transistor.

At present, transistors use electrical signals – electrons moving from one place to another – to communicate. But if we could use light, made up of photons, instead of electricity, we could make transistors even faster. My work, on finding ways to integrate light-based processing with existing chips, is part of that nascent effort.

Putting light inside a chip

A transistor has three parts; think of them as the parts of a digital camera. First, information comes in through the lens, which is analogous to a transistor’s source. Then it travels through a channel – in the camera, from the image sensor to the wires inside. And lastly, the information is stored on the camera’s memory card, the analogue of a transistor’s “drain” – where the information ultimately ends up.

Light waves can have different frequencies. maxhurtz

Right now, all of that happens by moving electrons around. To substitute light as the medium, we actually need to move photons instead. Subatomic particles like electrons and photons travel in a wave motion, vibrating up and down even as they move in one direction. The length of each wave depends on what it’s traveling through.

In silicon, the most efficient wavelength for photons is 1.3 micrometers. This is very small – a human hair is around 100 micrometers across. But electrons in silicon are even smaller, with wavelengths 50 to 1,000 times shorter than those of photons.

This means the equipment to handle photons needs to be bigger than the electron-handling devices we have today. So it might seem like it would force us to build larger transistors, rather than smaller ones.

However, for two reasons, we could keep chips the same size and deliver more processing power, shrink chips while providing the same power, or potentially do both. First, a photonic chip needs only a few light sources, generating photons that can then be directed around the chip with very small lenses and mirrors.

And second, light is much faster than electrons. On average photons can travel about 20 times faster than electrons in a chip. That means computers that are 20 times faster, a speed increase that would take about 15 years to achieve with current technology.

Scientists have demonstrated progress toward photonic chips in recent years. A key challenge is making sure the new light-based chips can work with all the existing electronic chips. If we’re able to figure out how to do it – or even to use light-based transistors to enhance electronic ones – we could see significant performance improvement.

When can I get a light-based laptop or smartphone?

We still have some way to go before the first consumer device reaches the market, and progress takes time. The forerunner of the transistor, the vacuum tube triode, was invented in 1907; the tubes were typically between one and six inches tall (roughly 25 to 150 millimeters). By 1947, the current type of transistor – the one that’s now just 14 nanometers across – was invented; at 40 micrometers long, it was about 3,000 times larger than today’s. And in 1971 the first commercial microprocessor (the powerhouse of any electronic gadget) was 1,000 times bigger than today’s when it was released.

The vast research effort and the resulting evolution seen in the electronics industry are only just beginning in photonics. As a result, current electronics can perform tasks that are far more complex than what the best photonic devices can manage today. But as research proceeds, light-based processing will catch up to, and ultimately surpass, the speed of electronics. However long it takes to get there, the future of photonics is bright.

Arnab Hazari, Ph.D. student in Electrical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Understanding the four types of AI, from reactive robots to self-aware beings

By Arend Hintze, Michigan State University.

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play “Jeopardy!” well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
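For readers curious what “narrowing its view” looks like in code, here is a generic sketch of that idea – a game-tree search with alpha-beta pruning. Deep Blue’s actual search and evaluation were far more elaborate and proprietary; the evaluate, legal_moves and apply functions below are placeholders for an engine’s own logic.

```python
# A generic sketch of evaluation-guided pruning (alpha-beta), not Deep Blue's
# actual algorithm. Branches whose rated outcome can no longer matter are
# abandoned, so the machine never pursues them further.
def alphabeta(state, depth, alpha, beta, maximizing, evaluate, legal_moves, apply):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)            # rate the position as it stands right now
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alphabeta(apply(state, move), depth - 1,
                                       alpha, beta, False, evaluate, legal_moves, apply))
            alpha = max(alpha, best)
            if alpha >= beta:             # the opponent would never allow this line: prune it
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, alphabeta(apply(state, move), depth - 1,
                                       alpha, beta, True, evaluate, legal_moves, apply))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```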

Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.

They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step to understand human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

How Twitter bots affected the US presidential campaign

By Emilio Ferrara, University of Southern California.

Key to democracy is public engagement – when people discuss the issues of the day with each other openly, honestly and without outside influence. But what happens when large numbers of participants in that conversation are biased robots created by unseen groups with unknown agendas? As my research has found, that’s what has happened this election season.

Since 2012, I have been studying how people discuss social, political, ideological and policy issues online. In particular, I have looked at how social media are abused for manipulative purposes.

It turns out that much of the political content Americans see on social media every day is not produced by human users. Rather, about one in every five election-related tweets from Sept. 16 to Oct. 21 was generated by computer software programs called “social bots.”

These artificial intelligence systems can be rather simple or very sophisticated, but they share a common trait: They are set to automatically produce content following a specific political agenda determined by their controllers, who are nearly impossible to identify. These bots have affected the online discussion around the presidential election, including leading topics and how online activity was perceived by the media and the public.

How active are they?

The operators of these systems could be political parties, foreign governments, third-party organizations, or even individuals with vested interests in a particular election outcome. Their work amounts to at least four million election-related tweets during the period we studied, posted by more than 400,000 social bots.

That’s at least 15 percent of all the users discussing election-related issues. It’s more than twice the overall concentration of bots on Twitter – which the company estimates at 5 to 8.5 percent of all accounts.

To determine which accounts are bots and which are humans, we use Bot Or Not, a publicly available bot-detection service that I developed in collaboration with colleagues at Indiana University. Bot Or Not uses advanced machine learning algorithms to analyze multiple cues, including Twitter profile metadata, the content and topics posted by the account under inspection, the structure of its social network, the timeline of activity and much more. After considering more than 1,000 factors, Bot Or Not generates a likelihood score that the account under scrutiny is a bot. Our tool is 95 percent accurate at this determination.
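As a rough illustration of how such a likelihood score can be produced – not Bot Or Not’s actual model, which weighs more than 1,000 factors – here is a minimal sketch using a handful of invented features and an off-the-shelf classifier.

```python
# A minimal sketch (not Bot Or Not's real model) of scoring accounts with a
# machine-learning classifier. The three features and the training labels are
# invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: tweets per day, fraction of retweets, account age in days
X_train = np.array([[400, 0.95, 30], [600, 0.99, 10], [12, 0.20, 2000], [5, 0.10, 3500]])
y_train = np.array([1, 1, 0, 0])        # 1 = bot, 0 = human

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
likelihood_bot = model.predict_proba([[350, 0.90, 45]])[0, 1]
print(f"Bot likelihood: {likelihood_bot:.2f}")
```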

There are many examples of bot-generated tweets supporting their candidates or attacking the opponents. Here is just one:

@u_edilberto: RT @WeNeedHillary: Polls Are All Over the Place. Keep Calm & Hillary On! https://t.co/XwBFfLjz7x #p2 #ctl #ImWithHer #TNTweeters https://t …

How effective are they?

The effectiveness of social bots depends on the reactions of actual people. We learned, distressingly, that people were not able to ignore, or develop a sort of immunity toward, the bots’ presence and activity. Instead, we found that most human users can’t tell whether a tweet is posted by another real user or by a bot. We know this because bots are being retweeted at the same rate as humans. Retweeting bots’ content without first verifying its accuracy can have real consequences, including spreading rumors, conspiracy theories or misinformation.

Some of these bots are very simple, and just retweet content produced by human supporters. Other bots, however, produce new tweets, jumping in the conversation by using existing popular hashtags (for instance, #NeverHillary or #NeverTrump). Real users who follow these Twitter hashtags will be exposed to bot-generated content seamlessly blended with the tweets produced by other actual people.

Bots produce content automatically, and therefore at a very fast and continuous rate. That means they form consistent and pervasive parts of the online discussion throughout the campaign. As a result, they were able to build significant influence, collecting large numbers of followers and having their tweets retweeted by thousands of humans.

A deeper understanding of bots

Our investigation into these politically active social bots also uncovered information that can lead us to more nuanced understanding of them. One such lesson was that bots are biased, by design. For example, Trump-supporting bots systematically produced overwhelmingly positive tweets in support of their candidate. Previous studies showed that this systematic bias alters public perception. Specifically, it creates the false impression that there is grassroots, positive, sustained support for a certain candidate.

Location provided another lesson. Twitter provides metadata about the physical location of the device used to post a certain tweet. By aggregating and analyzing their digital footprints, we discovered that bots are not uniformly distributed across the United States: They are significantly overrepresented in some states, in particular southern states like Georgia and Mississippi. This suggests that some bot operations may be based in those states.

Also, we discovered that bots can operate in multiple ways: For example, when they are not engaged in producing content supporting their respective candidates, bots can target their opponents. We discovered that bots pollute certain hashtags, like #NeverHillary or #NeverTrump, where they smear the opposing candidate.

These strategies leverage known human biases: for example, the fact that negative content travels faster on social media, as one of our recent studies demonstrated. We found that, in general, negative tweets are retweeted at a pace 2.5 times higher than positive ones. This, in conjunction with the fact that people are naturally more inclined to retweet content that aligns with their preexisting political views, results in the spreading of content that is often defamatory or based on unsupported, or even false, claims.

It is hard to quantify the effects of bots on the actual election outcome, but it’s plausible to think that they could affect voter turnout in some places. For example, some people may think there is so much local support for their candidate (or the opponent) that they don’t need to vote – even if what they’re seeing is actually artificial support provided by bots.

Our study hit the limits of what can be done today by using computational methods to fight the issue of bots: Our ability to identify the bot masters is bound by technical constraints on recognizing patterns in their behavior. Social media is acquiring increasing importance in shaping political beliefs and influencing people’s online and offline behavior. The research community will need to continue to explore, to make these platforms as safe from abuse as possible.

Emilio Ferrara, Research Assistant Professor of Computer Science, University of Southern California

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Moving toward computing at the speed of thought

By Frances Van Scoy, West Virginia University.

The first computers cost millions of dollars and were locked inside rooms equipped with special electrical circuits and air conditioning. The only people who could use them had been trained to write programs in that specific computer’s language. Today, gesture-based interactions, using multitouch pads and touchscreens, and exploration of virtual 3D spaces allow us to interact with digital devices in ways very similar to how we interact with physical objects.

This newly immersive world not only is open to more people to experience; it also allows almost anyone to exercise their own creativity and innovative tendencies. No longer are these capabilities dependent on being a math whiz or a coding expert: Mozilla’s “A-Frame” is making the task of building complex virtual reality models much easier for programmers. And Google’s “Tilt Brush” software allows people to build and edit 3D worlds without any programming skills at all.

My own research hopes to develop the next phase of human-computer interaction. We are monitoring people’s brain activity in real time and recognizing specific thoughts (of “tree” versus “dog” or of a particular pizza topping). It will be yet another step in the historical progression that has brought technology to the masses – and will widen its use even more in the coming years.

Reducing the expertise needed

From those early computers dependent on machine-specific programming languages, the first major improvement allowing more people to use computers was the development of the Fortran programming language. It expanded the range of programmers to scientists and engineers who were comfortable with mathematical expressions. This was the era of punch cards, when programs were created by punching holes in cardstock, and output had no graphics – only keyboard characters.

By the late 1960s mechanical plotters let programmers draw simple pictures by telling a computer to raise or lower a pen, and move it a certain distance horizontally or vertically on a piece of paper. The commands and graphics were simple, but even drawing a basic curve required understanding trigonometry, to specify the very small intervals of horizontal and vertical lines that would look like a curve once finished.
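As a present-day sketch of that plotter-era technique (the pen “commands” printed below are hypothetical, not any real plotter’s instruction set), a curve was approximated by many tiny straight segments computed with trigonometry.

```python
# A sketch of how a plotter-era programmer might approximate a curve with many
# small pen movements worked out by trigonometry. The MOVE "commands" emitted
# here are hypothetical placeholders, not a real plotter language.
import math

def arc_segments(radius: float, steps: int = 90):
    """Yield (dx, dy) pen moves that trace a quarter circle as tiny straight lines."""
    prev_x, prev_y = radius, 0.0
    for i in range(1, steps + 1):
        angle = (math.pi / 2) * i / steps
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        yield (x - prev_x, y - prev_y)     # one small horizontal/vertical step
        prev_x, prev_y = x, y

for dx, dy in arc_segments(100.0, steps=5):
    print(f"MOVE dx={dx:.2f} dy={dy:.2f}")
```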

The 1980s introduced what has become the familiar windows, icons and mouse interface. That gave nonprogrammers a much easier time creating images – so much so that many comic strip authors and artists stopped drawing in ink and began working with computer tablets. Animated films went digital, as programmers developed sophisticated proprietary tools for use by animators.

Simpler tools became commercially available for consumers. In the early 1990s the OpenGL library allowed programmers to build 2D and 3D digital models and add color, movement and interaction to these models.

Inside a CAVE system. Davepape

In recent years, 3D displays have become much smaller and cheaper than the multi-million-dollar CAVE and similar immersive systems of the 1990s. They needed space 30 feet wide, 30 feet long and 20 feet high to fit their rear-projection systems. Now smartphone holders can provide a personal 3D display for less than US$100.

User interfaces have gotten similarly more powerful. Multitouch pads and touchscreens recognize movements of multiple fingers on a surface, while devices such as the Wii and Kinect recognize movements of arms and legs. A company called Fove has been working to develop a VR headset that will track users’ eyes, and which will, among other capabilities, let people make eye contact with virtual characters.

Planning longer term

My own research is helping to move us toward what might be called “computing at the speed of thought.” Low-cost open-source projects such as OpenBCI allow people to assemble their own neuroheadsets that capture brain activity noninvasively.

Ten to 15 years from now, hardware/software systems using those sorts of neuroheadsets could assist me by recognizing the nouns I’ve thought about in the past few minutes. If it replayed the topics of my recent thoughts, I could retrace my steps and remember what thought triggered my most recent thought.

With more sophistication, perhaps a writer could wear an inexpensive neuroheadset and imagine characters, an environment and their interactions. The computer could deliver the first draft of a short story, either as a text file or even as a video file showing the scenes and dialogue generated in the writer’s mind.

Working toward the future

Once human thought can communicate directly with computers, a new world will open before us. One day, I would like to play games in a virtual world that incorporates social dynamics as in the experimental games “Prom Week” and “Façade” and in the commercial game “Blood & Laurels.”

This type of experience would not be limited to game play. Software platforms such as an enhanced Versu could enable me to write those kinds of games, developing characters in the same virtual environments they’ll inhabit.

Years ago, I envisioned an easily modifiable application that allows me to have stacks of virtual papers hovering around me that I can easily grab and rifle through to find a reference I need for a project. I would love that. I would also really enjoy playing “Quidditch” with other people while we all experience the sensation of flying via head-mounted displays and control our brooms by tilting and twisting our bodies.

An early, single-player virtual reality version of ‘Quidditch.’

Once low-cost motion capture becomes available, I envision new forms of digital story-telling. Imagine a group of friends acting out a story, then matching their bodies and their captured movements to 3D avatars to reenact the tale in a synthetic world. They could use multiple virtual cameras to “film” the action from multiple perspectives, and then construct a video.

This sort of creativity could lead to much more complex projects, all conceived in creators’ minds and made into virtual experiences. Amateur historians without programming skills may one day be able to construct augmented reality systems in which they can superimpose onto views of the real world selected images from historic photos or digital models of buildings that no longer exist. Eventually they could add avatars with whom users can converse. As technology continues to progress and become easier to use, the dioramas built of cardboard, modeling clay and twigs by children 50 years ago could one day become explorable, life-sized virtual spaces.

Frances Van Scoy, Associate Professor of Computer Science and Electrical Engineering, West Virginia University

This article was originally published on The Conversation. Read the original article.

Next, Check Out:

Turning diamonds’ defects into long-term 3-D data storage

By Siddharth Dhomkar, City College of New York and Jacob Henshaw, City College of New York.

With the amount of data storage required for our daily lives growing and growing, and currently available technology being almost saturated, we’re in desperate need of a new method of data storage. The standard magnetic hard disk drive (HDD) – like what’s probably in your laptop computer – has reached its limit, holding a maximum of a few terabytes. Standard optical disk technologies, like compact disc (CD), digital video disc (DVD) and Blu-ray disc, are restricted by their two-dimensional nature – they just store data in one plane – and also by a physical law called the diffraction limit, based on the wavelength of light, that constrains our ability to focus light to a very small volume.

And then there’s the lifetime of the memory itself to consider. HDDs, as we’ve all experienced in our personal lives, may last only a few years before things start to behave strangely or just fail outright. DVDs and similar media are advertised as having a storage lifetime of hundreds of years. In practice this may be cut down to a few decades, assuming the disk is not rewritable. Rewritable disks degrade on each rewrite.

Without better solutions, we face financial and technological catastrophes as our current storage media reach their limits. How can we store large amounts of data in a way that’s secure for a long time and can be reused or recycled?

In our lab, we’re experimenting with a perhaps unexpected memory material you may even be wearing on your ring finger right now: diamond. On the atomic level, these crystals are extremely orderly – but sometimes defects arise. We’re exploiting these defects as a possible way to store information in three dimensions.

Focusing on tiny defects

One approach to improving data storage has been to continue in the direction of optical memory, but extend it to multiple dimensions. Instead of writing the data to a surface, write it to a volume; make your bits three-dimensional. The data are still limited by the physical inability to focus light to a very small space, but you now have access to an additional dimension in which to store the data. Some methods also polarize the light, giving you even more dimensions for data storage. However, most of these methods are not rewritable.

Here’s where the diamonds come in.

The orderly structure of a diamond, but with a vacancy and a nitrogen replacing two of the carbon atoms. Zas2000

A diamond is supposed to be a pure well-ordered array of carbon atoms. Under an electron microscope it usually looks like a neatly arranged three-dimensional lattice. But occasionally there is a break in the order and a carbon atom is missing. This is what is known as a vacancy. Even further tainting the diamond, sometimes a nitrogen atom will take the place of a carbon atom. When a vacancy and a nitrogen atom are next to each other, the composite defect is called a nitrogen vacancy, or NV, center. These types of defects are always present to some degree, even in natural diamonds. In large concentrations, NV centers can impart a characteristic red color to the diamond that contains them.

This defect is having a huge impact in physics and chemistry right now. Researchers have used it to detect the unique nuclear magnetic resonance signatures of single proteins and are probing it in a variety of cutting-edge quantum mechanical experiments.

Nitrogen vacancy centers have a tendency to trap electrons, but the electron can also be forced out of the defect by a laser pulse. For many researchers, the defects are interesting only when they’re holding on to electrons. So for them, the fact that the defects can release the electrons, too, is a problem.

But in our lab, we instead look at these nitrogen vacancy centers as a potential benefit. We think of each one as a nanoscopic “bit.” If the defect has an extra electron, the bit is a one. If it doesn’t have an extra electron, the bit is a zero. This electron yes/no, on/off, one/zero property opens the door for turning the NV center’s charge state into the basis for using diamonds as a long-term storage medium.

Starting from a blank ensemble of NV centers in a diamond (1), information can be written (2), erased (3), and rewritten (4).
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Turning the defect into a benefit

Previous experiments with this defect have demonstrated some properties that make diamond a good candidate for a memory platform.

First, researchers can selectively change the charge state of an individual defect so it either holds an electron or not. We’ve used a green laser pulse to assist in trapping an electron and a high-power red laser pulse to eject an electron from the defect. A low-power red laser pulse can help check if an electron is trapped or not. If left completely in the dark, the defects maintain their charged/discharged status virtually forever.
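A hedged software sketch of that sequence looks like this; the “laser pulse” functions are simulated stand-ins for lab hardware, and only the logic (green pulse traps an electron, strong red pulse ejects it, weak red pulse reads the state) follows the description above.

```python
# A hedged, purely software simulation of the write/erase/read protocol
# described in the text. The pulse functions are hypothetical stand-ins for
# laser hardware: green = trap an electron (bit 1), high-power red = eject it
# (bit 0), low-power red = read without rewriting (idealized).
class NVCenter:
    def __init__(self):
        self.has_electron = False          # start "discharged": bit = 0

def pulse_green(defect: NVCenter) -> None:
    defect.has_electron = True             # trap an electron

def pulse_red_high(defect: NVCenter) -> None:
    defect.has_electron = False            # eject the electron

def pulse_red_low(defect: NVCenter) -> int:
    return 1 if defect.has_electron else 0 # non-destructive read (idealized)

bit = NVCenter()
pulse_green(bit)            # write a 1
print(pulse_red_low(bit))   # read back -> 1
pulse_red_high(bit)         # erase
print(pulse_red_low(bit))   # read back -> 0
```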

The NV centers can encode data on various levels.
Siddharth Dhomkar and Carlos A. Meriles, CC BY-ND

Our method is still diffraction limited, but is 3-D in the sense that we can charge and discharge the defects at any point inside of the diamond. We also present a sort of fourth dimension. Since the defects are so small and our laser is diffraction limited, we are technically charging and discharging many defects in a single pulse. By varying the duration of the laser pulse in a single region we can control the number of charged NV centers and consequently encode multiple bits of information.

Though one could use natural diamonds for these applications, we use artificially lab-grown diamonds. That way we can efficiently control the concentration of nitrogen vacancy centers in the diamond.

All these improvements add up to roughly a 100-fold enhancement in bit density relative to current DVD technology. That means we can encode all the information from a DVD into a diamond that takes up about one percent of the space.

Past just charge, to spin as well

If we could get beyond the diffraction limit of light, we could improve storage capacities even further. We have one novel proposal on this front.

A human cell, imaged on the right with super-resolution microscope.
Dr. Muthugapatti Kandasamy, CC BY-NC-ND

Nitrogen vacancy centers have also been used in the execution of what is called super-resolution microscopy to image things that are much smaller than the wavelength of light. However, since the super-resolution technique works on the same principles of charging and discharging the defect, it would cause unintentional alterations in the pattern one wants to encode. Therefore, we can’t use it as-is for memory storage applications; we’d need to back up the already-written data somehow during a read or write step.

Here we propose the idea of what we call charge-to-spin conversion: we temporarily encode the charge state of the defect in the spin state of the defect’s host nitrogen nucleus. Spin is a fundamental property of any elementary particle; like charge, it is intrinsic, and it can be imagined as a very tiny magnet permanently attached to the particle.

While the charges are being adjusted to read or write the information as desired, the previously written information is well protected in the nitrogen spin state. Once the charges have been encoded, the information can be converted back from the nitrogen spin to the charge state through another mechanism, which we call spin-to-charge conversion.

With these advanced protocols, the storage capacity of a diamond would surpass what existing technologies can achieve. This is just a beginning, but these initial results give us a potential way of storing huge amounts of data in a brand new way. We’re looking forward to transforming this beautiful quirk of physics into a vastly useful technology.

Siddharth Dhomkar, Postdoctoral Associate in Physics, City College of New York and Jacob Henshaw, Teaching Assistant in Physics, City College of New York

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Saving lives by letting cars talk to each other

By Huei Peng, University of Michigan.

The death of a person earlier this year while driving with Autopilot in a Tesla sedan, along with news of more crashes involving Teslas operating in Autopilot, has triggered a torrent of concerns about the safety of self-driving cars.

But there is a way to improve safety across a rapidly evolving range of advanced mobility technologies and vehicles – from semi-autonomous driver assist features like Tesla’s Autopilot to a fully autonomous self-driving car like Google’s.

A Tesla Model S on the highway. pasa47, CC BY

The answer is connectivity: wireless communication that connects vehicles to each other, to the surrounding infrastructure, even to bicyclists and pedestrians. While connectivity and automation each provide benefits on their own, combining them promises to transform the movement of people and goods more than either could alone, and to do so safely. The U.S. Department of Transportation may propose requiring all new cars to have vehicle-to-vehicle communication, known as V2V, as early as this fall.

Tesla blamed the fatal crash on the failure of both its Autopilot technology and the driver to see the white tractor-trailer against a bright sky. But the crash – and the death – might have been avoided entirely if the Tesla and the tractor-trailer it hit had been able to talk to each other.

Limitations of vehicles that are not connected

Having autonomous vehicles that aren’t connected to each other is a bit like gathering together the smartest people in the world but not letting them talk to each other. Connectivity enables smart decisions by individual drivers, by self-driving vehicles and at every level of automation in between.

Despite all the safety advances in recent decades, there are still more than 30,000 traffic deaths every year in the United States, and the number may be on the rise. After years of steady declines, fatalities rose 7.2 percent in 2015 to 35,092, up from 32,744 in 2014, representing the largest percentage increase in nearly 50 years, according to the U.S. DOT.

Most American traffic crashes involve human error. Ragesoss, CC BY-SA

The federal government estimates that 94 percent of all crashes – fatal or not – involve human error. Fully automated, self-driving vehicles are considered perhaps the best way to reduce or eliminate traffic deaths by taking human error out of the equation. The benefits of automation are evident today in vehicles that can steer you back into your lane if you start to drift or brake automatically when another driver cuts you off.

A self-driving vehicle takes automation to a higher level. It acts independently, using sensors such as cameras and radars, along with decision-making software and control features, to “see” its environment and respond, just as a human driver would.

However, onboard sensors, no matter how sophisticated, have limitations. Like humans, they see only what is in their line of sight, and they can be hampered by poor weather conditions.

Connecting cars to each other

Connected vehicles anonymously and securely “talk” to each other and to the surrounding infrastructure via wireless communication similar to Wi-Fi, known as Dedicated Short Range Communications, or DSRC. Vehicles exchange data – including location, speed and direction – 10 times per second through messages that can be securely transmitted at least 1,000 feet in any direction, and through barriers such as heavy snow or fog. Bicycles and pedestrians can be linked using portable devices such as smartphones or tablets, so drivers know they are nearby.
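As a rough sketch of the kind of data packet involved, each connected vehicle broadcasts something like the message below roughly ten times a second. The field names are illustrative only; the actual standard (the SAE J2735 basic safety message used with DSRC) defines its own binary format.

```python
# An illustrative sketch of a vehicle-to-vehicle status broadcast. Field names
# are made up for clarity; the real DSRC/SAE J2735 basic safety message uses
# its own encoding and security layer.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SafetyMessage:
    vehicle_id: str        # anonymized, rotating identifier
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    timestamp: float

def broadcast(msg: SafetyMessage) -> bytes:
    return json.dumps(asdict(msg)).encode()   # stand-in for the radio layer

msg = SafetyMessage("anon-42", 42.2808, -83.7430, 13.4, 270.0, time.time())
print(broadcast(msg))
```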

The federal government estimates that V2V connectivity could ultimately prevent or reduce the severity of about 80 percent of collisions that don’t involve a driver impaired by drugs or alcohol.

Cars are already connected in many ways. Think satellite-based GPS navigation, in-vehicle Wi-Fi hotspots and smartphone apps that remind you where you parked or remotely unlock your doors. But when it comes to connectivity for safety, there is broad agreement within the auto industry that DSRC-based V2V communication holds the most potential for reducing crashes. After years of testing, the industry is poised to roll out the technology. The next step is putting regulations in place.

Could this congested mess become a connected, communicating system? joiseyshowaa/flickr, CC BY-SA

Perhaps the greatest benefit of connectivity is that it can transform a group of independent vehicles sharing a road into a cohesive traffic system that can exchange critical information about road and traffic conditions in real time. If all vehicles are connected, and a car slips on some ice in blinding snow, vehicles following that car – whether immediately behind or three or four or more vehicles back – will get warnings to slow down. A potential 100-car pileup could become a two-car fender-bender, or be avoided altogether.

This technological shift becomes a revolution when connectivity and automation are combined. A self-driving vehicle is like an island on the road, aware only of what is immediately around it. Connectivity empowers a driverless car. It alerts the vehicle to imminent dangers it may not otherwise sense, such as a vehicle about to run a red light, approaching from the other side of a hill or coming around a blind corner. The additional information could be what triggers an automated response that avoids a crash. In that way, connectivity enables more, and potentially better, automation.

More research needed

At the University of Michigan Mobility Transformation Center, we’re working to further the development of connected and automated vehicles.

Advanced mobility vehicle technology is evolving rapidly on many fronts. More work must be done to determine how best to feed data gathered from sensors to in-vehicle warning systems. We need to more fully understand how to fuse information from connectivity and onboard sensors effectively, under a wide variety of driving scenarios. And we must perfect artificial intelligence, the brains behind self-driving cars.

The benefits of connected and automated vehicles go well beyond safety. They hold the potential to significantly reduce fuel use and carbon emissions through more efficient traffic flow. No more idling at red lights or in rush hour jams for commuters or freight haulers.

Connected self-driving cars also promise to bring safe mobility to those who don’t have cars, don’t want cars or cannot drive due to age or illness. Everything from daily living supplies to health care could be delivered to populations without access to transportation.

Researchers at MTC are also studying possible negative unintended consequences of the new technology and watching for possible privacy violations, cyberattack vulnerabilities or increases in mileage driven. Deeper understanding of both technology and social science issues is the only way to ensure that connected self-driving cars become part of our sustainable future.

Huei Peng, Professor of Mechanical Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

How random is your randomness, and why does it matter?

By David Zuckerman, University of Texas at Austin and Eshan Chattopadhyay, Institute for Advanced Study.

Randomness is powerful. Think about a presidential poll: A random sample of just 400 people in the United States can accurately estimate Clinton’s and Trump’s support to within 5 percent (with 95 percent certainty), despite the U.S. population exceeding 300 million. That’s just one of many uses.
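That “400 people, within 5 percent” figure falls straight out of the standard sampling-error formula; here is the back-of-the-envelope arithmetic, assuming a simple random sample and a 95 percent confidence level.

```python
# The margin-of-error arithmetic behind "400 people, within 5 percent,"
# assuming simple random sampling and 95 percent confidence (z ~= 1.96).
# The worst case is support near 50 percent.
import math

n = 400
p = 0.5                                        # worst-case proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin:.1%}")    # about +/- 4.9%
```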

Randomness is vital for computer security, making possible secure encryption that allows people to communicate secretly even if an adversary sees all coded messages. Surprisingly, it even allows security to be maintained if the adversary also knows the key used to encode the messages.

Often random numbers can be used to speed up algorithms. For example, the fastest way we know to test whether a particular number is prime involves choosing random numbers. That can be helpful in math, computer science and cryptography, among other disciplines.
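One classic example of such a randomized test is the Miller–Rabin primality test. The sketch below is a compact illustration, not a production implementation.

```python
# A compact sketch of the Miller-Rabin primality test: pick random bases and
# check a condition that composite numbers fail with high probability.
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)          # the random choice the text mentions
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                        # definitely composite
    return True                                 # very probably prime

print(is_probably_prime(2**61 - 1))             # True: a known Mersenne prime
```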

Random numbers are also crucial to simulating very complex systems. When dealing with the climate or the economy, for example, so many factors interact in so many ways that the equations involve millions of variables. Today’s computers are not powerful enough to handle all these unknowns. Modeling this complexity with random numbers simplifies the calculations, and still results in accurate simulations.

Typing: A source of low-quality randomness. ROLENSFX/YouTube, CC BY-SA

But it turns out some – even most – computer-generated “random” numbers aren’t actually random. They can follow subtle patterns that can be observed over long periods of time, or over many instances of generating random numbers. For example, a simple random number generator could be built by timing the intervals between a user’s keystrokes. But the results would not really be random, because there are correlations and patterns in these timings, especially when looking at a large number of them.

Using this sort of output – numbers that appear at first glance to be unrelated but which really follow a hidden pattern – can weaken polls’ accuracy and communication secrecy, and render those simulations useless. How can we obtain high-quality randomness, and what does this even mean?

Randomness quality

To be most effective, we want numbers that are very close to random. Suppose a pollster wants to pick a random congressional district. As there are 435 districts, each district should have one chance in 435 of being picked. No district should be significantly more or less likely to be chosen.

Low-quality randomness is an even bigger concern for computer security. Hackers often exploit situations where a supposedly random string isn’t all that random, like when an encryption key is generated with keystroke intervals.

Radioactive decay: Unpredictable, but not efficient for generating randomness. Inductiveload

It turns out to be very hard for computers to generate truly random numbers, because computers are just machines that follow fixed instructions. One approach has been to use a physical phenomenon a computer can monitor, such as radioactive decay of a material or atmospheric noise. These are intrinsically unpredictable and therefore hard for a potential attacker to guess. However, these methods are typically too slow to supply enough random numbers for all the needs computers and people have.

There are other, more easily accessible sources of near-randomness, such as those keystroke intervals or monitoring computer processors’ activity. However, these produce random numbers that do follow some patterns, and at best contain only some amount of uncertainty. These are low-quality random sources. They’re not very useful on their own.

What we need is called a randomness extractor: an algorithm that takes as input two (or more) independent, low-quality random sources and outputs a truly random string (or a string extremely close to random).
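To make that definition concrete, here is the classic textbook two-source extractor – the inner product mod 2 – which, unlike the construction described below, needs both sources to be of fairly high quality (min-entropy greater than half the string length). It is shown only to make the idea tangible, not as our new algorithm.

```python
# The classical inner-product-mod-2 two-source extractor: given two independent
# n-bit strings from imperfect sources, output one bit that is close to uniform,
# provided each source has min-entropy greater than n/2. This is the standard
# textbook construction, not the new low-quality two-source extractor.
def inner_product_extractor(x_bits: list[int], y_bits: list[int]) -> int:
    assert len(x_bits) == len(y_bits)
    return sum(xi * yi for xi, yi in zip(x_bits, y_bits)) % 2

# Example: two 8-bit samples from imperfect, independent sources.
print(inner_product_extractor([1, 0, 1, 1, 0, 0, 1, 0],
                              [0, 1, 1, 0, 1, 0, 1, 1]))
```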

Constructing a randomness extractor

Mathematically, it is impossible to extract randomness from just one low-quality source. A clever (but by now standard) argument from probability shows that it’s possible to create a two-source extractor algorithm to generate a random number. But that proof doesn’t tell us how to make one, nor guarantee that an efficient algorithm exists.

Until our recent work, the only known efficient two-source extractors required that at least one of the random sources actually had moderately high quality. We recently developed an efficient two-source extractor algorithm that works even if both sources have very low quality.

Our algorithm for the two-source extractor has two parts. The first part uses a cryptographic method called a “nonmalleable extractor” to convert the two independent sources into one series of coin flips. This allows us to reduce the two-source extractor problem to solving a quite different problem.

Suppose a group of people want to collectively make an unbiased random choice, say among two possible choices. The catch is that some unknown subgroup of these people have their heart set on one result or the other, and want to influence the decision to go their way. How can we prevent this from happening, and ensure the ultimate result is as random as possible?

The simplest method is to just flip a coin, right? But then the person who does the flipping will just call out the result he wants. If we have everyone flip a coin, the dishonest players can cheat by waiting until the honest players announce their coin flips.

A middling solution is to let everyone flip a coin, and go with the outcome of a majority of coin flippers. This is effective if the number of cheaters is not too large: among the honest players, the number of heads is likely to differ from the number of tails by a significant amount. If the number of cheaters is smaller than that difference, they won’t be able to affect the outcome.

Protecting against cheaters

We constructed an algorithm, called a “resilient function,” that tolerates a much larger number of cheaters. It depends on more than just the numbers of heads and tails. A building block of our function is called the “tribes function,” which we can explain as follows.

Suppose there are 44 people involved in collectively flipping a coin, some of whom may be cheaters. To make the collective coin flip close to fair, divide them into 11 subgroups of four people each. Each subgroup will call out “heads” if all of its members flip heads; otherwise it will say “tails.” The tribes function outputs “heads” if any subgroup says “heads;” otherwise it outputs “tails.”
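In code, the tribes function just described is only a few lines; the random flips below simply stand in for the 44 participants.

```python
# The tribes function as described: 44 coin flips split into 11 subgroups of
# four. A subgroup says "heads" only if all four of its members flipped heads;
# the overall output is "heads" if any subgroup says "heads".
import random

def tribes(flips: list[bool]) -> bool:
    assert len(flips) == 44
    groups = [flips[i:i + 4] for i in range(0, 44, 4)]
    return any(all(group) for group in groups)

flips = [random.random() < 0.5 for _ in range(44)]   # True = heads
print("heads" if tribes(flips) else "tails")
```

With fair flips, each subgroup says “heads” with probability 1/16, so the overall output comes up “heads” a little over half the time – close to fair, as the text says.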

The tribes function works well if there is just one cheater. This is because if some other member of the cheater’s subgroup flips tails, then the cheater’s coin flip doesn’t affect the outcome. However, it works poorly if there are four cheaters, and if those players all belong to the same subgroup. For then all of them could output “heads,” and force the tribes function to output “heads.”

To handle many cheaters, we build upon work of Miklos Ajtai and Nati Linial and use many different divisions into subgroups. This gives many different tribes functions. We then output “heads” if all these tribe functions output “heads”; otherwise we output “tails.” Even a large number of cheaters is unlikely to be able to control the output, ensuring the result is, in fact, very random.

Our extractor outputs just one almost random bit – “heads” or “tails.” Shortly afterwards Xin Li showed how to use our algorithm to output many bits. While we gave an exponential improvement, other researchers have further improved our work, and we are now very close to optimal.

Our finding is truly just one piece of a many-faceted puzzle. It also advances an important field in the mathematical community, called Ramsey theory, which seeks to find structure even in random-looking objects.

David Zuckerman, Professor of Computer Science, University of Texas at Austin and Eshan Chattopadhyay, Postdoctoral Researcher in Mathematics, Institute for Advanced Study

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

What you see is not always what you get: how virtual reality can manipulate our minds [Video]

By David Evans Bailey, Auckland University of Technology.

It is often said that you should not believe everything you see on the internet. But with the advent of immersive technology – like virtual reality (VR) and augmented reality (AR) – this becomes more than doubly true.

The full capabilities of these immersive technologies have yet to be explored, but already we can get a sense of how they can be used to manipulate us.

You may not think you are someone who is easily duped, but what if the techniques used are so subtle that you are not even aware of them? The truth is that once you’re in a VR world, you can be influenced without knowing it.

Unlike video conferencing, where video data is presented exactly as it is recorded, immersive technologies only send select information and not necessarily the actual graphical content.

This has always been the case in multiplayer gaming, where the gaming server simply sends location and other information to your computer. It’s then up to your computer to translate that into a full picture.

Interactive VR is similar. In many cases, very little data is shared between the remote computer and yours, and the actual visual scene is constructed locally.

This means that what you are seeing on your end is not necessarily the same as what is being seen at the other end. If you are engaged in a VR chat, the facial features, expressions, gestures, bodily appearance and many other factors can be altered by software without you knowing it.

Stanford researchers examine the psychology of virtual reality

Like you like me

In a positive sense VR can be helpful in many fields. For example, research shows that eye contact increases the attentiveness of students, but a teacher lecturing a large class cannot make eye contact with every student.

With VR, though, the software can be programmed to make the teacher appear to be making eye contact with all of the students at the same time. So a physical impossibility becomes virtually possible.

But there will always be some people who will co-opt a tool and use it for something perhaps more nefarious. What if, instead of a teacher, we had a politician or lobbyist, and something more controversial or contentious was being said? What if the eye contact meant that you were more persuaded as a result? And this is only the beginning.

Research has shown that the appearance of ourselves and others in a virtual world can influence us in the real world.

This can also be coupled with techniques that are already used to boost influence. Mimicry is one example. If one person mimics the body language of another in a conversation, then the person being mimicked will become more favourably disposed towards them.

In VR it is easy to do this as the movements of each individual are tracked, so a speaker’s avatar could be made to mimic every person in the audience without them realising it.

More insidious still, all the features of a person’s face can easily be captured by software and turned into an avatar. Several studies from Stanford University have shown that if the features of a political figure could be changed even slightly to resemble each voter in turn, then that could have a significant influence on how people voted.

The experiments took pictures of study participants and real candidates in a mock-up of an election campaign. The pictures of each candidate were then morphed to resemble each participant in turn.

Stanford researcher Jeremy Bailenson explains how political manipulation was easily done in VR experiments.

They found that if 40% of the participant’s features were incorporated into the candidate’s face, the participants were entirely unaware the image had been manipulated. Yet the blended picture significantly influenced the intended voting result in favour of the morphed candidate.
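The morphing itself comes down to a weighted blend of two aligned images. Here is a minimal sketch of that 60/40 pixel blend, with random arrays standing in for real face photos; real morphing software also warps facial landmarks, which this omits.

```python
# A minimal sketch of the 60/40 blend described above, using random arrays as
# stand-ins for two aligned face photographs. Only the pixel-blending step is
# shown; landmark warping is omitted.
import numpy as np

candidate = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
participant = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

blended = (0.6 * candidate + 0.4 * participant).astype(np.uint8)   # 40% participant
print(blended.shape, blended.dtype)
```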

What happens in the virtual world does not stay in the virtual world. We must therefore be mindful when we step into this new realm that what we see is not always what we get.

David Evans Bailey, PhD Researcher in Virtual Reality, Auckland University of Technology

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

All you need for quantum computing at room temperature is some mothballs

By Mohammad Choucair, University of Sydney.

Much of the current research on the development of a quantum computer involves work at very low temperatures. The challenge in making quantum computers more practical for everyday use is to make them work at room temperature.

The breakthrough here came from the use of some everyday materials, with details published today in Nature Communications.

A typical modern-day computer represents information using a binary number system of discrete bits, represented as either 0 or 1.

A quantum computer uses a sequence of quantum bits, or qubits. They can represent information as 0 or 1 or any of a series of states between 0 and 1, known as quantum superposition of those qubits.
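One way to picture those in-between states: a qubit can be written as a pair of amplitudes whose squared magnitudes give the probabilities of reading 0 or 1. The tiny simulation below is only a classical illustration of that bookkeeping, not a quantum computation.

```python
# A qubit state as a pair of complex amplitudes, simulated classically.
# The squared magnitudes give the probabilities of measuring 0 or 1; an
# equal superposition yields each outcome half the time.
import numpy as np

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
superposition = (zero + one) / np.sqrt(2)          # equal mix of 0 and 1

probabilities = np.abs(superposition) ** 2
print(probabilities)                               # [0.5 0.5]
```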

It’s this leap that makes quantum computers capable of solving problems much more quickly and powerfully than today’s typical computers.

All in the spin

An electron has a charge and a spin – the spin determines if an atom will generate a magnetic field. The spin can also be used as a qubit as it can undergo transitions between the spin-up and spin-down quantum states, represented classically by 0 and 1.

To be useful as qubits, the electron spin states need to be robust against “decoherence” – the disordering of the electron spins during quantum superposition, which results in the loss of information.

The electron spin lifetimes are affected by lattice vibrations in a material and neighbouring magnetic interactions. Long electron spin lifetimes exceeding 100 nanoseconds are needed for quantum computing.

Cooling a material to temperatures close to absolute zero (about -273°C) does increase the spin lifetime. So too does the use of magnetically pure conducting materials.

Cool computing

So quantum devices using atomically heavy materials such as silicon or metals need to be cooled to low temperatures near absolute zero.

Other materials have been used to perform quantum manipulations at room temperature. But these materials need to be isotopically engineered, which requires large facilities such as nuclear reactors, and they pose limitations on qubit density.

Molecules such as metal-organic cluster compounds have also been used, but they too require low temperatures and isotopic engineering.

There are clear and established trade-offs to be considered regarding the feasibility of applying a qubit material system for quantum computing.

A conducting material of light atomic weight with a long electron spin lifetime exceeding 100 nanoseconds at room temperature would permit practical quantum computing. Such a material would combine the best aspects of current solid-state material qubit schemes.

Why you need mothballs

We have demonstrated that a long conduction electron spin lifetime can be achieved at room temperature in a metallic-like material made up of carbon nanospheres.

This material was produced simply by burning naphthalene, the active ingredient in mothballs.

The material is produced as a solid powder and handled in air. It can then be dispersed in ethanol and water solvents, or deposited directly onto a surface like glass. As the material was remarkably homogeneous, the measurements could be made on the bulk solid powder.

This allowed us to achieve a new record electron spin lifetime of 175 nanoseconds at room temperature. This might not sound like a long time, but it exceeds the prerequisite for applications in quantum computing and is about 100 times longer than that found in graphene.

This was possibly due to the material’s self-doping of conduction electrons and their nanometre-scale spatial confinement. This basically means the spheres could be made entirely from carbon while preserving their unique electronic property.

Our work now opens the possibility for spin qubits to be manipulated in a conducting material at room temperature. This method doesn’t need any isotopic engineering of a host material, dilution of the spin-carrying molecule, or cryogenic temperatures.

In principle, it allows qubits to be packed at a higher density than other promising qubit schemes, such as those based on silicon.

Reduced costs

This very easy preparation of a carbon material using common laboratory reagents reduces many of the technological barriers to realising practical quantum computing.

For example, the refrigeration systems required to cool materials close to absolute zero can cost upwards of millions of dollars and occupy physical spaces the size of large rooms.

To build a quantum computer, one would need to demonstrate that qubits can undergo manipulations involving the superposition of quantum states, and also to build a functioning quantum logic gate (switch).

In our work we have demonstrated the former while making the latter a question of engineering rather than breakthrough science. The next step would be to build a quantum logic gate – an actual device.

What is exciting is that the material is prepared in a form suitable for device processing. We have already demonstrated that the individual conducting carbon nanospheres can be isolated on a silicon surface.

In principle, this may provide an initial avenue to high-density qubit arrays of nanospheres that are integrated onto existing silicon technologies or thin-film-based electronics.

Mohammad Choucair, Research Fellow, University of Sydney

This article was originally published on The Conversation. Read the original article.

Now, Check Out:

Blockchains: Focusing on bitcoin misses the real revolution in digital trust

By Ari Juels, Cornell University and Ittay Eyal, Cornell University.

In 2008, short of sending a suitcase full of cash, there was essentially just one way for an individual to send money between, say, the United States and Europe. You had to wire the money through a mainstream financial service, like Western Union or a bank. That meant paying high fees and waiting up to several days for the money to arrive.

A radically new option arose in 2009 with the introduction of bitcoin. Bitcoin makes it possible to transfer value between two individuals anywhere in the world quickly and at minimal cost. It is often called a “cryptocurrency,” as it is purely digital and uses cryptography to protect against counterfeiting. The software that executes this cryptography runs simultaneously on computers around the world. Even if one or more of these computers is misused in an attempt to corrupt the bitcoin network (such as to steal money), the collective action of the others ensures the integrity of the system as a whole. Its distributed nature also enables bitcoin to process transactions without the fees, antiquated networks and (for better or worse) the rules governing intermediaries like banks and wire services.

Bitcoin’s exciting history and social impact have fired imaginations. The aggregate market value of all issued bitcoins today is roughly US$10 billion. The computing devices that maintain its blockchain are geographically dispersed and owned by thousands of different individuals, so the bitcoin network has no single owner or point of control. Even its creator remains a mystery (despite many efforts to unmask her, him or them). Bitcoin’s lack of government regulation made it attractive to black markets and malware writers. Although the core system is well-secured, people who own bitcoins have experienced a litany of heists and fraud.

Even more than the currency itself, though, what has drawn the world’s attention are the unprecedented reliability and security of bitcoin’s underlying transaction system, called a blockchain. Researchers, entrepreneurs and developers believe that blockchains will solve a stunning array of problems, such as stabilizing financial systems, identifying stateless persons, establishing title to real estate and media, and managing supply chains efficiently.

Understanding the blockchain

Despite its richly varied applications, a blockchain such as bitcoin’s aims to realize a simple goal. Abstractly, it can be viewed as creating a kind of public bulletin board, often called a “distributed ledger.” This ledger is public. Anyone – plebeian or plutocrat, baker or banker – can read it. And anyone can write valid data to it. Specifically, in bitcoin, any owner of money can add a transaction to the ledger that transfers some of her money to someone else. The bitcoin network makes sure that the ledger includes only authorized transactions, meaning those digitally signed by the owners of the money being transferred.

The key feature of blockchains is that new data may be written at any time, but can never be changed or erased. At first glance, this etched-in-stone rule seems a needless design restriction. But it gives rise to a permanent, ever-growing transactional history that creates strong transparency and accountability. For example, the bitcoin blockchain contains a record of every transaction in the system since its birth. This feature makes it possible to prevent account holders from reneging on transactions, even if their identities remain anonymous. Once in the ledger, a transaction is undeniable. The indelible nature of the ledger is much more powerful and general, though, allowing blockchains to support applications well beyond bitcoin.
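To make that append-only property concrete, here is a minimal sketch – my own illustration, not bitcoin’s actual code, and it omits the digital signatures described above. Each record stores the hash of the previous record, so altering any earlier entry invalidates every link that follows.

```python
# Minimal illustration of a hash-chained, append-only ledger.
# Editing any earlier entry changes its hash and breaks every later link.
import hashlib
import json

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger, data):
    prev = record_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev_hash": prev, "data": data})

def verify(ledger):
    return all(ledger[i]["prev_hash"] == record_hash(ledger[i - 1])
               for i in range(1, len(ledger)))

ledger = []
append(ledger, "Alice pays Bob 5")
append(ledger, "Bob pays Carol 2")
print(verify(ledger))                      # True
ledger[0]["data"] = "Alice pays Bob 500"   # attempt to rewrite history
print(verify(ledger))                      # False -- tampering is evident
```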

Consider, for example, the management of title to a piece of land or property. Property registries in many parts of the world today are fragmented, incomplete, poorly maintained, and difficult to access. The legal uncertainty surrounding ownership of property is a major impediment to growth in developing economies. Were property titles authoritatively and publicly recorded on a blockchain, anyone could learn instantly who has title to a piece of property. Even legitimate anonymous ownership – as through a private trust – could be recorded on a blockchain.

Such transparency would help resolve legal ambiguity and shed light on malfeasance. Advocates envision similar benefits in blockchain recording of media rights – such as rights to use images or music – identity documents and shipping manifests. In addition, the decentralized nature of the database provides resilience not just to technical failures, but also to political ones – failed states, corruption and graft.

Smart contracts

Blockchains can be enhanced to support not just transactions, but also pieces of code known as smart contracts. A smart contract is a program that controls assets on the blockchain – anything from cryptocurrency to media rights – in ways that guarantee predictable behavior. A smart contract may be viewed as playing the role of a trusted third party: Whatever task it is programmed to do, it will carry out faithfully.

Suppose for example that a user wishes to auction off a piece of land for which her rights are represented on a blockchain. She could hire an auctioneer, or use an online auction site. But that would require her and her potential customers to trust, without proof, that the auctioneer conducts the auction honestly.

To achieve greater transparency, the user could instead create a smart contract that executes the auction automatically. She would program the smart contract with the ability to deliver the item to be sold and with rules about minimum bids and bidding deadlines. She would also specify what the smart contract is to do at the end of the auction: send the winning bid amount from the winner to the seller’s account and transfer the land title to the winner.
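As a rough sketch of the kind of rules such a contract might encode – written in Python purely for illustration, since real smart contracts run in a blockchain-specific environment, and the names Auction, bid and settle are mine rather than any platform’s – the logic could look like this:

```python
# Illustrative sketch only: auction rules a smart contract could encode.
# Real contracts execute on a blockchain virtual machine, not in Python.
import time

class Auction:
    def __init__(self, seller, land_title, min_bid, deadline):
        self.seller = seller
        self.land_title = land_title   # the asset the contract controls
        self.min_bid = min_bid
        self.deadline = deadline       # e.g. a Unix timestamp
        self.best_bid = 0
        self.best_bidder = None

    def bid(self, bidder, amount):
        # The rules are enforced by code, not by a human auctioneer.
        if time.time() > self.deadline:
            raise ValueError("auction closed")
        if amount < self.min_bid or amount <= self.best_bid:
            raise ValueError("bid too low")
        self.best_bid, self.best_bidder = amount, bidder

    def settle(self):
        # At the deadline: pay the seller, transfer title to the winner.
        if time.time() <= self.deadline:
            raise ValueError("auction still running")
        return {"pay": (self.best_bidder, self.seller, self.best_bid),
                "transfer_title": (self.land_title, self.best_bidder)}
```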

Because the blockchain is publicly visible, anyone with suitable expertise could check that the code in the smart contract implements a fair and valid auction. Auction participants would only need to trust the correctness of the code. They wouldn’t need to rely on an auctioneer to run the auction honestly – and as an added benefit, they also wouldn’t need to pay high auctioneer fees.

Handling confidentiality

Behind this compelling vision lurk many technical challenges. The transparency and accountability of a fully public ledger have many benefits, but they are at odds with confidentiality. What if the seller mentioned above wanted to conduct a sealed-bid auction, or to conceal the winning bid amount? How could she do this on a blockchain that everyone can read? Achieving both transparency and confidentiality on blockchains is in fact possible, but it requires new techniques still under development by researchers.
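One flavour of building block such techniques can draw on – offered here as my own illustration, not one named in the article – is a cryptographic commitment: a bidder publishes a binding fingerprint of her secret bid on the public ledger, and reveals the bid only after bidding closes, when anyone can check it against the earlier fingerprint.

```python
# Illustration only: a simple hash commitment. The bidder posts commit(bid)
# publicly during the auction and reveals (bid, nonce) afterwards; anyone
# can then verify the reveal matches the commitment, but the bid stayed
# hidden until reveal time.
import hashlib
import secrets

def commit(bid, nonce):
    return hashlib.sha256(f"{bid}:{nonce}".encode()).hexdigest()

def verify(commitment, bid, nonce):
    return commit(bid, nonce) == commitment

nonce = secrets.token_hex(16)   # random value keeps the bid unguessable
c = commit(1500, nonce)         # posted on the public ledger now
print(verify(c, 1500, nonce))   # True at reveal time
print(verify(c, 2000, nonce))   # False -- the bid cannot be changed later
```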

Another challenge is ensuring that smart contracts correctly reflect user intent. A lawyer, arbiter or court can remedy defects or address unforeseen circumstances in written contracts. Smart contracts, though, are expressly designed as unalterable code. This inflexibility avoids ambiguity and cheating and ensures trustworthy execution, but it can also cause brittleness. An excellent example was the recent theft of around $55 million in cryptocurrency from a smart contract. The thief exploited a software bug, and the smart contract creators couldn’t fix it once the contract was running.

Bitcoin is a proof of concept of the viability of blockchains. As researchers and developers overcome the technical challenges of smart contracts and other blockchain innovations, marveling at money flying across the Atlantic will someday seem quaint.

Ari Juels, Professor of Computer Science, Jacobs Technion-Cornell Institute, Cornell Tech, and Co-Director, Initiative for CryptoCurrencies and Contracts (IC3), Cornell University and Ittay Eyal, Research Associate, Computer Science and Associate Director, Initiative for CryptoCurrencies and Contracts (IC3), Cornell University

This article was originally published on The Conversation. Read the original article.

Now, Check Out: