Numerous articles have been written in recent years about the fantastic features of future quantum computers: in all fields that accumulate huge amounts of data, these superfast computers are supposed to revolutionize results in both pace and precision. A few examples: finding distant planets faster in astronomical data, building better airplanes (Lockheed Martin was the first customer of D-Wave Systems Inc., the Canadian company that built the first commercial quantum computer; more about this later), developing self-driving cars, making precision weather forecasts, developing personalized drugs by analyzing DNA-sequencing data, making better earthquake predictions, finding cancer faster with the help of disease simulations, and last but not least: quantum encryption. It’s no coincidence that the CIA is investing in D-Wave Systems Inc., or that the NSA is eager for quantum computers to break encryption codes.
In the early 1980s, Nobel laureate Richard Feynman was one of the first to propose that in order to study certain problems in quantum mechanics that were too complicated for classical computers, novel quantum systems needed to be designed and applied. Many research groups all around the globe have been working on this ever since. And although many of them have come a long way, we’re “still far from [a] large-scale, fault-tolerant quantum computer”, as Serge Haroche said yesterday in Lindau during his lecture ‘Cavity Quantum Electrodynamics and Circuit QED: from Fundamental Tests to Quantum Information’.
To understand why it’s so tricky to build quantum computers, here are a few facts about classical computers first: a ‘bit’ in a normal computer is like a switch, ON or OFF, 1 or 0 – in short, a binary transistor. So if you want to run many computations simultaneously, you need just as many bits. And while today’s chips can feature several billion transistors, their number is still limited. But a quantum bit, called a ‘qubit’, can be on AND off at the same time. This means it can take part in two computations simultaneously; two qubits can handle four, three qubits eight, and so on – n qubits span 2^n states at once. In quantum mechanics, this ability to do – or be – several things simultaneously is called ‘superposition’.
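The exponential scaling above is just bookkeeping of amplitudes, so it can be illustrated on an ordinary computer. Here is a minimal Python sketch (no real quantum hardware involved, and the function name is made up for illustration) that builds an equal superposition over n qubits and shows that just 3 qubits already span 2³ = 8 basis states:

```python
from itertools import product

def equal_superposition(n):
    """Return the amplitudes of n qubits in an equal superposition.

    Each of the 2**n basis states (bit strings) gets amplitude
    1/sqrt(2**n), so a measurement would yield any bit string
    with equal probability.
    """
    dim = 2 ** n
    amp = (1 / dim) ** 0.5
    return {"".join(bits): amp for bits in product("01", repeat=n)}

state = equal_superposition(3)
print(len(state))  # 8 basis states from just 3 qubits
# The squared amplitudes are probabilities, so they must sum to 1:
print(round(sum(a * a for a in state.values()), 6))  # 1.0
```

Note that the classical simulation has to store all 2^n amplitudes explicitly – which is exactly why simulating more than a few dozen qubits overwhelms classical machines, and why Feynman called for quantum hardware in the first place.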
So a qubit is a two-state quantum-mechanical system. There are several ways to build one: it may consist of a single polarized photon or a single trapped ion, and some research groups are working with superconducting circuits, to name a few of the more common approaches. Because thermal fluctuations degrade quantum states, all of these qubits need extensive cooling, and they need to be shielded from magnetic fields, waves and light that are not part of the experiment. Although the approaches and setups of the research groups differ, all of these qubits obey the same physical laws: each can exist in a superposition of two states. Now the difficult task is to keep them there, and to build large-scale systems with hundreds or even thousands of interacting qubits.
Much fundamental research has been conducted over the last twenty years to tackle these questions and problems, and many Nobel Laureates and their research groups have contributed to this quest – and won the prestigious prize along the way. Serge Haroche, who was awarded the 2012 Nobel Prize in Physics “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems”, describes how it took his team 15 years to perfect their setup to trap single photons in so-called ‘cavities’ (made of superconducting mirrors) long enough to manipulate them with single atoms, mostly highly excited, so-called Rydberg atoms.
As Haroche summarizes in another video from the Lindau Mediatheque: “In Boulder, Colorado, David Wineland and his team trap ions with electrodes and use laser beams to cool them down. In Paris, we do the opposite, we trap the photons and use a beam of atoms that we excited by laser to interrogate the photon field inside the cavity.” He continues that these different approaches are actually “two sides of the same coin: manipulating non-destructively single atoms with photons or single photons with atoms.” In both cases, light-matter interaction can be studied at the most fundamental level.
David Wineland’s lab, the Ion Storage Group at the National Institute of Standards and Technology (NIST), has developed a quantum clock that features much higher precision than the existing international standard of caesium atomic clocks. Wineland’s ‘favourite project’ uses a single aluminum ion, laser-cooled and held inside an electromagnetic trap. The physicists use the changing energy states of this ion as their clock ‘pendulum’. In his lecture on ‘Atomic Ion Clocks’ in Lindau yesterday, he explained the advantages of using ions for timekeeping: “All atoms of a kind are exactly identical, and they don’t wear out.”
If you’ve read this far, you’ll probably agree that the topic of quantum computing is somewhat difficult – but building one is even harder. Incidentally, Richard Feynman not only proposed a quantum computer, he also coined the famous sentence: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” So if you think all of this is really hard to understand – you’re in good company: after all, Feynman received his Nobel Prize in 1965 for his theory of quantum electrodynamics.
Now think about the possible applications that I mentioned earlier, and the countless applications we don’t know about yet: it is self-evident that building a working quantum computer is extremely lucrative. D-Wave Systems Inc., a Canadian company, sells the world’s first commercial quantum computer for 15 million US dollars. Only last December Hartmut Neven, Google’s head of engineering, announced that their latest tests had shown that the D-Wave computer could solve certain problems up to 100 million times faster than a classical computer. But there is an ongoing debate whether D-Wave really outperforms classical computers. Scott Aaronson, a professor at MIT, has called himself ‘Chief D-Wave Sceptic’. One of his main objections is that the test was designed so that the D-Wave computer would automatically be very fast (read Aaronson’s reply to the 100 million speedup here).
Despite all scepticism, research is moving on inexorably: just this month, Nature published a commentary with the title “Google moves closer to a universal quantum computer”. Hartmut Neven is among the authors of the original paper in the same issue, as is John Martinis from the University of California, Santa Barbara; Martinis is working for Google as well. His team builds qubits from superconducting materials, and with them it seems to be leading the current quantum computer race.
But even if we’re still far from building large-scale, fault-tolerant quantum computers, as Haroche said, we’re already learning a lot about quantum mechanics by using quantum systems for quantum simulations. In last week’s Nature, Prof. Rainer Blatt from the University of Innsbruck in Austria published a groundbreaking study showing how his team was able to use four calcium ions held in an electromagnetic trap to simulate particle interactions, for instance between electrons in an electromagnetic field. In a commentary in the same issue, a physicist not involved in this experiment writes that “it is indeed realistic to use quantum-optics techniques to study particle physics and fundamental forces.”
Not only can we learn about quantum mechanics and particle interactions – there are countless possible applications of quantum simulation in both chemistry and biology, in the emerging fields of quantum chemistry and quantum biology.
An amazing feature of quantum computing is optimization: quantum computers are said to be able to ‘find’ the best solutions all by themselves, making them something like learning machines (one quantum approach to optimization is called ‘quantum annealing’). This raises the prospect that at some point in time, machines could be smarter than humans – and what will they do with us then? “Our AI systems must do what we want them to do,” says an open letter signed by Stephen Hawking and many others, warning about unintended side effects of Artificial Intelligence and talking about a ‘control problem’ here. (Now this would be a nice topic for another long article.) Okay, large-scale quantum computing may still be in the future, but quantum simulations show us: the future is closer than you think.
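To get a feel for the annealing idea without any quantum hardware, consider its classical cousin, simulated annealing: the search accepts occasional ‘bad’ moves while a temperature parameter is high, which lets it escape local minima, and becomes greedy as the system cools. A minimal Python sketch (the cost landscape, cooling schedule and function names are all made up for illustration – a quantum annealer explores the landscape via quantum tunnelling instead of random thermal hops):

```python
import math
import random

def simulated_annealing(cost, start, neighbor, steps=10_000, t0=2.0):
    """Classical simulated annealing: accept worse moves with a
    probability exp(-delta/t) that shrinks as the temperature t
    cools, so the search can escape local minima early on and
    settles into a good minimum later."""
    random.seed(0)  # fixed seed so the demo is reproducible
    x, best = start, start
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        y = neighbor(x)                      # propose a random nearby point
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y                            # accept the move
        if cost(x) < cost(best):
            best = x                         # remember the best point seen
    return best

# Toy cost landscape with many local minima; global minimum at x = 0.
cost = lambda x: x * x + 3 * math.sin(5 * x) ** 2
best = simulated_annealing(cost, start=8.0,
                           neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(round(best, 2), round(cost(best), 3))
```

A purely greedy search started at x = 8 would get stuck in the first dip it fell into; the temperature schedule is what lets the annealer wander past those dips toward the global minimum.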