A quantum computer (also known as a quantum supercomputer) is a computation device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers differ from digital computers based on transistors: whereas digital computers require data to be encoded into binary digits (bits), quantum computation uses quantum properties to represent data and perform operations on these data.^{[1]} A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with nondeterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980^{[2]} and Richard Feynman in 1982.^{[3]}^{[4]} A quantum computer with spins as quantum bits was also formulated for use as a quantum spacetime in 1969.^{[5]}
Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits).^{[6]} Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.^{[7]}
Large-scale quantum computers will be able to solve certain problems much more quickly than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm.^{[8]} Given sufficient computational resources, a classical computer could be made to simulate any quantum algorithm; quantum computation does not violate the Church–Turing thesis.^{[9]} However, the computational basis of 500 qubits, for example, would already be too large to be represented on a classical computer, because it would require 2^{500} complex values (2^{501} bits) to be stored.^{[10]} (For comparison, a terabyte of digital information is only 2^{43} bits.)
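To put this storage blow-up in concrete terms, here is a small back-of-the-envelope sketch in Python; the 16 bytes per amplitude is an illustrative assumption (one double-precision complex number), not a figure from the text.

```python
# Classical memory needed to hold the full state vector of n qubits,
# assuming one double-precision complex number (16 bytes) per amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

# Already at 30 qubits the state vector needs 16 GiB of RAM;
# 500 qubits would need 2**500 amplitudes, beyond any classical storage.
print(state_vector_bytes(30) // 2 ** 30, "GiB for 30 qubits")
```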
Basis
A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with $n$ qubits can be in an arbitrary superposition of up to $2^n$ different states simultaneously (this compares to a normal computer that can only be in one of these $2^n$ states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with measurement of all the states, collapsing each qubit into one of the two pure states, so the outcome can be at most $n$ classical bits of information.
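This state-vector picture can be sketched with only the Python standard library: an n-qubit register is a list of $2^n$ complex amplitudes, here initialized to the all-zeros basis state (a minimal illustration, not a full simulator).

```python
def zero_state(n: int) -> list:
    """Return the state vector |00...0> for n qubits: 2**n complex
    amplitudes, with all weight on the first basis state."""
    state = [0j] * (2 ** n)
    state[0] = 1 + 0j
    return state

s = zero_state(3)
print(len(s))  # a three-qubit register has 8 amplitudes
```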
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written $|{\downarrow}\rangle$ and $|{\uparrow}\rangle$, or $|0\rangle$ and $|1\rangle$). But in fact any system possessing an observable quantity $A$ that is conserved under time evolution and has at least two discrete and sufficiently spaced consecutive eigenvalues is a suitable candidate for implementing a qubit, because any such system can be mapped onto an effective spin-1/2 system.
Bits vs. qubits
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, representing the state of an n-qubit system on a classical computer requires the storage of 2^{n} complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement, since the fact that they were in a superposition of states before the measurement directly affects the possible outcomes of the computation.
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the $2^3=8$ different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight non-negative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. However, instead of the coefficients themselves summing to one, the sum of the squares of the coefficient magnitudes, $|a|^2+|b|^2+\cdots+|h|^2$, must equal one. Moreover, the coefficients can have complex values. Since these complex-valued coefficients are probability amplitudes for the corresponding states, the relative phase between any two coefficients (states) is a meaningful parameter, which is a fundamental difference between quantum computing and probabilistic classical computing.^{[12]}
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 is $|a|^2$, the probability of measuring 001 is $|b|^2$, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution $(|a|^2,\; |b|^2,\; \ldots,\; |h|^2)$, and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
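The collapse rule can be sketched as a sampling procedure over the squared amplitude magnitudes; the state below is an illustrative equal superposition of 000 and 111.

```python
import random

def measure(state):
    """Sample a basis-string index i with probability |state[i]|**2,
    mimicking the collapse of the quantum state on measurement."""
    probs = [abs(a) ** 2 for a in state]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(state) - 1  # guard against floating-point rounding

# Equal superposition of |000> and |111>: every measurement yields
# index 0 (string 000) or index 7 (string 111), each about half the time.
s = [2 ** -0.5, 0, 0, 0, 0, 0, 0, 2 ** -0.5]
print(format(measure(s), "03b"))
```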
Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Any other set of unit-length, mutually orthogonal vectors can also serve as a basis; one example is the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
 $a\,|000\rangle + b\,|001\rangle + c\,|010\rangle + d\,|011\rangle + e\,|100\rangle + f\,|101\rangle + g\,|110\rangle + h\,|111\rangle$
 where, e.g., $|010\rangle = \left(0,0,1,0,0,0,0,0\right)$
The computational basis for a single qubit (two dimensions) is $|0\rangle = \left(1,0\right)$ and $|1\rangle = \left(0,1\right)$.
Using the eigenvectors of the Pauli-x operator, a single qubit is $|+\rangle = \tfrac{1}{\sqrt{2}} \left(1,1\right)$ and $|-\rangle = \tfrac{1}{\sqrt{2}} \left(1,-1\right)$.
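A quick standard-library check that these two vectors really form an orthonormal basis, and that the computational basis state $|0\rangle$ is an equal superposition of them:

```python
import math

plus  = [1 / math.sqrt(2),  1 / math.sqrt(2)]   # |+>
minus = [1 / math.sqrt(2), -1 / math.sqrt(2)]   # |->

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Unit length and mutual orthogonality:
assert abs(dot(plus, plus) - 1) < 1e-12
assert abs(dot(minus, minus) - 1) < 1e-12
assert abs(dot(plus, minus)) < 1e-12

# |0> = (|+> + |->) / sqrt(2): the same state, written in another basis.
zero = [(a + b) / math.sqrt(2) for a, b in zip(plus, minus)]
assert abs(zero[0] - 1) < 1e-12 and abs(zero[1]) < 1e-12
```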
Operation
While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, $|000\rangle$, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly what unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
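The norm-preservation contrast can be illustrated directly: a stochastic matrix preserves the L1 norm of a probability vector, while a unitary (here the Hadamard gate) preserves the L2 norm and is reversible. The biased coin-flip matrix below is an arbitrary illustrative choice.

```python
import math

def apply(matrix, vec):
    """Multiply a square matrix by a column vector, both as lists."""
    return [sum(matrix[i][j] * vec[j] for j in range(len(vec)))
            for i in range(len(matrix))]

# A stochastic matrix (columns sum to 1) preserves the L1 norm of a
# probability vector; here, a biased "coin flip" on one classical bit.
stochastic = [[0.9, 0.2],
              [0.1, 0.8]]
p = apply(stochastic, [1.0, 0.0])
assert abs(sum(p) - 1) < 1e-12                   # L1 norm preserved

# A unitary matrix preserves the L2 norm; here, the Hadamard gate.
h = 1 / math.sqrt(2)
hadamard = [[h,  h],
            [h, -h]]
q = apply(hadamard, [1.0, 0.0])                  # |0> -> |+>
assert abs(sum(a * a for a in q) - 1) < 1e-12    # L2 norm preserved

# Unitaries are reversible: applying H twice returns |0>.
back = apply(hadamard, q)
assert abs(back[0] - 1) < 1e-9 and abs(back[1]) < 1e-9
```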
Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
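The effect of repetition can be quantified with a one-line estimate: assuming independent runs, the failure probability shrinks exponentially in the number of runs (the per-run success rate of 2/3 below is an illustrative value).

```python
# If a single run gives the correct answer with probability p, and runs
# are independent, the chance that k repetitions all fail is (1 - p)**k.
def failure_after_runs(p: float, k: int) -> float:
    return (1 - p) ** k

# With a per-run success rate of only 2/3, 25 repetitions leave a
# failure probability of roughly 1e-12.
print(failure_after_runs(2 / 3, 25))
```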
For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch–Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.
Potential
Integer factorization is believed to be computationally infeasible on an ordinary computer for large integers that are the product of a few prime numbers (e.g., products of two 300-digit primes).^{[13]} By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial-time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public-key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
However, other existing cryptographic algorithms do not appear to be broken by these algorithms.^{[14]}^{[15]} Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.^{[14]}^{[16]} Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial-time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.^{[17]} It has been proven that applying Grover's algorithm to break a symmetric (secret-key) algorithm by brute force requires time equal to roughly 2^{n/2} invocations of the underlying cryptographic algorithm, compared with roughly 2^{n} in the classical case,^{[18]} meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public-key cryptography.
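The key-length halving can be checked arithmetically (a sketch of the counting argument, not of Grover's algorithm itself):

```python
# Grover's algorithm brute-forces an n-bit key in roughly 2**(n/2)
# invocations of the cipher, versus roughly 2**n classically, so the
# effective key length is halved.
def classical_invocations(n: int) -> int:
    return 2 ** n

def grover_invocations(n: int) -> int:
    return 2 ** (n // 2)   # n assumed even here

# AES-256 under Grover costs about what AES-128 costs classically:
assert grover_invocations(256) == classical_invocations(128)
```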
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,^{[19]} including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found showing that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.
Consider a problem that has these four properties:
 The only way to solve it is to guess answers repeatedly and check them,
 The number of possible answers to check is the same as the number of inputs,
 Every possible answer takes the same amount of time to check, and
 There are no clues about which answers might be better: trying possibilities randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).
For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. That can be a very large speedup, reducing some problems from years to seconds. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.
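A rough sense of the quadratic speedup, with an illustrative candidate count and checking rate (both hypothetical values, not from the text):

```python
import math

# Quadratic speedup: checking N candidates one by one takes ~N steps,
# while Grover-style search needs only ~sqrt(N) steps.
N = 10 ** 16                    # hypothetical number of candidate keys
rate = 10 ** 9                  # hypothetical checks per second
classical_seconds = N / rate            # 10**7 s, roughly 116 days
quantum_seconds = math.isqrt(N) / rate  # 0.1 s
print(classical_seconds, quantum_seconds)
```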
Grover's algorithm can also be used to obtain a quadratic speedup over a brute-force search for a class of problems known as NP-complete.
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.^{[20]}
There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:^{[21]}
 scalable physically to increase the number of qubits;
 qubits can be initialized to arbitrary values;
 quantum gates faster than decoherence time;
 universal gate set;
 qubits can be read easily.
Quantum decoherence
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T_{2} (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.^{[12]}
These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter; an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence, any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often-cited figure for the required error rate in each gate is 10^{−4}. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.
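That requirement is a simple ratio, sketched below; the 100-microsecond coherence time is an illustrative value, not a figure from the text.

```python
# Errors scale roughly as (gate time / decoherence time), so hitting a
# target error rate of 1e-4 per gate means each gate must finish within
# that fraction of the coherence time.
def max_gate_time(decoherence_time_s: float, target_error: float) -> float:
    return decoherence_time_s * target_error

# For an illustrative 100-microsecond coherence time, gates must run in
# about 10 nanoseconds.
print(max_gate_time(100e-6, 1e-4))
```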
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^{2}, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^{4} qubits without error correction.^{[22]} With error correction, the figure would rise to about 10^{7} qubits. Note that the computation time is about $L^2$, or about $10^7$ steps, which at 1 MHz is about 10 seconds.
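The timing estimate at the end of the paragraph is a straightforward division, using the step count and clock rate quoted above:

```python
steps = 10 ** 7       # step count quoted in the text
clock_hz = 10 ** 6    # the 1 MHz gate rate quoted in the text
seconds = steps / clock_hz
print(seconds, "seconds")  # matches the ~10 second estimate in the text
```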
A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates.^{[23]}^{[24]}
Developments
There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
 the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates);
 the one-way quantum computer (computation decomposed into a sequence of single-qubit measurements applied to a highly entangled initial state);
 the adiabatic quantum computer (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state contains the solution); and
 the topological quantum computer (computation decomposed into the braiding of anyons in a two-dimensional lattice).
The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead.
For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits (for example, superconducting circuits, trapped ions, optical photons, quantum dots, and nuclear spins addressed by NMR).
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility.
In 2001, researchers demonstrated Shor's algorithm by factoring the number 15 using a 7-qubit NMR computer.^{[38]}
In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools.^{[39]} An improved version was made in 2006.
In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) was made up of a billion aluminum atoms but acted like a single one that could occupy two different energy states.^{[40]}^{[41]}
Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip.^{[42]}
Further developments were made in 2010.^{[43]}
Springer publishes a journal ("Quantum Information Processing") devoted to the subject.^{[44]}
In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation, successfully transferring a complex set of quantum data with full transmission integrity. The qubits were destroyed in one place but recreated in another, without their superpositions being affected.^{[45]}^{[46]}
In 2011, D-Wave Systems announced the first commercial quantum annealer on the market, by the name D-Wave One. The company claims this system uses a 128-qubit processor chipset.^{[47]} On May 25, 2011, D-Wave announced that Lockheed Martin Corporation had entered into an agreement to purchase a D-Wave One system.^{[48]} Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One adiabatic quantum computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey.^{[49]} D-Wave's engineers use an empirical approach when designing their quantum chips, focusing on whether the chips are able to solve particular problems rather than designing based on a thorough understanding of the quantum principles involved. This approach was liked more by investors than by some academic critics, who said that D-Wave had not yet sufficiently demonstrated that they really had a quantum computer. Such criticism softened once D-Wave published a paper in Nature giving details, which critics said proved that the company's chips did have some of the quantum mechanical properties needed for quantum computing.^{[50]}^{[51]}
During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.^{[52]}
In September 2011, researchers also proved that a quantum computer can be made with a von Neumann architecture (with memory separated from the processor).^{[53]}
In November 2011 researchers factorized 143 using 4 qubits.^{[54]}
In February 2012 IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits that put them "on the cusp of building systems that will take computing to a whole new level."^{[55]}
In April 2012, a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a crystal of diamond doped with impurities, which can easily be scaled up in size and functionality and operates at room temperature. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin. A system generating microwave pulses of controlled duration and shape was developed to protect the qubits against decoherence. Using this computer, Grover's algorithm returned the correct answer on the first try for a four-entry search in 95% of cases.^{[56]}
In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling manufacture of its memory building blocks. A research team led by Australian engineers created the first working "quantum bit" based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern-day computers, laptops and phones.^{[57]}^{[58]}
In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world, work which may eventually help make quantum computing possible.^{[59]}^{[60]}
In November 2012, the first quantum teleportation from one macroscopic object to another was reported.^{[61]}^{[62]}
In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice; it does not constitute a universal quantum computer but may be good enough for practical problems (Science, February 15, 2013).
In May 2013, Google Inc. announced that it was launching the Quantum Artificial Intelligence Lab, to be hosted by NASA's Ames Research Center. The lab will house a 512-qubit quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it. The goal is to study how quantum computing might advance machine learning.^{[63]}
Relation to computational complexity theory
The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm whose probability of error is bounded away from one half.^{[65]} A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.
BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^{#P}),^{[66]} which is a subclass of PSPACE.
BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.^{[66]}
The capacity of a quantum computer to accelerate classical algorithms has strict limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical computations cannot be accelerated on a quantum computer.^{[67]} A similar fact holds for particular computational tasks, such as the search problem, for which Grover's algorithm is optimal.^{[68]}
Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.^{[69]} It has been speculated that theories of quantum gravity, such as Mtheory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.^{[70]}
In practice, however, quantum computers cannot yet be used efficiently.
See also
References
Bibliography
General references
 David P. DiVincenzo (2000). "The Physical Implementation of Quantum Computation". Experimental Proposals for Quantum Computation. arXiv:quant-ph/0002077.
 Table 1 lists switching and dephasing times for various systems.
 Sam Lomonaco, Four Lectures on Quantum Computing, given at Oxford University in July 2006.
 C. Adami, N. J. Cerf (1998). "Quantum computation with linear optics". arXiv:quant-ph/9806048v1.
External links
 "Quantum Computing" by Amit Hagar.
 Quantiki – Wiki and portal with freecontent related to quantum information science.
 Scott Aaronson's blog, which features informative and critical commentary on developments in the field
 Quantum Annealing and Computation: A Brief Documentary Note, A. Ghosh and S. Mukherjee
 Lectures
 Umesh Vazirani
 Michael Nielsen
 David Deutsch
 Lectures at the Institut Henri Poincaré (slides and videos)
 Online lecture on An Introduction to Quantum Computing, Edward Gerjuoy (2008)
 Quantum Computing research by Mikko Möttönen at Aalto University (video)