What Quantum Computers Do Faster, with Caveats

posted by Craig Gidney on February 19, 2014

I find reading media coverage of quantum computers frustrating. With rare exceptions, the explanations of what a quantum computer is or can do are totally unsatisfying. I remember being really confused about the whole thing. “You say a quantum computer can be in two states at the same time? Sounds cool, but how does that help? Why doesn’t it help as much as I’d expect? Do you have a better analogy? An example of something it does faster? Something it doesn’t do faster? Yes? No? …Hello?”

In this post I’ll try to explain what quantum computers do that’s faster, and (even more importantly) the limitations on that mechanism.

Straight to the Point

The operation quantum computers do faster is matrix multiplication. They can multiply a gigantic, humungous, exponentially huge matrix against a similarly huge vector in a single step.

That sounds amazing, but there are caveats: it's hard to input the vector you want, there are limits on which matrices you can use, and the output can only be sampled. (Also, there's a huge engineering challenge.) I find these caveats enlightening, so let's go over them in more detail.

Caveat #1: Huge Input

The vector being transformed by a quantum computer is the computer’s own superposition. That is to say, there is a vector containing weights for each (classical) state the computer might be in, and the matrices we apply will operate on that vector.

For example, if you have a quantum computer with three bits then it can be in eight states: 000, 001, 010, 011, 100, 101, 110, or 111. A quantum superposition of those states assigns each state a weight, some complex number, with the constraint that the squared lengths of the weights must add up to 1. A classical state assigns all the weight to one state, but quantum states can spread the weight around. That’s just how the physics works.
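To make that concrete, here's a small sketch in Python with numpy (the specific weights are just made-up example values) of a three-bit superposition and the sum-to-1 constraint:

```python
import numpy as np

# A quantum state of 3 bits: one complex weight ("amplitude") per classical
# state 000, 001, ..., 111. This example superposition spreads the weight
# over just two of the eight states.
weights = np.zeros(8, dtype=complex)
weights[0b000] = 1 / np.sqrt(2)
weights[0b111] = 1j / np.sqrt(2)   # weights can be complex

# The squared lengths of the weights must add up to 1.
total = np.sum(np.abs(weights) ** 2)
print(total)   # 1.0
```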

So specifying a quantum state containing n bits requires 2^n complex numbers, because each of the 2^n possible states is given a weight. The problem here is that you don't have the millennia it would take to input the 2^{100} weights making up some particular input superposition you care about. Your only hope is to create the input computationally, by starting from some state that's “easy” to make (like a classical state) and then applying operations resulting in the superposition you need. Even then, the vast majority of superpositions can't be reached in a reasonable amount of time.
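Here's a numpy sketch of what "creating the input computationally" looks like (the Hadamard gate used here is one standard, easy-to-apply quantum operation): n operations, one per bit, turn the easy classical all-zeros state into a uniform superposition over all 2^n states.

```python
import numpy as np

n = 10                       # 10 qubits -> 2^10 = 1024 weights
# Start in the easy-to-make classical state 00...0.
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1

# One Hadamard per qubit: n operations, instead of writing 2^n weights by hand.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = state.reshape(-1, 2, 2 ** q)          # isolate qubit q's axis
    state = np.einsum('ij,ajb->aib', H, state).reshape(-1)

print(np.allclose(state, 1 / np.sqrt(2 ** n)))    # True: all weights equal
```

Of course, the uniform superposition is a particularly easy target; most superpositions have no such short recipe.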

Caveat #2: Twisted Operations

Quantum physics imposes two important restrictions on the operations (i.e. matrices) that you can apply. First, the operation must preserve the state vector's “sum of squared lengths is 1” property regardless of what the initial state is. Second, you must be able to reverse the operation. More formally, the only matrices you can use are unitary matrices.

The best analogy I can think of for why this makes things harder is a Rubik's Cube. You could go directly to the goal by taking the cube apart and rebuilding it, but you're not allowed to do that. All you can do is rotate the sides, and that makes it challenging. (Unitary matrices are also made up of rotations.)
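A quick numpy sketch of the unitary restriction (the "erase" matrix is a made-up example of a forbidden, irreversible operation):

```python
import numpy as np

# A Hadamard gate: a valid quantum operation because it is unitary.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Unitary means U-dagger times U is the identity: the operation preserves
# the sum of squared lengths and is reversed by its conjugate transpose.
print(np.allclose(H.conj().T @ H, np.eye(2)))          # True

# Contrast with a matrix that erases information: not allowed.
erase = np.array([[1, 1], [0, 0]]) / np.sqrt(2)
print(np.allclose(erase.conj().T @ erase, np.eye(2)))  # False

# Unitaries preserve the total weight of any state they act on:
state = np.array([0.6, 0.8j])
print(np.sum(np.abs(H @ state) ** 2))                  # approximately 1.0
```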

Caveat #3: Sampled Output

You can’t actually read the weights output by a quantum algorithm. Not directly. Superpositions don’t work like that. All you can do is expose the computer’s state to the world, causing it to decohere into a single classical state. The more weight a state had in the superposition, the more likely you are to record it as the result.

In principle, by running the same computation many times, you can statistically infer what the weights of the final superposition are. In practice, there are so many possible states that this is completely hopeless unless almost all of the states end up having negligible weights. Otherwise the outputs would just look like random numbers, and you wouldn’t be able to determine anything useful.
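Here's a simulated sketch of that sampling process (the final superposition is a made-up example where one state dominates, which is exactly the situation where sampling is useful):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Final superposition of a 3-bit computer (example values): nearly all of
# the weight ended up on state 101.
weights = np.array([0.1, 0, 0, 0, 0, np.sqrt(0.98), 0, 0.1], dtype=complex)

# Measuring decoheres the state: each classical state is recorded with
# probability equal to its squared weight.
probs = np.abs(weights) ** 2
samples = rng.choice(8, size=1000, p=probs)

# Rerunning the computation 1000 times gives only a statistical picture,
# and it's only informative because one state dominates.
counts = np.bincount(samples, minlength=8)
print(counts.argmax())   # 5, i.e. binary 101
```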

Example: Fourier Transform

A good example of quantum computers’ strengths, and weaknesses, is the Quantum Fourier Transform (QFT).

Fourier Transforms have tons of applications. Want to do signal processing? Fourier. Multiply numbers fast? Fourier. Find three equally spaced 1s in an array? Fourier. You get the idea.

Because it has so many applications, computing Fourier transforms quickly is a big deal. That’s why the Fast Fourier Transform algorithm, which needs O(n \cdot lg(n)) time instead of O(n^2) time to compute a Fourier transform, shows up constantly on “Most Important Algorithms” lists. (Notable: recently the Sparse Fourier Transform reduced that to O(k \cdot lg(n)), where k is the number of non-negligible frequencies.)

Quantum computers can do Fourier transforms a lot faster. Using the QFT, they can Fourier transform their own superposition in just O(log(n)^2) operations. They mock FFT’s O(n \cdot log(n)) run time. That’s, like, enough time to read the input! Who even does that anymore?
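To see where the O(log(n)^2) comes from, here's a numpy sketch (using one common QFT sign convention; the gate decomposition is the standard textbook circuit) that builds a size-8 QFT out of m(m+1)/2 simple gates for m = log2(n) qubits, plus a final O(m) bit-order reversal, and checks the product against the Fourier matrix:

```python
import numpy as np

m = 3              # number of qubits
n = 2 ** m         # length of the superposition vector
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def on_qubit(U, q):
    """Expand a single-qubit gate U on qubit q (qubit 0 = least significant
    bit of the state index) into the full n x n matrix."""
    full = np.array([[1.0]])
    for i in reversed(range(m)):
        full = np.kron(full, U if i == q else I2)
    return full

def cphase(c, t, theta):
    """Controlled phase: multiply a weight by e^{i*theta} when bits c and t
    of the basis-state index are both 1."""
    d = np.ones(n, dtype=complex)
    for x in range(n):
        if (x >> c) & 1 and (x >> t) & 1:
            d[x] = np.exp(1j * theta)
    return np.diag(d)

def bit_reversal():
    """Permutation that reverses the order of the m qubits."""
    P = np.zeros((n, n))
    for x in range(n):
        P[int(format(x, f'0{m}b')[::-1], 2), x] = 1
    return P

# Build the QFT from Hadamards and controlled phases: O(m^2) gates total.
U = np.eye(n, dtype=complex)
gate_count = 0
for i in reversed(range(m)):
    U = on_qubit(H, i) @ U
    gate_count += 1
    for j in reversed(range(i)):
        U = cphase(j, i, 2 * np.pi / 2 ** (i - j + 1)) @ U
        gate_count += 1
U = bit_reversal() @ U

# Compare against the n x n Fourier matrix F[y, x] = w^{xy} / sqrt(n).
w = np.exp(2j * np.pi / n)
F = np.array([[w ** (x * y) for x in range(n)] for y in range(n)]) / np.sqrt(n)
print(gate_count)           # 6, i.e. m*(m+1)/2 for m = 3
print(np.allclose(U, F))    # True
```

Note that the quadratic cost is in the number of qubits m, not the vector length n = 2^m, which is exactly the O(log(n)^2) claim.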

The following diagram shows an example QFT of size 8. It determines that the input superposition [1,i,-1,-i,1,i,-1,-i]/\sqrt{8} is rotating by two eighths of a turn in the complex plane per sample. You can see that the output is in fact 100% in state 010, which is 2 in binary. (I will explain what’s going on in the diagram in more detail in next week’s post, specifically about the QFT.)

[Diagram: Quantum Fourier Transform]
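You can check the diagram's claim with a quick simulation (a sketch; QFT sign conventions vary, and numpy's FFT, suitably normalized, happens to match the convention used here):

```python
import numpy as np

# The diagram's input: weights rotating by 2/8 of a turn per sample.
x = np.array([1, 1j, -1, -1j, 1, 1j, -1, -1j]) / np.sqrt(8)

# numpy's FFT, rescaled to preserve the sum of squared lengths, acts like
# the QFT in one common sign convention.
y = np.fft.fft(x) / np.sqrt(8)

# All of the output weight lands on index 2, which is 010 in binary.
print(np.round(np.abs(y) ** 2, 6))   # [0. 0. 1. 0. 0. 0. 0. 0.]
```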

So the QFT is amazing, but don’t get too excited. Remember the caveats. The input is huge, and the output is sampled. Even given the magical ability to directly write to the superposition, it would still take O(n) time to enter all the values. Then, because the output can only be sampled, we’d have to hope there were only a few non-negligible frequencies present while repeating the procedure hundreds of times just to get a rough statistical idea of what the output spectrum looked like.

The real benefit of the QFT is as an intermediate step in some other quantum computation, like factoring a number. Then the earlier operations can computationally create the input, and the later operations can distill the result into some more useful answer.

Thus the QFT is an example of quantum computers being absurdly fast and, at the same time, an example of why you can’t always use those speedups.


Summary

Quantum computers can do huge matrix multiplications very quickly. But the input is the computer’s own superposition. And the only operations you can do are rotations or combinations thereof. And the output can’t be read, only sampled.

I go into more detail about the basics in my post on Grover’s Quantum Search Algorithm. For really in-depth information, the textbook Quantum Computation and Quantum Information is available online as a PDF.

