## What Quantum Computers Do Faster, with Caveats

I find reading media coverage of quantum computers frustrating. With rare exceptions, the explanations of what a quantum computer is or can do are totally unsatisfying. I remember being really confused about the whole thing. “You say a quantum computer can be in two states at the same time? Sounds cool, but how does that help? Why doesn’t it help as much as I’d expect? Do you have a better analogy? An example of something it does faster? Something it doesn’t do faster? Yes? No? …Hello?”

In this post I’ll try to explain what quantum computers do that’s faster, and (even more importantly) the limitations on that mechanism.

### Straight to the Point

The operation quantum computers do faster is matrix multiplication. They can multiply a gigantic, humungous, exponentially huge matrix against a similarly huge vector in a single step.

That sounds amazing, but there are caveats: it’s hard to input the vector you want, there are limitations on what matrices you can use, and the output can only be sampled. (Also, there’s a huge engineering challenge.) I find these caveats enlightening, so let’s go over them in more detail.

### Caveat #1: Huge Input

The vector being transformed by a quantum computer is *the computer’s own superposition*. That is to say, there is a vector containing weights for each (classical) state the computer might be in, and the matrices we apply will operate on that vector.

For example, if you have a quantum computer with three bits then it can be in eight states: 000, 001, 010, 011, 100, 101, 110, or 111. A quantum superposition of those states assigns each state a weight, some complex number, with the constraint that the squared lengths of the weights must add up to 1. A classical state assigns all the weight to one state, but quantum states can spread the weight around. That’s just how the physics works.
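As a concrete illustration, the weight vector for three bits can be sketched in numpy. This is only a pen-and-paper simulation: a real quantum computer never lets you look at this vector directly.

```python
import numpy as np

# A 3-bit register has 2**3 = 8 classical states: 000, 001, ..., 111.
# A superposition assigns each one a complex weight.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0b000] = 1 / np.sqrt(2)   # half the weight on 000...
state[0b101] = 1j / np.sqrt(2)  # ...half on 101 (weights may be complex)

# Physics requires the squared lengths of the weights to sum to 1.
assert np.isclose(np.sum(np.abs(state)**2), 1.0)
```

A classical state is the special case where a single entry holds all the weight and every other entry is zero.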

So specifying a quantum state containing n bits requires 2^n complex numbers, because each of the 2^n possible states is given a weight. The problem here is that you don’t have the *millennia* it would take to input the weights making up some particular input superposition you care about. Your only hope is to create the input *computationally*, by starting from some state that’s “easy” to make (like a classical state) and then applying operations resulting in the superposition you need. Even then, the vast majority of superpositions can’t be reached in a reasonable amount of time.

### Caveat #2: Twisted Operations

Quantum physics imposes two important restrictions on the operations (i.e. matrices) that you can apply. First, the operation must preserve the state vector’s “sum of squared lengths is 1” property regardless of what the initial state is. Second, you must be able to reverse the operation. More formally, the only matrices you can use are unitary matrices.
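A minimal sketch of what those two restrictions buy you, using the standard single-qubit Hadamard gate as the example operation (numpy used purely for illustration):

```python
import numpy as np

# The Hadamard gate, a standard single-qubit operation.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

# Unitary: the conjugate transpose is the inverse,
# so the operation can always be reversed.
assert np.allclose(H.conj().T @ H, np.eye(2))

# Applying it preserves the "sum of squared lengths is 1" property.
state = np.array([0.6, 0.8j])
assert np.isclose(np.linalg.norm(H @ state), np.linalg.norm(state))
```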

The best analogy I can think of for why this makes things harder is a Rubik’s Cube. You *could* go directly to the goal, by taking the cube apart and rebuilding it, but you’re not allowed to do that. All you can do is rotate the sides, and that makes it challenging. (Unitary matrices are also made up of rotations.)

### Caveat #3: Sampled Output

You can’t actually read the weights output by a quantum algorithm. Not directly. Superpositions don’t work like that. All you can do is expose the computer’s state to the world, causing it to decohere into a single classical state. The more weight a state had in the superposition, the more likely you are to record it as the result.

In principle, by running the same computation many times, you can statistically infer what the weights of the final superposition are. In practice, there are so many possible states that this is completely hopeless unless almost all of the states end up having negligible weights. Otherwise the outputs would just look like random numbers, and you wouldn’t be able to determine anything useful.
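A toy simulation of that sampling process, with made-up weights for a 2-bit result (the weights and the random seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final superposition of a 2-bit computation.
amps = np.array([np.sqrt(0.7), 0, 0, np.sqrt(0.3)], dtype=complex)

# Measurement decoheres the state into one classical outcome,
# with probability equal to the squared length of its weight.
probs = np.abs(amps)**2
samples = rng.choice(len(amps), size=10_000, p=probs)

# Repeating the whole computation many times only recovers these
# probabilities, never the complex weights themselves (phases are lost).
estimated = np.bincount(samples, minlength=len(amps)) / len(samples)
```

With 10,000 runs the estimate lands near [0.7, 0, 0, 0.3], and that took ten thousand full executions just to pin down four probabilities.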

### Example: Fourier Transform

A good example of quantum computers’ strengths, and weaknesses, is the Quantum Fourier Transform (QFT).

Fourier Transforms have tons of applications. Want to do signal processing? Fourier. Multiply numbers fast? Fourier. Find three equally spaced 1s in an array? Fourier. You get the idea.

Because it has so many applications, computing Fourier transforms quickly is a big deal. That’s why the Fast Fourier Transform algorithm, which needs O(n·log n) time instead of O(n²) time to compute a Fourier transform, shows up constantly on “Most Important Algorithms” lists. (Notable: recently the Sparse Fourier Transform reduced that to O(k·log n), where k is the number of non-negligible frequencies.)

Quantum computers can do Fourier transforms *a lot* faster. Using the QFT, they can Fourier transform their own superposition of size n in just O(log² n) operations. They mock the FFT’s O(n·log n) run time. That’s, like, enough time to read the input! Who even does that anymore?

The following diagram shows an example QFT of size 8. It determines that the input superposition is rotating by two eighths of a turn in the complex plane per sample. You can see that the output is in fact 100% in state 2, which is 010 in binary. (I will explain what’s going on in the diagram in more detail in next week’s post, specifically about the QFT.)
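The same size-8 example can be checked classically with numpy’s FFT, scaled so it matches the unitary quantum version (sign and normalization conventions for the QFT vary; numpy’s convention is used here for illustration):

```python
import numpy as np

N = 8  # a QFT of size 8, i.e. acting on a 3-bit superposition

# Input: weights rotating two eighths of a turn per sample (frequency 2),
# normalized so the squared lengths sum to 1.
k = np.arange(N)
state = np.exp(2j * np.pi * 2 * k / N) / np.sqrt(N)

# The QFT is the unitary (norm-preserving) discrete Fourier transform.
out = np.fft.fft(state) / np.sqrt(N)

# All of the weight ends up on state 2 (binary 010).
assert np.isclose(abs(out[2]), 1.0)
assert np.isclose(np.sum(np.abs(out)**2), 1.0)
```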

So the QFT is amazing, but don’t get too excited. Remember the caveats. The input is huge, and the output is sampled. Even given the magical ability to directly write to the superposition, it would still take time linear in the (exponentially large) number of weights to enter all the values. Then, because the output can only be sampled, we’d have to hope there were only a few non-negligible frequencies present while repeating the procedure hundreds of times just to get a rough statistical idea of what the output spectrum looked like.

The real benefit of the QFT is as an intermediate step in some other quantum computation, like factoring a number. Then the earlier operations can computationally create the input, and the later operations can distill the result into some more useful answer.

Thus the QFT is an example of quantum computers being absurdly fast and, *at the same time*, an example of why you can’t always use those speedups.

### Summary

Quantum computers can do huge matrix multiplications very quickly. But the input is the computer’s own superposition. And the only operations you can do are rotations or combinations thereof. And the output can’t be read, only sampled.

I go into more detail about the basics in my post on Grover’s Quantum Search Algorithm. For really in-depth information, the textbook Quantum Computation and Quantum Information is available online as a PDF.
