Classical and Quantum Complexity Theory

We've come across the term "quantum speedup" a lot in popular media, which often claims that quantum computers can solve, within seconds, problems that classical computers cannot solve at all. This post isn't very technical; it aims to skim over some basics of complexity theory so that we understand what a "quantum speedup" actually means.

Recently, researchers at Google and USTC have claimed to have achieved quantum supremacy: they devised problems that are intractable for classical computers to solve in a reasonable amount of time, but can be solved very fast on quantum computers. Both claims have been controversial and have given rise to much debate in the theoretical computer science community, including challenges to the veracity of the results! However, USTC has been on a roll: their initial Gaussian boson sampling experiment, based on Aaronson-Arkhipov, 2011, claimed supremacy with up to 70 detected photons, and this time they announced a new experiment with a 144-mode photonic circuit and up to 113 detected photons. Skeptics had pointed out that the initial experiment could have been faked by reproducing photon correlations only for small orders; see, e.g., Popova-Rubtsov, 2021. This time, though, USTC claims to measure up to 19-photon correlations. Their paper is available on arXiv: USTC group, 2021.

One common misconception about quantum computers is that they beat classical computers on all kinds of problems. This is not true: quantum supremacy experiments are usually devised around exotic problems that are very difficult for a classical computer to solve in a reasonable amount of time. In fact, at present, all quantum algorithms for practical computational problems can either be simulated classically or are outperformed by classical solutions. This is mainly due to the small size, decoherence, and error propagation of current quantum computers, which will inevitably improve in the future. We're just not there yet.

It therefore seems important to understand what complexity theory actually says about quantum and classical computation. Let's take a look at some of the basics now.

Basic Complexity Classes

The following classes are usually studied for "decision problems" - problems whose answers are 0 or 1. They are well known and well studied in theoretical computer science:

  • $P$: polynomial-time solvable by classical computers.
  • $NP$: polynomial-time verifiable by classical computers.
  • $BPP$: polynomial-time solvable with success probability at least $2/3$ by classical computers with access to random numbers.
  • $BQP$: polynomial-time solvable with success probability at least $2/3$ by quantum computers.
  • $QMA$: the quantum analogue of $NP$; polynomial-time verifiable by quantum computers.
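A side note on the $2/3$ in the definitions of $BPP$ and $BQP$: the constant is arbitrary, because repeating the algorithm and taking a majority vote drives the error down exponentially (a Chernoff bound). A minimal sketch in Python, where `noisy_decide` is a hypothetical stand-in for any randomized subroutine that is correct with probability $2/3$:

```python
import random
from collections import Counter

def noisy_decide(x, p_correct=2/3):
    """Hypothetical BPP-style subroutine: returns the true answer for
    input x with probability p_correct, and the wrong one otherwise."""
    truth = (x % 2 == 0)  # toy decision problem: is x even?
    return truth if random.random() < p_correct else not truth

def amplified_decide(x, runs=501):
    """Majority vote over independent runs; by a Chernoff bound the
    error probability drops exponentially in the number of runs."""
    votes = Counter(noisy_decide(x) for _ in range(runs))
    return votes[True] > votes[False]
```

With 501 repetitions, the majority answer is wrong only with negligible probability, which is why any constant above $1/2$ gives the same complexity class.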

Relations between the classes

It is known that:

  • $P\subseteq NP$, $P\subseteq BPP\subseteq BQP$.
  • $QMA$ contains all of $P$, $NP$, $BPP$, $BQP$.
  • If factoring is not in $P$, then $P\neq BQP$ (since factoring is in $BQP$ by Shor's algorithm).

Computer scientists strongly believe the following (though all of it is unproven):

  • $P\neq NP$
  • $P = BPP$
  • $P \neq BQP$
  • $\color{red}{NP\not\subseteq BQP}$
  • $BQP\neq QMA$ (implied by $NP\not\subseteq BQP$, since $NP\subseteq QMA$).

The fourth point ($NP$ is NOT a subset of $BQP$) is the crucial one to take home. Whenever popular media claims that quantum computers can solve all $NP$-complete problems, one should be extremely skeptical of that source, as the claim contradicts what computer scientists and physicists believe.

Quantum Speedups and Query Complexity

It is known that Shor's algorithm provides an exponential speedup, while Grover's algorithm gives only a quadratic one. For which problems can we expect a quantum speedup? How do we know which problems admit larger speedups?

There are some standard results that can help us answer these questions. First, we deal with quantum query complexity. In this model, we count the number of calls a quantum algorithm makes to an oracle (a black-box function), which acts on input and output registers as follows: $O\lvert x\rangle \lvert y\rangle = \lvert x\rangle \lvert y\oplus f(x)\rangle$. If $O$ were provided as a circuit instead of a black box, a classical algorithm might exploit the structure of the circuit to solve the problem faster. This is why we restrict ourselves to the query model, even though an algorithm with polynomial query complexity might still require exponential time to run.

Grover's algorithm, for example, makes $O(\sqrt N)$ calls to the oracle when searching for one bitstring out of $N=2^n$. This statement by itself says little about the runtime of Grover's algorithm, since that depends on the cost of each oracle call.
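A statevector simulation makes the query count concrete. The sketch below (hypothetical helper name, assuming `numpy`) implements each Grover iteration as one phase-oracle query followed by inversion about the mean, and runs the standard $\lfloor \frac{\pi}{4}\sqrt N\rfloor$ iterations:

```python
import numpy as np

def grover_search(f, n):
    """Statevector simulation of Grover search for the unique x with
    f(x) = 1; returns the most likely outcome and the query count."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    marks = np.array([(-1) ** f(x) for x in range(N)])
    queries = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(queries):
        state = marks * state                   # phase oracle: one query
        state = 2 * state.mean() - state        # inversion about the mean
    return int(np.argmax(np.abs(state))), queries

target = 11
found, queries = grover_search(lambda x: 1 if x == target else 0, n=6)
```

For $N = 64$ this uses only 6 oracle queries, compared to up to 64 classical evaluations - the quadratic gap in action.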

Example Problems (decision versions)

  • Deutsch-Jozsa Algorithm: Determine whether a function $f$, promised to be either constant or balanced, is constant.
  • Grover's Algorithm: Is $f(x)=0$ for every $x\in \{0,\ldots, 2^{n}-1\}$? Or is there some $x$ for which $f(x)=1$?
  • Hidden Subgroup Problem: Given $f:G\to X$ that hides a subgroup $H\leq G$, is $H$ trivial?
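For intuition, the Deutsch-Jozsa decision above can be simulated classically in a few lines: after $H^{\otimes n}$, a single phase-oracle query, and $H^{\otimes n}$ again, the amplitude of $\lvert 0\ldots 0\rangle$ is $1$ if $f$ is constant and $0$ if $f$ is balanced. A sketch with a hypothetical helper, assuming `numpy`:

```python
import numpy as np

def deutsch_jozsa_is_constant(f, n):
    """One-query Deutsch-Jozsa: the amplitude of |0...0> after
    H^n, phase oracle, H^n is 1 iff f is constant (0 iff balanced)."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))                   # H^n |0...0>
    state = state * np.array([(-1) ** f(x) for x in range(N)])  # one query
    amp0 = state.sum() / np.sqrt(N)                      # <0...0| H^n |state>
    return bool(np.isclose(abs(amp0), 1.0))

constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2               # parity is balanced
```

One quantum query decides with certainty, whereas a deterministic classical algorithm needs $2^{n-1}+1$ queries in the worst case.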

If ANY function $f$ is allowed without restrictions (as in Grover's algorithm), it is known that the quantum speedup can be at most polynomial. If $f$ is promised to have additional structure (as in Deutsch-Jozsa or the hidden subgroup problem), then the speedup can be exponential!

Lower Bounds

In fact, there are lower bounds on the best complexity improvements that can be obtained. The oldest such results are due to Nisan, 1991, Nisan-Szegedy, 1994, and Beals et al., 2001. They prove that:

If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same with $O(T^6)$ queries.

Further, if the answer to the problem doesn't change under any permutation of the input bits (i.e., $f$ is symmetric), then there's a classical algorithm that solves it with $O(T^2)$ queries.

The second result implies that Grover's algorithm's speedup is optimal.

More recently, a breakthrough result in computer science on the sensitivity conjecture by Huang, 2019 has led to a seminal result by Aaronson et al., 2020. They prove that:

If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same with $O(T^4)$ queries.

This bound is tight, due to Ambainis et al., 2017.

When can we get exponential speedup?

The algorithm needs to exploit special properties of the Boolean function $f$, as is the case for the Deutsch-Jozsa algorithm and the hidden subgroup problem (which underlies Shor's factoring algorithm). For example, in Deutsch-Jozsa, we need $f$ to be constant or balanced. For the hidden subgroup problem, $f$ needs to hide a subgroup $H$, i.e., for all $g_1, g_2\in G$, $f(g_1) = f(g_2) \Longleftrightarrow g_1H = g_2H$. Without such constraints, one can get at most a quartic speedup, as proved by the recent result of Aaronson et al.
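The hiding condition is easy to check numerically on a toy instance of period finding over the cyclic group $\mathbb{Z}_{12}$: $f(x) = 2^x \bmod 15$ has period $4$, so it hides the subgroup $H = \{0, 4, 8\}$, and the cosets of $H$ are exactly the residues mod $4$. A sketch with a hypothetical helper:

```python
def hides_subgroup(f, n, r):
    """Check the hiding condition on Z_n for H = <r>: f(g1) == f(g2)
    exactly when g1 and g2 lie in the same (additive) coset of H,
    i.e. when g1 == g2 (mod r)."""
    return all((f(g1) == f(g2)) == (g1 % r == g2 % r)
               for g1 in range(n) for g2 in range(n))

# f(x) = 2^x mod 15 has period 4: 2, 4, 8, 1, 2, 4, 8, 1, ...
f = lambda x: pow(2, x, 15)
assert hides_subgroup(f, n=12, r=4)
```

Shor's algorithm recovers exactly this period (the hidden subgroup) with polynomially many queries, which is the source of its exponential speedup over known classical factoring methods.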


There are many misconceptions among the general public about the efficiency of quantum algorithms and quantum computers. I have come across this while explaining quantum computing even to my family! They think of quantum computers as revolutionary machines that can beat giant supercomputers at everything they do by "going over all possibilities at once".

The reason I wrote this post is to emphasize that one needs to be skeptical about claims made in popular media, and to accept the current limitations of quantum computers while letting the field solve the problems it is good at solving. Thanks for reading! :)


Sayan Mukherjee
blueqat Inc.