Classical and Quantum Complexity Theory
Introduction
We've come across the term "quantum speedup" a lot in popular media, with claims that quantum computers can solve problems within seconds that classical computers cannot solve at all. This post isn't very technical; it aims to skim over some basics of complexity theory so that we understand what a "quantum speedup" actually means.
Recently, researchers at Google and USTC have claimed to have achieved quantum supremacy: they devised problems that are intractable for classical computers to solve in a reasonable amount of time but can be solved very fast using quantum computers. Both claims have been very controversial and have given rise to a lot of debate in the theoretical computer science community, including about the veracity of the results! However, USTC has been on a roll lately: their initial Gaussian boson sampling experiment, based on Aaronson-Arkhipov, 2011, achieved supremacy with up to 70 detected photons, but this time they announced a new experiment with a 144-mode photonic circuit and up to 113 detected photons. Skeptics had pointed out that the initial experiment could have been faked by reproducing photon correlations only for small orders; see, e.g., Popova-Rubtsov, 2021. This time, though, USTC claims to measure up to 19-photon correlations. Their paper is available on arXiv: USTC group, 2021.
One common misconception about quantum computers is that they can beat classical computers at all kinds of problems. This is not true: quantum supremacy experiments are usually devised around exotic problems chosen precisely because they are very difficult for a classical computer to solve in a reasonable amount of time. In fact, every quantum algorithm for a practical computational problem can today either be simulated classically or is outperformed by the best classical solutions. This is mainly due to the small size, decoherence, and error propagation of current quantum computers, which will inevitably improve in the future. We're just not there yet.
And therefore, it seems important to understand what complexity theory actually says about quantum and classical computation. Let's take a look at some of the basics now.
Basic Complexity Classes
The following classes are usually studied for "decision problems" - problems whose answers are 0 or 1. They are very well known and well studied in theoretical computer science:
- $P$: polynomial-time solvable by classical computers.
- $NP$: polynomial-time verifiable by classical computers.
- $BPP$: polynomial-time solvable, with success probability at least $2/3$, by classical computers with access to random numbers.
- $BQP$: polynomial-time solvable, with success probability at least $2/3$, by quantum computers.
- $QMA$: the "NP"-analogue of "BQP"; polynomial-time verifiable by quantum computers.
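To pin down one of these formally: $BQP$ consists of the languages decided, with bounded error, by quantum circuits. In symbols (the presentation varies slightly between textbooks):

$$ L \in BQP \iff \exists\, \{C_n\} : \begin{cases} x \in L \implies \Pr[C_{|x|}(x) = 1] \geq 2/3, \\ x \notin L \implies \Pr[C_{|x|}(x) = 1] \leq 1/3, \end{cases} $$

where $\{C_n\}$ is a polynomial-time uniform family of polynomial-size quantum circuits. The constant $2/3$ is not special: any constant strictly between $1/2$ and $1$ gives the same class, since we can repeat the computation and take a majority vote.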
Relations between the classes
It is known that:
- $P \subseteq NP$.
- $P \subseteq BPP \subseteq BQP$.
- $QMA$ contains all of $P$, $NP$, $BPP$, $BQP$.
- If factoring is not in $P$, then $P \neq BQP$ (since factoring, suitably phrased as a decision problem, is in $BQP$ by Shor's algorithm).
Computer scientists strongly believe all of the following, though none of it is proven:
- $P \neq NP$
- $P = BPP$
- $P \neq BQP$
- $\color{red}{NP \not\subseteq BQP}$
- $BQP \neq QMA$ (implied by $P \neq NP$ or $P \neq BQP$).
The fourth point (the one in red) is the most relevant to popular claims: it says that quantum computers are not expected to solve $NP$-complete problems in polynomial time.
Quantum Speedups and Query Complexity
It is known that Shor's algorithm provides an exponential speedup over the best known classical algorithms, while Grover's algorithm gives only a quadratic speedup. For which problems can we expect a quantum speedup? And how do we know how large a speedup a given problem admits?
There are some standard results that can help us answer these questions.
First, we deal with Quantum Query Complexity. In this model, we measure the number of calls a quantum algorithm makes to an oracle (a black-box function $f$), which operates on input and output registers as follows:

$$ O_f\,|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle. $$

Grover's algorithm, for example, runs in $O(\sqrt{N})$ oracle queries when searching among $N = 2^n$ possible inputs, whereas any classical algorithm needs $\Omega(N)$ queries in the worst case.
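To make the query model concrete, here is a minimal NumPy sketch of Grover's algorithm (illustrative code of my own, not from any quantum library; the oracle is modeled as a sign flip on the marked item's amplitude, the phase-oracle view of the $O_f$ above):

```python
import numpy as np

n = 4            # number of input qubits
N = 2 ** n       # size of the search space
marked = 11      # the unique x with f(x) = 1 (an arbitrary choice)

# Uniform superposition over all N basis states
state = np.full(N, 1 / np.sqrt(N))

def oracle(state):
    """One query to the phase oracle: |x> -> (-1)^{f(x)} |x>."""
    state = state.copy()
    state[marked] *= -1
    return state

def diffusion(state):
    """Grover diffusion operator: inversion about the mean amplitude."""
    return 2 * state.mean() - state

# About (pi/4) * sqrt(N) iterations are optimal for a single marked item
iterations = int(np.pi / 4 * np.sqrt(N))
for _ in range(iterations):
    state = diffusion(oracle(state))

print(f"oracle queries made: {iterations}")
print(f"probability of measuring the marked item: {state[marked] ** 2:.3f}")
```

With $n = 4$ this makes only 3 oracle queries and still finds the marked item with probability about $0.96$; a classical algorithm may need all $16$ queries in the worst case.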
Example Problems (decision versions)
- Deutsch-Jozsa Algorithm: Determine if a function $f$ is balanced or constantly $0$ (see the sketch after this list).
- Grover's Algorithm: Is $f(x) = 0$ for every $x \in \{0, \ldots, 2^n - 1\}$? Or is there some $x$ for which $f(x) = 1$?
- Hidden Subgroup Problem: Given $f: G \to X$ that hides a subgroup $H \leq G$, is $H$ trivial?
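For the Deutsch-Jozsa problem, a single quantum query suffices, while a deterministic classical algorithm needs $2^{n-1} + 1$ queries in the worst case. Below is a small NumPy sketch of the standard constant-vs-balanced version (again my own illustrative code): after one phase-oracle query and a Hadamard transform, the amplitude of $|0\ldots 0\rangle$ equals $\frac{1}{N}\sum_x (-1)^{f(x)}$, which is $\pm 1$ for constant $f$ and $0$ for balanced $f$.

```python
import numpy as np

def hadamard_transform(state):
    """Apply H^(tensor n) to a length-2^n state vector
    via the fast Walsh-Hadamard transform."""
    state = state.copy()
    h = 1
    while h < len(state):
        for i in range(0, len(state), 2 * h):
            a = state[i:i + h].copy()
            b = state[i + h:i + 2 * h].copy()
            state[i:i + h] = a + b
            state[i + h:i + 2 * h] = a - b
        h *= 2
    return state / np.sqrt(len(state))

def deutsch_jozsa(f, n):
    """Decide 'constant vs. balanced' with ONE query to the phase oracle."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))                          # H^n |0...0>
    state = state * np.array([(-1) ** f(x) for x in range(N)])  # one oracle query
    state = hadamard_transform(state)
    # amplitude of |0...0> is +-1 if f is constant, 0 if f is balanced
    return "constant" if abs(state[0]) > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 4))       # prints "constant"
print(deutsch_jozsa(lambda x: x & 1, 4))   # prints "balanced"
```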
If ANY function $f$ is allowed as the black box, so that the problem is completely unstructured, then, as the lower bounds below show, only polynomial speedups are possible.

If $f$ is instead promised to have structure, as in the hidden subgroup problem underlying Shor's algorithm, exponential speedups become possible.
Lower Bounds
In fact, there are lower bounds on the best complexity improvements that can be obtained. The oldest such results are due to Nisan, 1991, Nisan-Szegedy, 1994, and Beals et al., 2001. They prove that:
If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same problem with $O(T^6)$ queries.
Further, if the answer to the problem doesn't change under any permutation of the input bits, then there's a classical algorithm that solves it with $O(T^2)$ queries.
The second result implies that Grover's quadratic speedup is optimal; the short argument below spells this out.
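Grover's problem is computing the OR of the $N$ input bits, which is invariant under permutations of the bits, so the $O(T^2)$ bound applies. Writing $Q$ for quantum and $R$ for classical randomized query complexity:

$$ Q(\mathrm{OR}_N) = T \implies R(\mathrm{OR}_N) = O(T^2), \qquad R(\mathrm{OR}_N) = \Omega(N) \implies T = \Omega(\sqrt{N}). $$

So no quantum algorithm can search an unstructured list of $N$ items with asymptotically fewer than $\sqrt{N}$ queries, which is exactly what Grover's algorithm uses.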
More recently, there has been a breakthrough result in computer science on the sensitivity conjecture by Huang, 2019, which led to a seminal result by Aaronson et al., 2020. They prove that:
If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same problem with $O(T^4)$ queries.
This bound is tight, by a matching example due to Ambainis et al., 2017.
When can we get exponential speedup?
The algorithm needs to be able to exploit special properties of the Boolean function $f$. The lower bounds above apply to arbitrary $f$; exponential speedups are only possible for promise problems, where $f$ is guaranteed to have structure, such as the periodicity exploited by Shor's algorithm or the hidden-subgroup structure mentioned above.
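A clean provable example in the query model is Simon's problem, a hidden subgroup problem over $\mathbb{Z}_2^n$: given $f$ with the promise that $f(x) = f(y)$ iff $y = x \oplus s$ for a hidden string $s$, decide whether $s = 0$. Its quantum and classical query complexities are separated exponentially:

$$ Q(\text{Simon}) = O(n), \qquad R(\text{Simon}) = \Omega(2^{n/2}). $$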
Conclusion
There is a lot of misconception among the general public about the efficiency of quantum algorithms and quantum computers. I have come across this while explaining quantum computing even to my family! They think of quantum computers as revolutionary machines that can beat giant supercomputers at everything they do by "going over all possibilities at once".
The reason I wrote this post is to emphasize that one needs to be skeptical of claims made in popular media, and to embrace the current limitations of quantum computers while letting the field solve the problems it is good at solving. Thanks for reading! :)