Classical and Quantum Complexity Theory

Sayan Mukherjee

2021/07/14 12:42
#algorithm

Introduction

We've all come across the term "quantum speedup" in popular media, which claims that quantum computers can solve, within seconds, problems that classical computers cannot. This post isn't very technical; it aims to skim over some of the basics of complexity theory to make sure we understand what a "quantum speedup" actually means.

Recently, researchers at Google and USTC have claimed to have achieved quantum supremacy: they devised problems that are intractable for classical computers to solve in a reasonable amount of time, but can be solved very fast using quantum computers. Both claims have been very controversial and have given rise to a lot of debate in the theoretical computer science community, including claims about the veracity of these results! However, USTC has been on a roll lately: their initial Gaussian Boson Sampling experiment, based on Aaronson-Arkhipov, 2011, had achieved supremacy with up to 70 detected photons, but this time they announced a new experiment with a 144-mode photonic circuit and up to 113 detected photons. Skeptics had pointed out that the initial experiment could have been faked by reproducing photon correlations for small values; see, e.g., Popova-Rubtsov, 2021. This time, though, USTC claims to measure up to 19-photon correlations. Their paper is available on arXiv: USTC group, 2021.

One common misconception about quantum computers is that they beat classical computers on all kinds of problems. This claim is not true: the quantum supremacy experiments are deliberately devised around exotic problems that are very difficult for a classical computer to solve in a reasonable amount of time. In fact, at present, every quantum algorithm for a practical computational problem can either be simulated classically or is outperformed by a classical algorithm. This is mainly due to the small size, decoherence, and error propagation of current quantum computers, which will inevitably improve in the future. We're just not there yet.

It therefore seems important to understand what complexity theory actually says about quantum and classical computation. Let's take a look at some of the basics.

Basic Complexity Classes

The following classes are usually studied for "decision problems": problems whose answers are 0 or 1. They are very well-known and well-studied in theoretical computer science:

    • $P$: polynomial-time solvable by classical computers.
    • $NP$: polynomial-time verifiable by classical computers.
    • $BPP$: polynomial-time solvable with success probability at least $2/3$ by classical computers with access to random numbers.
    • $BQP$: polynomial-time solvable with success probability at least $2/3$ by quantum computers.
    • $QMA$: the $NP$-analogue of $BQP$; polynomial-time verifiable by quantum computers.

Relations between the classes

It is known that:

    • $P \subseteq NP$, and $P \subseteq BPP \subseteq BQP$.
    • $QMA$ contains all of $P$, $NP$, $BPP$, $BQP$.
    • If factoring is not in $P$, then $P \neq BQP$.

Computer scientists strongly believe (though none of this is proven) that:

    • $P \neq NP$
    • $P = BPP$
    • $P \neq BQP$
    • $\color{red}{NP \not\subseteq BQP}$
    • $BQP \neq QMA$ (implied by $P \neq NP$ or $P \neq BQP$).

The fourth point ($NP$ is NOT a subset of $BQP$) is crucial to take home. Whenever popular media claims that quantum computers can solve all $NP$-complete problems, one should be extremely skeptical of that source, as the claim contradicts what computer scientists and physicists believe.

Quantum Speedups and Query Complexity

It is known that Shor's algorithm provides an exponential speedup, while Grover's algorithm gives only a quadratic speedup. For which problems can we expect a quantum speedup? How do we know which problems admit better speedups?

There are some standard results that can help us answer these questions. First, we deal with Quantum Query Complexity. In this model, we measure the number of calls a quantum algorithm makes to an oracle (a black-box function), which operates on input and output registers as follows: $O\lvert x\rangle \lvert y\rangle = \lvert x\rangle \lvert y \oplus f(x)\rangle$. If $O$ were provided as a circuit instead of a black box, a classical algorithm could potentially analyze the structure of the circuit to solve the problem faster. This is why we restrict ourselves to the model of query complexity, even though an algorithm with polynomial query complexity might still require exponential time to run.
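
To make the query model concrete, here is a minimal classical sketch in Python (the helper names are mine, purely illustrative): the algorithm may only learn about $f$ by calling it, so we wrap $f$ in a counter and charge one unit per call.

```python
# Minimal sketch of the query model: the only access to f is through the
# oracle, so we count how many times it is called.

def make_counting_oracle(f):
    """Wrap a boolean function f so that every query is counted."""
    count = {"queries": 0}

    def oracle(x):
        count["queries"] += 1
        return f(x)

    return oracle, count

def classical_search(oracle, n):
    """Exhaustive search: up to N = 2^n oracle queries in the worst case."""
    for x in range(2**n):
        if oracle(x) == 1:
            return x
    return None

oracle, count = make_counting_oracle(lambda x: int(x == 777))
print(classical_search(oracle, n=10))   # 777
print(count["queries"])                 # 778 queries; worst case is 2^10 = 1024
```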

Grover's algorithm, for example, makes $O(\sqrt{N})$ calls to the oracle when it has to search for one bitstring out of $N = 2^n$ bitstrings. This statement does not say much about the runtime of Grover's algorithm, which depends on the complexity of each oracle call.
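
To see the quadratic saving numerically, here is a small NumPy statevector simulation of Grover's algorithm (my own toy sketch, not an optimized implementation). Each iteration spends exactly one oracle call, implemented as a phase flip on the marked index, followed by inversion about the mean; after roughly $(\pi/4)\sqrt{N}$ iterations the marked state is measured with probability close to 1.

```python
import numpy as np

# Toy statevector simulation of Grover search: each iteration costs one
# oracle call, and ~ (pi/4) * sqrt(N) iterations suffice.
n = 10
N = 2**n
marked = 777                            # index of the single marked bitstring

state = np.full(N, 1 / np.sqrt(N))      # uniform superposition over N states

iterations = int(np.pi / 4 * np.sqrt(N))
for _ in range(iterations):
    state[marked] *= -1                 # oracle: phase-flip the marked state
    state = 2 * state.mean() - state    # diffusion: inversion about the mean

print(iterations)                       # 25 oracle calls, vs. up to 1024 classically
print(state[marked] ** 2)               # success probability close to 1
```

With $n = 10$, Grover needs about 25 oracle calls where the classical exhaustive search above may need all 1024.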

Example Problems (decision versions)

    • Deutsch-Jozsa Algorithm: Determine whether a function $f$ is balanced or constantly $0$ (see the sketch after this list).
    • Grover's Algorithm: Is $f(x) = 0$ for every $x \in \{0, \ldots, 2^n - 1\}$? Or is there some $x$ for which $f(x) = 1$?
    • Hidden Subgroup Problem: Given $f : G \to X$ that hides a subgroup $H \leq G$, is $H$ trivial?
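
For intuition, here is a back-of-the-envelope classical simulation of the Deutsch-Jozsa test (a sketch under the promise that $f$ is constant or balanced). It uses the standard fact that after Hadamards on all qubits, one phase-oracle query, and Hadamards again, the amplitude of $\lvert 0\cdots 0\rangle$ is $\frac{1}{2^n}\sum_x (-1)^{f(x)}$: $\pm 1$ if $f$ is constant and $0$ if $f$ is balanced, so a single quantum query decides the promise problem.

```python
n = 4
N = 2**n

def deutsch_jozsa(f):
    """Classically evaluate the amplitude of |0...0> after the DJ circuit.
    The single superposed oracle query gives amp0 = (1/N) * sum_x (-1)^f(x),
    which is +-1 for a constant f and 0 for a balanced f."""
    amp0 = sum((-1) ** f(x) for x in range(N)) / N
    return "constant" if abs(amp0) > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0))        # constant
print(deutsch_jozsa(lambda x: x & 1))    # balanced: half the inputs map to 1
```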

If ANY function $f$ is allowed without restrictions (as in Grover's algorithm), it is known that the quantum speedup can be at most polynomial. If $f$ satisfies additional promises (as in Deutsch-Jozsa or the Hidden Subgroup Problem), then the speedup can be exponential!

Lower Bounds

In fact, there are lower bounds on the best complexity improvements that can be obtained. The oldest such results are due to Nisan, 1991, Nisan-Szegedy, 1994, and Beals et al., 2001. They prove that:

If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same problem with $O(T^6)$ queries.

Further, if the answer to the problem doesn't change under any permutation of the input bits, then there's a classical algorithm that solves it with $O(T^2)$ queries.

The second result implies that Grover's speedup is optimal: unstructured search is invariant under permutations of the input bits and classically requires $\Omega(N)$ queries, so any quantum algorithm needs $\Omega(\sqrt{N})$ queries, which Grover's algorithm achieves.

More recently, there has been a breakthrough result in computer science on the sensitivity conjecture by Huang, 2019, which has led to a seminal result by Aaronson et al., 2020. They prove that:

If there's a quantum algorithm that solves a decision problem for a general $f$ with $T$ queries, then there's a classical algorithm that solves the same problem with $O(T^4)$ queries.

This bound is tight due to Ambainis et al., 2017.

When can we get exponential speedup?

The algorithm needs to be able to exploit special properties of the Boolean function $f$, as is the case for the Deutsch-Jozsa algorithm and the Hidden Subgroup Problem (which underlies Shor's factoring algorithm). For example, in Deutsch-Jozsa, we need $f$ to be constant or balanced. For the Hidden Subgroup Problem, $f$ needs to hide a subgroup $H$, i.e. for all $g_1, g_2 \in G$, $f(g_1) = f(g_2) \Longleftrightarrow g_1 H = g_2 H$. Without these constraints, one can only get up to a quartic speedup, as proved by the latest result of Aaronson et al.
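
As a tiny illustration of the hiding condition (not Shor's algorithm itself, just a classical check with arbitrarily chosen numbers), the modular exponentiation $f(x) = a^x \bmod M$ hides the subgroup $H = r\mathbb{Z}$ of $G = \mathbb{Z}$, where $r$ is the period of $f$; finding $r$ is exactly the structure Shor's algorithm exploits to factor $M$.

```python
# f(x) = a^x mod M hides the subgroup H = rZ of G = Z, where r is the
# period of f. The values a = 7, M = 15 are arbitrary small examples.
a, M = 7, 15
values = [pow(a, x, M) for x in range(12)]
print(values)            # [1, 7, 4, 13, 1, 7, 4, 13, ...] -> period r = 4

r = values[1:].index(values[0]) + 1
print(r)                 # 4; knowing r is the key step in factoring M

# Verify the hiding property: f(g1) == f(g2)  <=>  g1 - g2 is a multiple of r
assert all((values[g1] == values[g2]) == ((g1 - g2) % r == 0)
           for g1 in range(12) for g2 in range(12))
```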

Conclusion

There is a lot of misconception among the general public about the efficiency of quantum algorithms and quantum computers. I have come across this while explaining quantum computing even to my family! They think of quantum computers as revolutionary machines that beat giant supercomputers at everything they do by "going over all possibilities at once".

My reason for writing this post is to emphasize that one needs to be skeptical about claims made in popular media, and to embrace the current limitations of quantum computers while letting the field solve the problems it is good at solving. Thanks for reading! :)
