
The resource cost of large scale quantum computing

Reference Type: Thesis

Fellous-Asiani, Marco. 2022. “The Resource Cost of Large Scale Quantum Computing.” Doctorate, Grenoble, France: Université Grenoble Alpes. https://theses.hal.science/tel-03579666.

This thesis addresses the scalability of fault-tolerant quantum computing from the standpoint of estimating the resources needed to build such computers. Now that the first prototypes of quantum computers exist, it is time to start making such estimates. What we call a resource is, in principle, very general: it could be the power, the energy, or even the total bandwidth allocated to the different qubits. In this thesis we focus mainly on the energetic cost of quantum computing, although most of the approaches used can be adapted to any other resource.

We first study the maximum accuracy a fault-tolerant quantum computer can achieve in the presence of scale-dependent noise, i.e., noise that increases with the number of qubits and physical gates in the computer. This regime may violate an assumption behind the central theorem of fault tolerance, the quantum threshold theorem. That theorem states that the accuracy of algorithms implemented on a quantum computer can be made arbitrarily high, provided the algorithms are protected by quantum error correction, enough physical elements (qubits and gates) are available, and the noise strength is below a certain threshold. Because this last assumption must hold regardless of the number of physical elements in the computer, scale-dependent noise can violate it (a simplified version of this argument is sketched below). When the scale-dependent noise can be expressed as a function of a resource, our estimates make it possible (i) to determine the maximum accuracy the computer can achieve with a fixed quantity of that resource, and hence the maximum size of the algorithms it can implement, which tells us whether scale-dependent noise is a real problem, and, conversely, (ii) to determine the minimum quantity of the resource needed to reach a given accuracy. Throughout the thesis, our calculations are based on the concatenated Steane error-correcting code, a theoretically well-documented construction that protects qubits against arbitrary errors and lends itself to analytical calculations.

In a second study, we generalize these approaches to estimate the resource cost of a computation in the most general case. By asking for the minimum amount of resources required to perform a computation, under the constraint that the algorithm returns a correct answer with a targeted probability, the entire architecture of the computer can be optimized to minimize the resources spent while still guaranteeing a correct answer with high probability (this formulation is sketched below). We apply this approach to a complete model of a fault-tolerant quantum computer based on superconducting qubits. Our results indicate that, for algorithms implemented on thousands of logical qubits, our method can reduce the energetic cost by a factor of 100 in regimes where, without optimization, the power consumption could exceed a gigawatt. This work illustrates that the energetic cost of quantum computing should be a criterion in its own right when evaluating the scaling potential of a given quantum computing technology. It also shows that optimizing the architecture of a quantum computer with interdisciplinary methods such as those we propose, combining algorithmic considerations, quantum physics, and engineering, can be a powerful tool that clearly improves the scaling potential of quantum computers.

Finally, we provide general guidelines on how to make fault-tolerant quantum computers energy efficient.
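As a rough illustration of the first study (not the thesis's exact noise model), the textbook threshold-theorem bound for a code concatenated k times, such as the concatenated Steane code, reads

\epsilon_L(k) \le p_{\mathrm{th}} \, (p / p_{\mathrm{th}})^{2^k},

where p is the physical error rate per element and p_th the threshold. For a fixed p < p_th, the logical error rate \epsilon_L(k) can be made arbitrarily small by increasing the concatenation level k. Under scale-dependent noise, p becomes a function p(N) of the number of physical elements N, and each level of Steane concatenation multiplies the qubit count by 7, so N grows roughly like 7^k per logical qubit (ignoring ancilla overheads). If p(N(k)) eventually exceeds p_th, adding levels stops helping, so there is a finite optimal k and hence a maximum achievable accuracy; once p(N) is expressed through a resource such as the available power, this becomes a bound on accuracy for a given resource budget, and vice versa.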
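The second study's optimization can be summarized, in simplified form, as a constrained minimization; the full model in the thesis (a complete superconducting-qubit architecture) involves many more parameters than suggested here. Writing x for the adjustable architectural parameters (for instance the number of concatenation levels or the power allocated to qubit control) and E(x) for the energy spent running the algorithm, the problem is

\min_x E(x) \quad \text{subject to} \quad P_{\mathrm{success}}(x) \ge P_{\mathrm{target}},

where P_target is the targeted probability of obtaining a correct answer. Because the success probability depends both on algorithmic quantities (circuit size, target accuracy) and on hardware-level noise, which itself may depend on the resources spent, solving this problem means optimizing the entire architecture at once; this joint optimization is what enables the factor-of-100 energy reductions mentioned above.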
