Achieving quantum advantage in CFD simulations within 2 years: Our roadmap
Co-authors for this post: Dr. Ljubomir Budinski (Quantum Algorithm Researcher), Dr. Ossi Niemimäki (Quantum Algorithm Researcher), and Dr. Roberto A. Zamora Zamora (Quantum Scientist)
Our research in quantum algorithms aims to harness the power of quantum computers for more accurate and efficient CFD simulations.
In this blog post, we will explain our goal to achieve quantum advantage in CFD simulations within the next 2 years and outline our roadmap for getting there.
Defining quantum advantage
Quantum advantage and quantum supremacy are terms often used interchangeably, but they refer to distinct concepts in the world of quantum computing. Quantum supremacy refers to a quantum computer solving tasks that are impossible for classical computers in any reasonable time, as demonstrated by Google in 2019 with their 53-qubit Sycamore processor.
On the other hand, quantum advantage refers to quantum computers efficiently solving tasks with practical applications, such as problems in physics or economics. While quantum advantage does not necessarily require commercial viability, the terms useful quantum advantage or commercially useful quantum advantage can be used for applications that provide commercial benefits.
The potential impact of quantum advantage in CFD
Achieving quantum advantage in CFD simulations can lead to significant improvements in various industries and applications, such as:
- Aerodynamics simulation for automotive and aerospace industries: Quantum computing can enable more accurate simulations of airfoil designs and vehicle aerodynamics, reducing the need for expensive wind tunnel testing.
- Weather and climate modeling: Quantum algorithms can help scale and speed up simulations that combine large-scale climate models with small-scale meteorological details, which is computationally intensive for classical computers.
- Geological modeling: Applications such as groundwater transport and coupled transport-chemical reaction modeling can benefit from the increased computational power of quantum computers.
Requirements for quantum hardware
To run meaningful large CFD simulations on quantum computers, we need to address several hardware requirements:
- Qubits: Between 50 and 100 qubits are needed for the initial proof-of-concept (PoC) simulations, with a roadmap towards larger-scale computers with 500, 1,000, or more qubits.
- Fidelities and error mitigation techniques: The hardware should allow for several hundred CNOT gates and have a credible roadmap towards several thousand CNOTs within a reasonable timeframe.
- Qubit topology: All-to-all connectivity is important for efficient transpilation of most quantum algorithms.
How do we justify these numbers?
As an example, let’s consider a quantum computing simulation of the advection-diffusion equation. Our flagship method, the quantum lattice Boltzmann method (QLBM), allows us to scale the size of the simulated lattice (number of grid points) exponentially with the number of qubits. To make the algorithm scale efficiently, we will need a linearly increasing number of ancilla qubits in addition to the working qubits. As a rough upper-limit estimate, for N working qubits there are N ancilla qubits, so the total number of qubits is 2N.
The exact number of grid points that we can simulate with 2N qubits depends on the type of lattice used. For a standard 1D lattice (D1Q2), the number of grid points is 2^N/2, since each lattice site hosts two distribution functions (a short counting sketch follows the list below). The major parts of one time step of a QLBM simulation are:
- Initial data encoding (if needed)
- Collision step
- Propagation step
- Macroscopic variables computation (if needed)
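To make the qubit counting above concrete, here is a minimal back-of-the-envelope sketch in Python. It simply restates the D1Q2 counting argument (2^N/2 grid points for N working qubits, plus the rough one-ancilla-per-working-qubit budget); the function name and the exact rounding are illustrative choices, not part of the algorithm.

```python
import math

def d1q2_qubit_estimate(grid_points: int) -> tuple[int, int]:
    """Back-of-the-envelope qubit count for a 1D D1Q2 lattice.

    Each lattice site hosts two distribution functions, so N working
    qubits address about 2**N / 2 grid points. As a rough upper limit
    we budget one ancilla per working qubit, i.e. 2N qubits in total.
    """
    working = math.ceil(math.log2(2 * grid_points))
    total = 2 * working  # working + ancilla qubits
    return working, total

print(d1q2_qubit_estimate(10**6))  # ~2^20 grid points -> (21, 42)
print(d1q2_qubit_estimate(10**9))  # ~2^30 grid points -> (31, 62)
```

These are the same 42- and 62-qubit totals that appear in the estimates below.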
In CFD, we can often start by evolving a zero-velocity distribution. Encoding is therefore, in some sense, less of a problem than it is in, for example, quantum linear solvers. In the case of an advection-diffusion equation simulation, we can for now assume that we can efficiently encode an initial state, and that the collision part is efficient and comes with an insignificant constant overhead. Moreover, at Quanscient we have recently been able to connect simulation time steps efficiently without full register measurements and re-encoding.
With these optimistic but justified assumptions, the most expensive step in such a simulation is the propagation step, which can be carried out very efficiently, as reported in our recent paper.
Let us thus consider the count of the expensive CNOT gates for the propagation step of a 1D advection-diffusion equation simulation, obtained by compiling our algorithm with the Qiskit compiler and transpiling to the IBM basis gate set, assuming full qubit connectivity.
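For readers who want to reproduce this kind of gate counting themselves, the sketch below shows the workflow on a plain textbook shift operator (a cascade of multi-controlled X gates) rather than on the optimized propagation circuit from our paper, so its CNOT counts come out much larger than the figures quoted below; only the counting procedure itself, transpiling to a CX-based basis with full connectivity and counting the two-qubit gates, is the point here.

```python
from qiskit import QuantumCircuit, transpile

def naive_cyclic_shift(num_qubits: int) -> QuantumCircuit:
    """Textbook streaming step: |x> -> |x + 1 mod 2**n>.

    Bit k flips exactly when all lower bits are 1, so we cascade
    multi-controlled X gates from the most significant bit down.
    """
    qc = QuantumCircuit(num_qubits)
    for target in range(num_qubits - 1, 0, -1):
        qc.mcx(list(range(target)), target)
    qc.x(0)
    return qc

# Transpile to an IBM-style basis assuming all-to-all connectivity
# (no coupling map given), then count the two-qubit gates.
compiled = transpile(
    naive_cyclic_shift(6),
    basis_gates=["cx", "rz", "sx", "x"],
    optimization_level=3,
)
print(compiled.count_ops().get("cx", 0))
```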
Let us consider the following grid sizes in terms of the number of grid points:
1×10^6 ≈ 2^20 and 1×10^9 ≈ 2^30.
Within this range we are already looking at very meaningful LBM simulations. For such a simulation, we obtain the following figures for the required qubits and the number of CNOTs per time step:
- Total number of qubits required: 42–62
- Number of CNOTs: 402–578
Between 1×10^9 (≈ 2^30) and 1×10^12 (≈ 2^40) grid points, we can expect to be pushing the boundaries of currently possible classical simulations. For this, one time step would require:
- Total number of qubits required: 62–82
- Number of CNOTs: 578–750
To simulate something meaningful, we will need to combine several time steps. The number of CNOTs per step is then multiplied by the number of time steps, so we would need to be able to execute some thousands of CNOTs with high fidelity. The number of available qubits is not a problem. If we measure and re-encode after every time step, we can already carry out large proofs of concept simulating several time steps with the best current devices, assuming efficient encoding.
This means that it may be possible to demonstrate quantum advantage for a simulation of a small number of timesteps in the near future.
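To put the time-step multiplication into numbers, here is a quick illustrative tally; the per-step CNOT figures are taken from the estimates above, while the step counts are arbitrary examples rather than simulation requirements.

```python
# Per-step CNOT figures from the estimates above; step counts are illustrative.
cnots_per_step = {"~1e6 grid points": 402, "~1e12 grid points": 750}
for label, per_step in cnots_per_step.items():
    for steps in (5, 10, 20):
        print(f"{label}, {steps} time steps -> ~{per_step * steps} CNOTs")
```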
However, to make this into an industrially useful simulation in itself, we must find ways to further optimize the algorithm and mitigate noise in hardware to efficiently simulate enough time steps.
Note that these numbers will change as we simulate different types of lattices. In 2D, if we consider the D2Q5 lattice, 2N qubits can host a grid of 2^N/5 points. However, in terms of the number of CNOTs, scaling to 2D or 3D adds only a small constant overhead to the propagation step.
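Under the same rough counting, a 2D D2Q5 estimate looks as follows; the one-ancilla-per-working-qubit budget and the 1000 × 1000 grid are again illustrative assumptions.

```python
import math

def d2q5_qubit_estimate(grid_points: int) -> tuple[int, int]:
    """Rough D2Q5 count: N working qubits host about 2**N / 5 grid points."""
    working = math.ceil(math.log2(5 * grid_points))
    return working, 2 * working  # working + ancilla qubits

print(d2q5_qubit_estimate(1000 * 1000))  # a 1000 x 1000 grid -> (23, 46)
```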
This can open up the possibility of solving fluid dynamics problems of meaningful size even in the near to medium term, within the noisy intermediate-scale quantum (NISQ) era.
Our 24-month roadmap to quantum advantage in CFD
Our roadmap for achieving quantum advantage in CFD simulations within the next 2 years includes the following milestones:
- Connecting more time steps efficiently to enable industrially relevant PoC simulations.
- Developing and testing 2D advection-diffusion equation (ADE) PoCs on real devices later this year.
- Implementing a 2D Navier-Stokes PoC on real devices, focusing on the most efficient ways of solving the nonlinearities in the collision part of the algorithm.
- Pushing the boundaries of what can be simulated with classical computing to demonstrate quantum advantage within 24 months.
Conclusion
Quantum advantage in CFD simulations has the potential to revolutionize various industries by providing more accurate and efficient simulations than ever before.
At Quanscient, we are committed to advancing research and development in quantum computing to achieve this ambitious goal within the next 2 years.
Through our innovative software and research, we strive to make a lasting impact on the field of computational fluid dynamics and beyond.