Hybrid quantum physics-informed neural networks for simulating computational fluid dynamics in complex shapes (2024)

Corresponding author, e-mail: alexey@melnikov.info. Please check the published version, which includes all the latest additions and corrections: Mach. Learn.: Sci. Technol. 5:025045, 2024, DOI: 10.1088/2632-2153/ad43b2.

Alexandr Sedykh (Terra Quantum AG, 9000 St. Gallen, Switzerland), Maninadh Podapaka (Evonik Operations GmbH, 63450 Hanau-Wolfgang, Germany), Asel Sagingalieva (Terra Quantum AG, 9000 St. Gallen, Switzerland), Karan Pinto (Terra Quantum AG, 9000 St. Gallen, Switzerland), Markus Pflitsch (Terra Quantum AG, 9000 St. Gallen, Switzerland), Alexey Melnikov (Terra Quantum AG, 9000 St. Gallen, Switzerland)

Abstract

Finding the distribution of the velocities and pressures of a fluid by solving the Navier-Stokes equations is a principal task in the chemical, energy, and pharmaceutical industries, as well as in mechanical engineering and the design of pipeline systems. With existing solvers, such as OpenFOAM and Ansys, simulations of fluid dynamics in intricate geometries are computationally expensive and require re-simulation whenever the geometric parameters or the initial and boundary conditions are altered. Physics-informed neural networks are a promising tool for simulating fluid flows in complex geometries, as they can adapt to changes in the geometry and mesh definitions, allowing for generalization across fluid parameters and transfer learning across different shapes. We present a hybrid quantum physics-informed neural network that simulates laminar fluid flows in 3D $Y$-shaped mixers. Our approach combines the expressive power of a quantum model with the flexibility of a physics-informed neural network, resulting in a 21% higher accuracy compared to a purely classical neural network. Our findings highlight the potential of machine learning approaches, and in particular hybrid quantum physics-informed neural networks, for complex shape optimization tasks in computational fluid dynamics. By improving the accuracy of fluid simulations in complex geometries, our research using hybrid quantum models contributes to the development of more efficient and reliable fluid dynamics solvers.

I Introduction

Computational fluid dynamics (CFD) solvers are primarily used to find the distribution of the velocity vector, $\bm{v}$, and pressure, $p$, of a fluid (or several fluids) given the initial conditions (e.g., initial velocity profile) and the geometrical domain in which the fluid flows [cfd_review, Anderson1995ComputationalFD]. To do this, it is necessary to solve a system of differential equations [Simmons1972DifferentialEW] called the Navier-Stokes (NS) equations that govern the fluid flow [Jameson2009MeshlessMF]. A well-established approach is to use numerical CFD solvers from several vendors, as well as publicly accessible alternatives, such as OpenFOAM [openfoam] or Ansys [ansys]. These solvers discretize a given fluid volume into several small parts known as cells [Unstructured], where it is easier to get an approximate solution, and then join the solutions of all the cells to get a complete distribution of the pressure and velocity over the entire geometrical domain.

While this is a rather crude explanation of how CFD solvers work, discretizing a large domain into smaller pieces accurately captures one of their main working principles [Mari2013voFoamA]. The runtime of the computation and the accuracy of the solution both sensitively depend on the fineness of the discretization, with finer grids taking longer but giving more accurate solutions. Furthermore, any change to the geometrical parameters necessitates the creation of a new mesh and a new simulation. This process consumes both time and resources, since one has to remesh and rerun the simulation every time a geometrical parameter is altered [pinn_ns_review].

We propose a workflow employing physics-informed neural networks (PINNs) [raissi2019physics] to avoid the need to restart the simulation from scratch whenever a geometrical property is changed. A PINN is a promising new tool for solving all kinds of parameterized partial differential equations (PDEs) [pinn_ns_review], as it does not require many prior assumptions, linearization, or local time-stepping. One defines an architecture of the neural network (number of neurons, layers, etc.) and then embeds physical laws and boundary conditions into it by constructing an appropriate loss function, so the original task is recast as an optimization problem.

For a classical solver, in the case of a parameterized geometric domain problem, getting accurate predictions for new modified shapes requires a complete program restart, even if the geometry has changed only slightly. In the case of a PINN, this difficulty can be overcome with the transfer learning method [Transfer_Learning] (Sec. III.1), which allows a model previously trained on some geometry to be trained on a slightly modified geometry without a complete reset.

Also, using a trained PINN, it is easy to obtain a solution for other parameters of the PDE (e.g., the kinematic viscosity in the NS equation [Jameson2009MeshlessMF], the thermal conductivity in the heat equation, etc.) with no additional training or restart of the neural network, whereas with traditional solvers such restarts cannot be avoided.

One of the features that makes PINNs appealing is that they suffer less from the curse of dimensionality. Finite discretization of a $d$-dimensional cube with $N$ points along each axis requires $N^d$ points for a traditional solver. In other words, the complexity of the problem grows exponentially as the dimension $d$ increases. Using a neural network, however, one can define an $\mathbb{R}^d \rightarrow \mathbb{R}$ mapping (in the case of just one target feature) with some weight parameters. Research on the topic suggests that the number of weights, and hence the complexity of the problem, in such neural networks grows only polynomially with the input dimension $d$ [hutzenthaler2020proof, grohs2018proof]. This theoretical foundation alone allows PINNs to be a competitive alternative to solvers.

It is worth noting that although a PINN does not require $N^d$ points for inference, it does require many points for training. There exist a variety of sampling methods, such as Latin hypercube sampling [stein1987large], Sobol sequences [sobol2011construction], etc., which can be used to improve a PINN's convergence during training [zubov2021neuralpde]. However, a simple static grid of points is used in this work for the purpose of simplicity.
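To make the difference between these sampling strategies concrete, the following minimal sketch generates a static grid, a Latin hypercube sample, and a Sobol sample over a unit cube. NumPy and SciPy's quasi-Monte Carlo module are assumed here purely for illustration; they are not necessarily the tooling used in this work.

```python
# A minimal sketch of collocation-point sampling for a 3D domain (illustrative only).
import numpy as np
from scipy.stats import qmc

d = 3  # spatial dimension (x, y, z)

# Static uniform grid, as used in this work: 16 points per axis -> 16^3 points.
axis = np.linspace(0.0, 1.0, 16)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, d)

# Latin hypercube sampling: stratified coverage of the unit cube.
lhs = qmc.LatinHypercube(d=d, seed=0).random(n=4096)

# Sobol sequence: low-discrepancy points, sample sizes are powers of two.
sobol = qmc.Sobol(d=d, seed=0).random_base2(m=12)  # 2^12 = 4096 points

print(grid.shape, lhs.shape, sobol.shape)  # (4096, 3) (4096, 3) (4096, 3)
```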

Classical machine learning can benefit substantially from quantum technologies. In [Gaitan2020], quantum computing is used in a similar problem setting. The performance of current classical models is constrained by high computing requirements. Quantum computing models can improve the learning process of existing classical models [dunjko2018machine, qml_review_2023, Neven2012QBoostLS, PhysRevLett.113.130503, saggio2021experimental, kordzanganeh2023parallel, kurkin2023forecasting], allowing for better target function prediction accuracy with fewer iterations [asel1]. In many industries, including the pharmaceutical [sag_hyb_2022, fedorov], aerospace [rainjonneau2023quantum], automotive [asel2], logistics [haboury2023supervised], and financial [Alcazar.Perdomo-Ortiz.2020, Coyle.Kashefi.2021, Pistoia.Yalovetzky.2021wtn, Emmanoulopoulos.Dimoska.2022n6o, Cherrat.Pistoia.2023] sectors, quantum technologies can provide unique advantages over classical computing. Many traditionally important machine learning domains may also benefit from quantum technologies, e.g., image processing [senokosov2024quantum, Li.Wang.2022, naumov2023tetra, Riaz.Hopkins.2023] and natural language processing [Hong.Xiao.2022, Lorenz.Coecke.20218h, Coecke.Toumi.2020, Meichanetzidis.Coecke.202020f]. Solving nonlinear differential equations is also an application area for quantum algorithms that use differentiable quantum circuits [quantum_expressivity, paine2023physicsinformed] and quantum kernels [PhysRevA.107.032428].

Recent developments in automatic differentiation enable us to compute exact derivatives of any order of a PINN, so there is no need to use finite differences or any other approximate differentiation techniques. It therefore seems that we do not require a discretized mesh over the computational domain. However, we still need a collection of points from the problem's domain to train and evaluate a PINN. For a PINN to provide an accurate solution for a fluid dynamics problem, it is important to have high expressivity (the ability to learn solutions for a large variety of, possibly complex, problems). Fortunately, expressivity is a known strength of quantum computers [mo_2022, schuld2021effect, schuld2020circuit]. Furthermore, quantum circuits are differentiable, meaning their derivatives can be calculated analytically, which is essential for noisy intermediate-scale quantum devices.

In this article, we propose a hybrid quantum PINN (HQPINN), shown in Fig. 1, to solve the NS equations for a steady flow in a 3D $Y$-shaped mixer. The general principles and loss function of the PINN workflow are described in Sec. II. The problem description, including the geometrical details, is presented in Sec. III, while in Sec. III.0.5 and Sec. III.2 we describe the classical and hybrid PINNs in detail. Sec. III.0.1 explains the intricacies of the PINN's training process and the simulation results. A transfer learning approach, applied to PINNs, is presented in Sec. III.1. Conclusions and further plans are described in Sec. IV.

[Figure 1: architecture of the hybrid quantum physics-informed neural network (see Sec. III.0.5 and Sec. III.2).]

II Physics-informed neural networks for solving partial differential equations

PINNs were originally introduced in [raissi2019physics]. The main idea is to use a neural network, usually a feedforward neural network such as a multilayer perceptron, as a trial function for a PDE's solution. Let us consider an abstract PDE:

$$\mathcal{D}[f(\bm{r},t);\lambda] = 0, \qquad (1)$$

where $\Omega \subset \mathbb{R}^d$ is the computational domain, $\bm{r} \in \Omega$ is a coordinate vector, $t \in \mathbb{R}$ is time, $\mathcal{D}$ is a nonlinear differential operator with $\lambda$ standing in for the physical parameters of the fluid, and $f(\bm{r},t)$ is a solution function.

Let us consider a neural network $u(\bm{r},t)$ that takes coordinates, $\bm{r}$, and time, $t$, as input and yields some real value (e.g., the pressure of a liquid at this coordinate at this particular moment).

We can evaluate $u(\bm{r},t)$ at any point in the computational domain via a forward pass and compute its derivatives (of any order) $\partial_t^n u(\bm{r},t)$, $\partial_{\bm{r}}^n u(\bm{r},t)$ through backpropagation [rumelhart1986learning]. Therefore, we can substitute $f(\bm{r},t) = u(\bm{r},t)$ and try to learn the correct solution for the PDE via common machine learning gradient optimization methods [gradient] (e.g., gradient descent).
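The sketch below shows how such input derivatives are obtained in practice with reverse-mode automatic differentiation; PyTorch is assumed here purely for illustration, and the toy network stands in for $u(\bm{r},t)$.

```python
# Exact input derivatives of a network u(r, t) via automatic differentiation.
import torch

u_net = torch.nn.Sequential(  # toy stand-in for u(r, t): inputs (x, y, z, t)
    torch.nn.Linear(4, 64), torch.nn.SiLU(), torch.nn.Linear(64, 1)
)

coords = torch.rand(1024, 4, requires_grad=True)  # columns: x, y, z, t
u = u_net(coords)

# First derivatives du/dx, du/dy, du/dz, du/dt in a single backward pass;
# create_graph=True keeps the graph so higher-order derivatives can follow.
du = torch.autograd.grad(u, coords, grad_outputs=torch.ones_like(u),
                         create_graph=True)[0]
u_t = du[:, 3:4]   # du/dt
u_x = du[:, 0:1]   # du/dx

# Second derivative d^2u/dx^2, differentiating du/dx once more.
u_xx = torch.autograd.grad(u_x, coords, grad_outputs=torch.ones_like(u_x),
                           create_graph=True)[0][:, 0:1]
```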

This approach is inspired firstly by the ability to calculate the exact derivatives of a neural network via automatic differentiation [autodiffPaper] and secondly by neural networks being universal function approximators [Hornik1989MultilayerFN].

The loss, $\mathcal{L}$, that the PINN tries to minimize is defined as

$$\mathcal{L} = \mathcal{L}_{\text{PDE}} + \mathcal{L}_{\text{BC}}, \qquad (2)$$

where $\mathcal{L}_{\text{BC}}$ is the boundary condition loss and $\mathcal{L}_{\text{PDE}}$ is the partial differential equation loss.

The boundary condition loss is responsible for satisfying the boundary conditions of the problem (e.g., a fixed pressure on the outlet of a pipe). For any field value, $u$, let us consider a Dirichlet (fixed-type) boundary condition [greenshields2022notes]

$$u(\bm{r},t)\big|_{\bm{r} \in B} = u_0(\bm{r},t), \qquad (3)$$

where $u_0(\bm{r},t)$ is a boundary condition and $B \subset \mathbb{R}^d$ is the region where the boundary condition is applied.

If $u(\bm{r},t)$ is a neural network function (see Sec. II), the boundary condition loss is calculated in a mean-squared error (MSE) manner:

$$\mathcal{L}_{\text{BC}} = \left\langle \big(u(\bm{r},t) - u_0(\bm{r},t)\big)^2 \right\rangle_B, \qquad (4)$$

where $\langle \cdot \rangle_B$ denotes averaging over all the data points $\bm{r} \in B$ that have this boundary condition.

The PDE loss is responsible for solving the governing PDE. If we have an abstract PDE (1) and a neural network function $u(\bm{r},t)$, substituting $f(\bm{r},t) = u(\bm{r},t)$ and calculating the mean-squared error of the PDE gives:

$$\mathcal{L}_{\text{PDE}} = \left\langle \big(\mathcal{D}[u(\bm{r},t);\lambda]\big)^2 \right\rangle_\Omega, \qquad (5)$$

where $\langle \cdot \rangle_\Omega$ denotes averaging over all the data points in the domain of the PDE.
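A compact sketch of how Eqs. (2), (4) and (5) combine into a single training objective is given below; `pde_residual` stands for the operator $\mathcal{D}[u;\lambda]$ evaluated with automatic differentiation, and all names are illustrative rather than taken from the actual implementation.

```python
import torch

def pinn_loss(u_net, pde_residual, interior_pts, boundary_pts, boundary_vals):
    # L_PDE: mean-squared PDE residual over the interior collocation points, Eq. (5).
    loss_pde = (pde_residual(u_net, interior_pts) ** 2).mean()
    # L_BC: mean-squared mismatch with the Dirichlet boundary values, Eq. (4).
    loss_bc = ((u_net(boundary_pts) - boundary_vals) ** 2).mean()
    # Total loss, Eq. (2).
    return loss_pde + loss_bc
```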

III Simulations

In this work, we consider the steady (i.e., time-independent) flow of an incompressible fluid in 3D without any external forces.

The NS equations (6) and the continuity equation (7) describe this scenario as follows:

$$-(\bm{v}\cdot\nabla)\bm{v} + \nu\Delta\bm{v} - \frac{1}{\rho}\nabla p = 0, \qquad (6)$$
$$\nabla\cdot\bm{v} = 0, \qquad (7)$$

where $\bm{v}(\bm{r})$ is the velocity vector, $p(\bm{r})$ is the pressure, $\nu$ is the kinematic viscosity, and $\rho$ is the fluid density. The PDE parameters $\nu$ and $\rho$ were previously referred to as $\lambda$. For each of the 4 PDEs (the 3 projections of the vector equation (6) and the scalar equation (7)), $\mathcal{L}_{\text{PDE}}$ is calculated separately, and the contributions are then summed.
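A hedged sketch of how the four residuals of Eqs. (6) and (7) can be assembled with automatic differentiation is shown below; PyTorch is assumed, the network is taken to map $(x, y, z)$ to $(v_x, v_y, v_z, p)$, and the helper names are illustrative.

```python
import torch

def ns_residuals(net, xyz, nu=1.0, rho=1.0):
    """Residuals of the steady incompressible NS equations at the points xyz."""
    xyz = xyz.requires_grad_(True)
    out = net(xyz)
    v, p = out[:, :3], out[:, 3:4]

    def grad(f):  # d f / d(x, y, z) for a scalar field f of shape (N, 1)
        return torch.autograd.grad(f, xyz, torch.ones_like(f), create_graph=True)[0]

    dv = [grad(v[:, i:i + 1]) for i in range(3)]  # dv[i][:, j] = d v_i / d x_j
    dp = grad(p)

    continuity = dv[0][:, 0] + dv[1][:, 1] + dv[2][:, 2]  # Eq. (7): div v = 0

    momentum = []
    for i in range(3):  # i-th projection of Eq. (6)
        convection = (v * dv[i]).sum(dim=1)                               # (v . grad) v_i
        laplacian = sum(grad(dv[i][:, j:j + 1])[:, j] for j in range(3))  # Delta v_i
        momentum.append(-convection + nu * laplacian - dp[:, i] / rho)

    return momentum, continuity
```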

III.0.1 Training physics-informed neural networks

The geometry is a collection of points organized in a .csv file. It is split into four groups: fluid domain, walls, inlets, and outlets. The fluid domain is the domain in which the NS equations are solved, i.e., where the fluid flows. The other three groups have the boundary conditions described in Sec. II.

While untrained, the PINN produces some random distribution of velocities and pressures. These values and their gradients are substituted into the corresponding NS equations and boundary conditions. With every iteration, the weights of the neural network are updated to minimize the error in the governing equations, and the solution becomes more and more accurate.

The training iteration is simple: the point cloud is passed through the PINN, the MSE loss is calculated (which requires taking gradients of ($\bm{v}, p$) at each point of the geometry), the gradient of the loss with respect to the weights is computed, and the parameters are updated.
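The sketch below illustrates such a full-batch iteration with L-BFGS (PyTorch assumed); it reuses the `ns_residuals` helper from the sketch in Sec. III, and the point clouds are random stand-ins for the exported mesh nodes.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(),
                          torch.nn.Linear(64, 64), torch.nn.SiLU(),
                          torch.nn.Linear(64, 4))      # (x, y, z) -> (vx, vy, vz, p)
fluid_pts = torch.rand(25_000, 3)                      # interior collocation points
wall_pts, wall_vel = torch.rand(2_000, 3), torch.zeros(2_000, 3)  # no-slip walls

optimizer = torch.optim.LBFGS(net.parameters(), line_search_fn="strong_wolfe")

def closure():
    optimizer.zero_grad()
    momentum, continuity = ns_residuals(net, fluid_pts)          # PDE residuals
    loss_pde = sum((r ** 2).mean() for r in momentum) + (continuity ** 2).mean()
    loss_bc = ((net(wall_pts)[:, :3] - wall_vel) ** 2).mean()    # boundary mismatch
    loss = loss_pde + loss_bc
    loss.backward()            # gradient of the loss w.r.t. the network weights
    return loss

for iteration in range(20_000):   # 20'000 L-BFGS iterations were used for the cylinder
    optimizer.step(closure)
```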

To visualize the training outcomes of the neural network, we used ParaView [paraview]; the simulation results (loss and velocity distribution) are shown in Fig. 4.

III.0.2 Cylinder flow simulation

At first, we used a simple 3D cylinder flow as a baseline solution for the classical PINN. We also validated the PINN's predictions by comparing them to OpenFOAM's solution.

In this simulation, we impose a no-slip boundary condition on the walls, a fixed velocity of $v_0 = 10\ \text{mm/s}$ on the inlet, and a fixed zero pressure $p = 0\ \text{Pa}$ on the outlet. The fluid is water with standard density $\rho = 1\ \text{g/cm}^3$ and kinematic viscosity $\nu = 1\ \text{mm}^2/\text{s}$. The cylinder has a radius of $2\ \text{mm}$ and a height of $10\ \text{mm}$ (Fig. 2). The Reynolds number for such a flow is 10, so it can be considered laminar.

[Figure 2: geometry of the 3D cylinder flow used as the baseline problem.]

The training was done on a single NVIDIA A100 GPU for 20'000 iterations with an L-BFGS optimizer. For its training, the model uses 25'000 points inside the cylinder, which are taken straight from the mesh nodes used in OpenFOAM's simulation. The PINN reproduced the solver's solution quite well: the relative error of the velocity magnitude averaged over the whole cylinder is 1.2%, and the mean relative error of the pressure is 0.8%. Refer to Fig. 3 for the ground truth (OpenFOAM) and predicted (PINN) field values, as well as the distribution of the relative error across the geometry.

The only error spikes, both for the pressure and velocity fields, are located near the inlet edge of the pipe, where the uniform velocity profile of $10\ \text{mm/s}$ abruptly changes to 0 due to the no-slip boundary condition on the walls, which is somewhat unphysical. This could be corrected by making the inlet velocity profile parabolic instead of uniform, but we decided to stick to a simple problem statement for the baseline solution.

III.0.3 Generalization of physics-informed neural networks

To check whether our baseline model possesses any generalization capabilities, we trained the same model as in Section III.0.2, but now incorporating 4 input parameters: the $x, y, z$ coordinates and the kinematic viscosity $\nu$, which was held constant at $1\ \text{mm}^2/\text{s}$ for the aforementioned model. Given that we employ a full-batch strategy, where the optimizer processes the gradient of all points in the geometry simultaneously, such a modification significantly increases the memory requirements for storing the total gradient. Consequently, we limited our training set to the $\nu$ values $\{1, 2, 3, 4\}\ \text{mm}^2/\text{s}$. Nonetheless, we believe this range is sufficient to explore the generalization capabilities of the classical PINN.
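A minimal sketch of the corresponding input construction is given below (PyTorch assumed, names illustrative): the viscosity is simply broadcast as a fourth input column, so a single network covers a family of NS problems.

```python
import torch

xyz = torch.rand(25_000, 3)                          # collocation points (x, y, z)
batches = []
for nu in (1.0, 2.0, 3.0, 4.0):                      # training viscosities, mm^2/s
    nu_col = torch.full((xyz.shape[0], 1), nu)       # broadcast nu over all points
    batches.append(torch.cat([xyz, nu_col], dim=1))  # 4D input: (x, y, z, nu)
inputs = torch.cat(batches, dim=0)                   # full-batch input for the 4-input PINN
```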

Following the same training regime as in Section III.0.2, we present the results on the test data in Table 1. The results indicate that the model generalizes well across the range of $\nu$ values it trained on.

Table 1. Relative errors of the pressure $p$ and the velocity magnitude $\|\bm{v}\|$ for different kinematic viscosities $\nu$. Rows marked with * (bold in the original) correspond to viscosities outside the training set $\{1, 2, 3, 4\}\ \text{mm}^2/\text{s}$.

ν, mm²/s     p rel. error, %     ‖v‖ rel. error, %
1            10.0                4.7
1.5 *        2.0                 2.8
2            2.7                 1.5
2.5 *        5.9                 3.6
3            8.5                 5.8
3.5 *        10.5                7.7
4            12.3                9.4
5 *          15.1                12.3
10 *         21.0                24.6
[Figure 3: OpenFOAM ground truth, PINN prediction, and relative error distribution for the cylinder flow.]

However, when the model attempts to extrapolate solutions for $\nu$ values outside its training range, the quality of the solutions begins to degrade rapidly. More specifically, the solution tends to converge to the trivial case of $\bm{v} = 0$ as it approaches the end of the pipe. This phenomenon will be revisited and discussed further in Section III.0.4, where we attempt simulations on more complex geometries than the current one.

III.0.4 Y-shaped mixer flow simulation

This time, we try to simulate the flow of liquid in a $Y$-shaped mixer consisting of three tubes (Fig. 4). The mixing fluids are identical and have parameters $\rho = 1.0\ \text{kg/m}^3$ and $\nu = 1.0\ \text{m}^2/\text{s}$.

The imposed boundary conditions are as follows:

  • no-slip BC on the walls: $\bm{v}(\bm{r})|_{\text{walls}} = 0$,

  • fixed velocity profile on the inlets: $\bm{v}(\bm{r})|_{\text{inlets}} = \bm{v}_0(\bm{r})$,

  • fixed pressure on the outlets: $p(\bm{r})|_{\text{outlets}} = p_0$,

where $\bm{v}_0(\bm{r})$ is a parabolic velocity profile on each inlet and $p_0(\bm{r}) = 0$.

[Figure 4: geometry of the $Y$-shaped mixer, including the angle $\alpha$ between the right pipe and the $x$ axis, and simulation results.]

The PINN was trained via full-batch gradient descent with the Adam optimizer [adam] for 1000 epochs and then with the L-BFGS optimizer for 100 epochs, until the gradient vanished and training became impossible. After the training, the PINN managed to learn a non-trivial downward flow at the beginning of both inlet pipes. On the edges of these pipes, the velocities become zero, as they should, due to the no-slip wall boundary conditions. However, further down the mixer, the solution degenerates to zero, so it does not even reach the mixing point. This should at least violate the continuity equation, because there is an inward flow without a matching outward flow. This problem is primarily caused by the vanishing gradients inherent to PINN models.

In an attempt to overcome this challenge, we introduce an HQPINN model in Sec. III.2, which shows better results in terms of PDE and boundary condition satisfaction. We also explore potential ways of making a generalizable PINN in Sec. III.1 by employing a transfer learning method.

III.0.5 Physics-informed neural network architecture

In this section, we provide details of the PINN's architecture. The core of the PINN is a multilayer perceptron with several fully connected layers. As shown in Fig. 1, the first layer consists of 3 neurons (since the problem is 3D), followed by $l = 5$ hidden layers with $n$ neurons each, where $n = 128$ for the cylinder flow and $n = 64$ for the $Y$-shaped mixer. For the classical PINN, the "Parallel Hybrid Network" box is replaced with one fully connected layer $n \rightarrow 4$; there is no quantum layer in the classical case, so the output goes straight into the filter. Between adjacent layers, there is a sigmoid linear unit (SiLU) activation function [elfwing2018sigmoid]. The PINN takes the $(x, y, z)$ coordinates as inputs and yields $(\bm{v}, p)$ as its output, where $\bm{v}$ is the velocity vector with three components and $p$ is the pressure.
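A minimal sketch of this classical backbone is shown below; PyTorch is assumed for illustration, and the helper name is not taken from the actual implementation.

```python
import torch.nn as nn

def make_pinn(n=64, hidden_layers=5):
    """3 inputs -> `hidden_layers` hidden layers of n neurons with SiLU -> 4 outputs."""
    layers = [nn.Linear(3, n), nn.SiLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(n, n), nn.SiLU()]
    layers.append(nn.Linear(n, 4))   # classical head in place of the hybrid block
    return nn.Sequential(*layers)

pinn = make_pinn(n=64)   # n = 128 was used for the cylinder, n = 64 for the Y-mixer
```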

III.1 Transfer Learning

Transfer learning is a powerful method of using the knowledge and experience of one model that has been pretrained on one problem to solve another problem [Transfer_Learning, mari2020transfer]. It is extremely useful because it means that a second model does not have to be trained from scratch. This is especially helpful for fluid modeling, where selecting the most appropriate geometrical hyperparameters would otherwise lead to the simulations being rerun many times.

For transfer learning, we used a model from the previous section as a base, which had $\alpha_0 = 30^{\circ}$, where $\alpha$ is the angle between the right pipe and the $x$ axis (see Fig. 4). Then, for each $\alpha \in \{31^{\circ}, 32^{\circ}, 33^{\circ}, 34^{\circ}, 35^{\circ}\}$, we obtained the solution, each time using the previously trained model as an initializer. For example, to transfer-learn from $31^{\circ}$ to $32^{\circ}$, we used the $31^{\circ}$ model as a base, and so on. Each iteration was trained for 100 epochs with L-BFGS. Fig. 4 shows that the PINN adapts well to changes in the value of $\alpha$. That is, our hypothesis was correct: with PINNs, one does not need to completely rerun the simulation when a parameter changes; transfer learning from the base model suffices.
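A hedged sketch of this sweep is shown below; `load_mixer_points` and `lbfgs_closure` are hypothetical helpers standing in for the geometry export and the loss evaluation, and `make_pinn` is the backbone sketch from Sec. III.0.5.

```python
import torch

model = make_pinn(n=64)                                  # same backbone as before
model.load_state_dict(torch.load("pinn_alpha30.pt"))     # assumed base model, alpha_0 = 30 deg

for alpha in (31, 32, 33, 34, 35):
    points = load_mixer_points(alpha)                    # hypothetical: remeshed Y-mixer
    optimizer = torch.optim.LBFGS(model.parameters())
    for epoch in range(100):                             # 100 L-BFGS epochs per angle
        optimizer.step(lambda: lbfgs_closure(model, points, optimizer))  # hypothetical closure
    torch.save(model.state_dict(), f"pinn_alpha{alpha}.pt")  # warm start for the next angle
```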

III.2 Hybrid quantum PINN

[Figure 5]

Quantum machine learning has been shown to be applicable to solving differential equations [quantum_expressivity, paine2023physicsinformed, PhysRevA.107.032428]. Here, we introduce a hybrid quantum neural network, called HQPINN, and compare it against its classical counterpart, the classical PINN. As shown in Fig. 1, the architecture of the HQPINN comprises classical fully connected layers, specifically a multilayer perceptron, coupled with a Parallel Hybrid Network [kordzanganeh2023parallel]. The latter is a unique intertwining of a quantum depth-infused layer [sag_hyb_2022, schuld2021effect] and a classical layer. Interestingly, the first 15 units ($\phi_1, \dots, \phi_{15}$, depicted in green) of the layer are dedicated to the quantum layer, whereas the information from one orange unit proceeds along the classical pathway. This parallel structure enables simultaneous information processing, thereby enhancing the efficiency of the learning process.

Regarding the quantum depth-infused layer, it is implemented as a variational quantum circuit (VQC), a promising strategy for navigating the complexities of the noisy intermediate-scale quantum (NISQ) epoch [preskill2018quantum]. The NISQ era is characterized by quantum devices with a limited number of qubits that cannot yet achieve error correction, making strategies like VQCs particularly relevant [zhao2019qdnn, dou2021unsupervised, sag_hyb_2022].

The capacity of HQPINN to direct diverse segments of the input vector toward either the quantum or classical part of the network equips PINNs with an enhanced ability to process and learn from various patterns more efficiently. For instance, some patterns may be optimally processed by the quantum network, while others might be better suited for the classical network. This flexibility in processing contributes to the robust learning capabilities of HQPINN.

III.2.1 Quantum Depth-Infused Layer

Transitioning into the building blocks of our model, the quantum depth-infused layer takes center stage. Quantum gates are the basic building blocks of any quantum circuit, including those used for machine learning. Quantum gates come as single-qubit gates (e.g., the rotation gate $R_y(\theta)$, which plays a central role in quantum machine learning) and multiple-qubit gates (e.g., CNOT), and they modulate the state of qubits to perform computations. The $R_y(\theta)$ gate rotates the qubit around the $y$-axis of the Bloch sphere by an angle $\theta$, while the two-qubit CNOT gate changes the state of one qubit based on the current state of another qubit. Gates can be fixed, which means they perform fixed operations, such as the Hadamard gate, or they can be variable, such as the rotation gate, which depends on the rotation angle and may perform computations with tunable parameters.
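For concreteness, the standard (textbook) matrix forms of these two gates, which are not specific to this work, are

$$R_y(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}, \qquad \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$

where the CNOT matrix is written in the computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ with the first qubit acting as the control.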

To extract the results, the qubits are measured, i.e., projected onto a specific basis, and the expected value is calculated. When using the $\sigma_z$ Pauli matrix as the observable, the expected value of the measurement is $\langle\psi|\sigma_z|\psi\rangle$, with $\psi$ signifying the wave function that describes the current state of our quantum system. For a more detailed understanding of quantum circuits, including logic gates and measurements, standard quantum computing textbooks such as [nielsen2002quantum] offer a comprehensive guide.

To refine the functioning of the network we propose, the initial phase of data processing is performed on a classical computer. Subsequently, these data are integrated into the quantum gate parameters in an encoding layer (blue gates), repeated $d = 5$ times, which forms part of the quantum depth-infused layer and is followed by the variational layer (green gates); the variational layer is repeated $m = 2$ times. As the algorithm develops, the gate parameters in the variational layer are dynamically adjusted. Finally, at the measurement level, the qubits are measured, yielding a series of classical bits as the output.
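The sketch below illustrates the general structure of such a layer: 3 qubits, the 15 inputs $\phi_1, \dots, \phi_{15}$ encoded as rotation angles over $d = 5$ re-uploading blocks, and $m = 2$ variational layers of trainable rotations and CNOT entanglers. PennyLane is assumed here purely for illustration; the exact gate layout of the circuit used in this work may differ.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, depth, var_layers = 3, 5, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, diff_method="adjoint")
def depth_infused_layer(phi, weights):
    # Encoding part: d blocks, each consuming n_qubits of the 15 input angles.
    for block in range(depth):
        for w in range(n_qubits):
            qml.RY(phi[block * n_qubits + w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Variational part: m layers of trainable RY rotations and entanglers.
    for layer in range(var_layers):
        for w in range(n_qubits):
            qml.RY(weights[layer, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Measurement: Pauli-Z expectation value of each qubit.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

phi = np.random.uniform(0, np.pi, 15)                          # classically preprocessed inputs
weights = np.random.uniform(0, np.pi, (var_layers, n_qubits))  # trainable parameters
print(depth_infused_layer(phi, weights))
```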

Conceptually, a quantum algorithm resembles a black box that receives classical information as input and emits classical information as output. The objective here is to fine-tune the variational parameters such that the measurement outcome most accurately reflects the prediction function. In essence, this parameter optimization is akin to optimizing the weights in a classical neural network, thereby effectively training the quantum depth-infused layer.

III.2.2 Training hybrid quantum physics-informed neural network

The HQPINN consists of the classical PINN with weights pre-initialized from the previous stage (as described in Sec. III.0.5), a parallel hybrid network, and a fully connected layer at the end.

The training process of the hybrid PINN does not differ from that of the classical PINN except in the following ways. Firstly, all calculations are done on a classical simulator of quantum hardware, the QMware server [qmw_qmw_2022], which has recently been shown to be quite good for running hybrid algorithms [mo_bench_2022].

Secondly, how does one backpropagate through the quantum circuit layer? The answer is to use the "adjoint differentiation" method, introduced in [adjointdiff], which helps to compute the derivatives of a VQC efficiently on a classical simulator.
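The snippet below sketches how a variational circuit can be wrapped as a trainable layer inside a PyTorch model, with adjoint differentiation selected on the simulator; PennyLane's templates and `TorchLayer` are assumptions made for illustration and are not necessarily the stack used in this work.

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev, interface="torch", diff_method="adjoint")
def circuit(inputs, weights):
    # Encode classical features as RY angles, then apply trainable entangling layers.
    qml.AngleEmbedding(inputs, wires=range(3), rotation="Y")
    qml.BasicEntanglerLayers(weights, wires=range(3), rotation=qml.RY)
    return [qml.expval(qml.PauliZ(w)) for w in range(3)]

# Wrap as a torch.nn.Module so it can sit between classical layers of the PINN.
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes={"weights": (2, 3)})
out = qlayer(torch.rand(8, 3))   # batched classical inputs -> quantum expectation values
```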

This time, the model was trained for 100 epochs using mini-batch gradient descent with the Adam optimizer (Fig. 5). The mini-batch strategy was employed due to the very low training speed of the quantum circuits, as they train on a CPU. We then compare this model with a purely classical one with the same architecture as in Sec. III.0.5, but this time trained only with mini-batch Adam. All learning hyperparameters (learning rate, scheduler parameters, batch size) are shared between the quantum and classical models. Comparing the two, Fig. 6 shows that the quantum model outperforms the classical one in terms of the loss value by 21%. As the loss function for PINNs directly corresponds to PDE and boundary condition satisfaction, this implies that the HQPINN achieved better physical accuracy before the gradients vanished. However, the mini-batch training strategy for the HQPINN is far from ideal and stems from the long training time of simulated quantum circuits. A proper hardware backend for highly parallel GPU-accelerated quantum computing could greatly extend the HQPINN advantage.

[Figure 6: loss comparison between the hybrid quantum and the classical PINN.]

Additionally, the question of how the computational demands scale is important. For the CFD examples considered in this paper, we used a three-qubit circuit, which is not computationally demanding to execute on a classical computer. However, with each additional qubit, the runtime of the quantum circuit execution approximately doubles. This means that, according to the benchmarks [mo_bench_2022], the runtime becomes significant beyond 20-qubit circuits. Nonetheless, with the development of quantum computers, the scaling is expected to become more favorable, as the runtime on a quantum chip would stay constant if the circuit depth is fixed. Therefore, we believe that simulating more complex shapes would require more data and hence larger HQPINN models, which in turn would require quantum computers.

IV Discussion

In our investigation, we pursued two distinct goals aimed at advancing the capabilities of PINNs through the integration of quantum computing methodologies. The first objective focused on enhancing the performance of classical PINNs in solving PDEs within 3D geometries through the adoption of HQPINNs. This endeavor was motivated by the recognized success of classical PINNs in 2D problem settings [raissi2019physics] and the existing gap in their application to more complex 3D scenarios. The significance of this goal lies in its potential to broaden the applicability of PINNs to a wider range of scientific and engineering problems characterized by three-dimensional spaces.

Our second goal was to investigate the presence and feasibility of transfer learning in classical PINNs when applied to 3D geometries. This exploration is crucial for understanding the ability of PINNs to generalize across different geometrical configurations and physical problems, potentially enabling more efficient re-training processes on novel problems and geometries. The overarching aim here is to enhance the model’s versatility and reduce computational costs associated with the training of neural networks for new tasks.

Our results in applying HQPINNs to 3D PDE problems demonstrate a notable improvement in reducing the PDE loss, marking a significant step forward in our first objective. Specifically, we observed a 21% reduction in loss when comparing the performance of HQPINNs against their purely classical counterparts. This quantitative improvement highlights the enhanced expressiveness and computational efficiency brought about by the integration of quantum layers, suggesting that quantum computing elements can indeed augment the capability of PINNs in handling the intricacies of 3D problems. However, it is important to recognize that despite these advancements, achieving an optimal solution for 3D PDEs remains a challenging endeavor. The observed loss reduction, while substantial, does not resolve the complexities of 3D geometrical problem solving, indicating the need for further refinement and exploration of the HQPINN framework.

In the realm of transfer learning, our exploration yielded encouraging signs that classical PINNs possess an inherent capability to adapt to variations in a geometrical shape, demonstrated here by changing the angle of a $Y$-shaped mixer. This qualitative finding is instrumental in demonstrating the potential for PINNs to be applied in shape optimization tasks and other applications requiring flexibility across different geometrical configurations. However, the journey towards realizing full transfer learning and generalization capabilities in PINNs is still in its early stages. Our initial successes serve as a foundation for further research, underscoring the necessity for continued development and investigation into the mechanisms that enable effective transfer learning within PINNs.

In developing our HQPINNs, we have also considered recent advances in other quantum and quantum-inspired methods for solving nonlinear equations, such as those based on the matrix product states approach [gourianov2022quantum] and variational quantum algorithms for nonlinear problems [lubasch2020variational, quantum_expressivity]. While our approach primarily focuses on handling complex boundary conditions, the integration of these quantum-inspired methods and variational quantum algorithms could further improve the performance and accuracy of our proposed quantum PINN model.

Collectively, our findings contribute valuable insights into the potential of hybrid quantum neural networks and the prospects for transfer learning in solving complex physical problems. As we continue to push the boundaries of what is achievable with PINNs, our future efforts will focus on refining these models to approach accurate full-scale CFD simulation in complex 3D shapes.

The plan includes exploring better architectures for the quantum PINN, investigating their impact on expressiveness, generalizability, and the optimization landscape, and trying data-driven approaches. Entirely different networks, such as neural operators [neural_ops, fno] and graph neural networks [mesh_based_gnn, sanchez2020learning], could also be considered in a quantum setting and enhanced with quantum circuits.

References

  • (1) Mohd Hafiz Zawawi, A. Saleha, A. Salwa, N. H. Hassan, Nazirul Mubin Zahari, Mohd Zakwan Ramli, and Zakaria Che Muda. A review: Fundamentals of computational fluid dynamics (CFD). In AIP Conference Proceedings. AIP Publishing LLC, 2018.
  • (2) John David Anderson and John Wendt. Computational fluid dynamics, volume 206. Springer, 1995.
  • (3) George Finlay Simmons. Differential equations with applications and historical notes. McGraw-Hill, 1972.
  • (4) Aaron Jon Katz. Meshless methods for computational fluid dynamics. Stanford University, 2009.
  • (5) OpenFOAM. https://www.openfoam.com/, 2022.
  • (6) Ansys. https://www.ansys.com/, 2022.
  • (7) Tomislav Marić, Douglas B. Kothe, and Dieter Bothe. Unstructured un-split geometrical volume-of-fluid methods - a review. Journal of Computational Physics, 420, 2020.
  • (8) Tomislav Marić, Holger Marschall, and Dieter Bothe. voFoam - a geometrical volume of fluid algorithm on arbitrary unstructured meshes with local dynamic adaptive mesh refinement using OpenFOAM. arXiv preprint arXiv:1305.3417, 2013.
  • (9) Shengze Cai, Zhiping Mao, Zhicheng Wang, Minglang Yin, and George Em Karniadakis. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mechanica Sinica, pages 1–12, 2022.
  • (10) Maziar Raissi, Paris Perdikaris, and George E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
  • (11) Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is being transferred in transfer learning? arXiv preprint arXiv:2008.11687, 2020.
  • (12) Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, and Tuan Anh Nguyen. A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations. SN Partial Differential Equations and Applications, 1:1–34, 2020.
  • (13) Philipp Grohs, Fabian Hornung, Arnulf Jentzen, and Philippe Von Wurstemberger. A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations. arXiv preprint arXiv:1809.02362, 2018.
  • (14) Michael Stein. Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29(2):143–151, 1987.
  • (15) Ilya M. Sobol', Danil Asotsky, Alexander Kreinin, and Sergei Kucherenko. Construction and comparison of high-dimensional Sobol' generators. Wilmott, 2011(56):64–79, 2011.
  • (16) Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Luján, Valentin Sulzer, Ashutosh Bharambe, et al. NeuralPDE: Automating physics-informed neural networks (PINNs) with error approximations. arXiv preprint arXiv:2107.09443, 2021.
  • (17) Frank Gaitan. Finding flows of a Navier–Stokes fluid through quantum computing. npj Quantum Information, 6(1):1–6, 2020.
  • (18) Vedran Dunjko and Hans J. Briegel. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 81(7):074001, 2018.
  • (19) Alexey Melnikov, Mohammad Kordzanganeh, Alexander Alodjants, and Ray-Kuang Lee. Quantum machine learning: from physics to software engineering. Advances in Physics: X, 8(1):2165452, 2023.
  • (20) Hartmut Neven, Vasil S. Denchev, Geordie Rose, and William G. Macready. QBoost: Large scale classifier training with adiabatic quantum optimization. In Steven C. H. Hoi and Wray Buntine, editors, Proc. Asian Conf. Mach. Learn., volume 25 of Proceedings of Machine Learning Research, pages 333–348. PMLR, 2012.
  • (21) Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Physical Review Letters, 113:130503, Sep 2014.
  • (22) Valeria Saggio, Beate E. Asenbeck, Arne Hamann, Teodor Strömberg, Peter Schiansky, Vedran Dunjko, Nicolai Friis, Nicholas C. Harris, Michael Hochberg, Dirk Englund, et al. Experimental quantum speed-up in reinforcement learning agents. Nature, 591(7849):229–233, 2021.
  • (23) Mo Kordzanganeh, Daria Kosichkina, and Alexey Melnikov. Parallel hybrid networks: an interplay between quantum and classical neural networks. Intelligent Computing, 2:0028, 2023.
  • (24) Andrii Kurkin, Jonas Hegemann, Mo Kordzanganeh, and Alexey Melnikov. Forecasting the steam mass flow in a powerplant using the parallel hybrid network. arXiv preprint arXiv:2307.09483, 2023.
  • (25) Michael Perelshtein, Asel Sagingalieva, Karan Pinto, Vishal Shete, Alexey Pakhomchik, Artem Melnikov, Florian Neukart, Georg Gesek, Alexey Melnikov, and Valerii Vinokur. Practical application-specific advantage through hybrid quantum computing. arXiv preprint arXiv:2205.04858, 2022.
  • (26) Asel Sagingalieva, Mohammad Kordzanganeh, Nurbolat Kenbayev, Daria Kosichkina, Tatiana Tomashuk, and Alexey Melnikov. Hybrid quantum neural network for drug response prediction. Cancers, 15(10):2705, 2023.
  • (27) A. I. Gircha, A. S. Boev, K. Avchaciov, P. O. Fedichev, and A. K. Fedorov. Training a discrete variational autoencoder for generative chemistry and drug design on a quantum annealer. arXiv preprint arXiv:2108.11644, 2021.
  • (28) Serge Rainjonneau, Igor Tokarev, Sergei Iudin, Saaketh Rayaprolu, Karan Pinto, Daria Lemtiuzhnikova, Miras Koblan, Egor Barashov, Mo Kordzanganeh, Markus Pflitsch, and Alexey Melnikov. Quantum algorithms applied to satellite mission planning for Earth observation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16:7062–7075, 2023.
  • (29) Asel Sagingalieva, Andrii Kurkin, Artem Melnikov, Daniil Kuhmistrov, et al. Hybrid quantum ResNet for car classification and its hyperparameter optimization. Quantum Machine Intelligence, 5(2):38, 2023.
  • (30) Nathan Haboury, Mo Kordzanganeh, Sebastian Schmitt, Ayush Joshi, Igor Tokarev, Lukas Abdallah, Andrii Kurkin, Basil Kyriacou, and Alexey Melnikov. A supervised hybrid quantum machine learning solution to the emergency escape routing problem. arXiv preprint arXiv:2307.15682, 2023.
  • (31) Javier Alcazar, Vicente Leyton-Ortega, and Alejandro Perdomo-Ortiz. Classical versus quantum models in machine learning: insights from a finance application. Machine Learning: Science and Technology, 1(3):035003, 2020.
  • (32) Brian Coyle, Maxwell Henderson, Justin Chan Jin Le, Niraj Kumar, Marco Paini, and Elham Kashefi. Quantum versus classical generative modelling in finance. Quantum Science and Technology, 6(2):024013, 2021.
  • (33) Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Herman, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, Arthur Rattew, Yue Sun, and Romina Yalovetzky. Quantum machine learning for finance. arXiv preprint arXiv:2109.04298, 2021.
  • (34) Dimitrios Emmanoulopoulos and Sofija Dimoska. Quantum machine learning in finance: Time series forecasting. arXiv preprint arXiv:2202.00599, 2022.
  • (35) El Amine Cherrat, Snehal Raj, Iordanis Kerenidis, Abhishek Shekhar, Ben Wood, Jon Dee, Shouvanik Chakrabarti, Richard Chen, Dylan Herman, Shaohan Hu, Pierre Minssen, Ruslan Shaydulin, Yue Sun, Romina Yalovetzky, and Marco Pistoia. Quantum deep hedging. arXiv preprint arXiv:2303.16585, 2023.
  • (36) Arsenii Senokosov, Alexandr Sedykh, Asel Sagingalieva, Basil Kyriacou, and Alexey Melnikov. Quantum machine learning for image classification. Machine Learning: Science and Technology, 5(1):015040, 2024.
  • (37) Wei Li, Peng-Cheng Chu, Guang-Zhe Liu, Yan-Bing Tian, Tian-Hui Qiu, and Shu-Mei Wang. An image classification algorithm based on hybrid quantum classical convolutional neural network. Quantum Engineering, 2022:1–9, 2022.
  • (38) A. Naumov, Ar. Melnikov, V. Abronin, F. Oxanichenko, K. Izmailov, M. Pflitsch, A. Melnikov, and M. Perelshtein. Tetra-AML: Automatic machine learning via tensor networks. arXiv preprint arXiv:2303.16214, 2023.
  • (39) Farina Riaz, Shahab Abdulla, Hajime Suzuki, Srinjoy Ganguly, Ravinesh C. Deo, and Susan Hopkins. Accurate image multi-class classification neural network model with quantum entanglement approach. Sensors, 23(5):2753, 2023.
  • (40) Zhenhou Hong, Jianzong Wang, Xiaoyang Qu, Chendong Zhao, Wei Tao, and Jing Xiao. QSpeech: Low-qubit quantum speech application toolkit. arXiv preprint arXiv:2205.13221, 2022.
  • (41) Robin Lorenz, Anna Pearson, Konstantinos Meichanetzidis, Dimitri Kartsaklis, and Bob Coecke. QNLP in practice: Running compositional models of meaning on a quantum computer. arXiv preprint arXiv:2102.12846, 2021.
  • (42) Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. Foundations for near-term quantum natural language processing. arXiv preprint arXiv:2012.03755, 2020.
  • (43) Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. Grammar-aware question-answering on quantum computers. arXiv preprint arXiv:2012.03756, 2020.
  • (44) Oleksandr Kyriienko, Annie E. Paine, and Vincent E. Elfving. Solving nonlinear differential equations with differentiable quantum circuits. Physical Review A, 103(5):052416, 2021.
  • (45) Annie E. Paine, Vincent E. Elfving, and Oleksandr Kyriienko. Physics-informed quantum machine learning: Solving nonlinear differential equations in latent spaces without costly grid evaluations. arXiv preprint arXiv:2308.01827, 2023.
  • (46) Annie E. Paine, Vincent E. Elfving, and Oleksandr Kyriienko. Quantum kernel methods for solving regression problems and differential equations. Physical Review A, 107:032428, 2023.
  • (47) Mo Kordzanganeh, Pavel Sekatski, Leonid Fedichkin, and Alexey Melnikov. An exponentially-growing family of universal quantum circuits. Machine Learning: Science and Technology, 4(3):035036, 2023.
  • (48) Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Physical Review A, 103(3):032430, 2021.
  • (49) Maria Schuld, Alex Bocharov, Krysta M. Svore, and Nathan Wiebe. Circuit-centric quantum classifiers. Physical Review A, 101(3):032308, 2020.
  • (50) David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
  • (51) Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
  • (52) Atilim Gunes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18(153):1–43, 2018.
  • (53) Kurt Hornik, Maxwell B. Stinchcombe, and Halbert L. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359–366, 1989.
  • (54) C. Greenshields and H. Weller. Notes on computational fluid dynamics: General principles. CFD Direct Ltd.: Reading, UK, 2022.
  • (55) ParaView. https://www.paraview.org/, 2022.
  • (56) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • (57) Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3–11, 2018.
  • (58) Andrea Mari, Thomas R. Bromley, Josh Izaac, Maria Schuld, and Nathan Killoran. Transfer learning in hybrid classical-quantum neural networks. Quantum, 4:340, 2020.
  • (59) John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018.
  • (60) Chen Zhao and Xiao-Shan Gao. QDNN: DNN with quantum neural network layers. arXiv preprint arXiv:1912.12660, 2019.
  • (61) Tong Dou, Kaiwei Wang, Zhenwei Zhou, Shilu Yan, and Wei Cui. An unsupervised feature learning for quantum-classical convolutional network with applications to fault detection. In 2021 40th Chinese Control Conference (CCC), pages 6351–6355. IEEE, 2021.
  • (62) Michael A. Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002.
  • (63) QMware. https://qm-ware.com/, 2022.
  • (64) Mohammad Kordzanganeh, Markus Buchberger, Basil Kyriacou, Maxim Povolotskii, Wilhelm Fischer, Andrii Kurkin, Wilfrid Somogyi, Asel Sagingalieva, Markus Pflitsch, and Alexey Melnikov. Benchmarking simulated and physical quantum processing units using quantum and hybrid algorithms. Advanced Quantum Technologies, 6(8):2300043, 2023.
  • (65) Tyson Jones and Julien Gacon. Efficient calculation of gradients in classical simulations of variational quantum algorithms. arXiv preprint arXiv:2009.02823, 2020.
  • (66) Nikita Gourianov, Michael Lubasch, Sergey Dolgov, Quincy Y. van den Berg, Hessam Babaee, Peyman Givi, Martin Kiffner, and Dieter Jaksch. A quantum-inspired approach to exploit turbulence structures. Nature Computational Science, 2(1):30–37, 2022.
  • (67) Michael Lubasch, Jaewoo Joo, Pierre Moinier, Martin Kiffner, and Dieter Jaksch. Variational quantum algorithms for nonlinear problems. Physical Review A, 101(1):010301, 2020.
  • (68) Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, et al. Neural operator: Learning maps between function spaces. arXiv preprint arXiv:2108.08481, 2021.
  • (69) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, et al. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
  • (70) Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2021.
  • (71) Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459–8468. PMLR, 2020.