Pietro Zanotta · January 27, 2026 · 8 min read

Efficient physics-informed learning with built-in uncertainty awareness


Technical contributors
Dr. Ljubomir Budinsky, Dr. Çağlar Aytekin, Dr. Valtteri Lahtinen

 

Key takeaways

  • Physics-Informed Neural Networks offer a flexible way to solve PDEs, but scalability remains a challenge

  • Separable PINNs reduce the dimensionality burden, yet dense matrix operations still dominate cost

  • Quantum Orthogonal SPINNs address this bottleneck by replacing dense layers with quantum-inspired orthogonal layers

  • Orthogonality improves stability and regularization, and enables uncertainty quantification without expensive normalization

 

Introduction

Partial Differential Equations (PDEs) form the mathematical foundation of most physical models used in engineering and science. They describe how quantities such as temperature, displacement, pressure, or electromagnetic fields evolve across space and time. From heat transfer and fluid flow to wave propagation and material response, PDEs appear wherever physical laws govern system behavior.

Accurately solving PDEs is essential, but rarely trivial. Classical numerical techniques such as finite element or finite difference methods rely on discretizing the domain into meshes. While these methods are well established, they often become prohibitively expensive as dimensionality increases or when high resolution is required. This phenomenon, commonly referred to as the curse of dimensionality, limits practical applications in many real-world scenarios.
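The growth is easy to quantify. As a toy illustration (the 100-points-per-axis mesh below is a hypothetical example, not a figure from this work):

```python
# Total grid points for a mesh with n_per_axis points along each axis in
# `dims` dimensions: the count grows exponentially with dimension.
def grid_points(n_per_axis: int, dims: int) -> int:
    return n_per_axis ** dims

for d in (1, 2, 3, 4):
    # 100 -> 10_000 -> 1_000_000 -> 100_000_000 points
    print(d, grid_points(100, d))
```

Doubling the dimension does not double the mesh; it squares it, which is why mesh-based solvers stall in high dimensions.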

Physics-Informed Neural Networks (PINNs) emerged as an alternative by reframing PDE solving as a learning problem. Instead of discretizing the entire domain, PINNs approximate solutions by embedding governing equations directly into the loss function of a neural network. This allows the model to learn solutions that satisfy both data and physics constraints. PINNs have demonstrated strong potential, especially in settings with sparse data or complex boundary conditions.
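The core idea can be sketched as a composite loss over data points and collocation points. The snippet below is a minimal illustration, not the authors' implementation: it uses a tiny untrained network, finite differences in place of automatic differentiation, and a 1D diffusion equation u_t = κ·u_xx as a stand-in PDE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP u_theta(x, t) -> scalar; weights are illustrative and untrained.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def u(x, t):
    h = np.tanh(np.stack([x, t], axis=-1) @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

def physics_residual(x, t, kappa=0.1, eps=1e-4):
    # Finite-difference stand-in for autodiff: residual of u_t = kappa * u_xx.
    u_t = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
    u_xx = (u(x + eps, t) - 2 * u(x, t) + u(x - eps, t)) / eps ** 2
    return u_t - kappa * u_xx

def pinn_loss(x_data, t_data, u_data, x_col, t_col):
    # Data term: fit observations. Physics term: drive the PDE residual
    # toward zero at the collocation points.
    data_term = np.mean((u(x_data, t_data) - u_data) ** 2)
    physics_term = np.mean(physics_residual(x_col, t_col) ** 2)
    return data_term + physics_term
```

Training then minimizes `pinn_loss` over the network weights, so the learned function satisfies both the observations and the governing equation.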

However, standard PINNs still struggle with scalability. High-dimensional problems require a rapidly growing number of collocation points, and the underlying neural networks rely on dense matrix operations that become computational bottlenecks. As a result, PINNs can be slow to train and difficult to deploy for large-scale or time-sensitive simulations.

Separable Physics-Informed Neural Networks (SPINNs) address part of this issue by exploiting separability in the structure of the PDE. By factorizing the problem across dimensions, SPINNs significantly reduce the number of required collocation points. Yet, even with this improvement, the computational cost of matrix multiplications inside each neural network layer remains a concern for large networks.

This work introduces Quantum Orthogonal Separable Physics-Informed Neural Networks (QO-SPINNs), a new architecture designed to address these remaining limitations. By integrating quantum-inspired orthogonal layers into the SPINN framework, QO-SPINNs reduce computational complexity, improve numerical stability, and enable uncertainty quantification of the model output at no additional computational cost.

 


 

 

Case examples

Quantum orthogonal SPINNs

To evaluate the proposed architecture, QO-SPINNs were applied to a set of representative PDE problems that reflect common simulation challenges:

  • Forward solutions of advection-diffusion equations in one, two, and three dimensions
  • Nonlinear time-dependent dynamics through the Burgers’ equation
  • Inverse parameter identification for the Sine-Gordon equation
  • Uncertainty quantification for time-dependent PDE predictions

These examples cover linear and nonlinear physics, forward and inverse problems, and both deterministic and uncertainty-aware modeling scenarios. Together, they provide a broad view of how QO-SPINNs behave across different application types.

The main objectives guiding this work are:

  • Reducing the computational cost associated with high-dimensional physics-informed learning, particularly the cost of matrix operations inside neural networks.
  • Preserving or improving solution accuracy compared to existing PINN and SPINN approaches due to the regularization introduced by the orthogonality of the layers, ensuring that efficiency gains do not come at the expense of physical fidelity.
  • Integrating uncertainty quantification directly into the architecture, avoiding expensive post-processing steps or heuristic uncertainty estimates.


From PINNs to SPINNs

Standard PINNs approximate a PDE solution using a single neural network that takes spatial and temporal coordinates as inputs. While flexible, this approach requires a large number of collocation points in high dimensions, since the number of points grows exponentially with the number of variables.

SPINNs reduce this burden by decomposing the solution into a sum of separable components. Instead of learning a single high-dimensional mapping, SPINNs use multiple low-dimensional subnetworks, each operating on one coordinate. The outputs are then combined to reconstruct the full solution. This factorization reduces the number of required collocation points from exponential to linear scaling with dimension.
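The factorization can be sketched as follows. This is a structural illustration, assuming a rank-r sum of per-dimension factors; the random `subnet` features stand in for trained 1D subnetworks.

```python
import numpy as np

# Rank-r separable ansatz in 3D: u(x, y, z) ~ sum_r f_r(x) g_r(y) h_r(z).
N, rank = 64, 8
x = np.linspace(0.0, 1.0, N)

def subnet(coords, seed):
    # Placeholder for a 1D subnetwork: maps (N,) coordinates to (N, rank)
    # feature columns. A real SPINN would learn these features.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(1, rank))
    return np.tanh(coords[:, None] @ W)

Fx, Fy, Fz = subnet(x, 0), subnet(x, 1), subnet(x, 2)

# Full N x N x N solution assembled from three N x rank factors
# (a sum of outer products).
u = np.einsum('ir,jr,kr->ijk', Fx, Fy, Fz)

# The subnetworks were evaluated on only 3*N points, yet u covers
# all N**3 grid points: linear instead of exponential scaling.
print(Fx.shape, u.shape)  # (64, 8) (64, 64, 64)
```

The expensive part that remains is the dense matrix algebra inside each `subnet`, which is exactly what the orthogonal layers below target.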

Despite this improvement, each subnetwork still relies on dense matrix multiplications, which scale quadratically with network width. For large models, these operations dominate training and inference time.

 

Quantum orthogonal layers

QO-SPINNs replace the standard dense layers inside SPINN subnetworks with Quantum orthogonal layers. These layers are inspired by quantum computing techniques for matrix-vector multiplication, where orthogonal transformations can be implemented efficiently using structured operations.

In practice, these layers enforce strict orthogonality in the network weights. Orthogonal matrices preserve vector norms and distances, which has several important consequences:

  • Numerical stability improves, reducing exploding or vanishing gradients
  • Lipschitz constants are naturally bounded, acting as a form of regularization
  • Distance preservation enables reliable uncertainty estimation

While the architecture is motivated by quantum algorithms that promise sub-quadratic complexity, the models in this work are trained classically by simulating the quantum-inspired transformations. This allows immediate evaluation of the architectural benefits without relying on current quantum hardware.
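A classical simulation of such a layer can be sketched with Givens rotations, the classical counterpart of the two-level rotation gates used in quantum orthogonal circuits. The adjacent-pair wiring below is illustrative; an actual circuit fixes a specific layout such as a pyramid.

```python
import numpy as np

def givens(n, i, j, theta):
    # Planar (Givens) rotation acting on coordinates i and j of R^n.
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

def orthogonal_layer(n, angles):
    # Compose rotations on adjacent coordinate pairs into one weight
    # matrix; any such product is exactly orthogonal by construction.
    W = np.eye(n)
    for i, theta in enumerate(angles):
        W = givens(n, i, i + 1, theta) @ W
    return W

rng = np.random.default_rng(0)
n = 6
W = orthogonal_layer(n, rng.uniform(0.0, 2.0 * np.pi, size=n - 1))

v = rng.normal(size=n)
# Orthogonality: W^T W = I, hence ||W v|| = ||v|| (norm preservation).
assert np.allclose(W.T @ W, np.eye(n))
assert np.isclose(np.linalg.norm(W @ v), np.linalg.norm(v))
```

Because the layer is parametrized by rotation angles rather than free matrix entries, orthogonality holds exactly throughout training instead of being enforced by a penalty term.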

 

Quantum orthogonal SPINNs

By combining separable subnetworks with quantum orthogonal multilayer perceptrons, QO-SPINNs take advantage of both reduced collocation requirements and more efficient layer operations. Each subnetwork maps a single input dimension to a low-rank representation, and these representations are combined to form the final PDE solution.

 

Key results

Across all tested problems, QO-SPINNs achieved accuracy comparable to classical SPINNs and, in several cases, significantly outperformed them.

In one-dimensional advection-diffusion problems, QO-SPINNs matched SPINN accuracy while requiring fewer collocation points than standard PINNs. In two- and three-dimensional cases, QO-SPINNs achieved up to an order-of-magnitude reduction in error compared to SPINNs, despite using fewer parameters.

For nonlinear problems such as the Burgers’ equation, QO-SPINNs demonstrated stable training behavior and convergence comparable to classical approaches. Interestingly, the orthogonal layers enabled higher learning rates without destabilizing training, suggesting favorable optimization properties.

In inverse problems involving the Sine-Gordon equation, QO-SPINNs accurately recovered unknown physical parameters from sparse observations. This capability is particularly relevant in engineering contexts where material properties or system parameters must be inferred from limited measurement data.

 

Key benefits

Several practical benefits emerge from the results.

Scalability

By combining separability with orthogonal layers, QO-SPINNs reduce both the number of collocation points and the computational cost per forward pass.


Stability and regularization 

Orthogonal layers naturally constrain the network’s Lipschitz constant, leading to smoother training dynamics and improved regularization.
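This bound is easy to check numerically: a stack of orthogonal weight matrices has spectral norm exactly one, while generic dense weights do not. The comparison below is a toy demonstration, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth = 32, 5

def spectral_norm(M):
    # Largest singular value = the matrix's Lipschitz constant.
    return np.linalg.norm(M, 2)

# Random orthogonal weights (via QR) vs. random dense weights.
ortho = [np.linalg.qr(rng.normal(size=(n, n)))[0] for _ in range(depth)]
dense = [rng.normal(size=(n, n)) / np.sqrt(n) for _ in range(depth)]

def stack_norm(layers):
    M = np.eye(n)
    for W in layers:
        M = W @ M
    return spectral_norm(M)

# An orthogonal stack stays exactly 1-Lipschitz; interleaving 1-Lipschitz
# activations such as tanh cannot increase the bound.
print(stack_norm(ortho))  # ~ 1.0
print(stack_norm(dense))  # typically drifts away from 1
```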


Parameter efficiency

In many test cases, QO-SPINNs achieved better accuracy using fewer parameters than classical SPINNs.


Reliable uncertainty awareness

The architecture enables uncertainty estimation that correlates strongly with actual prediction error, rather than producing arbitrary confidence values.

 

Uncertainty quantification

 

Uncertainty quantification is often treated as an afterthought in physics-informed learning. Many approaches rely on sampling-based methods such as Monte Carlo analysis, which can be computationally expensive.

QO-SPINNs take a different approach. Because orthogonal layers preserve distances, the latent representations learned by the network maintain meaningful geometric structure. This property allows the use of distance-aware Gaussian process methods for uncertainty estimation without requiring costly spectral normalization.
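A minimal sketch of the idea, assuming hypothetical latent features `z_train` produced by a distance-preserving network and the standard Gaussian-process posterior-variance formula:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # RBF kernel on latent features; meaningful here precisely because
    # orthogonal layers keep Euclidean distances in the latent space.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def predictive_variance(z_query, z_train, noise=1e-3):
    # GP posterior variance: k(z*, z*) - k*^T (K + sigma*I)^-1 k*.
    K = rbf(z_train, z_train) + noise * np.eye(len(z_train))
    Ks = rbf(z_query, z_train)
    return 1.0 - np.einsum('qi,ij,qj->q', Ks, np.linalg.inv(K), Ks)

# Hypothetical 1D latent features of the training points.
z_train = np.linspace(0.0, 1.0, 20)[:, None]
near = predictive_variance(np.array([[0.5]]), z_train)
far = predictive_variance(np.array([[3.0]]), z_train)
# Variance grows with distance from the training latents: the model is
# confident near the data and uncertain far from it.
print(near, far)
```

Because the variance falls out of a single kernel evaluation against the training latents, no sampling, dropout ensembling, or spectral normalization is needed at prediction time.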

In practice, QO-SPINNs provide uncertainty estimates that align well with actual prediction errors. Regions where the model is less confident tend to coincide with higher approximation error, offering actionable insight rather than abstract confidence scores.

Comparisons with Monte Carlo dropout show that QO-SPINNs achieve lower prediction error while providing uncertainty estimates that better reflect true model performance over time.

 

Other benefits related to Quanscient

Beyond individual benchmark results, this work highlights broader advantages relevant to multiphysics simulation platforms.

Advanced simulation workflows benefit from models that are not only accurate, but also interpretable and reliable. Built-in uncertainty estimation supports risk-aware decision making, particularly in safety-critical or high-cost design scenarios.

Orthogonal, physics-aware machine learning models reduce sensitivity to numerical artifacts and training instabilities. This improves robustness across different problem setups and parameter regimes.

Finally, efficiency gains enable broader design exploration. Faster simulations make it possible to test more configurations, investigate edge cases, and better understand system behavior under uncertainty.

 

Conclusion

Quantum Orthogonal SPINNs introduce a new architectural direction for physics-informed simulation. By integrating quantum-inspired orthogonal layers into a separable neural framework, QO-SPINNs address long-standing challenges in scalability, stability, and uncertainty awareness.

The numerical results demonstrate that these benefits are not merely theoretical. Across a range of PDE problems, QO-SPINNs maintain or improve accuracy compared to existing methods while reducing parameter counts and improving robustness.

While full quantum acceleration depends on future hardware developments, the architectural insights presented here are already valuable. They show how carefully designed structure, informed by physics and mathematics, can improve machine-learning-based simulation today.

For engineering teams working with complex multiphysics systems, approaches like QO-SPINNs point toward simulation workflows that are faster, more trustworthy, and better suited to real-world decision making.

 

References

Xiao, P., Zheng, M., Jiao, A., Yang, X., & Lu, L. (2025). Quantum DeepONet: Neural operators accelerated by quantum computing. Quantum, 9, 1761. https://quantum-journal.org/papers/q-2025-06-04-1761

Liu, J. Z., Lin, Z., Padhy, S., Tran, D., Bedrax-Weiss, T., & Lakshminarayanan, B. (2020). Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. arXiv preprint arXiv:2006.10108. https://arxiv.org/abs/2006.10108

Kerenidis, I., Landman, J., & Mathur, N. (2021). Classical and quantum algorithms for orthogonal neural networks. arXiv preprint arXiv:2106.07198. https://arxiv.org/abs/2106.07198
