Pre-history of Quanscient: The history and math behind finite element analysis
Expert contributor for this post
Prof. Robert Kotiuga
Associate Professor (ECE) at Boston University
Where does finite-element analysis (FEA) come from? What kind of math lies behind it? And why is abstract nonsense important to what we do at Quanscient?
In this somewhat different blog post, we look at the interesting history and development of FEA, particularly from the viewpoint of electrical engineering.
We do not go into the mathematical details but instead focus on the high-level concepts and historical anecdotes.
This is the story of FEA and thus the pre-history of Quanscient. Let’s dive in!
How do we make these bridges hold up?
When solving the partial differential equations (PDEs) at the heart of many physics problems, the solution space tends to be infinite-dimensional.
As you can probably imagine, finding the solution from an infinite-dimensional space can be a needle-in-a-haystack situation.
This is why we make finite-dimensional approximations instead, reducing the problem to the simpler terms of linear algebra.
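To give a taste of the idea (in notation of our own choosing, not anything specific to the history below): a PDE can be written in a weak form that asks for a function u in an infinite-dimensional space V, and the trick is to search a finite-dimensional subspace of V instead, which turns the problem into a matrix equation.

```latex
% Weak form: find u in V (infinite-dimensional) such that
%   a(u, v) = f(v)   for all test functions v in V.
% Finite approximation: replace V by V_h = span{phi_1, ..., phi_n}
% and seek u_h = sum_j c_j phi_j. Testing against each phi_i gives
\[
  a(u_h, \varphi_i) = f(\varphi_i), \quad i = 1, \dots, n
  \quad\Longleftrightarrow\quad
  A c = b, \qquad A_{ij} = a(\varphi_j, \varphi_i), \quad b_i = f(\varphi_i).
\]
```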
The idea behind such techniques dates back to the mathematician Walther Ritz, a student of David Hilbert, and later to the Russian mathematician Boris Galerkin. (In fact, the standard FEM approach today is called Galerkin FEM.)
Galerkin's influence stems from structural mechanics: he worked on the stress analysis of structures such as bridges and developed computational techniques for carrying it out by hand, before the era of computers.
Eventually, these developments led to the finite-element method (FEM), the numerical method behind FEA. By triangulating a region (in engineering lingo, generating a finite-element mesh) and introducing a piecewise linear approximation of the solution on it, you turn a PDE into a finite-dimensional linear system, which can then be solved with the techniques of linear algebra.
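Here is a minimal Python sketch of that recipe in one dimension, for the toy problem −u″ = 1 on (0, 1) with u(0) = u(1) = 0; it illustrates the method only and is in no way Quanscient's production code:

```python
import numpy as np

# Solve -u'' = 1 on (0, 1), u(0) = u(1) = 0, with piecewise linear
# "hat" basis functions on a uniform mesh of n interior nodes.
n = 9
h = 1.0 / (n + 1)               # element length
x = np.linspace(h, 1.0 - h, n)  # interior node positions

# Stiffness matrix: A[i, j] = integral of phi_i' * phi_j'
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector: b[i] = integral of f * phi_i, with f = 1
b = h * np.ones(n)

# The PDE has become a (tridiagonal) linear system.
c = np.linalg.solve(A, b)       # nodal values of the approximation

exact = 0.5 * x * (1.0 - x)     # known exact solution, for comparison
print("max nodal error:", np.abs(c - exact).max())
```

For this particular toy problem, the piecewise linear approximation happens to hit the exact solution at the mesh nodes, so the printed error sits at rounding level; the point is the pipeline itself: mesh, basis functions, matrix assembly, solve.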
Bring in the computers!
In a way, then, the first few decades of the 1900s, with Ritz and Galerkin, marked the birth of FEM, even though no programmable computers yet existed.
In the 1940s, the mathematician Richard Courant was pioneering finite elements for purely mathematical reasons, namely for specific existence proofs, with no direct connection to engineering.
These were the very early days of programmable computers, and it was the correspondence between Courant and John von Neumann that eventually gave birth to FEM as we know it: a numerical method for computer-aided design.
However, it was civil engineers who first took it up as a design tool: building on these pioneering works, they started doing finite elements for structural analysis in the 1950s.
There was an obvious reason FEA took off in this community: bridges must not fail, and opportunities to prototype such large structures are quite limited.
Thus, FEA became the enabler for building structurally sturdy bridges without excessive use of materials.
Electrical engineers catch up
Over the following decades, it became obvious that FEA was a general tool for solving engineering problems.
In electrical engineering, this became eminently clear to a broader audience with the publication of "Finite Elements for Electrical Engineers" by Peter Silvester and Ron Ferrari in 1983.
Around this time, P. Robert Kotiuga, today one of Quanscient's scientific advisors, came in as a student of Peter Silvester, facing the enormous challenge of taking FEA in electromagnetics from 2D to 3D.
He turned to mathematics.
Technology transfer
Kotiuga’s studies soon led him to the modern mathematics of the era.
As a result, Kotiuga and Alain Bossavit, along with other researchers, introduced Whitney forms to the electrical engineering community in the late 1980s.
Whitney forms, which date back to the seminal works of Hassler Whitney and André Weil in the 1950s, provide a way to do calculus on triangulations independently of the dimensionality of the space.
This is exactly what you want in FEA: recall, you triangulate a space to represent an approximate solution to a PDE. Whitney forms are the natural basis functions for introducing this approximation, attached, for example, to the nodes, edges, faces, or volumes of the finite-element mesh. And they work in any dimension!
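For the mathematically curious, here is what the lowest-order Whitney forms look like on a triangle or tetrahedron, written in terms of the barycentric coordinates λ_i (textbook formulas, reproduced here purely as an illustration):

```latex
% 0-forms, one per node i: the familiar "hat" functions
\[ w_i = \lambda_i \]
% 1-forms, one per edge {i, j}
\[ w_{ij} = \lambda_i \,\mathrm{d}\lambda_j - \lambda_j \,\mathrm{d}\lambda_i \]
% 2-forms, one per face {i, j, k}
\[ w_{ijk} = 2\,(\lambda_i \,\mathrm{d}\lambda_j \wedge \mathrm{d}\lambda_k
            - \lambda_j \,\mathrm{d}\lambda_i \wedge \mathrm{d}\lambda_k
            + \lambda_k \,\mathrm{d}\lambda_i \wedge \mathrm{d}\lambda_j) \]
% The same pattern continues to volumes and to any dimension.
```

The edge functions w_{ij}, in particular, are the famous "edge elements" that proved so natural for representing electric and magnetic fields.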
Whitney forms would become immensely influential in computational electromagnetics and FEA in general.
Moreover, with Kotiuga, the connection of FEA to the mathematics of algebraic topology became more apparent.
In particular, algebraic topological tools of homology and cohomology are essential in electrical engineering: They act as a formal bridge between field theory and circuit theory.
Homology, put simply, identifies the type and number of holes in a region, while cohomology attaches numbers to those holes. And if you know some calculus, you know that integrals attach numbers to regions of space. This, in a nutshell, is how (co)homology ties calculus to topology.
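If you want a single formula tying these threads together, it is the generalized Stokes theorem: integrating a quantity over the boundary of a region gives the same number as integrating its derivative over the region itself.

```latex
% Generalized Stokes theorem: the pairing between regions (homology)
% and integrands (cohomology) that underlies vector calculus.
\[ \int_{\partial \Omega} \omega \;=\; \int_{\Omega} \mathrm{d}\omega \]
```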
From a practical standpoint, (co)homology is a pivotal tool in formulating efficient finite-element models: used cleverly, it lets us solve electrical engineering problems with far fewer unknowns without compromising accuracy.
Useful abstract nonsense
Raising the level of abstraction, all this math has a common framework and source: category theory.
Category theory is a branch of mathematics, often referred to as abstract nonsense despite being extremely useful, that lets you talk about problems independently of the kind of space you are dealing with and see analogies between mathematical results in different types of spaces.
This abstract setting is where Whitney forms, homology, and cohomology all stem from.
Because category theory deals with problems independently of the type of space, it lets us connect all these abstract notions, on a conceptual level, in particular to triangulated spaces and to linear algebra.
Then, concretely, when you triangulate a space, you can still talk about all these things in an exact manner, but now with the tools of linear algebra, in the context of FEA!
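As a toy illustration of doing topology with the tools of linear algebra (a sketch of simplicial homology, not anything from Quanscient's codebase): encode the boundaries of the edges of a mesh as a signed incidence matrix, and the number of holes falls out of matrix ranks.

```python
import numpy as np

# A hollow triangle: vertices {0, 1, 2}, edges (0,1), (1,2), (0,2),
# and no filled-in face. The boundary matrix d1 maps each edge
# (a column) to its endpoints: -1 at the tail, +1 at the head.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0  # no faces, so the face-to-edge boundary matrix is empty

n_vertices, n_edges = d1.shape
betti0 = n_vertices - rank_d1           # number of connected pieces
betti1 = (n_edges - rank_d1) - rank_d2  # number of independent loops

print(betti0, betti1)  # -> 1 1: one piece, one hole
```

Fill in the triangular face and the loop becomes a boundary: rank_d2 rises to 1, and the hole count betti1 drops to zero. In principle, the same boundary-matrix bookkeeping extends to full finite-element meshes, where it becomes the (co)homology machinery mentioned above.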
We can see all this from the bird's eye view of category theory or abstract nonsense.
At bottom, it is this theoretical thinking behind Quanscient's FEA platform that helps us create mathematically correct data structures and build efficient simulation code.
Moreover, when developing our quantum algorithms, we utilize the same abstract mathematical machinery to conceptualize numerical simulations in the quantum setting.
Conclusion
The story of FEA is the story of the continuous, innovative technology transfer from pure math to engineering.
Category theory is a robust mathematical framework that underlies the conceptual thinking behind Quanscient's efficient numerical simulation algorithms.
Lesson learned: It may seem that pure mathematics has nothing to do with engineering or physics, but in fact there is no "engineering math" or "physics math" – there's just math, and in utilizing it, imagination is the only limit.
Let us end with a thought-provoking quote from André Weil that fosters innovative thinking: "Logic is the hygiene of mathematics, but not the essence."
What do you think he meant by that?