Expert contributors for this post
Dr. Çağlar Aytekin
Lead AI Developer
Introduction
While AI is becoming a hot topic in the engineering and simulation world, one of the key questions is how much of a difference it can actually make. Can it truly advance multiphysics simulation processes? What are the real use cases today, and what’s next?
To dive into these questions, we talked with Dr. Çağlar Aytekin, Lead AI Developer at Quanscient. We took a closer look at AI’s role in multiphysics simulations and how it is used in Quanscient’s products, mainly in Quanscient Allsolve. We discussed AI’s impact on simulation speed, accuracy, and engineering decisions, both in short-term developments and long-term projects.
Could you share a bit about your role as Lead AI Developer at Quanscient and the types of projects you've been involved in?
My role at Quanscient is to investigate how AI can improve our main product, Allsolve.
Currently, we are working on a broad range of AI applications covering both generative and predictive AI. With generative AI we are making the user experience smoother and more efficient, and with predictive AI we are improving the solver.
On the generative AI side, we have deployed an advanced simulation assistant that answers our users' questions about the platform by referencing our documentation. We have also deployed an anomaly detector that points out possible human errors in simulations, potentially saving hours of faulty simulation time. On the predictive AI side, we have ongoing work on using AI within simulations themselves. In particular, we are investigating neural operators and training neural networks on our simulation data to offer fast insights into simulation results.
What AI features are implemented in Quanscient Allsolve, and how are they applied in real-world scenarios?
Currently we have deployed a simulation assistant that answers questions based on the Allsolve documentation. It's powered by a retrieval-augmented generation (RAG) pipeline that provides instant, context-aware answers to user queries.
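To make the idea concrete, here is a minimal sketch of how a documentation-based RAG assistant can be wired up. The document chunks, the bag-of-words embedding, and the prompt format are illustrative placeholders, not Quanscient's actual implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documentation chunks for a user question, then hand them to an
# LLM as context. Embedding and LLM calls are placeholders.
import math
from collections import Counter

DOC_CHUNKS = [
    "Allsolve supports transient electromagnetic simulations ...",
    "Meshes can be refined adaptively from the settings panel ...",
    "Boundary conditions are assigned per surface group ...",
]

def embed(text: str) -> Counter:
    # Placeholder embedding: bag-of-words counts. A real system would call
    # an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(DOC_CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
    return prompt  # In production this prompt would be sent to the LLM.

print(answer("How do I assign boundary conditions?"))
```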
We also have an anomaly detection tool that checks the simulation for potential anomalies before it runs. It detects in seconds human errors that would otherwise take our support engineers dedicated hours to pinpoint, and it prevents lengthy and costly simulations from being run on a faulty setup.
What AI features are currently in development for Quanscient Allsolve?
Besides improving the simulation assistant and the anomaly detector, we are also developing a code anomaly detector and a code completer. We are also working on neural operators.
Training neural networks on a set of simulation results offers the possibility of very fast insights into how the results would change when the design parameters change. This means you don't have to run a long simulation for every parameter combination; instead you can get a quick estimate across a vast set of design parameters. Neural operators are the key technology enabling this, so we are working on this topic.
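As an illustration of the general idea (not Quanscient's neural-operator implementation), the toy sketch below fits a small neural network to a handful of precomputed "simulation" results and then evaluates it instantly across a dense sweep of design parameters; the synthetic expensive_simulation function stands in for a real solver run.

```python
# Toy surrogate-model sketch: fit a small neural network to precomputed
# "simulation" results, then use it for instant predictions across a
# dense sweep of design parameters.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a costly multiphysics solve: one scalar output per design.
    return np.sin(3 * x) + 0.5 * x

# A small set of simulated designs (expensive to obtain in practice).
X = rng.uniform(-1, 1, size=(40, 1))
y = expensive_simulation(X)

# One-hidden-layer MLP trained with plain gradient descent.
W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                      # forward pass
    pred = H @ W2 + b2
    err = pred - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)   # backward pass
    gH = err @ W2.T * (1 - H**2)
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Instant "rough" predictions over a dense parameter sweep.
sweep = np.linspace(-1, 1, 1000).reshape(-1, 1)
fast_estimates = np.tanh(sweep @ W1 + b1) @ W2 + b2
print(fast_estimates.shape)  # 1000 estimates without any solver runs
```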
Code completion is also in development. It targets our advanced users who run simulations via our scripting interface. Given our demo codebase, the API documentation, and the code the user has written so far, it generates the code the user requests in free-form language. It is quite similar to commercially available tools such as GitHub Copilot and Cursor, but it is specifically optimized for our Python library. We are also developing a code anomaly detector / code reviewer along the same lines.
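A hypothetical sketch of how such a completer might assemble its context is shown below; the file layout, the API function names, and the send_to_llm placeholder are all assumptions for illustration, not Quanscient's actual interface.

```python
# Hypothetical sketch of assembling context for an LLM-based code completer:
# demo scripts, API documentation, the user's partial script, and a free-form
# request are combined into one prompt. Names are placeholders.
from pathlib import Path

def build_completion_prompt(demo_dir: str, api_docs: str, user_code: str, request: str) -> str:
    demos = "\n\n".join(p.read_text() for p in Path(demo_dir).glob("*.py"))
    return (
        "You complete simulation scripts for a Python simulation library.\n\n"
        f"## Example scripts\n{demos}\n\n"
        f"## API documentation\n{api_docs}\n\n"
        f"## User's script so far\n{user_code}\n\n"
        f"## Request\n{request}\n\n"
        "Return only the code that should be appended to the user's script."
    )

prompt = build_completion_prompt(
    demo_dir="demos/",  # hypothetical folder of demo scripts
    api_docs="add_region(name, ...), set_material(region, ...), solve(...)",  # hypothetical API
    user_code="model = Model()\nmodel.add_region('coil')",
    request="assign copper to the coil region and run a magnetostatic solve",
)
# send_to_llm(prompt)  # placeholder for the actual model call
```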
How do AI features in Quanscient Allsolve affect simulation efficiency and accuracy compared to traditional methods?
Currently, traditional solvers excel in accuracy. However, AI, and especially neural operators, which are still in development, has the potential to predict simulation outcomes on unseen parameter spaces very quickly. While the results may not be highly accurate, they provide fast rough estimates for a large number of simulations. Users can then run the few that look most promising through the traditional solver for full accuracy.
In addition, on the efficiency side, we have specifically developed anomaly detection to reduce human error in simulations. Before a potentially very costly simulation is run, we provide a quick check for human mistakes in the setup in order to save compute hours.
Our LLM-based anomaly detector is built on our demo projects: based on the normality learned from these demos and on general physics knowledge, the LLM application checks the user's project and points out potential anomalies, if any.
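The sketch below illustrates, under assumed project fields and demo patterns, how such an LLM-based pre-run check could be framed as a prompt; it is not the actual detector.

```python
# Hedged sketch of an LLM-based pre-run anomaly check: patterns distilled
# from demo projects plus general physics knowledge go into the prompt, and
# the model is asked to flag likely human errors before any compute is spent.
import json

DEMO_PATTERNS = [  # illustrative "normality" statements, not real demo data
    "Electromagnetic demos always define at least one excitation (coil current or voltage).",
    "Thermal demos set an ambient temperature boundary condition on outer surfaces.",
    "Mesh element sizes stay within a few orders of magnitude of the smallest feature.",
]

def build_anomaly_prompt(user_project: dict) -> str:
    return (
        "You review multiphysics simulation setups for likely human errors.\n"
        "Typical correct setups follow these patterns:\n- "
        + "\n- ".join(DEMO_PATTERNS)
        + "\n\nUser project (JSON):\n"
        + json.dumps(user_project, indent=2)
        + "\n\nList anything that looks anomalous and why, or reply 'no anomalies'."
    )

project = {"physics": "electromagnetics", "excitations": [], "mesh_size_mm": 0.001}
print(build_anomaly_prompt(project))
# The prompt would then be sent to the LLM; the empty 'excitations' list
# should be flagged before compute hours are spent.
```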
In general, AI streamlines simulation processes for engineers through rapid approximations, optimized solver parameters, and data-driven approaches that reduce complexity.
How do AI features in Quanscient Allsolve affect the speed of design iterations?
Currently we’re working on a neural-operator-based surrogate solver that could dramatically speed up design iteration by providing very fast, rough simulation results, so designers invest in real, costly simulations only for the designs the AI highlights.
This approach could save countless compute hours, especially for engineers with limited resources, by quickly filtering out unpromising designs and highlighting the most viable ones.
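The workflow described above could look roughly like the following sketch, with placeholder functions standing in for the surrogate and the full solver.

```python
# Illustrative design-screening loop (assumed workflow, placeholder functions):
# score every candidate with a fast surrogate, then spend full solver time
# only on the most promising few.
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.uniform(0, 1, size=(10_000, 4))   # 10k candidate designs

def surrogate_predict(designs):
    # Placeholder for the neural-operator surrogate: milliseconds per design.
    return designs @ np.array([0.4, -0.2, 0.7, 0.1])

def full_simulation(design):
    # Placeholder for the traditional solver: hours per design in reality.
    return float(design @ np.array([0.4, -0.2, 0.7, 0.1]))

scores = surrogate_predict(candidates)
top_k = candidates[np.argsort(scores)[-5:]]        # keep the 5 best estimates

accurate_results = [full_simulation(d) for d in top_k]
print(f"Screened {len(candidates)} designs, ran {len(top_k)} full simulations.")
```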
What challenges do engineers face when transitioning from traditional to AI-assisted simulation tools?
With a fast but not perfectly accurate neural surrogate solver, engineers will always have to treat the AI results with a degree of care.
However, we would also provide ways to attach some measure of confidence to these results.
Have there been any major challenges or breakthroughs along the way?
Besides the AI projects discussed above, we have also investigated PINNs (physics-informed neural networks) as an alternative to traditional solvers. The major challenges in this particular application are the convergence, speed, and accuracy issues of PINN-based solvers. However, progress in AI is very fast, and in my opinion these challenges will be overcome in the next few years.
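For readers unfamiliar with PINNs, the minimal sketch below trains a network so that its output satisfies a simple ODE, u'(x) = u(x) with u(0) = 1, whose exact solution is exp(x). It uses PyTorch and is purely illustrative; real PINN work targets full PDEs, where the convergence and accuracy issues mentioned above become significant.

```python
# Minimal physics-informed neural network (PINN) sketch: the loss penalizes
# both the ODE residual u'(x) - u(x) at random collocation points and the
# boundary condition u(0) = 1.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)               # collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du - u) ** 2).mean()                     # residual of u' = u
    bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()    # u(0) = 1
    loss = physics_loss + bc_loss
    opt.zero_grad(); loss.backward(); opt.step()

x_test = torch.tensor([[1.0]])
print(net(x_test).item(), "vs exact", torch.exp(x_test).item())
```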
I’m not sure if AI will completely replace traditional solvers, but it will probably be useful in extremely complex problems. Plus, hybrid approaches combining AI and traditional methods are already working very well.
What are the next immediate milestones for Quanscient Allsolve in relation to AI features?
In the short term, we want to make the most of quick and impactful generative AI applications. We are therefore constantly improving our simulation assistant and anomaly detector, as well as working on some new generative AI tools.
We have also built several PoCs around neural operators. They could help automate, predict, and optimize simulations, and make simulations faster and more scalable.
What is the long-term vision for AI in Quanscient?
There are so many opportunities to apply AI across Quanscient’s products. Ultimately, I really believe it will be possible to interact with a simulator purely through language, as if you were chatting with a chatbot. Many steps are needed to reach this goal, but I don’t see any blockers preventing it. So I foresee an agentic product that not only answers the user’s questions but can also take actions based on user requests.
Also, early discussions have started with the quantum team to explore potential AI-driven solutions.
Could you share any significant collaborations or partnerships Quanscient has had in AI-related projects?
Our GenAI partner is Google, as we extensively use their cloud solutions and Gemini, through Vertex AI, for all of our LLM applications. We were also selected by Google as a partner for initial trials of Gemini, as well as for continued funding through cloud credits.
How does the AI team's work align with the engineering and quantum computing teams' work?
AI isn’t just one team’s responsibility; it’s deeply connected to everyone across the company. Every day, we’re in discussions with various teams to co-develop AI-based methods.
For generative AI, we mostly collaborate with the product team, and for predictive AI, we’re working closely with the solver and application engineering teams. We also keep things moving with several Slack channels where we share updates regularly.
Staying closely connected is crucial, and it’s all about having an open mind and a strong belief in AI’s potential.
I just want to highlight that AI is advancing at an incredibly fast pace; it’s probably the most rapidly developing technology in history. So it’s vital that we stay positive and motivated, always pushing forward in this exciting field.