Authors
Juha Riippi
CEO & Co-founder, Quanscient
Dr. Andrew Tweedie
Co-CTO & Co-founder, Quanscient
A few weeks ago, I wrote that hardware engineering is stuck in the “pre-AI” era. The workflows and processes we use look pretty much the same today as they did back in 2019.
The core problem isn't the AI. The problem is the wall between the AI and our tools.
Software engineering is experiencing a massive "AI agentic revolution" right now because software is just text. AI can read it, write it, and execute it natively. Hardware engineering has been left behind because traditional CAD and CAE software are trapped behind heavy, point-and-click graphical interfaces.
To fix hardware engineering, we, too, have to make everything accessible with text. When every parameter, boundary condition, and physics rule can be represented as pure Python code, that wall disappears.
You can finally deploy autonomous AI agents to handle the tedious, low-level setup and validation loops that eat up a large portion of an engineer's day.
With the release of the new Quanscient SDK, this is now possible.
During our live launch event on April 30th, we demonstrated exactly what happens when you remove the UI bottleneck and give an AI agent programmatic access to massive cloud compute—and the world’s most performant multiphysics solver.
Before we break down exactly how that workflow operates, take two minutes to see the vision in action:
A 2-minute demonstration of the full agentic workflow.
The video shows the destination: moving beyond reactive validation to instantly generating optimal solutions. But how do we actually get there? Here is exactly how it works in practice.
A practical demonstration of the full agentic workflow
To show this in practice, we focused on a specific engineering task: building and running a Piezoelectric Micromachined Ultrasonic Transducer (PMUT) simulation.
PMUTs are complex electromechanical systems that require robust multiphysics simulation capabilities.
Before assigning tasks to an AI, it is important to provide the LLM with the right context and environment. We set up a local Python environment, installed the Allsolve SDK alongside libraries like NumPy and Matplotlib, and provided our Allsolve credentials in a secure, local .env file.
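As a rough sketch of that setup, credentials can be kept out of the code with python-dotenv. The variable name below is illustrative, not necessarily the key the Allsolve SDK actually reads:

```python
# Minimal credential-loading sketch using python-dotenv (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from the local .env file
# ALLSOLVE_API_KEY is an illustrative name, not necessarily the key the SDK expects.
api_key = os.environ["ALLSOLVE_API_KEY"]  # keeps secrets out of source control
```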
Crucially, we used Cursor’s “Skills” feature to create an allsolve-sdk skill. This gave the LLM the exact context and expertise needed to effectively use our SDK without guessing.
Once the environment was set, we interacted directly with an AI agent (Claude Opus operating inside the Cursor IDE).
| Tool | Role | Function |
|---|---|---|
| Linear | Task Manager (Communication) | Where the engineer defines the initial parameters (the prompt) and the agent logs its progress or asks clarifying questions. |
| Cursor | IDE (Workspace) | The coding environment where the AI agent lives, works, and outputs the final scripts. |
| Claude Opus | AI Agent (The Brain) | Acts as the "capable intern," digesting the plain-text engineering instructions and writing the actual Python code. |
| Quanscient Allsolve | Solver & Compute Engine | Provides the Python SDK (the programmatic gate) to execute the code, run the physics, and scale massive datasets in the cloud. |
The workflow starts just like assigning a project to a colleague. We created a task in Linear to direct the agent to:
- Create a simple PMUT model
- Extract and plot the relevant outputs
- Perform a mesh convergence study
- Track all progress in Linear
- Create a .pptx report
Then, we simply pointed the agent to that ticket and asked it to execute the plan. The agent digested the requirements and formulated a step-by-step plan.
A human provides the context and instructions so the agent can generate a verifiable plan.
A practical benefit of this workflow is its built-in safety against hallucinations: if the agent lacks information, it asks.
During our demo, when the agent realized the initial prompt didn't specify the lateral extent of the top ground electrode, it didn't guess. It stopped and asked a clarifying multiple-choice question to get the missing context.
The agent safely pauses to ask for missing parameters instead of guessing.
Once the question was answered, the agent wrote the exact Python code required to drive the Quanscient Allsolve SDK natively. Following its own generated plan, the agent then moved straight into validation after generating the model.
The agent automatically ran the model in parallel at four different mesh sizes, tracked how the center frequency shifted, and plotted the convergence curve. This made it possible to determine, quickly and confidently, the exact mesh size needed to achieve 1% accuracy before scaling up.
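For illustration, here is a minimal sketch of what such a convergence loop can look like. The `run_pmut_model` function is a placeholder for the agent-generated Allsolve SDK calls (the real runs execute in parallel in the cloud), and the mesh sizes and returned frequencies are fabricated for the example:

```python
import matplotlib.pyplot as plt

def run_pmut_model(mesh_size_um: float) -> float:
    """Placeholder for the agent-generated Allsolve SDK calls: build and solve
    the PMUT model at the given element size and return the simulated center
    frequency in Hz. The return value here fabricates a converging trend."""
    return 8.0e6 * (1.0 + 0.05 * mesh_size_um)

mesh_sizes_um = [4.0, 2.0, 1.0, 0.5]  # illustrative element sizes
freqs = [run_pmut_model(h) for h in mesh_sizes_um]

# Relative center-frequency shift between successive refinements; target < 1%.
for h, f_coarse, f_fine in zip(mesh_sizes_um[1:], freqs, freqs[1:]):
    shift = abs(f_fine - f_coarse) / abs(f_fine)
    print(f"h = {h} um: shift {shift:.2%}")

plt.plot(mesh_sizes_um, freqs, marker="o")
plt.gca().invert_xaxis()  # finer meshes toward the right
plt.xlabel("Mesh element size (um)")
plt.ylabel("Center frequency (Hz)")
plt.savefig("mesh_convergence.png")
```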
To wrap up the task, the agent generated the PowerPoint presentation summarizing these simulation results for the team.
The agent drives the simulation, performs validation, and automatically generates a team-ready presentation.
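For reference, a report like this can be assembled with the python-pptx library. The snippet below is a generic sketch rather than the agent's actual output, and it reuses the `mesh_convergence.png` plot from the convergence sketch above:

```python
from pptx import Presentation
from pptx.util import Inches

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])  # "Title Only" layout
slide.shapes.title.text = "PMUT Mesh Convergence Study"

# Drop in the convergence plot produced earlier in the workflow.
slide.shapes.add_picture("mesh_convergence.png", Inches(1), Inches(1.5), width=Inches(8))
prs.save("pmut_report.pptx")
```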
Because the agent logs all its progress, code, and outputs back into the task manager, the entire process remains completely transparent and inspectable.
The takeaway is simple: you provide the intent and instructions. The agent handles the coding syntax, the low-level setup, and the reporting.
Scaling up with intelligent optimization
Getting a single simulation to run via code is a great first step, but scale is where this workflow actually transforms R&D.
Because the PMUT model was now defined entirely in Python, we were no longer limited by how fast a human could click through a menu to test new parameters.
We instructed the agent to automatically generate and run 10,000 variations of the design. By leveraging Allsolve’s elastic cloud compute architecture, the agent ran these variations in parallel to generate a massive, physics-aware dataset.
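As a sketch of that fan-out pattern, the loop below samples hypothetical PMUT design parameters and submits each variant through a placeholder `run_variant` function; the real Allsolve SDK calls and parameter bounds differ:

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)

def sample_design() -> dict:
    # Hypothetical PMUT design parameters and bounds, for illustration only.
    return {
        "membrane_radius_um": random.uniform(20.0, 60.0),
        "membrane_thickness_um": random.uniform(0.5, 3.0),
        "piezo_thickness_um": random.uniform(0.2, 1.0),
    }

def run_variant(design: dict) -> dict:
    # Placeholder: in the real workflow this submits one parameterized model
    # to Quanscient Allsolve and returns its simulated outputs.
    return {**design, "bandwidth": 0.0, "sensitivity": 0.0}

designs = [sample_design() for _ in range(10_000)]

# A thread pool illustrates the fan-out; the actual parallelism comes from
# Allsolve's elastic cloud compute, not the local machine.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_variant, designs))
print(f"collected {len(results)} results")
```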
We then used this data to train an AI surrogate model. As we demonstrated in our recent MultiphysicsAI webinar, a well-trained surrogate can evaluate an entire design space in milliseconds.
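The webinar's actual surrogate architecture isn't detailed here, but as one minimal sketch, a scikit-learn regressor can be trained on the sweep results. The inputs and targets below are fabricated stand-ins for the 10,000 simulated designs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(10_000, 3))        # stand-in for the design parameters
Y = np.column_stack([X.sum(axis=1),      # fabricated targets standing in for
                     X.prod(axis=1)])    # bandwidth and sensitivity

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X_train, Y_train)

# A trained surrogate evaluates new candidate designs in milliseconds.
print("held-out R^2:", surrogate.score(X_test, Y_test))
```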
This surrogate allows the AI to map the Pareto front: the optimal set of designs where no single objective (such as bandwidth) can be improved without trading off another (such as sensitivity).
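To make the dominance idea concrete, here is a generic O(n²) Pareto filter over two objectives to maximize; this is a textbook sketch, not Quanscient's implementation:

```python
import numpy as np

def pareto_mask(objectives: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows; every column is an objective to maximize.

    A design is dominated if some other design is at least as good in every
    objective and strictly better in at least one.
    """
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominators = (np.all(objectives >= objectives[i], axis=1)
                      & np.any(objectives > objectives[i], axis=1))
        if dominators.any():
            mask[i] = False
    return mask

# Example: columns are (bandwidth, sensitivity) predicted by the surrogate.
predictions = np.random.default_rng(1).uniform(size=(2_000, 2))
front = predictions[pareto_mask(predictions)]
print(f"{len(front)} designs on the Pareto front")
```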
Once the system identifies the best candidates along that Pareto front, the workflow automatically validates those specific designs by running them back with Quanscient Allsolve.
This guarantees that the final optimized designs are not just statistical estimates, but are fully grounded in real physics.
Mapping the Pareto front enabled the AI to identify a design with 35% more bandwidth and 6.5% higher sensitivity, with the best results validated with Allsolve.
Without any manual trial and error, this agentic workflow identified a PMUT cell design that delivered a 35% increase in bandwidth and a 6.5% increase in sensitivity, all while perfectly hitting the target center frequency.
This is the true paradigm shift
This result is exactly why the traditional engineering workflow is becoming obsolete.
When you have an AI agent automating the model setup, cloud compute generating the data at scale, and MultiphysicsAI mapping out the best possible outcomes, you completely flip the R&D process.
We are no longer forced to ask the traditional, reactive question: "This is my design... how will it work?" Instead, by letting AI explore the entire design space, we can finally ask the question that actually matters: "These are my specifications... what is the optimal design?"
Building permanent assets instead of disposable scripts
There is one more crucial benefit to this approach. When you run a simulation in a traditional graphical interface, the setup is trapped in that specific file. If an engineer leaves the company, their implicit knowledge of exactly how they meshed or parameterized that model usually leaves with them.
By shifting to a code-based workflow, simulation setups stop being fragile, disposable files living on a single laptop. They become permanent organizational assets.
Because the agent writes the setup as pure Python, the underlying SDK code unlocks entirely new capabilities:
- Programmatic execution of the simulation
- Effective version control through Git
- Further agentic modifications of the code
- Agentic debugging, testing, and validation
And while the model is built with code, the resulting project is still highly accessible. It is fully parameterized, easily shareable with colleagues, fully inspectable, and can still be driven via the Quanscient UI.
It becomes centralized IP. Your team can pull it down next year, or another AI agent can use it as context to build upon—like taking that validated single PMUT unit cell and instantly extending it into a full 5x5 array.
Another agent can use the validated unit cell as context to generate a full 5x5 array.
Every project you complete makes the next one faster.
Stop testing designs and start generating solutions
Allsolve’s SDK opens up the power of agentic workflows for hardware engineers.
It lets AI agents build and test complex multiphysics simulations, freeing human engineers to finally focus on high-level creative problems. When approached correctly, the work generated by this process is documented, tested, and inspectable.
Overall, the shift from testing designs to generating solutions is happening right now. If you want to see exactly how this stack operates in practice, you can watch the full technical breakdown on the webinar recording page.
Or, if you are ready to see what code-driven hardware engineering looks like applied to your own models, you can request a live demo today.