The NAFEMS World Congress 2025 will host more than ten interactive workshops spanning a wide range of engineering simulation and analysis topics. Led by experts and NAFEMS working groups, these sessions are designed to involve attendees directly in the discussion, ensuring your voice helps shape where simulation is headed.
- Validation of Engineering Simulations
- AI/ML in Simulation: Practical Observations and Thoughts on Simulation Governance
- The New AIAA Standard for Code Verification of CFD, the Importance of the Observed Order of Accuracy, and Exemplars to Show the Process
- Status of Data-Driven Engineering (details to come)
- Standardisation for Manufacturing Process Simulation
- VMAP - Vendor-Neutral Standard for CAE Data Storage (details to come)
- Improving How we Teach FEA
- Is Engineering an Art or a Procedure?
- Key Factors for Effective Engineering Virtualization
- How to Model What we Don't Know (details to come)
- Simulation-Driven Product Validation Strategies (details to come)
- Professional Simulation Engineer (PSE): How to Get Certified
This session is organized by members of the NAFEMS Simulation Governance and Management Working Group. There is currently great excitement and enthusiasm around how Artificial Intelligence and Machine Learning will change the way simulation work is performed. At one end of the spectrum, such tools may become useful assistants, helping the analyst set up, post-process, and document simulations built with otherwise traditional tools. At the other extreme, AI/ML would replace traditional physics-based simulation entirely, producing fast results from its training set with limited or no traceability of how the answer was determined.

The simulation governance implications of these technologies are currently uncertain and troubling. Existing tools for Verification and Validation are generally not fully applicable, and new methods need to be developed to evaluate the adequacy of a training set for the new problem whose results are being requested. Some providers claim the ability to evaluate error, but the robustness and accuracy of these evaluations still need to be studied and proven. Distinctions also need to be made between tools based purely on data fitting and 'physics-informed' models.

In this landscape of vendors promoting their products on one side and sceptics of change on the other, the SGMWG will share practical observations on how simulation governance can be applied to these new tools today, with the expectation that this will form a starting point for the eventual emergence of formal guidance as tools and methods mature. For example, many simulation users are accustomed to creating response surfaces or reduced-order models as a fast, efficient means of seeking out optimum designs; once a candidate design has been identified, a full-fidelity model is run as part of validating it. In many ways, AI/ML models can be compared to a high-dimensional response surface. Such tools can thus be an efficient way to rapidly iterate a design, the final version of which is then subjected to a full-fidelity analysis using traditional simulation tools and appropriate VVUQ efforts.
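The surrogate-then-validate workflow described above can be sketched in a few lines. This is a minimal illustration, not any specific NAFEMS-endorsed process: `full_fidelity_model` is a hypothetical stand-in for an expensive physics-based simulation, and a simple quadratic response surface plays the role an AI/ML surrogate would play in practice.

```python
# Sketch: fit a cheap response surface to a few expensive samples,
# optimise the surrogate, then re-check the candidate optimum with
# one full-fidelity run. All names here are illustrative.
import numpy as np

def full_fidelity_model(x):
    """Stand-in for an expensive physics-based simulation."""
    return (x - 1.7) ** 2 + 0.3

# Sample the expensive model at a handful of design points
xs = np.linspace(0.0, 3.0, 7)
ys = np.array([full_fidelity_model(x) for x in xs])

# Fit a quadratic response surface (the cheap surrogate)
coeffs = np.polyfit(xs, ys, deg=2)
surrogate = np.poly1d(coeffs)

# Optimise the surrogate: vertex of the fitted parabola
x_opt = -coeffs[1] / (2.0 * coeffs[0])

# Validate the proposed design with a single full-fidelity run
predicted = surrogate(x_opt)
actual = full_fidelity_model(x_opt)
print(f"surrogate optimum x={x_opt:.3f}, "
      f"predicted {predicted:.3f}, full model {actual:.3f}")
```

The key governance point is the final step: the surrogate is only used to propose a design, and acceptance rests on the full-fidelity evaluation.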
This session is organized and executed by members of the NAFEMS Simulation Governance and Management Working Group. The main focus will be a presentation on the Guidelines for Validation of Engineering Simulation, which has just been approved for publication. This publication defines an expanded notion of validation, moving beyond the dedicated high-quality experiments commonly referenced in existing standards. A spectrum of validation methods is presented, ranging from the strict definition of validation per the ASME VVUQ standards to weaker approaches such as expert review. Attributes of validation rigor are defined to aid in assessing the credibility of simulations. The goal is to give industrial users greater flexibility, allowing them to select a level of validation rigor consistent with the applications and risks associated with their simulations, since not all users need the highest levels of rigor associated with the aerospace or medical device industries.
AIAA has recently published a recommended practice for code verification for CFD, R-141-2021, and is now working towards publication of a full new international standard. This will be, to the authors' knowledge, the first international standard for code verification in any field of engineering simulation and, as such, it represents an important step forward in how confidence is achieved in the simulation tools that the NAFEMS membership uses.
Of key importance is the concept of the observed order of accuracy. Code verification requires the error in a simulation to be quantitatively evaluated, which by implication requires the exact solution of the simulated problem to be known. The error is calculated on a series of progressively refined meshes so that the trend in the error with spatial discretization can be determined. Specifically, this trend is plotted on a log-log graph, where it should conform to a straight line; the gradient of that line is the observed order of accuracy. This should then be compared with the formal order of accuracy claimed by the developer of the code.
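The procedure just described reduces to a straight-line fit in log-log space. The sketch below assumes an exact (e.g. manufactured) solution is available so the discretisation error can be evaluated directly; the mesh sizes and error values are synthetic, chosen to mimic a nominally second-order scheme.

```python
# Computing the observed order of accuracy from a mesh refinement study.
import numpy as np

# Characteristic mesh sizes from a series of uniformly refined meshes
h = np.array([0.100, 0.050, 0.025, 0.0125])
# Discretisation error (e.g. an L2 norm of computed minus exact solution)
err = np.array([4.1e-3, 1.0e-3, 2.6e-4, 6.4e-5])

# On a log-log plot the points should fall on a straight line;
# its gradient is the observed order of accuracy.
slope, intercept = np.polyfit(np.log(h), np.log(err), deg=1)
print(f"observed order of accuracy = {slope:.2f}")

# Compare against the formal order claimed by the code developer
formal_order = 2.0
if abs(slope - formal_order) > 0.2:
    print("observed order does not match the formal claim - investigate")
```

If the observed order falls short of the formal order, something in the verification chain (solver settings, boundary treatment, or the pre/post-processing steps) deserves investigation, which is exactly what the exemplars below illustrate.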
The NAFEMS SGM and CFD working groups have been developing code verification exemplars to demonstrate the processes set out by AIAA: specifically, real cases using real commercial software tools in which the observed order of accuracy is found not to correspond to the formal order claimed by the software developer. In each case, the reason for the discrepancy is explored and identified, so that suitable modifications to how the tool is used can be made and the tool can be used with confidence. The exemplars demonstrate the essence of the new standard and why considering the observed order of accuracy is important. Whilst the focus of the standard is on the performance of the simulation solver, as the exemplars show, considering the observed order of accuracy can also identify potential issues with the pre- and post-processing steps.
In Manufacturing Process Simulation, we use multi-scale, multi-physics models to obtain high-fidelity predictions of the transformations in the material during the production of metal or composite parts. This often requires material data for which there is no standardised way of testing. Likewise, the material modelling and manufacturing process simulation approaches that exploit these data to predict what will happen during manufacturing are rarely standardised.
The consequence for industry is low re-use of material data and models: in complex supply chains, material characterisations are often repeated and models created again from scratch. Worse, opportunities to use simulation are lost because people lack the confidence to re-use existing data and models and therefore simply refrain from using simulation at all.
The objective of this session is to reflect on the current state of standardisation in this field and to identify opportunities for improvement.
Automation is a very strong trend in engineering and FEA. We try to automate as much as possible, simply to make tasks easier, faster, and cheaper. But can engineering design really be automated, and to what extent? And what would be the consequences? To put it differently: is engineering an art, or a procedure? In this workshop we will dive deep into this question and discuss it together.
There seems to be a spectrum in the FEA world. At one end, you can learn the mathematics behind FEA to an expert level; at the other, you can learn to use the software with the most sophisticated options. But for me it's a triangle, and at its top sits the actual understanding of how things work and what you're trying to achieve. This is often omitted in how we teach FEA (and engineering in general). In this workshop I want to dive deeper into this idea and discuss it together.
Virtualization of engineering has the potential to address strategic goals such as reducing time-to-market, managing increasing product complexity and variety, or raising the flexibility to react to the market. Companies can realize it at various levels. But what should a successful transformation look like? The workshop aims to provide an impulse for successful engineering virtualization and to identify potential and fields of activity by analyzing and discussing industry examples with the participants. It will cover key aspects such as strategic goals, boundary conditions, process, organization, technology, data, and mindset. Depending on the discussion, experience-based recommendations will be added to the conclusion.
This short workshop will provide information on the NAFEMS Professional Simulation Engineer (PSE) certification scheme, presenting why it is useful, how it works, the application process, and some client case studies. PSE enables simulation engineers to demonstrate competencies acquired throughout their professional career, as recorded in the PSE Competency Tracker, described at https://www.nafems.org/professional-development/certification/