Director of Technology – nCode Products at HBM Prenscia
Fatigue Simulation - A Reflection on the Past, Present and Future of Fatigue Simulation
“Fatigue is the progressive weakening of a material caused by cyclic or otherwise varying loads, even though the resulting stresses are well within the static strength limits. The art of fatigue simulation is to be approximately right rather than exactly wrong.” Prof. Keith Miller
This presentation reviews the history of fatigue design and simulation. Starting with the famous de Havilland Comet story, it highlights how a succession of devastating fatigue failures ruined the reputation of the world’s first commercial jet airliner. The physics behind these failures is introduced, and the presentation shows how they can be successfully modelled using a basic simulation technique dating back to August Wöhler in 1852.
Long before the invention of computers and pocket calculators, engineers developed mathematical models to simulate progressive failure through fatigue cracking. The presentation considers both phenomenological (or statistics-based) simulation models, and constitutive (or physics-based) models. It shows how the sophisticated fatigue models used today are a successful combination of these two approaches.
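The stress-life (S-N) approach that grew out of Wöhler’s work is still the simplest way to illustrate fatigue simulation. The sketch below is a minimal Python illustration of that idea, assuming a Basquin-type S-N curve and Miner’s linear damage rule; the material constants and the load histogram are hypothetical and are not taken from the presentation.

```python
# Minimal stress-life (S-N) sketch: Basquin's relation plus Miner's rule.
# All material constants and the load histogram are hypothetical, chosen
# only to illustrate the calculation; they are not from the presentation.

SF_PRIME = 900.0   # assumed fatigue strength coefficient, MPa
B_EXP = -0.10      # assumed fatigue strength exponent

def cycles_to_failure(stress_amplitude_mpa: float) -> float:
    """Cycles to failure from Basquin's relation: S = SF_PRIME * (2N)**B_EXP."""
    return 0.5 * (stress_amplitude_mpa / SF_PRIME) ** (1.0 / B_EXP)

# Hypothetical rainflow-counted histogram: (stress amplitude in MPa, counted cycles)
histogram = [(400.0, 1_000), (300.0, 10_000), (200.0, 100_000)]

# Miner's linear damage rule: D = sum(n_i / N_i); failure is predicted when D >= 1
damage = sum(n / cycles_to_failure(s) for s, n in histogram)
print(f"Damage per load block: {damage:.3f}")
print(f"Predicted life: {1.0 / damage:.1f} repeats of the block")
```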
“No one believes a simulation except the design engineer, but everyone believes the test results except the test engineer!” Anon
At some point all simulation results require qualification by physical laboratory testing. The aim of a good simulation is to pass the qualification test first time and avoid the costly route of iterating design changes through physical testing. The presentation describes how simulation and testing are successfully combined so that designs are both reliable and cost effective. In particular, it highlights the differences between parametric tests, which provide input parameters for the simulation, and qualification tests, which ensure the design is satisfactory for production.
As designs and simulation models grow in complexity, the results become less certain. Today it is common to use kinematic simulation models to characterise the dynamic response of structures, as well as multi-physics simulations for modelling fluid flow, magnetic fields and thermal transients. Materials are also becoming more complicated: composite and additively manufactured components are increasingly commonplace, and manufacturing simulations are often required to identify the local material parameters. So exactly how accurate are the simulation models?
The fatigue design of mechanical systems has historically followed a ‘deterministic’ process: for a given set of inputs it returns a consistent set of fatigue life results with no scatter. In reality the inputs are statistically uncertain – they have an expected value and a variability. Deterministic design methods take no explicit account of this uncertainty. In practice, the designer applies a safety factor to each input parameter to account for the uncertainty, and often an additional safety factor is applied to the final result to allow for ‘modelling errors’. In most cases the engineer is fairly certain that the simulation results are conservative, but cannot state with any confidence what the final safety margin, reliability or failure rate will be. Furthermore, it is almost impossible to qualify the simulation by experimental testing because the test lives are significantly higher than the conservative simulations would suggest.
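As an illustration of that deterministic process (all values and safety factors below are hypothetical, and the presentation does not prescribe this particular recipe), factors can be stacked onto the same simple stress-life model: a single conservative life comes out, but nothing is said about the reliability behind it.

```python
# Deterministic sketch: every input is factored, one conservative life comes out,
# and nothing is said about the resulting reliability or failure rate.
# All values and safety factors below are hypothetical.

SF_PRIME = 900.0            # assumed Basquin strength coefficient, MPa
B_EXP = -0.10               # assumed Basquin exponent

nominal_stress = 300.0      # MPa, from the stress analysis
load_factor = 1.3           # safety factor on the applied loading
material_factor = 1.2       # safety factor on the material strength
life_factor = 2.0           # extra factor on the final life for 'modelling errors'

design_stress = nominal_stress * load_factor
design_strength = SF_PRIME / material_factor

# Factored Basquin life, then a further factor applied to the result
factored_life = 0.5 * (design_stress / design_strength) ** (1.0 / B_EXP)
design_life = factored_life / life_factor

print(f"Deterministic design life: {design_life:.0f} cycles")
```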
In comparison, a ‘Probabilistic Fatigue Simulation’ method is ‘stochastic’ in nature. To take advantage of it, the uncertainties in the inputs and in the analysis model must be properly quantified. Whilst it is possible to estimate the probability distributions of simple parameters like lengths, thicknesses and radii, many parameters are much more complicated – for example, the ‘Target-driver’ analysis of a car. Uncertainties are usually divided into two types:
Epistemic uncertainties are reducible through better knowledge – for example, more accurate measurements of the target loading environment, or measurement of residual stresses. Where epistemic uncertainties prevail, a cost-benefit study will reveal whether it is prudent to invest more money in measurement and simulation, or to absorb the costs through over-design.
Aleatoric uncertainties are attributable to the inherent variability in a system and cannot be reduced – for example, the scatter in a material fatigue curve.
We are often tempted to produce ever more precise simulation models, but in many cases the most significant uncertainty is not associated with the simulation model itself but with the input parameters. The presentation considers how the stochastic design method is used in practice, as sketched below.
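One common way of realising such a stochastic method is Monte Carlo sampling of the input distributions. The sketch below assumes hypothetical distributions and reuses the simple Basquin life model, purely to show how input scatter propagates into a distribution of lives and an estimated probability of failure; the presentation does not prescribe this particular implementation.

```python
import numpy as np

# Monte Carlo sketch: propagate assumed input scatter through a simple
# Basquin life model. All distributions and targets are hypothetical.
rng = np.random.default_rng(seed=1)
N_SAMPLES = 100_000

stress = rng.normal(loc=300.0, scale=20.0, size=N_SAMPLES)                # MPa, load scatter
sf_prime = rng.lognormal(mean=np.log(900.0), sigma=0.08, size=N_SAMPLES)  # MPa, material scatter
b_exp = -0.10                                                             # assumed Basquin exponent

life = 0.5 * (stress / sf_prime) ** (1.0 / b_exp)   # cycles to failure, per sample

target_life = 1.0e4                      # cycles, hypothetical design requirement
prob_failure = np.mean(life < target_life)

print(f"Median predicted life: {np.median(life):.3g} cycles")
print(f"Estimated P(life < {target_life:.0e} cycles): {prob_failure:.3f}")
```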
A case study is presented.
Andrew holds a PhD in Mechanical Engineering and a Master’s degree in Civil and Structural Engineering. Specialising in structural dynamics, vibration, fatigue and fracture, he has introduced many new technologies to the aerospace and automotive industries. He holds a European patent for the ‘Damage monitoring tag’ and developed the new vibration methods used for qualifying UK military helicopters. He has worked in consultancy with customers across the UK, Europe, the Americas and the Far East, and has written publications on fatigue, digital signal processing and structural health monitoring. He is a founding member of the NAFEMS PSE Certification scheme and sits on the NAFEMS committee for Dynamic Testing. He is also a visiting lecturer on structural dynamics at the University of Sheffield.