With the evolution of computer science, numerical methods such as the finite element method are increasingly used to understand physical problems. These tools are often used as an alternative to very costly experimental methods. Finite element analysis is already used in fields such as mechanical and civil engineering, crash analysis and biomechanics, allowing a detailed investigation of local strains and stresses. In impact biomechanics, for example, FE codes are used to simulate the human body (soft tissues, bones, etc.) together with its environment: helmets, airbags, cars, barriers (Whitworth et al., 2004). The majority of mechanical design is based on static analysis; there are numerous applications, however, in which it appears necessary to take into account highly non-linear, dynamic phenomena (Awrejcewicz et al., 2003, 2004).
For the design of a mechanical product, numerical methods are nowadays widely used at different steps of the life cycle: at the beginning of the design process for design optimization (to investigate different solutions), for the comprehension of the physical phenomena which happen during a test (from a diagnostic point of view), in the development of standards, etc.
For mechanical structures under impact, many problems remain at the different steps of the design, even if a lot of improvements have been made over the last decades. A particular point concerns the way to transfer CAD models towards finite element models without loss of information. The problems of exchange standards and data management can also be raised.
The objective of the present chapter is to give the theoretical foundations of crash analysis and to show how this simulation step can be integrated in the design process. Explicit finite element software such as Radioss (Altair ©) can be used for crash analysis, but many difficulties can arise during such an analysis. Problems can come from the size of the model, which can lead to time-consuming simulations. So, for numerical models with a lot of elements, how can the computation time be controlled in order to optimize the simulation duration?
During the design process, how can the simulation, the data and the results be managed in a context of collaborative design?
All these questions have to be raised in order to keep a critical point of view and to use numerical simulation for an optimized and competitive design in an industrial framework.
2. The use of numerical simulation in the process of product design
2.1. Evolution of the product design
Within the current economic and industrial context, companies seek better cost control and want to streamline their product design in order to reach the famous “cost/quality/delay” objectives. This involves the development of new methods in the design process, with the enhancement of concurrent engineering contexts.
The engineering process is a set of interlinked activities involving many actors in different areas of expertise who depend on each other. The design process is an activity of the engineering process which is absolutely essential in the product lifecycle (AFNOR, 1994).
In the context of minimizing design time and parallelizing the activities of the design process, industrial practices have evolved from an engineering process divided into sequences or phases to a concurrent or integrated engineering process (Fig. 1). These concurrent design methods aim to enhance collaborative work in order to increase the responsiveness of the company and to reduce costs. They are realized by parallel design activities and by the enhancement of collaborative data sharing between resources and actors in the company.
Design process evolutions were followed by new design methods and nowadays, with the use of 3D geometrical product components in CAD files, engineers include parameters and expert rules (considered as knowledge) to drive the geometry of CAD models through parametric and variational approaches (Fig. 2). These models are termed associative because they allow engineers to easily modify the geometry of a component by changing parameter values and to generate new product architectures very quickly.
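The idea of a parameter-driven, rule-checked geometry can be sketched as follows. This is a deliberately minimal, hypothetical illustration (the class, the plate shape and the thickness rule are invented for this example, not taken from any CAD system): the geometry is regenerated from named parameters, and an embedded expert rule rejects values outside an assumed manufacturable range.

```python
# Hypothetical sketch of an associative parametric model: the geometry of a
# simple rectangular plate is driven entirely by named parameters, so changing
# a value regenerates a new architecture without remodelling.

class ParametricPlate:
    def __init__(self, length, width, thickness):
        self.params = {"length": length, "width": width, "thickness": thickness}

    def set_param(self, name, value):
        # Expert rule embedded as knowledge: thickness must stay manufacturable
        # (the 0.5-5.0 mm range is an assumption made for this sketch).
        if name == "thickness" and not (0.5 <= value <= 5.0):
            raise ValueError("rule violated: 0.5 <= thickness <= 5.0 mm")
        self.params[name] = value

    def geometry(self):
        # Regenerate the (here trivial) geometry from current parameter values.
        p = self.params
        return {"volume": p["length"] * p["width"] * p["thickness"]}

plate = ParametricPlate(100.0, 50.0, 2.0)
plate.set_param("thickness", 3.0)   # drive the design by a parameter change
print(plate.geometry()["volume"])   # 15000.0
```

A real associative model would regenerate surfaces, features and downstream meshes in the same spirit: change a value, re-evaluate the rules, rebuild the geometry.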
The aim is to reduce routine design (an estimated 80% of the design process), to test a large range of product architectures very quickly, especially in the upstream phase of the design process, and to enhance product quality while reducing time and cost. This is in accordance with the Design For X (DFX) approach, which emphasizes the importance of considering the overall constraints of several design activities, especially in the upstream phase of the design process, to avoid major conflicts and to limit redesign cycles.
2.2. The role of numerical simulation in the design process
While the design process has evolved, numerical simulation has also evolved considerably to become a key area in product design. Initially used at the end of the design process for validation or presentation activities, simulation is currently used throughout the design process and especially in the upstream phase (trade-off, pre-design), using CAD/CAE integration and parametric models to drive the design and identify the better product architecture concepts earlier.
Thus, nowadays, it seems necessary to use numerical simulation, especially finite element simulation, to lead the way to innovation. In the early design phases, numerical simulation allows for a better and quicker design. This is particularly true in the area of mechanical systems, and more specifically in the automotive industry, where the development speed has to be increased. That is the reason why crash simulation techniques are gaining an increasing role in product development, replacing time-consuming validation testing.
These evolutions have led to a strong connection between the design process and numerical simulation, and today we talk about a “simulation driven design method” (Fig. 3). In this way, simulation helps to streamline the design process and to better take into account the constraints from the various expert domains involved in the product design. Indeed, the idea is to minimize physical prototypes, which are very expensive to make, and to use simulation even for certification. Thus, one classical objective in the automotive industry is to develop a car with just one prototype, right the first time.
3. CAD/CAE integration method
This connection is in fact a dependency which is not yet well handled. Indeed, there exists a gap between designers and analysts. The large amount of heterogeneous information handled in the design process, combined with the low level of interconnection between CAD and crash simulation software tools, often leads to data discrepancy and incoherence. The data and information are often scattered and duplicated, thus preventing data coherence, traceability and reuse, and inhibiting the respect of the design step sequences. This situation prevents companies from turning the information and know-how embedded in their geometric and simulation models into shared, structured knowledge that can be capitalized.
Different kinds of approaches exist nowadays to facilitate this connection between the design process and the use of numerical simulation. Overall, these approaches try to develop design/simulation integration in a collaborative environment.
In this context, support tools and methods for simulation in design have been developed. The aim of these tools and methods is to bind design and simulation tools (coupling software in a unique environment), to create a real link between the geometry of a component and its simulation context, and to automate the simulation task for the designers (Fig. 4). They are based on parametric modelling, idealization, meshing and optimization methods. Some approaches are able to automate the transition from the geometric model (CAD file) to a numerical model by methods of automatic generation (from idealized models) or mesh discretization to generate a finite element model.
To give an industrial example, CAD/CAE integration models used in upstream design activities enable linking the geometrical design and the numerical simulation to construct “workbenches” dedicated to specific product components and physical domains. A workbench allows engineers to modify the geometry of a generic component by a parameter-driven design method. Then the (idealized) model is automatically or semi-automatically re-meshed and the calculation job is launched on a CPU. Finally, the engineer retrieves the results for analysis (Fig. 5).
The workbench, used in several iteration loops, allows engineers to test several component architectures very quickly and to identify the main design concepts, validated or not by simulation. The workbenches (also called expert models) are very different and heterogeneous because they are used in a large diversity of practices, with a diversity of tools, in a diversity of physical domains and at different moments in the design process. The expert models are based on various geometric representations, with the advantage of product representations tailored to each individual situation.
Following this example, using design/simulation integration throughout the design process allows for more flexibility and performance, and attracts important interest in both the scientific and industrial domains.
3.1. Interest of design/simulation integration
The interests of bringing design and simulation closer together are multiple and can be grouped into three main parts:
First, collaborative work with traceability and coherence between design and simulation activities:
Today, engineers work in a concurrent engineering context, which means they need to share an important volume of information across heterogeneous design and simulation activities. Each activity may take place at a different site, using a large range of tools which are not able to communicate with each other. If design and simulation are totally independent and unsynchronised, it is very difficult to take into account an update in a model which impacts other models. The aim is to gather engineers around a collaborative model or a common tool which guarantees the link between design and simulation, allowing better traceability and coherence. Thus, design and simulation integration improves collaborative work in a project.
Next, reducing routine design and better taking into account the constraints from several areas of expertise:
Linking design and simulation allows the constraints from several areas of expertise to be better taken into account. With parametric models (using associativity), the constraints from the geometric design are taken into account faster in the simulation process, and conversely, which means simulation results can impact the design and drive it. Hence, it is easier to make loops between design and simulation and to validate concepts by simulation.
For example, with the classical method, a design tool is used for component modelling, and then specific simulation tools are used for meshing, pre-processing, calculation and post-processing for the first design concept tested; then it is necessary to start again for the next design modification. This takes a very long time and engineers cannot test numerous product architectures.
With a design/simulation integration method it is possible to reduce routine design time several-fold (four times or more) starting from the second modification loop (Fig. 6). This method leads to a gain in quality, because engineers can test a large range of product designs and identify the best one, while limiting the time consumed.
Finally, better control of the design and simulation activities, which allows the design process to be streamlined:
Simulation groups several complex activities which require experts using specific tools. With design/simulation integration, the automated processes allow designers to use simulation with only a low level of simulation knowledge. It also allows know-how (simulation processes, constraints, etc.) to be capitalized and secured in models, thus streamlining the design process.
While design/simulation integration is now commonly used in industry and offers significant performance gains in the design process, some domains, such as crash, have particularities. These come from the size of the design and simulation models handled and from the specific design context of crash activities.
3.2. Characteristics of crashworthiness simulations
Compared to other kinds of simulations, such as structural or vibration analysis, vehicle crash analysis has some typical characteristics worth discussing.
We can start our discussion with the fact that all the carmakers around the world decided some years ago to reduce their need for physical prototypes. It seems very difficult at the present time to believe that real vehicle crash experiments can be avoided, but it is one of their objectives.
We will deal neither with pedestrian protection evaluation nor with biomechanical aspects of crashworthiness analysis in this chapter; these topics also have their own very particular aspects.
The first aspect that appears when reviewing FEA models for crash is the complexity of such models.
Most of the time, crash models embed hundreds of parts, from the main body panels to small hinges. They embed visible parts (wheels) and non-visible ones (outer CV joints). Models include heavy parts (battery) but also light ones (foam), etc.
Because crash analysis is mainly a problem of intrusion of one part into another, geometries are often modelled as close as possible to reality; shapes are complex.
The fact that models include a lot of parts implies that the connections between these parts must be defined.
In reality, parts and components are assembled using welding (seam welding and spot welding), bolts and even, more and more, glue.
Thus, in addition to including hundreds of parts, the FEA model for crash will also need thousands of connection definitions and thousands of connection property definitions (fracture limits, etc.).
Designing a vehicle body does not really depend on routine design, but it has a strong impact on several other parts of the car, such as the power unit, the cockpit, the frame, etc.
Hundreds of parts, thousands of connections, millions of nodes: a huge problem in terms of degrees of freedom (DOF).
Because of the size of the problem and because of the transient aspect of the simulation, the computation phase of a crashworthiness analysis can last several days.
Each year, even though the power of computers increases, the duration of a typical crash computation remains the same.
Engineers are not yet in the process of stabilizing their models. They enrich them with more and more detailed parts, with finer meshes, with more precise contact management, etc. This way the computer power is harnessed to serve the quality of results instead of reducing computing time.
The big size of crash simulation models unfortunately also makes them difficult to manipulate during the pre-processing and post-processing phases. These phases are also very demanding in terms of computing power for crash simulations.
Whereas some years ago car styling was the most important factor for final customers, the crashworthiness aspect appears more and more as a key point in the choice of a new vehicle.
Nowadays it is not uncommon to find information regarding the Euro NCAP results of a new car directly in its advertisements, or to see crash test dummies “playing” in such advertising.
The priority between style and crashworthiness has changed in these last few years.
Thus crash has a strong impact on product design cost, and that is the reason why industry shows an important interest and invests in research and development. We propose to review some of these efforts in the next section.
4. Several industrial methods for CAD/CAE integration
4.1. Approaches with strong links to CAD
It is now possible to deploy some approaches based on strong links between geometries and FEA components. Unfortunately, in these cases body shapes have to be simplified. Most of the time, these kinds of approaches are used at the very early stage of a new project; this last point makes them very interesting.
Thanks to the strong link between CAD geometries and FEA, and because the geometries are simplified, meshing operations can be performed using an automatic mesher. The link between CAD and FEA, together with the high level of automation, makes short iteration loops possible.
4.1.1. AVP - an example based on skeleton and simplified geometry
AVP is a set of methodologies and CATIA V5 workbenches developed for a French car maker. It offers a team of engineers with expertise in CAD and analysis the possibility to quickly model a new vehicle. Within four weeks (Fig. 7), a new body style can be defined. The whole body structure is divided into hollow parts, junctions and panels. Generally, the same platform can be reused from one project to another.
A skeleton, consisting of strongly parametric geometry, controls the links between parts. Most parts are represented by multi-sections, each section consisting of a five-segment polyline.
The high level of simplification guarantees the automatic update of the whole body on design changes. It also makes the automatic quadrangle meshing of the body possible. The process complies with the organisation's meshing standards.
Specific CATIA V5 workbenches have been developed in order to pre-process a crash analysis case within the software.
The user can model the specific connections that are validated within his organisation. He can define all the types of features needed for crash analysis (sections, accelerometers, contact interfaces, etc.).
Finally, the high level of automation and the complete integration of these methodologies and tools within a unique software interface allow short loop iterations.
This kind of approach is very efficient during early design phases. It proposes an agile geometry, able to represent several architectures.
But when the car concept becomes mature, engineers need to design more and more precisely. Although geometries are parametric, they cannot evolve to more detailed geometries.
That is the big limitation of the approach.
4.1.2. Fast Concept Modeller - an example based on productive Design tools
FCM is a set of tools aiming at helping designers create vehicle geometries very quickly and easily.
Based on very productive tools, FCM allows users to model a vehicle manipulating geometry objects directly on the screen.
Fast Concept Modeller is a single CATIA V5 workbench. The user interface favours “free hand” actions.
The resulting geometries, as with the AVP approach, are parametric and very simplified. During a new project, the vehicle geometries can become more and more detailed: they can evolve from a beam model to a more complex beam-shell model including fillets, multi-flanges, etc.
Regarding the FEM functions included in the software, shells can be meshed using a batch meshing technique (ANSA). In that case, the property and connection attributes defined on the geometry are directly transferred to the finite element model.
At the early stage of the process, beams are used. In that case, the car geometry is automatically discretized using variable cross-section beams. This process is very powerful when optimization loops are engaged on the beam structure: FCM can pilot the vehicle geometry from the result of such an optimisation.
4.1.3. Approaches using off-the-shelf software
Powerful pre-processing software exists that is fully integrated into CATIA V5 and allows expert simulation set-up (Fig. 8). Such software relies on the CATIA V5 philosophy (all the model features have geometry support) but also extends the natural capabilities of CATIA V5 by providing the user with direct access to nodes and elements.
Like FCM, these kinds of software offer batch meshing capabilities, which bridges the gaps of the CATIA mesher. In this way, the geometry model can be much more detailed. On the other hand, the possibility to deal directly with node and element entities gives the user the chance to handle orphan meshes: meshes produced with more dedicated software, or meshes from a previous project, can easily be reused.
5. Limitations and opening
5.1. Detailed geometries
As we mentioned above, the approaches based on a strong link between FEM and geometry imply, most of the time, a poor level of detail in the geometry: the simpler the geometry is, the more automated the update on changes will be.
The gap between simple and detailed geometries is not easy to fill.
Even if, during the early design phases, geometries have to be very simple in order to evaluate a lot of architectures and alternatives, as the project runs engineers need to study the influence of small modifications, and teams quickly have to integrate manufacturing process parameters. Unfortunately, it is not easy (or even possible) to use these very simplified models for the next steps.
Moreover, because CAD/CAE integrated approaches embed geometries, meshes and analysis features, the associated numerical models are quite big and require the use of powerful workstations.
5.2. Integrated approaches = CAD + CAE
For an organisation, deciding that the same team will handle both geometry and FEM models is a big challenge. It means FEA engineers have to be trained in CAD software (more rarely, designers are trained in simulation).
The FEA engineer's job is slowly changing...
A double competency, design plus simulation, will tomorrow be a must-have for young engineers.
5.3. Simulation life cycle management
With the natural trend of bringing CAD and FEA closer together, some new techniques are now entering the simulation field.
Among them, Simulation Lifecycle Management (SLM) is surely something which will become more and more important.
Designers and Simulation engineers are now working together. They need to share the same data. They will need some specific tools to do that more easily.
We already hear many testimonies that integrated CAD/CAE approaches urge teams to ensure better data traceability.
5.4. Optimisation and more
Probably the main advantage of CAD/CAE integration is the fact that an organisation can perform short iteration loops.
On each change, the remaining work is automatically updated and then performed.
Optimisation is possible and even effective for more and more complex cases.
The next challenge will be the coupling between several types of simulations.
Being able to take into account the forging or stamping process of each part during the crashworthiness simulation is a big challenge, but it will ensure an important level of accuracy.
Performing optimisation loops including both crash and stamping simulation is as yet unrealistic, but...
5.5. Knowledge management for design and simulation
We have seen design/simulation integration methods focused on bringing models closer, but problems still exist with the knowledge embedded in the models. Indeed, each expert model manages its parameters and rules independently from other models which use the same knowledge. This knowledge is often duplicated and is dependent on the models which use it. This situation favours knowledge inconsistency between models, and it often happens that simulations are launched on different models sharing the same parameters but with wrong values.
Indeed, it is very difficult to make expert models communicate with each other because they are used with several tools which cannot communicate, despite CAD/CAE integration methods. There is no communication platform for this type of knowledge, and today, with the massive use of design and simulation models, this is a real problem.
Nowadays, research is focused on this problem in accordance with the global PLM (Product Lifecycle Management) approach. The aim is to define a method and a model or meta-model (in UML or MOF, the modelling standards defined by the OMG) allowing knowledge to be managed and shared through expert models with coherence. Some research work proposes to capitalize parameters and rules extracted from design and simulation models into a generic information baseline and to build knowledge configurations synchronized with the expert models. We propose to explain one of these research efforts, called KCModel.
5.6. Perspectives with KCModel (Knowledge Configuration Model)
5.6.1. KCModel objectives
The aim of this research is to propose a new tool which helps users to ensure data, information, and knowledge consistency when shared in several and heterogeneous experts CAD and CAE models. This tool will focus on a new generic approach called KCModel: “Knowledge Configuration Model” based on knowledge configurations synchronized with expert models.
KCModel is formalized into meta-models in UML Language. In the context of KCModel, we consider as:
technical data, the parameters and expert rules extracted from experts models,
information, the data capitalized on, structured and organized into a specific entity to construct a technical and generic product information baseline,
knowledge, a set of technical product information entities instantiated from the baseline in a configuration used in specific design or simulation activity. This configuration is synchronized with a specific CAD or CAE model.
The purpose of the KCModel is to Capitalize, Trace, Re-use, and ensure the Consistency (CTRC) of technical data shared by several experts model, especially in the upstream step of design process (Fig. 9):
Capitalize parameters and rules as a generic and cross-functional baseline.
Share and trace them across several users.
Re-use parameters and rules in expert models.
Ensure their consistency and save the modifications.
We now propose to explain the KCModel method and then to focus on the knowledge configurations and how consistency can be ensured.
5.6.2. Global KCModel method
The KCModel (Fig. 10) allows for the capitalization of technical data, extracted from different expert models, into an abstract generic information entity called an “Information Core Entity” (the ICE is the smallest information entity used). The data capitalized, structured, organized and documented in these entities is then considered as technical information, and all the ICEs are centralized at a single point in a generic, cross-functional baseline. To be used in a specific context (e.g. a thermal load case on a piston for a milestone X in a project), a “Configuration Entity” (CE) is created, instantiating the ICEs corresponding to the context of use. The configurations are then synchronized with the different expert models and managed in a consistent way. Each configuration is a representation of the knowledge embedded in an expert model. Configurations are compared with each other to warn engineers of conflicts.
This approach makes it possible to manage the technical product information and its instances (sets of parameter values) by using configurations and versions. Explicit knowledge is handled in these models. Indeed, data is capitalized in ICEs to become information. Information is transformed into knowledge when an individual understands its necessity for an activity, that is, by creating knowledge configurations with the instantiated ICEs needed for a specific design or simulation activity.
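The ICE/CE mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration only (the class names, the `piston_diameter` parameter and the conflict-detection method are invented for the sketch, not the published meta-model): parameters live once in a baseline of ICEs, configurations instantiate them for a given activity, and comparing configurations reveals inconsistent shared values.

```python
# Hypothetical sketch of KCModel entities: parameters are capitalized as
# Information Core Entities (ICE) in a single baseline; a Configuration Entity
# (CE) instantiates ICEs for a specific design or simulation activity, and
# configurations can be compared to warn engineers of conflicting values.

class ICE:
    def __init__(self, name, value, unit=""):
        self.name, self.value, self.unit = name, value, unit

class ConfigurationEntity:
    def __init__(self, context):
        self.context = context
        self.instances = {}              # ICE name -> instantiated value

    def instantiate(self, ice, value=None):
        # Instantiate a baseline ICE, optionally overriding its value.
        self.instances[ice.name] = ice.value if value is None else value

    def conflicts_with(self, other):
        # Warn when two configurations share a parameter with different values.
        shared = set(self.instances) & set(other.instances)
        return [n for n in shared if self.instances[n] != other.instances[n]]

baseline = {"piston_diameter": ICE("piston_diameter", 75.0, "mm")}
design = ConfigurationEntity("CAD model, milestone X")
thermal = ConfigurationEntity("thermal load case, milestone X")
design.instantiate(baseline["piston_diameter"])
thermal.instantiate(baseline["piston_diameter"], 74.0)   # stale value
print(design.conflicts_with(thermal))                    # ['piston_diameter']
```

In the real approach, the synchronization with CAD and CAE models and the versioning of configurations would come on top of this basic structure.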
This type of approach is highly compatible with design/simulation integration methods and allows a high level of collaboration and performance to be reached in the design process.
6. Specificities of FE codes for impact simulation
Finite element codes are based on the spatial discretization of a continuous field and consist in solving a system of differential equations. Engineers have to go through several steps, from the choice of the dimension of the model to the constitutive laws and the choice of element formulations.
First of all, this complex task goes through the choice of the dimension of the model to be solved. Indeed, numerical engineers have to analyse the type of problem to be solved in order to optimize the resolution of the physical phenomenon, and they face the choice of the geometry and of the physical model: can this phenomenon be modelled in a single dimension, or does it need three dimensions? The basic example of the bending of a beam, which could be modelled with beam, shell or solid elements, illustrates this first point.
This chapter will only deal with the finite element method, which is the most widely used method in the field of crash analysis. For dynamic simulations, such as crash simulations, time dependence is added to the complexity of the solver process. The problem to be addressed is to have an optimized time discretization using either an implicit or an explicit scheme, which introduces a key concept of dynamic numerical methods: the time step. Depending on the choice of the spatial discretization and on the choice of the material properties, this time step plays an important role in the simulation. Finally, dynamic finite element codes are complex codes and their specificities will be explained in the next sections.
6.2. Equation of motion - dynamic formulation
A nonlinear finite element equation of motion is usually obtained from the principle of virtual work. This is the weak form of the equilibrium equations, which includes internal forces, contact/friction forces, inertia forces, damping forces, external forces and the boundary conditions. Finite element method (FEM) discretization of the equations of motion leads to the following coupled set of second-order nonlinear differential equations in matrix form:

M \ddot{x} + K x = F_{ext}

where $x$ is the vector of the nodal positions at the current time and $\ddot{x}$ the vector of nodal accelerations; $M$ is the mass matrix, $K$ the stiffness matrix and $F_{ext}$ the vector of external forces. This equation is non-linear (in $x$ and $\dot{x}$) due to the presence of contact and to possible material and geometrical nonlinearities. A time integration scheme must be chosen which is able to cope with this strongly non-linear problem.
6.3. Basic contact notions
The consideration of contact boundary conditions in the finite element simulation of interacting components is nowadays established as state of the art. According to the principles of continuum mechanics, the contact conditions can be expressed as follows. Let us consider two deformable bodies $\Omega^1$ and $\Omega^2$ in potential contact, whose potential contact surfaces are noted $\Gamma_c^1$ and $\Gamma_c^2$. Let $x^1$ be the current position vector of a point of $\Gamma_c^1$ at an instant $t$, and $\bar{x}^2$ its orthogonal projection on the surface $\Gamma_c^2$. The contact distance vector (or gap vector) is defined by

g = x^1 - \bar{x}^2 = g_n \, n

where $n$ is the outward unit normal to $\Gamma_c^2$ at $\bar{x}^2$ and $g_n = (x^1 - \bar{x}^2) \cdot n$ is the oriented contact distance.
Let $r$ be the contact stress vector exerted by $\Omega^1$ on the body $\Omega^2$. Next, the displacement vector $u$, the velocity vector $v$ and the contact stress vector $r$ can be uniquely decomposed into a normal part and a tangential part as follows:

u = u_n n + u_t, \quad v = v_n n + v_t, \quad r = r_n n + r_t

The unilateral contact law is characterized by a geometric condition of non-penetration, a static condition of no-adhesion and a mechanical complementarity condition. These three conditions, known as the Signorini conditions, can be formulated as

g_n \geq 0, \quad r_n \geq 0, \quad g_n \, r_n = 0

In the case of dynamic contact, the Signorini conditions can be formulated, on the contact surface, via the relative normal velocity $v_n$: when $g_n = 0$,

v_n \geq 0, \quad r_n \geq 0, \quad v_n \, r_n = 0

The bodies are separating when $v_n > 0$ and remain in contact for $v_n = 0$. The formulation of the Signorini conditions can be combined with the sliding rule to derive the complete frictional contact law for the contacting parts. This complete law specifies the possible velocities of the bodies that satisfy the unilateral contact conditions and the sliding rule.
A key issue in the treatment of contact constraints in explicit dynamics is the choice of the contact constraints to enforce at contacting nodes. The contact constraint evaluation has a significant effect on the accuracy and efficiency of the analysis. A variety of numerical methods have been proposed in the literature to deal with this problem: Lagrange multiplier methods, the penalty method (Chamoret, et al., 2004), the augmented Lagrangian approach (Alart, et al., 1991) and the bipotential method (Feng, et al., 2006) are the most frequently used. A good evaluation of the contact force first requires locating all the potential contacting nodes. In an industrial context, where the number of contact nodes is large, it appears essential to develop contact searching algorithms (Zhong, et al., 1994), (Weyler, et al., 2011). A good procedure should be accurate enough to detect all the potential contact nodes and efficient enough to avoid unnecessary searching (and thus an increase of the computation time).
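The penalty method mentioned above can be illustrated on the simplest possible configuration. The sketch below is an assumption-laden toy, not an industrial algorithm: contact of nodes against a fixed rigid plane ($z = 0$), a user-chosen penalty stiffness `k_pen`, and a boolean mask standing in for the contact search.

```python
import numpy as np

# Minimal penalty-method sketch for node-to-rigid-plane contact (plane z = 0,
# outward normal +z). Penetrating nodes receive a restoring force proportional
# to the penetration depth; the stiffness k_pen is a user-chosen assumption.

def penalty_contact_forces(positions, k_pen=1.0e6):
    """positions: (n, 3) nodal coordinates in metres; returns (n, 3) forces."""
    forces = np.zeros_like(positions)
    gap = positions[:, 2]                 # signed distance to the plane
    penetrating = gap < 0.0               # "contact search": detect violations
    forces[penetrating, 2] = -k_pen * gap[penetrating]  # push nodes back out
    return forces

nodes = np.array([[0.0, 0.0, 0.1],       # above the plane: no contact force
                  [1.0, 0.0, -0.002]])   # penetrating by 2 mm
f = penalty_contact_forces(nodes)
print(f[0, 2], f[1, 2])                  # 0.0 2000.0
```

The trade-off this exposes is the one the contact literature discusses: a larger `k_pen` reduces the residual penetration but stiffens the system, which in an explicit code also shrinks the stable time step.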
6.4. Time integration scheme
For the simulation of dynamic problems such as crash analysis, the time discretisation is one of the major points that can strongly influence the accuracy and efficiency of the algorithm. The two main solution procedures are the explicit and implicit algorithms. The implicit scheme is unconditionally stable, but it has two main drawbacks: the first one is that a linear set of equations must be solved repeatedly, so the computation time increases with the size of the model when using a direct solver; the second one concerns convergence, which is sometimes hard to reach. In general, finite element codes dedicated to the simulation of transient dynamic phenomena such as crash or impact (e.g. Radioss, Altair Hyperworks, Michigan, USA) use an explicit temporal scheme. Explicit time integration schemes such as the well-known central difference scheme have been widely used as they do not require numerical iterations at each time step, and also for their good properties in terms of accuracy and robustness in the presence of nonlinearities.
The state of the system is evaluated at each time step: the state at a given time t is used to calculate the state at the next time t + Δt.

In this process, displacements are known at the time at which the dynamic equilibrium of the system is solved, and the resolution only needs the inversion of the mass matrix. Furthermore, if a lumped mass matrix is used, the mass matrix is diagonal and its inversion is trivial. The resolution of the system is very quick since each degree of freedom is calculated separately. Stresses are evaluated in each element individually. At each time step, the state of equilibrium is updated, which corresponds to the propagation of a wave through the element. This important point leads to the conditional stability of the scheme, which means the existence of a critical time step for the stability of the resolution.
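The update described above can be sketched for a single degree of freedom; with a lumped mass, "inverting" the mass matrix reduces to a division per degree of freedom. The oscillator parameters below are arbitrary assumptions, chosen only to make the loop runnable:

```python
# Sketch of an explicit central-difference update with a lumped (diagonal)
# mass matrix: no linear system is solved, each DOF is updated on its own.

m, k = 1.0, 100.0   # lumped mass and stiffness (arbitrary example values)
dt = 0.01           # must stay below the critical step 2/omega = 2/sqrt(k/m)
u, v = 1.0, 0.0     # initial displacement and velocity
a = -k * u / m      # initial acceleration from dynamic equilibrium

for _ in range(1000):
    # half-step velocity, full-step displacement, then new acceleration
    v_half = v + 0.5 * dt * a
    u = u + dt * v_half
    a = -k * u / m          # mass matrix "inversion" is a scalar division
    v = v_half + 0.5 * dt * a

# The scheme approximately conserves the total energy (initially 50.0)
energy = 0.5 * m * v**2 + 0.5 * k * u**2
print(energy)
```

Written this way, the cost per step is proportional to the number of degrees of freedom, which is exactly why the explicit scheme scales well to very large crash models.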
7. Specific problems of the explicit scheme
For high speed simulations, the temporal discretisation can be performed with the central difference method (CDM). In such an explicit time integration method, a specific condition on the maximal time step is required for numerical stability. The maximum time step to be used is determined by the Courant condition (Courant, et al., 1967):

$\Delta t \le \dfrac{l_{min}}{c}$

This requirement means that, during one time step, the distance travelled by the fastest wave in the model ($c \, \Delta t$) should be smaller than the smallest characteristic element size ($l_{min}$) in the mesh, representing the shortest length a wave arriving at a node has to travel to cross the element. Here $c$ represents the wave velocity for the given material. In other words, the time step of the analysis should be smaller than the time a wave needs to cross the smallest element.
With elements of 5 mm, and for a typical steel material law, this condition leads to a time step of the order of 10^-3 ms. With this order of magnitude of the time step, this specific scheme appears to be an appropriate method to solve very rapid phenomena, with high velocities leading to highly non-linear problems. For typical impact durations of 100-200 ms, it appears necessary to use this kind of integration scheme to obtain accurate results.
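The order of magnitude quoted above can be checked with typical steel properties (assumed here: E = 210 GPa, ρ = 7800 kg/m³, values not given explicitly in the text) and a 5 mm element:

```python
# Rough estimate of the critical time step for a 5 mm steel element,
# using the one-dimensional elastic wave speed c = sqrt(E / rho).
import math

E = 210e9      # Young's modulus [Pa] (typical steel, assumed)
rho = 7800.0   # density [kg/m^3] (typical steel, assumed)
l_min = 5e-3   # smallest characteristic element size [m]

c = math.sqrt(E / rho)      # elastic wave speed, roughly 5200 m/s
dt_ms = (l_min / c) * 1e3   # critical time step in milliseconds

print(f"wave speed c = {c:.0f} m/s, dt = {dt_ms:.1e} ms")
# dt comes out on the order of 1e-3 ms, consistent with the text
```

A 100 ms impact therefore requires on the order of 10^5 time steps, which explains why reducing the model size or the smallest element dimension directly drives the simulation cost.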
This time step also depends on the number and the type of elements used to model the system. In the automotive industry, FE models are developed using 4-node shell elements (Belytschko, et al., 1981). The following picture illustrates a FE model of a window, developed with shell elements.
Under specific geometrical hypotheses (such as the length/thickness ratio), these elements are powerful for modelling shell structures such as windshields, bonnets and doors in automotive engineering.

Such elements are used in the case study of the SIA vehicle of the University of Technology of Belfort-Montbeliard, which is subjected to impact.
8. A numerical study - UTBM’s vehicle for SIA Trophy - Esphyra
All the theoretical concepts previously defined lead to a case study, presented here in order to illustrate the feasibility of FE simulations in the crash field.

The SIA trophy is an automotive challenge for automotive designers, manufacturers and universities, whose aim is to design and build a vehicle able to meet today's new specifications in terms of innovation and respect for the environment. From a numerical point of view, standard procedures have been used in order to perform crash FE simulations.
The bodywork and the frame of the vehicle have been modelled as solid parts in a CAD software, taking into account their thickness. Mid-surfaces of the bodywork and of the frame have been extracted in order to mesh them with 4-node shell elements, as recommended in section 7 and illustrated in figure 14.

With a steel material law, whose parameters are listed in the following table, the structure was subjected to an impact with an initial velocity of 10 m/s against a deformable plane.
Results in terms of Von Mises stress distribution in the frame and in the bodywork are illustrated in figure 15.
The simulation of this impact used the theory described in the previous sections in terms of integration scheme, time step and mesh, and finally in terms of contact.
| Parameter | Value |
|---|---|
| Young's modulus | 210 000 MPa |
| Yield stress (sigma yield) | 210 MPa |
| Maximum stress (sigma max) | 240 MPa |
8.1. Conclusion and opening of crash simulations towards biomechanical simulations
A specific approach to FE simulations in the crash field is necessary for optimal calculations. Indeed, several points are in favour of the explicit scheme in such simulations.

Despite the conditional stability of the scheme and the small time step, there are many advantages to using the explicit scheme in the crash field: its precision, the easy inversion of the mass matrix (if a lumped mass matrix is used), its quite low CPU cost, etc. The higher the velocity, the better suited the explicit scheme is for the simulations.
These kinds of numerical algorithms have proven themselves in terms of robustness and accuracy of the results. Since the beginning of the 70's and the need to investigate "what happens" during a vehicle impact, these methods have kept improving. With the evolution of computer science, FE simulations are more and more used to investigate physical problems, allowing the creation of complex models with a large number of elements. From 10 000 elements at the end of the 80's, typical FE models of vehicles can today reach several hundred thousand elements. Precise FE models of vehicles are thus developed, allowing a detailed investigation of what happens during an impact, with the aim of optimizing the structures and improving the safety of the vehicles. This concept of safety has become more and more important, and recent finite element simulations couple the finite element model of the vehicle with a FE model of a human: that is the concept of numerical impact biomechanics.
Automotive engineering and biomechanical engineering can pool their knowledge to improve the safety of vehicle occupants. At a numerical level, many studies have dealt with the development of finite element models of human structures: the head (Roth, et al., 2010) or the shoulder (Duprey, et al., 2005), for example, which can be coupled to the human environment in order to improve safety. By developing "biofidelic" models, research can be conducted on injury mechanisms (Roth, et al., 2009), which can help to evaluate the dangerous behaviour of a structure (Meyer, et al., 2009).
Finally, simulations have helped engineers to develop powerful models, leading to an improvement of the life cycle for the design of a mechanical system. However, in a context of worldwide development and collaborative engineering, a specific design methodology is necessary in order to optimize the design process. Numerical simulation being a part of the design process, it is necessary to involve the simulation data in a PDM system: the development of Simulation Data Management is now compulsory.
Finally, these last decades have shown the development of numerical simulation, which has become essential in the design process, especially in automotive engineering. In the crash field, the requirements and the standards have increased compared to past decades, leading to a number of tests which are now compulsory for the approval and industrialization of a vehicle. Furthermore, the coupling of vehicle FE models with human body FE models to improve vehicle safety has allowed the development of numerical biomechanics. The human body is now investigated at a numerical level, allowing the optimization of the design of vehicles and protective devices.

Moreover, numerical simulation shortens the lifecycle through an optimized and efficient management of simulation data. However, limits in terms of interaction between CAD systems and FE platforms still exist.
This chapter has been written with the collaboration of DPS (Digital Product Simulation), an expert in design and simulation integration for product development. The authors would like to thank DPS for their expertise, their help and their contribution to the writing of this chapter.
- CAD/CAE: Computer Aided Design / Computer Aided Engineering
- UML: Unified Modelling Language, a language used to formalise object-oriented models. UML is defined by the OMG.
- MOF: Meta Object Facility, a language used to formalise object-oriented meta-models. MOF is defined by the OMG.
- OMG: Object Management Group – www.omg.org